
Are ACL injuries no big deal anymore? The data suggest they are


Rubes


Well, Tom Brady had his ACL, PCL, and MCL all tear in 2007 (aka the "terrible triad"), and he is playing 15 years later, so I would say it all depends on whether you can a) play with a brace and b) commit constantly to keeping the leg muscles around the injured area strong. Basketball, tennis, and football are tough sports for the ACL because of all the pivoting: your foot can get stuck planted while that same knee gets twisted or stepped on again. I worry about Tre because a CB isn't in control of where the play will go, so that is harder on the knees, IMO. I had triad surgery and it took a long time to trust the knee, but before it all went I had only an ACL tear and I was still able to run and even push weight with that leg. So for a solid pro, an ACL tear isn't a career ender at all. The reality is that someone like Tre could also end up being moved to safety in the future if his change of direction is impacted.

To some of the prior posts: this isn't the '80s or '90s, when tears often ended careers. The world of medicine is way different now, with much better techniques.


10 hours ago, Rubes said:

 

Not exactly. Using a person as their own control is an appropriate design if you think all of the external factors that could impact the outcome are the same before the exposure (injury) vs. after. If you're just measuring, for instance, speed or leg strength or something like that, then that's a reasonable thing to do. I think the point that many people are making here is that this is not the case—when players are injured and are lost for a year (more or less), there are other factors that can impact the outcomes of interest here: the number of starts a player has, the number of snaps they play, etc. Being injured and missing a lot or all of a season can result in other players taking over starting roles, teams deciding to move on to cheaper players, and so on. It may depend on age, on whether they were an entrenched starter or a backup, whether new draft picks have come along, and so on.

 

The purpose of including a control or comparator group is to make sure that the observations seen—a change in starts, a change in snaps, or some other change in performance—are due specifically to the exposure (injury). If you do a study as you describe, measuring performance at the same task after an intervention, you can't really say for sure that the intervention is the cause of any changes seen (e.g., differences could be due to various things that change over time). That's why you include a control group made up of similar people with the same features measured at the same times, with presumably the only difference being the absence of the intervention. Then you do things like measure the average change in the intervention group and compare that to the average change in the control group. The difference, presumably, is due to the main difference between the groups—the intervention.
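To make that concrete, here is a minimal sketch (Python, with invented snap counts; none of these numbers come from the study) of comparing the average change in an injured group against the average change in a control group:

```python
# Toy illustration of "compare the average change in each group."
# All numbers are invented; a real analysis would use actual snap counts.

injured_before = [900, 750, 1010, 640, 880]   # snaps in the season before injury
injured_after  = [610, 720, 400, 0, 830]      # snaps in the first season back
control_before = [920, 760, 990, 650, 870]    # similar players, never injured
control_after  = [880, 700, 1000, 610, 900]

def mean_change(before, after):
    """Average within-player change from the before period to the after period."""
    return sum(a - b for a, b in zip(after, before)) / len(before)

injured_change = mean_change(injured_before, injured_after)
control_change = mean_change(control_before, control_after)

# Aging, scheme changes, roster churn, etc. hit both groups, so the difference
# between the two average changes is what you would tentatively attribute to injury.
print(f"injured group mean change: {injured_change:+.1f} snaps")
print(f"control group mean change: {control_change:+.1f} snaps")
print(f"difference (rough injury effect): {injured_change - control_change:+.1f} snaps")
```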

 

Same thing with the benzo study. In order to know that the changes in memory observed were due to the benzos (and not to a placebo effect), they would need a separate but otherwise similar control group that is given an injection of a placebo. Then compare the average changes seen in the benzo group vs. the control group, with the difference (if any) thus attributable to the benzos. It's true for surgical trials, too—in some studies looking at the effect of a surgical intervention, the intervention group is compared to a control group given a "sham" surgical procedure, since just the act of undergoing surgery could produce changes in the outcomes.

 

But all of that describes prospective randomized trials, the gold standard for evidence. What these guys did here is a retrospective observational study. In order to design an observational study to be as similar to a prospective randomized trial as possible, you do the work to choose a historical control group that is as similar to the intervention group as possible, and that otherwise has (presumably) the same distribution of "unmeasured" variables. It's the analogue of randomization in a controlled trial, the purpose of which is to try to ensure that the two comparison groups are identical other than the intervention.
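As a sketch of what choosing a similar historical control can look like in practice (hypothetical records, nearest-neighbor matching on just two variables; a real study would match on more, e.g., position and draft round):

```python
# Rough sketch of picking one matched, never-injured control per injured player.
# Records and the distance weighting are hypothetical.

injured = [
    {"name": "A", "age": 25, "pre_snaps": 950},
    {"name": "B", "age": 29, "pre_snaps": 600},
]
candidates = [
    {"name": "X", "age": 24, "pre_snaps": 900},
    {"name": "Y", "age": 29, "pre_snaps": 640},
    {"name": "Z", "age": 31, "pre_snaps": 300},
]

def distance(p, q):
    # Crude similarity score: one year of age counts about the same as 50 snaps.
    return abs(p["age"] - q["age"]) + abs(p["pre_snaps"] - q["pre_snaps"]) / 50

matches = {}
available = list(candidates)
for player in injured:
    best = min(available, key=lambda c: distance(player, c))
    matches[player["name"]] = best["name"]
    available.remove(best)            # match without replacement

print(matches)   # {'A': 'X', 'B': 'Y'} with these made-up records
```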

 

 

Sham surgical trials aren't done for obvious ethical reasons, for the most part.

 

The benzo test wasn't asking how benzos affect performance on the skill games, but what their effect on the games is as it relates to the concurrent MRI images. Comparing to a cohort not given benzos would not have been meaningful, as that cohort was also me, before I was drugged.

 

Certainly being injured leads to missed games and losing starting roles and teams moving on.  That would be reflected in the decreased performance metrics they listed.  The inference is that, if the player isn't back to preinjury performance level, the team will likely move on.  

 

But let's say you matched with an uninjured cohort by age, position, and number of snaps before injury. The null hypothesis is that the dropoff in performance for the injured players is no different from that of the non-injured controls? That the injured players would have been just as likely to have a dropoff had they not been injured?
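Framed as a test, that null hypothesis is just a two-sample comparison of the per-player dropoffs. A minimal sketch, with invented numbers:

```python
# Sketch of the null hypothesis as a two-sample comparison of per-player dropoffs.
# H0: mean dropoff (injured) == mean dropoff (matched, non-injured). Numbers invented.
from math import sqrt
from statistics import mean, stdev

injured_dropoff = [-310, -45, -620, -900, -60]   # change in snaps, injured players
control_dropoff = [-40, -55, 10, -35, 20]        # change in snaps, matched controls

def welch_t(x, y):
    """Welch's t statistic for comparing two group means with unequal variances."""
    return (mean(x) - mean(y)) / sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))

print(f"t = {welch_t(injured_dropoff, control_dropoff):.2f}")
# A t value far from zero (judged against the appropriate t distribution) would
# reject H0, i.e., the injured players' dropoff exceeds the matched controls'.
```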

 

While randomized controlled studies are the "gold standard," not all questions lend themselves to that type of study. And certainly uncontrolled "before and after" studies are well known and accepted in the medical literature. Faulting the editorial review of this paper by this journal, as some here have, is therefore not appropriate. They understand the limitations of such a study, but they obviously see it as valid and congruent with other similar published studies on the same topic (which are referenced).


2 hours ago, Mr. WEO said:

 

Sham surgical trials aren't done for obvious ethical reasons, for the most part.

 

The benzo test wasn't asking how benzos affect performance on the skill games, but what their effect on the games is as it relates to the concurrent MRI images. Comparing to a cohort not given benzos would not have been meaningful, as that cohort was also me, before I was drugged.

 

Certainly being injured leads to missed games and losing starting roles and teams moving on.  That would be reflected in the decreased performance metrics they listed.  The inference is that, if the player isn't back to preinjury performance level, the team will likely move on.  

 

But let's say you matched with an uninjured cohort by age, position, and number of snaps before injury. The null hypothesis is that the dropoff in performance for the injured players is no different from that of the non-injured controls? That the injured players would have been just as likely to have a dropoff had they not been injured?

 

While randomized controlled studies are the "gold standard," not all questions lend themselves to that type of study. And certainly uncontrolled "before and after" studies are well known and accepted in the medical literature. Faulting the editorial review of this paper by this journal, as some here have, is therefore not appropriate. They understand the limitations of such a study, but they obviously see it as valid and congruent with other similar published studies on the same topic (which are referenced).

 

Sham surgeries are most definitely performed, on animals for animal studies, which of course are relevant for our understanding of similar scientific questions in humans. The sham surgeries are done because of the reasons I stated.

 

Of course, not all queries lend themselves to a randomized trial. You can't do a prospective randomized trial of ACL injuries, for instance. Studies like that are best done as controlled observational studies the way I described. You can certainly do an uncontrolled before-after study, and lots of people publish those, but by no means are those studies considered to be high quality evidence. The main criticism of an uncontrolled before-after study is that the results are untrustworthy—you have no idea if the observed effects are truly significant or not. In many cases it's very difficult to identify a control group for a before-after study, and that's okay, you can't always have what you want. But by accepting that and publishing an uncontrolled before-after study, you're basically admitting that your results, while interesting, may or may not have real-world significance.

 

You choose a study design based on the question you're trying to answer. If the question is: what is the impact of an ACL injury on an NFL player's career? then you know you'll be doing an observational study, but the real question you're trying to answer is: how does what happened to those injured players compare to what would have happened if they had never been injured? Since you can't do that directly, you do the next best thing—compare what happened to those injured players to what happened to a similar group of non-injured players.

 

Imagine that the main outcome you were trying to test is a player's maximum running speed. So the main question is: what is the impact of an ACL injury and repair on a player's maximum running speed? Let's say you have all of the data on players' maximum speeds from the NFL combine, and now you identify players who had ACL injuries during their NFL careers, so you test them again for their max running speed. You could just do a simple before-after study and compare their running speeds now vs. their running speeds then, and you'd probably see a decent difference. You could, for instance, say that those with an ACL injury saw an average loss of 1 MPH in their max running speed. Is that a valid conclusion? Not really.

 

Of course, the reason is that everyone slows down as they age, so what you'd really want to do is compare the average speed loss in those who are injured with the average speed loss in those who were never injured. Then you'd know the impact of the ACL injury on max running speed. You need the control group to know how significant the loss of speed observed is.
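A toy simulation of that point (all parameters invented; it just shows why the uncontrolled before-after estimate lumps aging in with the injury):

```python
import random

random.seed(0)

# Toy model: every player loses about 0.5 MPH to aging over the study window,
# and an ACL injury costs a further 0.3 MPH. Both numbers are invented.
AGING_LOSS, INJURY_LOSS = 0.5, 0.3

def speed_change(injured):
    extra = INJURY_LOSS if injured else 0.0
    return -(AGING_LOSS + extra) + random.gauss(0, 0.1)   # individual noise

injured_changes = [speed_change(True) for _ in range(50)]
control_changes = [speed_change(False) for _ in range(50)]

avg = lambda xs: sum(xs) / len(xs)

# Uncontrolled before-after estimate: blames the full ~0.8 MPH loss on the ACL.
print(f"before-after estimate of injury effect: {avg(injured_changes):+.2f} MPH")

# Controlled estimate: subtracting the uninjured group's loss recovers roughly
# the 0.3 MPH that is actually attributable to the injury in this toy model.
print(f"controlled estimate of injury effect:   {avg(injured_changes) - avg(control_changes):+.2f} MPH")
```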

 

And yes, the null hypothesis would be that the ACL injury had no effect on max running speed. And that may very well be the case. Or, there may be an effect, but it's not statistically significant. Or, the loss may be statistically significant, but it may not be significant in the real world. For instance, you could show a statistically significant loss of 0.1 MPH in their speed, but how that impacts play in real games may not be a big deal.

 

But you still need the control group to understand whether the observed differences pre- and post-injury are statistically above and beyond what you would have seen, on average, in the absence of the injury.

 


17 minutes ago, Rubes said:

 

Sham surgeries are most definitely performed, on animals for animal studies, which of course are relevant for our understanding of similar scientific questions in humans. The sham surgeries are done because of the reasons I stated.

 

Of course, not all queries lend themselves to a randomized trial. You can't do a prospective randomized trial of ACL injuries, for instance. Studies like that are best done as controlled observational studies the way I described. You can certainly do an uncontrolled before-after study, and lots of people publish those, but by no means are those studies considered to be high quality evidence. The main criticism of an uncontrolled before-after study is that the results are untrustworthy—you have no idea if the observed effects are truly significant or not. In many cases it's very difficult to identify a control group for a before-after study, and that's okay, you can't always have what you want. But by accepting that and publishing an uncontrolled before-after study, you're basically admitting that your results, while interesting, may or may not have real-world significance.

 

You choose a study design based on the question you're trying to answer. If the question is: what is the impact of an ACL injury on an NFL player's career? then you know you'll be doing an observational study, but the real question you're trying to answer is: how does what happened to those injured players compare to what would have happened if they had never been injured? Since you can't do that directly, you do the next best thing—compare what happened to those injured players to what happened to a similar group of non-injured players.

 

Imagine that the main outcome you were trying to test is a player's maximum running speed. So the main question is: what is the impact of an ACL injury and repair on a player's maximum running speed? Let's say you have all of the data on players' maximum speeds from the NFL combine, and now you identify players who had ACL injuries during their NFL careers, so you test them again for their max running speed. You could just do a simple before-after study and compare their running speeds now vs. their running speeds then, and you'd probably see a decent difference. You could, for instance, say that those with an ACL injury saw an average loss of 1 MPH in their max running speed. Is that a valid conclusion? Not really.

 

Of course, the reason is that everyone slows down as they age, so what you'd really want to do is compare the average speed loss in those who are injured with the average speed loss in those who were never injured. Then you'd know the impact of the ACL injury on max running speed. You need the control group to know how significant the loss of speed observed is.

 

And yes, the null hypothesis would be that the ACL injury had no effect on max running speed. And that may very well be the case. Or, there may be an effect, but it's not statistically significant. Or, the loss may be statistically significant, but it may not be significant in the real world. For instance, you could show a statistically significant loss of 0.1 MPH in their speed, but how that impacts play in real games may not be a big deal.

 

But you still need the control group to understand whether the observed differences pre- and post-injury are statistically above and beyond what you would have seen, on average, in the absence of the injury.

 

 

I wouldn't have thought you were referring to animal studies.

 

 

You state that uncontrolled studies' results "may or may not have real-world significance," yet you also state that matched-control studies with statistically significant results "may also not be significant in the real world." That is the basic truth of most of what is published, all the time. It doesn't make the results untrustworthy.


15 minutes ago, Mr. WEO said:

You state that uncontrolled studies' results "may or may not have real-world significance," yet you also state that matched-control studies with statistically significant results "may also not be significant in the real world." That is the basic truth of most of what is published, all the time. It doesn't make the results untrustworthy.

 

Poorly worded on my part. Non-controlled studies may produce results of undetermined meaning—you may show a statistical difference before and after the intervention, but without a control group you can't say if that result would have happened anyway, in the absence of the intervention.

 

Controlled studies can produce results with more confidence and meaning in their statistical significance—but the statistically significant value may not have much meaning in the real world, like if the study of max running speed had a large enough sample size to detect a tiny difference of 0.01 MPH. You could conclude that ACL injuries had a significant impact on speed, but most people would look at 0.01 MPH and say, "Who cares?"
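A quick simulated example of that (invented parameters: a real but trivial 0.01 MPH extra loss, a huge sample, and a normal approximation for the p-value):

```python
import random
from math import erf, sqrt
from statistics import mean, stdev

random.seed(1)

# Simulate a real but trivial effect: injured players lose 0.01 MPH more than
# controls, both groups vary by ~0.5 MPH, and the sample is enormous.
n = 100_000
injured = [random.gauss(-0.51, 0.5) for _ in range(n)]
control = [random.gauss(-0.50, 0.5) for _ in range(n)]

diff = mean(injured) - mean(control)
se = sqrt(stdev(injured) ** 2 / n + stdev(control) ** 2 / n)
z = diff / se
p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided, normal approximation

# With n this large, the ~0.01 MPH difference typically comes out "statistically
# significant" (p well under 0.05) even though nobody would care about it on the field.
print(f"difference = {diff:+.4f} MPH, p = {p:.2g}")
```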

 

The former is less trustworthy, because you can't draw a valid conclusion with much confidence. The latter is more trustworthy, because it was designed well, even though the more valid conclusion may not ultimately mean much.

 


On 4/6/2022 at 12:37 PM, FireChans said:

Most players are probably out of the league after an ACL because teams don’t want to keep a roster spot for a rehabbing player who isn’t an impact player.

 

If Dane Jackson tore his ACL instead of Tre, instead of “keeping his roster spot warm for him” we would just find a replacement player.

 

 

Agree...........Justin Zimmer is a good example of this...........he was a playmaker and seemed like he had a good chance of being in Buffalo for a couple more seasons.  

 

Injured his knee.......

 



So I ask,

  1. Will Tre play by October, and if so, what is his performance ceiling this year - 90% of last year?
  2. Is drafting Jameson Williams worth the risk?  Does he fully recover and do we need him playing early this year as much as next?

With Diggs locked up, and a need for a good outside #2-3 WR, I'd say the Bills lean towards a WR who can play early this year rather than go with Williams for the future - especially with this appearing to be an "all in" year.


3 hours ago, BADOLBILZ said:

 

 

Agree...........Justin Zimmer is a good example of this...........he was a playmaker and seemed like he had a good chance of being in Buffalo for a couple more seasons.  

 

Injured his knee.......

 


 

Hard to say if they would have kept him.  He played on only 1/3 of the snaps in the 12 appearances in 2020.  In the 6 games before he was injured last season he didn't do much.  He went on IR so no one knows what they would have done.  He has no post-injury data. He's a free agent. 


23 hours ago, Mr. WEO said:

 

Hard to say if they would have kept him.  He played on only 1/3 of the snaps in the 12 appearances in 2020.  In the 6 games before he was injured last season he didn't do much.  He went on IR so no one knows what they would have done.  He has no post-injury data. He's a free agent. 

I think the point is he may have gotten some kind of a contract here or elsewhere had he not been injured. 
 

Instead, his entire career is on life support. The impact of the injury on performance doesn’t even matter.
 

 


13 minutes ago, FireChans said:

I think the point is he may have gotten some kind of a contract here or elsewhere had he not been injured. 
 

Instead, his entire career is on life support. The impact of the injury on performance doesn’t even matter.
 

 

 

Hard to say; he was a pretty fringe player. He wouldn't have been included in the study cited by the OP.


8 minutes ago, Mr. WEO said:

 

Hard to say; he was a pretty fringe player. He wouldn't have been included in the study cited by the OP.

No, he wouldn't, but he would be included in this part:

 

"Only 55.4% (n = 173/312) of players returned to play after ACLR."

 

I wouldn't be surprised if the 45% of players who didn't return were fringe players a la Zimmer. Best ability is availability and all that. And if a player like Zimmer was re-signed, we would probably be bringing in replacement level players who could easily take his job.

 

Put another way, if Zack Moss tore his ACL, we would likely sign another RB who would get his snaps. If that RB was even mediocre, he would probably cause Moss to lose snaps and AV. It's no surprise that positions with a lot of "filler players," like RB, DL, and LB, experience the most drop-off after injury. There are a million STers and fourth DLmen in the NFL. Not so many QBs.


2 hours ago, FireChans said:

No, he wouldn't, but he would be included in this part:

 

"Only 55.4% (n = 173/312) of players returned to play after ACLR."

 

I wouldn't be surprised if the 45% of players who didn't return were fringe players a la Zimmer. Best ability is availability and all that. And if a player like Zimmer was re-signed, we would probably be bringing in replacement level players who could easily take his job.

 

Put another way, if Zack Moss tore his ACL, we would likely sign another RB who would get his snaps. If that RB was even mediocre, he would probably cause Moss to lose snaps and AV. It's no surprise that positions with a lot of "filler players," like RB, DL, and LB, experience the most drop-off after injury. There are a million STers and fourth DLmen in the NFL. Not so many QBs.

 

 

Fifty-nine of the original 135 players included in the study were still in the league at least 3 years after the injury. The positions with the highest percentage of post-injury players in that group were QB, OL, FS, and DE.

 

Another study (cited in this one) showed that skill players drafted in the highest rounds had the worst outcomes after injury.

 

 

All of the limitations of this study brought up in this thread were clearly discussed by the authors in this paper.

