The authors found that increased neck pain is 25% more likely with SMT than with doing nothing or sticking to safe, neutral treatments.
No. Wrong. Bzzz! Thank you for playing.
This morning I received a note from Lisa Carlesso, PT, MSc, first author of the paper, letting me know that I got it wrong: although her data showed that 25% number, it was not a statistically significant number. And that’s significant.
If there were any noteworthy increases in neck pain following this kind of neck treatment, presumably clearer data would have emerged. Thus the paper concluded that there is “strong evidence that neck manipulation or mobilization does not result in an increase in neck pain.” I’m not sure I quite agree that a statistically insignificant number constitutes “strong evidence” so much as just generally low confidence in the results (and Carlesso acknowledges this in the paper as well, practically in the next sentence: “However, the limitations of the Strunk study and the low GRADE rating remain, affecting confidence in the estimate.”)
But I am guilty of doing something I’ve accused others of doing: emphasizing a statistically insignificant number to make my point. When the numbers lean your way but fail to reach statistical significance, it’s called a “trend.” A trend in favour of a therapy is often trotted out as if it were supportive evidence. I did the opposite, and so I am doing the walk of shame now. Bad science writer, bad! I erroneously thought the number was statistically significant, probably because I’m a debunker by nature, and I basically saw in the data what I wanted to see. Funny how that works.
(Gosh, I wonder what system of knowledge-seeking could possibly compensate for that aspect of human nature? You get a gold star if you guessed “science.”)
What does it mean?
Not much, really. “Here be statistical dragons.” There are so many ways that all this stuff about treatment harms can be wrong that I can’t really walk away from this feeling like I’ve learned anything terribly important one way or the other. The hard statistical bottom line is that statistically insignificant means that no conclusion can actually be drawn — the data was no more than suggestive. Interestingly, Lisa Carlesso pointed out that she has written another paper about how difficult it is to study adverse effects.
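To make the “trend vs. significance” point concrete, here’s a minimal sketch in Python, using made-up numbers (not the actual data from the Carlesso paper), of how a 25% relative risk can still be statistically non-significant: the 95% confidence interval around it straddles 1.0, so “no effect at all” remains entirely compatible with the data.

```python
import math

# Hypothetical 2x2 trial data (NOT the real numbers from the paper):
# 25 of 100 manipulation patients vs. 20 of 100 controls reported more pain.
events_tx, n_tx = 25, 100
events_ctrl, n_ctrl = 20, 100

# Relative risk: 0.25 / 0.20 = 1.25, i.e. "25% more likely"
rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)

# Wald 95% confidence interval for the relative risk, on the log scale
se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# The interval spans 1.0: a 25% "trend," but not a statistically
# significant finding, so no conclusion can be drawn from it.
```

With these invented numbers the interval runs from roughly 0.74 to 2.10, meaning the data would be just as compatible with the treatment slightly *preventing* pain as with it doubling the risk.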
“Being overweight may not be as unhealthy as it was 40 years ago,” BBC News reports. New research has found a body mass index (BMI) of 27 is linked to the lowest rate of death – but someone with a BMI of 27 is currently classed as being overweight. BMI is a score calculated by dividing your weight in kilograms by the square of your height in metres. Currently, a BMI of 25 to 29.9 is classified as being overweight.

Researchers looked at 120,528 people from Copenhagen, recruited from 1976 to 2013, and separately compared those recruited during the 1970s, 1990s and 2000s. They were followed up until they died, emigrated, or the study finished. The BMI linked to the lowest risk of having died from any cause was 23.7 in the 1970s group, 24.6 in the 1990s group, and had further risen to 27 in the 2003-13 group.

It may be the case that the suggested upward shift in optimal BMI is the result of improvements in preventative treatments for weight-rela...
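For reference, the BMI arithmetic described above is simple: weight in kilograms divided by the square of height in metres. A quick sketch, using the standard adult cut-offs the article mentions (the example weight and height are made up for illustration):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(bmi_value: float) -> str:
    """Standard adult BMI categories (the cut-offs cited in the article)."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "healthy weight"
    if bmi_value < 30:
        return "overweight"
    return "obese"

# A BMI of 27 -- the study's lowest-mortality value in the 2003-13 group --
# falls squarely inside the current "overweight" band (25 to 29.9).
print(round(bmi(83, 1.75), 1))   # 27.1
print(classify(bmi(83, 1.75)))   # overweight
```

Note that the study’s earlier optimal values (23.7 and 24.6) both sit in the “healthy weight” band, which is exactly why the shift to 27 is newsworthy: the new optimum lands in a category currently labelled unhealthy.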