dras knowledge

Monday, February 09, 2004

Studies always conclude that there needs to be more study

Pilot and phase I validation studies often exist to provide the statistical data with which later studies can be designed. They're usually not designed to provide a high level of confidence on their own.
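
To make that concrete, here is a minimal sketch (my own illustration, not taken from any particular study) of how a pilot study's effect-size estimate typically feeds the sample-size calculation for a follow-up trial, assuming a two-arm comparison of means with a standardized effect size (Cohen's d):

    import math
    from scipy.stats import norm

    def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
        """Approximate n per arm for a two-sample comparison of means.

        effect_size is the standardized difference (Cohen's d) estimated
        from pilot data; alpha is the two-sided significance level and
        power is the desired probability of detecting that effect.
        """
        z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
        z_beta = norm.ppf(power)            # quantile for the desired power
        return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    # A pilot suggesting a moderate effect (d ~ 0.5) implies roughly
    # 63 subjects per arm for 80% power at alpha = 0.05 (a t-test
    # correction would push this slightly higher).
    print(sample_size_per_arm(0.5))

The point is simply that the pilot's job is to supply the effect_size input; the confidence comes from the larger trial it helps design.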
One thing I notice about clinical studies (and not just alt-med studies) is that the premise for the study often gets lost in the authors' conclusions, or in the news article headlines. There may be a publisher's bias to report the positive, or to dig out something interesting or unanticipated in the outcome data to report. There may also be an unintended preselection bias, where the authors speculate about a treatment effect for any often less-than-meaningful, yet statistically significant, difference between treatment arms, regardless of the stated and/or controlled outcome criteria. This is similar to drawing the bullseye after the shots are fired. Reporting or developing an abstract in this manner isn't necessarily bad, and the studies aren't necessarily unethical or without value. At the very least they can provide a platform for further, more meaningful research. From these alone, however, you can't draw conclusions about any treatment, and they usually only muddy a meta-analysis.
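
As a rough illustration of the bullseye problem (my own sketch, not drawn from any particular study), simulating trials with no true treatment effect but many unplanned outcome measures shows how often at least one "significant" difference turns up by chance alone:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_trials = 2000        # simulated studies, all with zero true effect
    n_outcomes = 10        # unplanned outcome measures examined per study
    n_per_arm = 30

    false_positive_studies = 0
    for _ in range(n_trials):
        # Both arms drawn from the same distribution: no real effect exists.
        control = rng.normal(size=(n_outcomes, n_per_arm))
        treated = rng.normal(size=(n_outcomes, n_per_arm))
        pvals = [ttest_ind(control[i], treated[i]).pvalue for i in range(n_outcomes)]
        if min(pvals) < 0.05:
            false_positive_studies += 1

    # With 10 outcomes each tested at p < 0.05, roughly 40% of these
    # null studies show at least one "significant" difference to report.
    print(false_positive_studies / n_trials)

If the reported finding wasn't among the prespecified outcomes, this is the base rate against which it should be judged.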

Overall, I agree that there is much data and observation reported in clinical studies from which no meaningful conclusions are drawn. Based on the abstract, this study reports that all the people treated had good outcomes, and that more study is needed to analyze just how and why treated people have such good outcomes. The design sounds a little odd if the emphasis was on understanding the singular impact of acupuncture. The full article may shed more light on my summary of the conclusion.
