dras knowledge

Wednesday, August 20, 2003

The Governator's old friends could influence public policy?

"Franco Columbu, a former Mr. Olympia, training partner, confidant, and best friend of Arnold's during his bodybuilding heyday, is also a chiropractor. It's probably much tougher to keep up any sense of objectivity under these circumstances, I would think."

Who Ahnold has in his circle of friends may be relevant to his capability to be governor. Concerns may well be unfounded, yet if any agenda-toting special interest group (e.g., chiropractors) has a special "in" (i.e., an old friend) at the governor's mansion, don't you think they'd use it? (BTW, I voted party lines despite Sen. Hatch's list of personal friends and despite what he's apparently done for "supplement" peddlers. A candidate's socio-economic agenda should carry more weight than whether or not the person is pro-chiropractic, or even things like pro-choice or not.)

Tuesday, August 19, 2003

Dr. Ted Kaptchuk and "Interpretive Bias"

Dr. Ted Kaptchuk is a rather outspoken professor at Harvard. By his own admission, he studies the placebo effect. His ideas, studies, and editorials often challenge common scientific thought, at least the kind one would typically associate with medical school learning. At the time of this writing, I was deeply involved in doing technology assessments and evaluating medical clinical trials. Below, I first paste Dr. K's summary remarks from his editorial in the British Medical Journal, then follow them with my own sincere observations about his conclusions. I didn't think much of the wisdom of my examinations until I received the following:
Dale,
May I forward your comments to Wally Sampson, who is working up an article
about this for his Scientific Review of Alternative Medicine? I think he
would enjoy reading them - we all seem to agree about what is going on here.
Thanks,
Harriet
To which I replied:

I'm flattered. I don't know what benefit any comment of mine would have, but anyone is welcome to them. I must say, I don't claim to walk the walk, or talk the talk, of a Harvard professor, so I hope anything I'd have to say about Prof. Kaptchuk's article would remain in context. I had wondered whether Prof. Kaptchuk's presentation was intended to undermine an understanding of the scientific process, or simply to stimulate analytical thought. I'm disappointed it could be the former. At least he isn't in PEOPLE magazine writing "How the medical establishment fools itself, and fools you" - yet.


BMJ 2003;326:1453-1455 (28 June)

http://bmj.com/cgi/content/full/326/7404/1453

Summary points

Evidence does not speak for itself and must be interpreted for quality and likelihood of error

Interpretation is never completely independent of a scientist's beliefs, preconceptions, or theoretical commitments

On the cutting edge of science, scientific interpretation can lead to sound judgment or interpretative biases; the distinction can often be made only in retrospect

Common interpretative biases include confirmation bias, rescue bias, auxiliary hypothesis bias, mechanism bias, "time will tell" bias, and orientation bias

The interpretative process is a necessary aspect of science and represents an ignored subjective and human component of rigorous medical inquiry

Definitions of interpretation biases

Confirmation bias—evaluating evidence that supports one's preconceptions differently from evidence that challenges these convictions

Rescue bias—discounting data by finding selective faults in the experiment

Auxiliary hypothesis bias—introducing ad hoc modifications to imply that an unanticipated finding would have been otherwise had the experimental conditions been different

Mechanism bias—being less skeptical when underlying science furnishes credibility for the data

"Time will tell" bias—the phenomenon that different scientists need different amounts of confirmatory evidence

Orientation bias—the possibility that the hypothesis itself introduces prejudices and errors and becomes a determinant of experimental outcomes

Comments

This article is written from the perspective of philosophy of science. From a statistical point of view, the arguments presented are obviously compatible with a subjectivist or Bayesian framework that formally incorporates previous beliefs in calculations of probability. But even if we accept that probabilities measure objective frequencies of events, the arguments still apply. After all, the overall experiment still has to be assessed.

I have argued that research data must necessarily undergo a tacit quality control system of scientific scepticism and judgment that is prone to bias. Nonetheless, I do not mean to reduce science to a naive relativism or argue that all claims to knowledge are to be judged equally valid because of potential subjectivity in science. Recognition of an interpretative process does not contradict the fact that the pressure of additional unambiguous evidence acts as a self regulating mechanism that eventually corrects systematic error. Ultimately, brute data are coercive. However, a view that science is totally objective is mythical, and ignores the human element of medical inquiry. Awareness of subjectivity will make assessment of evidence more honest, rational, and reasonable.[21]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------

My response:

Regarding Kaptchuk's recent BMJ article on interpretive bias in research.

George Carlin asked: "Why is it that, regardless of the speed we are going on the freeway, anyone going slower than us is a moron … and anyone passing us is an *-hole?"

Reading through Kaptchuk's article several weeks ago, I disagreed with using "interpretive biases" as a label for some of the essential tasks of analyzing data, even though those tasks are often subjectively influenced by our personal background, wisdom, and common sense. These are not biases when they are things we must do.

"Confirmation bias"? - If my preconception that bacteria do not commonly induce stomach ulcers is based on my knowledge that there has been no evidence for it, I must approach the first supportive report differently than, say, a report that stress instigates stomach ulcers.

"Rescue bias"? - My perceived degree of selective bias in a report will accordingly lower my perceived degree of credibility in its conclusions; I cannot ignore study flaws.

"Auxiliary hypothesis bias"? - I am obligated to introduce "ad hoc" modifications to the experiment if my background, knowledge, and understanding of the technology lead me to question the study protocols; this is called peer review.

"Mechanism bias"? - Of course I would not be as skeptical about a study supporting a common theory as I would about one supporting an uncommon theory.

"'Time will tell' bias"? - It is my right to require a different degree of "proof" than the next person; hopefully, I am using more wisdom than vanity or passion when I set that degree.

Finally, an "orientation bias" is an unfortunate pitfall (and tool) for researchers, but even when it happens and "biased" data are collected, it doesn't mean the theory or the data are wrong. Conversely, just because all the data point to one theory doesn't mean the data are biased.

I do not disagree with the author's summary points, or even his comments. We should understand the subjective influences we carry into analysis. Using wisdom and understanding the influences behind our decisions and opinions is practicing what we preach on this list. Subjective interpretation is essential in the progression of any art, including medicine.

But the author does provide terminology with plausible-sounding reasons why I should believe nothing "mainstream medicine" is telling me about vaccinations, aspartame, amalgam, etc., etc., etc.