The discussions of alternative therapies in the June Phactum display some confusion about the nature of "evidence." Ken Barnes thinks that one piece of anecdotal evidence should be taken seriously. This may be true when you are dealing with a single person. For example, if I take a certain drug to relieve my allergy symptoms, and it does relieve them without killing me or putting me to sleep, then I can say that the drug works for me. But does it work for the general population? I cannot say from one bit of evidence, because people are variable: what works for one person may not work for another.
As my former wife, the epidemiologist, used to say, "Statistics do not apply to individuals. Statistics apply to populations." Conversely, a single bit of evidence applies to one individual, not to a population. For this reason, we need numerous bits of evidence before we have "evidence" that a given drug is suitable for the entire population. Even then, most drugs and other treatments approved by the medical profession do not work on everybody. In the Physicians' Desk Reference, a large volume listing the properties of all approved drugs, each drug comes with a chart showing its effectiveness. Prozac, a widely popular antidepressant, is listed as being "effective" or "highly effective" for only 50% of the population. Still, 50% is better than nothing. When I was about to get an epidural injection to relieve sciatic pain, the doctor warned me that the treatment worked well on one third of the population, worked somewhat on another third, and did not work at all on the final third. I belonged to the final third. Later, when I went to a surgeon for spinal surgery, he told me that his operation had a 90% success rate. That's the kind of statistics I like.
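The point about a single anecdote can be made quantitative. As a sketch (the method and function names below are my own illustration, not anything from the letter): if one patient takes a drug and improves, the exact binomial confidence interval for the drug's true response rate spans nearly the entire range from 2.5% to 100% — one success in one trial rules out almost nothing.

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial
    proportion, found by bisection on the binomial tail probabilities."""
    def bisect(f, increasing):
        lo, hi = 0.0, 1.0
        for _ in range(60):  # 60 halvings: far below float precision
            mid = (lo + hi) / 2.0
            if (f(mid) < alpha / 2.0) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0
    # Lower bound: P(X >= x; p) rises with p; zero successes pin it at 0.
    lower = 0.0 if x == 0 else bisect(lambda p: 1 - binom_cdf(x - 1, n, p), True)
    # Upper bound: P(X <= x; p) falls with p; all successes pin it at 1.
    upper = 1.0 if x == n else bisect(lambda p: binom_cdf(x, n, p), False)
    return lower, upper

# One patient, one apparent success: the true response rate could be
# anything from 2.5% to 100%.
print(clopper_pearson(1, 1))
```

With many patients the interval tightens (50 successes out of 100 pins the rate near 50%), which is exactly why population statistics, not anecdotes, are what count.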
Conclusion: If alternative treatments can produce statistics of this nature, then I will have no difficulty taking them seriously.
There is also some misunderstanding about double-blind experiments. The double-blind technique was developed by members of the Vienna Medical Society in 1844, over 150 years ago. The need for it arose because the results of clinical trials were being skewed by the manner in which the doctors conducting the trials presented the drugs being tested. The doctors giving the real drug tended to be optimistic because they were trying to prove that this drug was good; when they gave the placebo, on the other hand, they were not as optimistic. As a result, the patients getting the drug from the optimistic, cheerful doctor tended to say "Yes, I feel better," not because of the drug but because of the power of suggestion, which should never be ignored. For that reason, it was decided that the doctors conducting the tests should not know which patients were getting the drug under test and which were getting the placebo. For further details about this and other errors in research, see my book The Science Gap, chapter 11.
Moral: Those interested in alternative cures are likely to be enthusiasts who really want their tests to give positive results. Beware of a scientist who falls in love with his/her theories.