When Should I Be Skeptical?
The third of three blog posts addressing the question, “How can I find reliable information about autism and autism research?”
Posted by Ryan Butler, assistant director of operations for Autism Speaks Autism Genetic Resource Exchange.
In my last two blog posts, I discussed finding reliable information on autism research and extracting meaningful information from it. In this last installment of my “guide to autism research,” I’d like to discuss when it’s wise to be skeptical of research claims.
Ultimately, it is up to you to decide whether a scientific claim “rings true.” Trust your instincts, gather more information and make the best informed decision you can. Here are some red flags indicating that a claim warrants an extra-critical look.
1. The researchers pitch their findings directly to the media. The integrity of science rests on the willingness of scientists to expose their ideas and findings to the scrutiny of other experts in their field. This is the peer-review process that I described in my first blog. Occasionally, scientists bypass peer review and take new results directly to the media or another public forum. The skeptic in me questions whether such work will stand up to close examination by knowledgeable scientists.
I recall one classic example. In 1989, two chemists claimed they had discovered cold fusion — a cheap way to produce unlimited nuclear energy. Instead of submitting their claim to a scientific journal, the chemists held a news conference for the media. Their announcement dealt largely with the economic potential of the discovery. It lacked the details that would have enabled other scientists to judge the strength of the claim. Not surprisingly, the cold-fusion “discovery” turned out to be false.
2. The researchers claim that their findings are being hidden or suppressed. The peer-review process is not perfect. Still, when a research paper is rejected for publication, there are usually good reasons. Most often, these include unreliable methods or findings that fall short of statistical significance. I advise skepticism when researchers claim that their research was rejected because “mainstream science” is part of a larger conspiracy (e.g. with industry or government). I’m not saying that such research should be dismissed, just that it deserves an extra level of caution. In the case of cold fusion, the researchers making the claim blamed its rejection on mainstream physicists protecting their own hot-fusion research.
3. The researchers cite only anecdotes or personal experiences. Anecdotes can be compelling. We all know stories of someone who cured himself or another person with a new treatment. Unfortunately, such claims seldom stand up to rigorous research, such as a randomized, double-blind trial. Randomization ensures that the treatment and comparison groups are alike, and double-blinding ensures that neither researchers nor subjects know who’s getting the real treatment and who’s getting the comparison treatment or placebo. We need such gold-standard methods to guard against coincidence and wishful thinking.
4. The researchers worked in isolation. Scientific “breakthroughs” usually result from the combined work of many scientists over time. When lone scientists make unprecedented claims, be skeptical. Again, this doesn’t mean the research is wrong. It just warrants extra scrutiny.
5. Results have not been reproduced by others. The results of any one study can reflect errors, false-positive results, placebo effects and other limitations. This is why the scientific method requires “reproducible results.” In other words, other scientists should be able to repeat the experiment to see if they, too, get similar results. Often, research begins with a study enrolling a relatively small number of patients to gather preliminary data. If the results look promising, the study is repeated with a larger group. So beware of recommendations based on a single study.
6. Beware of conflicts of interest. Scientists are human and, like all of us, vulnerable to conflicts of interest. This is particularly important when researchers are evaluating treatments they developed themselves. Also look at a scientist’s funding sources. For good reason, scientific journals require researchers to list their funding sources in their reports. Was the researcher funded by the company that developed the treatment or product?
Importantly, there’s nothing wrong with a researcher testing her own treatment. Certainly pharmaceutical companies and others must have their products tested for safety and effectiveness. This is just another circumstance where extra scrutiny is warranted. Look for additional studies confirming the results.
Along these lines, beware of researchers who publish dire warnings of danger from a particular product or regimen. Look for potential biases, including but not limited to financial interests in competing treatments or products.
7. Beware of simplistic conclusions drawn from a complex study. Even a well-designed study can be interpreted in overly simplistic or otherwise misleading ways. For example, researchers conducting a study of complex gene interactions may be tempted to conclude that gene “X” causes “Y” when there are other factors to consider. It can be particularly hard for the non-expert to spot such flaws. Scientific articles should include a thorough discussion of the strengths and weaknesses of the study.
Got more questions? Email them to us at GotQuestions@autismspeaks.org.