It’s also odd that increased cancer was seen only in male rats and not in female rats. Do we believe that females are protected from cellphone radiation?
Oddly, the male rats in the control group lived much shorter lives than expected. Do we believe that cellphone radiation exposure makes rats live longer? The rate of cancer in the exposed rats was actually in line with what you’d expect in rats in general. It’s the controls that had oddly low rates of cancer. Do we believe that the control rats were somehow cured of cancer?
The answer to all of these questions is, of course, no. But headlines blaring those results would be just as valid as those we saw last week.
The shortened life span in the control rats is a real problem, too. If it turns out that these cancers develop later in life, then their dying early could be responsible for all the significant results of the study.
Given the small number of rats in the study, the many comparisons done, and the low rates of cancer overall, we have to be concerned about the validity of the results. When you're designing research, you need a sample large enough that you don't get a negative result when there really is a difference (a false negative). And when you run many comparisons on small groups with rare outcomes, some positive results will turn up by chance alone even when no real difference exists (false positives). This study is likely to have a high false discovery rate: an increased risk that its positive findings are just such chance results.
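The multiple-comparisons problem can be made concrete with a short simulation. This is a minimal sketch, not a model of the actual study: the group size, the baseline cancer rate, and the naive "positive" threshold below are all illustrative assumptions. Even with no real effect at all, checking many endpoints produces a steady trickle of apparent positives.

```python
import random

random.seed(42)

def one_comparison(n_per_group=90, true_rate=0.02):
    """Simulate one endpoint where exposure truly has NO effect:
    exposed and control rats share the same underlying cancer rate.
    (Group size and rate are illustrative assumptions.)"""
    exposed = sum(random.random() < true_rate for _ in range(n_per_group))
    control = sum(random.random() < true_rate for _ in range(n_per_group))
    return exposed, control

# Check many endpoints (tumor types, sexes, dose levels), the way a large
# toxicology study effectively does, and naively call an endpoint "positive"
# whenever the exposed group has at least 3 more cases than the control group.
n_endpoints = 1000
false_positives = 0
for _ in range(n_endpoints):
    exposed, control = one_comparison()
    if exposed - control >= 3:
        false_positives += 1

print(f"'Positive' findings with no real effect: {false_positives} of {n_endpoints}")
```

None of those "positive" endpoints reflect anything real; they are the chance results that a high false discovery rate warns about.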
You can’t cherry-pick in science, as I’ve discussed before. If you want to own the positive result seen in males, you have to own the protection being female seems to confer. You have to own that the control rats were somehow magically without cancer. You have to own that cellphone exposure increases life span. Or you can admit that none of these results are especially convincing.
But too often, the news media latch onto one finding while ignoring others. Too often, that finding is the most frightening one. It's certainly the one most likely to get attention.
No study can be judged in isolation. In this case, taking the results from one arguably imperfect rat study while ignoring others makes no sense. When it comes to cellphones and cancer, a great deal of research already exists.
Most of that work is what we call case-control studies. To do a study like that, you’d find a group of people with brain tumors (cases) and a group of similar people without brain tumors (controls). Then, you’d get data on them (like “do you use a cellphone?”) to see if differences exist between them.
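A case-control comparison of this kind boils down to a two-by-two table and an odds ratio: the odds of exposure among the cases divided by the odds of exposure among the controls. A minimal sketch, using made-up counts purely for illustration:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio: odds of exposure among cases / odds of exposure among controls."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical counts (not from any real study): 200 people with brain
# tumors (cases) and 200 similar people without (controls), with cellphone
# use reported by 120 of the cases and 110 of the controls.
or_estimate = odds_ratio(120, 80, 110, 90)
print(f"Odds ratio: {or_estimate:.2f}")
```

An odds ratio near 1 means exposure was about as common in both groups; well above 1 suggests an association, which is exactly the kind of signal that recall bias can manufacture.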
Studies like this, though, are susceptible to what we call recall bias. This is where people who have had something bad happen to them are more likely to think hard and remember things, like using a cellphone, than people who have not. Better-designed studies, including cohort studies, have not shown a link between cellphones and cancer.
Many organizations, such as the American Cancer Society, the National Institute of Environmental Health Sciences, the Food and Drug Administration, the Centers for Disease Control and Prevention, the Federal Communications Commission, and the European Commission Scientific Committee on Emerging and Newly Identified Health Risks, have reviewed the collected research — there is a lot — and found insufficient evidence for a link.
No new study can be viewed in a vacuum. It must be added to what is already known. Given such a large body of studies, you can’t take one small rat study and say that it’s a “game changer.” It’s nearly impossible for any such study to overcome all that has come before.
This is especially true because of publication bias. It’s probably much more likely that you’ll get a new study published if you find a link between cellphones and cancer than if you don’t.
There’s a media bias, too. When published studies have negative results, you often don’t hear about them. When results are positive, especially if they are frightening, they’re talked about as if they’re definitive.
One more point. All research should be hypothesis-driven, and make sense in the real world. Cellphones are unbelievably common. More than 90 percent of people in the United States use one regularly. If they caused brain cancer in even a small percentage of users, we’d see an increase in its incidence. Since the late 1980s, however, the incidence of brain cancer in the United States has been decreasing.
Faced with such real-world evidence, anyone should be skeptical about arguments that one causes the other.
When a new study is published, it must be thoroughly vetted to see how robust its results are. It must be evaluated in light of all other research that exists. It must be considered alongside real-world data to ensure that it makes sense. That’s how we need to think — and report — about new research.