Oh, to be a scientist 50 or 60 years ago, warning people about the stuff they really needed to know: Stop smoking! Don't take thalidomide if you're pregnant! For goodness' sake, ditch the Corvair, unless you want to get impaled on your gearshift!
Your findings would make headlines, and the people reading about them would end up safer and healthier. Score.
But today, the safest time in human history, a time in which Americans are living six years longer than even in 1990, you can't turn on your media device without hearing about another new thing you supposedly must stop doing/eating/touching/breathing immediately — or else. Coffee! Lack of coffee! Plastic bottles, cellphones, GMOs! Non-organic cantaloupe! My gosh, the Environmental Working Group can't stop warning us about lipstick: "Millions of women get a little bit of toxic lead on their lips each day with every swipe of their lipstick," reads a press release.
OK, that sounds scary, but are they dropping dead?! See earlier paragraph: We are living longer than ever today, and that doesn't seem to be because women have stopped wearing lipstick.
It is precisely that disconnect that drove Geoffrey Kabat to write his new book, "Getting Risk Right: Understanding the Science of Elusive Health Risks." Kabat is a cancer epidemiologist at the Albert Einstein College of Medicine in the Bronx. He has published 140 scientific papers on the factors that play a role in causing cancer and other diseases. And he is sick of watching the rest of us wake up and get warned that if we do x (it's always changing), we will regret it till the day we die.
Which will be next Thursday.
"You need to make distinctions," the doctor told me in a phone interview. There's a difference between large-scale, long-term, replicated studies and the fly-by-night "breakthroughs" that the media love to report on.
We civilians tend to think of researchers sitting in their labs conducting experiments with only humanity's welfare in mind. But the truth is scientists also have to make a living. That means "they may feel the need to overstate the importance of their work in order to attract attention and obtain funding," Kabat says. And increasingly, they are publishing results that cannot be replicated because the studies they did were either too small to really measure a phenomenon or simply shoddy.
What's more, there is a herd mentality in science, as in any field. So if some research area becomes a hot topic, many scientists will pile on, in part because that's where the money is and in part because if your findings go against the grain, you will be on the outs. Remember all the research in the '90s showing that a low-fat diet is good for you?
But if you substitute sugar for fat, as many food companies proceeded to do, it's not good for you at all. "The large-scale and dramatic change — sometimes referred to as the 'SnackWell phenomenon' — has been credited with making a substantial contribution to increasing rates of obesity," says Kabat. That was thanks to the pile-on phenomenon.
Another problem plaguing those of us trying to make sense of science is the fact that with ever more sensitive instruments, scientists can measure ever smaller stuff. So when, for instance, we hear that there are trace amounts of a toxin in our blood, we tend to think, "Yikes!" We don't think, "I wonder how important one drop in a trillion is." (Answer: It's not. It's like one drop of poison in 20 Olympic-size pools.)
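For the curious, that comparison holds up as back-of-the-envelope arithmetic. Here is a minimal sketch of the calculation, assuming a "drop" of roughly 0.05 milliliters and a standard 2,500-cubic-meter Olympic pool (neither figure is given in the column):

```python
# Back-of-the-envelope check: is one part per trillion roughly
# "one drop of poison in 20 Olympic-size pools"?
# Assumed values (not from the column): a drop is ~0.05 mL,
# and an Olympic pool holds ~2,500 cubic meters (50 m x 25 m x 2 m).

DROP_ML = 0.05                      # volume of one drop, in milliliters
POOL_M3 = 50 * 25 * 2               # Olympic pool volume, in cubic meters
POOL_ML = POOL_M3 * 1_000_000       # 1 cubic meter = 1,000,000 mL

# Total volume needed to dilute one drop down to one part per trillion:
total_ml = DROP_ML * 1e12           # 5e10 mL
pools_needed = total_ml / POOL_ML   # comes out to 20.0

print(f"One part per trillion is about one drop in {pools_needed:.0f} Olympic pools")
```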
So how can we know which headlines to trust and which to ignore? Here is Kabat's rule of thumb, and a wise one: "The more dramatic the result, the less likely it is to be true."