The ‘Can We?’ and the ‘Should We?’ of Science
The remarkable journey of the COVID-19 virus — from bat caves in Laos to freezers in Wuhan, through test tubes and gene-editing machines and “humanized mice,” and finally out into the rest of the world and through your lungs and mine, often two or three or four times by now — has made a lot of people think long and hard about whether they can trust scientists, as a whole, to do the right thing. As well it should have.
Defenders of the status quo are quick to admonish the plebs for not trusting the experts, pointing out that most professional virologists favor continuing gain-of-function research. How can you, a pleb, act as though you know more about virology than the average virologist?
And yet the problem with trusting the experts isn’t that they don’t know enough about their chosen fields of expertise. Rather, it’s that most of them are biased in favor of their own institutions and, for a mixture of emotional and careerist reasons, they avoid thinking too hard about the chance that they are employing their prodigious knowledge in service of useless or even harmful ends.
The virologists in Wuhan really did know a lot more about virus evolution than the man on the street. Hence their ability to modify bat viruses to infect human beings. I lack the knowledge to do this, or even to describe in detail how it can be done, and in all likelihood, so do you. And it’s true that these scientists’ work did contribute, in a small and marginal way, to the body of scientific knowledge, by answering questions like “what modifications to a bat virus’s spike proteins will best enable it to get past the human immune system?”
But having knowledge isn’t the same thing as having wisdom. It’s one thing to ask if you can do something and another to ask if you should do it. It’s difficult for people who can answer the first question with a “yes,” and who naturally want their hard-earned expertise to be important and useful, to answer the second question with a “no.”
Nor is this a peculiarly Chinese thing. Obviously, China’s lax lab safety standards, plus the low value the CCP places on human life, made this kind of research easier to do in China than in the United States. But one must not forget that the American government had a hand in funding it. And if you want another reminder that our own country isn’t that innocent, just think about our contributions to psychopharmacology, a field that’s been chock-full of tawdry conflicts of interest since before gain-of-function research was a glimmer in Anthony Fauci’s eye.
If you want to see another dismal story of scientists placing their genuine knowledge and expertise in the service of absurd or harmful ends, just consider how America’s medical authorities got into the habit of putting about one in ten of their country’s children on mind-altering drugs to treat conditions — mainly ADHD — that weren’t considered diseases at all just a few decades ago.
This topic has been in the news recently due to the slow-rolling shortage of ADHD medications, mainly stimulant drugs like Ritalin, Adderall, and Vyvanse. Most of these stories are filled with the usual boilerplate expressions of sympathy for the millions of children and adults struggling to get the drugs they’ve become dependent on, plus speculation on who is to blame — the manufacturers? The DEA? One particular factory on Long Island?
One might think that, in a civilization that’s been drugging some 5 to 10 percent of its children for ADHD for about 30 years by now, scientists would have a good physical model of what causes this disorder. But they still don’t.
Consider, for instance, the chaotic scientific literature — of which this paper is a typical example — on the question of whether various brain regions are smaller or larger in children with ADHD, and whether these differences, if they exist at all, are natural or caused by medication. Hundreds of studies have been done on this topic, but even in the most recent (and, supposedly, largest and best) meta-analysis, only 44 percent of the studies even bothered to distinguish between children who had already been exposed to drugs and those who hadn’t.
(Other subfields of ADHD science are even worse — for instance, almost all of the studies on Google Scholar that look at the links between childhood ADHD and adult income or unemployment rates fail to distinguish between drugged and undrugged children, and some include only drugged children, yet still conclude with an exhortation that the negative life outcomes they just observed are yet more evidence that ADHD is a serious illness and that early treatment is necessary.)
Amid all this confusing literature on the physical effects of ADHD drugs, a few facts manage to stick out. One is that stimulant drugs suppress children’s growth. While some studies bury this conclusion by using a test group composed mainly of children who took the medication only briefly or irregularly, the best studies (which focus on children who were on heavy dosages for three years or more) show strong evidence of a permanent height reduction.
Another fact is that, while ADHD is often blamed on a chemical imbalance in the dopamine transporter protein in the brain, ADHD patients who have never been medicated have normal amounts of dopamine transporters. But, as shown here and here, patients who have been on drugs for a long time suffer a steady increase in transporter concentrations (meaning less free dopamine). Nor is this the only chemical anomaly at issue; this Dutch study from 2017 showed that childhood exposure to Ritalin leads to lasting deficiencies in the neurotransmitter GABA, a chemical associated with impulse control that the drug boosts in the short term.
Then add to this all the psychological harms of long-term drug dependency — the insomnia, the social isolation, the low motivation, the erratic moods, the narcissism and lack of self-awareness. Anyone with a drug-addled relative or close friend will have some idea of what it’s like, and while most of us don’t like to use the word “addict” to describe a ten- or twelve-year-old who meekly takes the speed pills that his doctor gave him, the biological processes behind drug dependency are the same, whether the drug came from a licensed or an unlicensed pharmacist.
So why do we keep doing it? Why do Americans (and, to a lesser extent, the citizens of every other wealthy country) keep getting their children started on lifelong drug dependencies in order to treat elementary-school behavior issues that, two generations ago, were not considered medical problems at all?
The answer? Because it works. Perhaps not in the long run, since most of the studies show that the academic benefits of putting a schoolchild on ADHD drugs are strong for the first year or two, drop off after that, and might become negative in the end. But the evidence that starting a child on medication leads to an instant drop in disruptive behavior, an improved ability to sit still, and a better focus on homework is so strong that hardly anyone bothers to argue with it. And millions of parents and teachers can tell you that their lives got much easier the moment their troublesome child was medicated.
But in the big picture, we’re looking at the same sort of ethics failure that gave us the COVID pandemic. In both cases, science has given people an effective way to satisfy their immediate wants. In China, it was up-and-coming virologists who wanted to pad their résumés, and possibly get tenure, by cataloguing the hundreds upon hundreds of bat viruses that are just a few mutations away from infecting humans.
And in the United States and Europe, it is parents and schoolteachers who want their children to write more neatly, and to sit still for eight hours a day without squirming too much or talking out of turn. (You can tell that this is mostly about policing classroom behavior by looking at the fact that children born in December, who are constantly being compared with classmates a little older than themselves, are 47 percent more likely to be medicated for ADHD than children born in January.)
Back in the Victorian era, it was common for parents of fussy toddlers to give them morphine for their teething pain. In the United States, the commonest brand was “Mrs. Winslow’s Soothing Syrup,” which at its peak sold more than 1.5 million bottles per year. As one might imagine, this remedy created more problems than it solved, and the “Soothing Syrup” finally disappeared at around the same time that cocaine and lithium were disappearing from our sodas.
One might be tempted to read this odd piece of history as a tale of science triumphing over superstition. But the truth isn’t that simple, because the key fact here is that Mrs. Winslow’s Soothing Syrup actually worked! The core scientific claim of the people who sold this drug was that giving a one- or two-year-old morphine will ease the pain of a new tooth coming in — and nobody disagrees with this! Our ancestors just decided that the syrup’s benefits weren’t as important as its harms.
The question, for our own time, is whether we will be as wise as our forebears, who knew that science and ethics are not quite the same thing, and who knew that the question of “Can we do it?” and the question of “Should we do it?” will often have two different answers.
An earlier generation’s scientists were wise enough to see that while they could use opiates to relieve the pain and suffering of infants, they probably shouldn’t. One can only hope that, sooner or later, the present generation will realize that much the same logic applies to using meth analogues to relieve hyperactive grade-schoolers, or manufacturing deadly viruses in a not-quite-sealed-off lab in order to add to mankind’s stock of publishable (but not exactly useful) scientific knowledge.
Twilight Patriot is the pen name for a young American who lives in Georgia, where he is currently working toward a graduate degree. You can read more of his writings at his Substack.