Why Doctors Reject Tools That Make Their Jobs Easier
I want to tell you about a brouhaha in my field over a “new” medical discipline three hundred years ago. Half my fellow doctors thought it weighed them down and wanted nothing to do with it. The other half celebrated it as a means for medicine to finally become modern, objective, and scientific. The discipline was thermometry, and its controversial tool the thermometer, a glass tube used to measure body temperature.
This all began in 1717, when Daniel Fahrenheit moved to Amsterdam and offered his newest temperature sensor to the Dutch physician Herman Boerhaave. Boerhaave tried it out and liked it. He proposed using measurements from this device to guide diagnosis and therapy.
Boerhaave’s innovation was not embraced. Doctors were all for detecting fevers to guide diagnosis and treatment, but their determination of whether a fever was present was qualitative. “There is, for example, that acrid, irritating quality of feverish heat,” the French physician Jean Charles Grimaud said, scorning the thermometer for reducing his observations to numbers. “These [numerical] differences are the least important in practice.”
Grimaud voiced the prevailing view of his time when he argued that the physician’s touch gathered information richer than any tool could, and for over a hundred years doctors were loath to use the glass tube. Researchers among them, however, persevered. They wanted to discover reproducible laws in medicine, and doctors’ verbal descriptions were not getting them there. Words were idiosyncratic; they varied from doctor to doctor, and even for the same doctor from day to day. Numbers never wavered.
In 1851, at the university hospital in Leipzig, Germany, Carl Reinhold Wunderlich began recording the temperatures of his patients. One hundred thousand cases and several million readings later, he published the landmark work “On the Temperature in Diseases: A Manual of Medical Thermometry.” His text established an average body temperature of 37 degrees Celsius, the variation from this mean that could be considered normal, and the cutoff of 38 degrees as a bona fide fever. Wunderlich’s data were compelling; he could predict the course of illness better when he defined fever by a number than when it was defined by feel alone. The qualitative status quo would have to change.
Using a thermometer had previously suggested incompetence in a doctor. By 1886, not using one did. “The information obtained by merely placing the hand on the body of the patient is inaccurate and unreliable,” remarked the American physician Austin Flint. “If it be desirable to count the pulse and not trust to the judgment to estimate the number of beats per minute, it is far more desirable to ascertain the animal heat by means of a heat measurer.”
Evidence that temperature signaled disease changed patient expectations too. After hearing the doctor’s examination and assessment, one patient in England asked, “Doctor, you didn’t try the little glass thing that goes in the mouth? Mrs Mc__ told me that you would put a little glass thing in her mouth and that would tell just where the disease was…”
Thermometry was part of a seismic nineteenth-century shift, along with blood tests, microscopy, and eventually the x-ray, toward what we now know as modern medicine. Where illnesses had been impressionistic, unnamed, and thus without systematized treatment or cure, modern medicine identified culprit bacteria, trialed antibiotics and other drugs, and targeted diseased organs or even specific parts of organs.
Imagine being a doctor at this watershed moment, trained in an old model and staring a new one in the face. Your patients ask for blood tests and measurements, not for you to feel their skin. Would you use all the new technology even if you didn’t understand it? Would you continue feeling skin, or let the old ways fall by the wayside? And would it trouble you, as the blood tests were drawn and temperatures taken by the nurse, that these tools didn’t need you to report their results? That if those results dictated future tests and prescriptions, doctors might as well be replaced completely?
The original thermometers were a foot long, available only in academic hospitals, and took twenty minutes to get a reading. How wonderful that they are now cheap and ubiquitous, and that pretty much anyone can use one. It's hard to imagine a medical technology whose diffusion has been more successful. Even so, the thermometer's takeover has hardly done away with our need for doctors. If we have a fever we want a doctor to tell us what to do about it, and if we don't have a fever but feel lousy we want a doctor anyway, to figure out what's wrong.
Still, the same debate about technology replacing doctors rages on. Today patients want not just the doctor’s opinion, but everything from their microbiome array and MRI to tests for their testosterone and B12 levels. Some doctors celebrate this millimeter and microliter resolution inside patients’ bodies. They proudly brandish their arsenal of tests and say technology has made medicine the best it’s ever been.
The other camp thinks Grimaud was on to something. They resent all these tests because the tests miss things that listening to and touching the patient would catch. They insist there is more to health and disease than what quantitative testing shows, and they try to limit the tests that are ordered. But even if a practiced touch detects things tools miss, it is hard to deny that tools also detect things we would miss, and don’t want to.
Modern CT scans, for example, detect appendicitis better than even the best surgeon’s palpation of a painful abdomen. As CT scans become cheaper, faster, and lower in radiation dose, they will become even more accurate. The same will happen with genome sequencing and other up-and-coming tests that detect what overwhelms our human senses. There is no hope of reining in their ascent, nor would it be right to try. Medicine is better off with them around.
What's keeping some doctors from celebrating this miraculous era of medicine is the nagging concern that we have nothing to do with its triumphs. We are told the machines’ autopilot outperforms us, so we sit quietly and grow weaker, yawning and complacent like a mangy tiger in captivity. We wish we could do as Grimaud described: “distinguishing in feverish heat qualities that may be perceived only by a highly practiced touch, and which elude whatever means physics may offer.”
A children’s hospital in Philadelphia tried just that. Children often have fevers, as anyone who has had children around them well knows. Usually they have a simple cold and there’s not much to fuss about. But in about one in a thousand cases, feverish kids have deadly infections and need antibiotics, ICU care, and all that modern medicine can muster.
An experienced doctor’s judgment picks out that one-in-a-thousand very sick child about three-quarters of the time. To catch the children being missed, hospitals started using quantitative algorithms built on their electronic health records to flag which fevers were dangerous based on hard facts alone. And indeed, the computers did better, catching the serious infections nine times out of ten, albeit with ten times the false alarms.
The Philadelphia hospital accepted the computer’s list of worrisome fevers, but then deployed its best doctors and nurses to apply Grimaud’s “highly practiced touch” and look the children over before declaring an infection deadly and bringing them into the hospital for intravenous medications. These teams were able to weed out the algorithm’s false alarms with high accuracy and, in addition, to find cases the computer missed, raising the detection rate of deadly infections from 86.2 percent with the algorithm alone to 99.4 percent with the algorithm and human perception combined.
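For the arithmetic-minded, here is a rough sketch of how those numbers play out across 100,000 fever visits. The one-in-a-thousand prevalence and the detection rates come from the story above; the false-alarm rates are placeholders I have assumed purely for illustration, since all we are told is the tenfold ratio between the algorithm and the clinicians.

```python
# Back-of-the-envelope screening arithmetic. Prevalence (1 in 1,000) and
# sensitivities (75%, 86.2%, 99.4%) come from the text; the false-alarm
# rates below are assumed placeholders, chosen only to respect the
# "ten times the false alarms" ratio described above.

N_FEVERS = 100_000        # febrile children screened
PREVALENCE = 1 / 1_000    # fraction with a deadly infection

strategies = {
    # name: (sensitivity, assumed false alarms per 1,000 well children)
    "doctor alone":        (0.750, 5),
    "algorithm alone":     (0.862, 50),  # ten times the doctors' false alarms
    "algorithm + doctors": (0.994, 5),   # humans weed the false alarms back out
}

truly_sick = N_FEVERS * PREVALENCE
well = N_FEVERS - truly_sick

for name, (sensitivity, fa_per_1000) in strategies.items():
    caught = truly_sick * sensitivity
    false_alarms = well * fa_per_1000 / 1_000
    print(f"{name:>22}: catches {caught:5.1f} of {truly_sick:.0f} sick children, "
          f"~{false_alarms:,.0f} false alarms")
```

Whatever the exact false-alarm figures, the sketch makes the trade concrete: the hybrid strategy keeps the algorithm’s reach while paying only the human-level false-alarm price, which is exactly the arrangement the Philadelphia teams engineered.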
Too many doctors have resigned themselves to having nothing to add in a world of advanced technology. They thoughtlessly order tests and thoughtlessly obey the results. When, inevitably, the tests give unsatisfying answers, they shrug their shoulders. I wish more of them knew about the Philadelphia pediatricians, whose close human attention caught mistakes a purely numerical, rules-driven system would miss.
It’s true that a doctor’s eyes and hands are slower, less precise, and more biased than modern machines and algorithms. But these technologies can count only what they have been programmed to count: human perception is not so constrained.
Our distractible, rebellious, infinitely curious eyes and hands decide moment by moment what deserves attention. While this leeway can lead us astray, with the best of training and judgment it can also lead us to the as-yet-undiscovered phenomena that no existing technology knows to look for. My profession, and other increasingly automated fields, would do better to focus on finding new answers than on fettering old algorithms.