Health Unlimited
The debate over the current crisis in health care often seems to swirl like a dust storm, generating little but further obfuscation as it drearily goes around and around. And no wonder. Attempts to explain how we got into this mess—and it is a mess—seem invariably to begin in precisely the wrong place. Most experts have been focusing on the failures and deficiencies of modern medicine. The litany is familiar: greedy physicians, unnecessary procedures, expensive technologies, and so on. Each of these certainly adds its pennyweight to the scales. But even were we to make angels out of doctors and philanthropists out of insurance company executives, we would not stem the rise of health-care costs. That is because this increase, far from being a symptom of modern medicine’s failure, is a product of its success.
Good medicine keeps sick people alive. It increases the percentage of people in the population with illnesses. The fact that there are proportionally more people with arteriosclerotic heart disease, diabetes, essential hypertension, and other chronic—and expensive—diseases in the United States than there are in Iraq, Nigeria, or Colombia paradoxically signals the triumph of the American health-care system.
There is another and perhaps even more important way in which modern medicine keeps costs rising: by altering our very definition of sickness and vastly expanding the boundaries of what is considered the domain of health care. This process is not entirely new. Consider this example. As I am writing now, I am using reading glasses, prescribed on the basis of an ophthalmologist’s diagnosis of presbyopia, a loss of acuity in close-range vision. Before the invention of the glass lens, there was no such disease as presbyopia. It simply was expected that old people wouldn’t be able to read without difficulty, if indeed they could read at all. Declining eyesight, like diminished hearing, potency, and fertility, was regarded as an inevitable part of growing older. But once impairments are no longer perceived as inevitable, they become curable impediments to healthy functioning—illnesses in need of treatment.
To understand how the domain of health care has expanded, one must go back to the late 19th century, when modern medicine was born in the laboratories of Europe—mainly those of France and Germany. Through the genius of researchers such as Wilhelm Wundt, Rudolf Virchow, Robert Koch, and Louis Pasteur, a basic understanding of human physiology was established, the foundations of pathology were laid, and the first true understanding of the nature of disease—the germ theory—was developed. Researchers and physicians now had a much better understanding of what was going on in the human body, but there was still little they could do about it. As late as 1950, a distinguished physiologist could tell an incoming class of medical students that, until then, medical intervention had taken more lives than it had saved.
Even as this truth was being articulated, however, a second revolution in medicine was under way. It was only after breakthroughs in the late 1930s and during World War II that the age of therapeutic medicine began to emerge.
With the discovery of the sulfonamides, and then of penicillin and a series of major antibiotics, medicine finally became what the laity in its ignorance had always assumed it to be: a lifesaving enterprise. We in the medical profession became very effective at treating sick people and saving lives—so effective, in fact, that until the advent of AIDS (acquired immune deficiency syndrome), we arrogantly assumed that we had conquered infectious diseases.
The control of infection and the development of new anesthetics permitted extraordinary medical interventions that previously had been inconceivable. As a result, the traditional quantitative methods of evaluating alternative procedures became outmoded. "Survival days," for example, was traditionally the one central measurement by which various treatments for a cancer were weighed. If one treatment averaged 100 survival days and another averaged 50 survival days, then the first treatment was considered, if not twice as good, at least superior. But today, the new antibiotics permit surgical procedures so extravagant and extreme that the old standard no longer makes sense. An oncologist once made this point using an example that remains indelibly imprinted on my mind: 100 days of survival without a face, he observed, may not be superior to 50 days of survival with a face.
Introducing considerations of the nature or quality of survival adds a whole new dimension to the definitions of sickness and health. Increasingly, to be "healthy," one must not only be free of disease but enjoy a good "quality of life." Happiness, self-fulfillment, and enrichment have been added to the criteria for medical treatment. This has set the stage for a profound expansion of the concept of health and a changed perception of the ends of medicine.
I can illustrate how this process works by casting stones at my own glass house, psychiatry, even though it is not the most extreme example. The patients I deal with in my daily practice would not have been considered mentally ill in the 19th century. The concept of mental illness then described a clear and limited set of conditions. The leading causes of mental illness were tertiary syphilis and schizophrenia. Those who were mentally ill were confined to asylums. They were insane; they were different from you and me.
Let me offer a brief (and necessarily crude) history of psychiatry since then. At the turn of the century, psychiatry’s first true genius, Sigmund Freud, decided that craziness was not necessarily confined to those who are completely out of touch with reality, that a normal person, like himself or people he knew, could be partly crazy. These "normal" people had in their psyches isolated areas of irrationality, with symptoms that demonstrated the same "crazy" distortions that one saw in the insane. Freud invented a new category of mental diseases that we now call the "neuroses," thereby vastly increasing the population of the mentally ill. The neuroses were characterized by such symptoms as phobias, compulsions, anxiety attacks, and hysterical conversions.
In the 1930s, Wilhelm Reich went further. He decided that one does not even have to exhibit a neurosis to be mentally ill, that one can suffer from "character disorders." An individual could be totally without symptoms of any illness, yet the nature of his character might so limit his productivity or his pleasure in life that we might justifiably (or not) label him "neurotic."
Still later, in the 1940s and ’50s, medicine "discovered" the psychosomatic disorders. There are people who have no evidence of mental illness or impairment but have physical conditions with psychic roots, such as peptic ulcers, ulcerative colitis, migraine headache, and allergy. They, too, were now classifiable as mentally ill. By such imaginative expansions, we eventually managed to get some 60 to 70 percent of the population (as one study of the residents of Manhattan’s Upper East Side did) into the realm of the mentally ill.
But we still were short about 30 percent. The mental hygiene movement and preventive medicine solved that problem. When one takes a preventive approach, encompassing both the mentally ill and the potentially mentally ill, the universe expands to include the entire population.
Thus, by progressively expanding the definition of mental illness, we took in more and more of the populace. The same sort of growth has happened with health in general, as can be readily demonstrated in surgery, orthopedics, gynecology, and virtually all other fields of medicine. Until recently, for example, infertility was not considered a disease. It was a God-given condition.
With the advances in modern medicine—in vitro fertilization, artificial insemination, and surrogate mothering—a whole new array of cures was discovered for "illnesses" that had to be invented. And this, of course, meant new demands for dollars to be spent on health care.
One might question the necessity of some of these expenditures. Many knee operations, for instance, are performed so that the individual can continue to play golf or to ski, and many elbow operations are done for tennis buffs. Are these things for which anyone other than the amateur athlete himself should pay? If a person is free of pain except when playing tennis, should not the only insurable prescription be—much as the old joke has it—to stop playing tennis? How much "quality of life" is an American entitled to have?
New technologies also exert strong pressure to expand the domain of health.
Consider the seemingly rather undramatic development of the electronic fetal monitor. It used to be that when a pregnant woman in labor came to a hospital—if she came at all—she was "observed" by a nurse, who at frequent intervals checked the fetal heartbeat with a stethoscope. If it became more rapid, suggesting fetal distress, a Caesarean section was considered. But once the electronic fetal monitor came into common use in the 1970s, continuous monitoring by the device became standard. As a result, there was a huge increase in the number of Caesareans performed in major teaching hospitals across the country, to the point that 30 to 32 percent of the pregnant women in those hospitals were giving birth through surgery. It is ridiculous to suggest that one out of three pregnancies requires surgical intervention. Yet technology, or rather the seductiveness of technology, has caused that to happen.
Linked to the national enthusiasm for high technology is the archetypically American reluctance to acknowledge that there are limits, not just limits to health care but limits to anything. The American character is different. Why this is so was suggested some years ago by historian William Leuchtenburg in a lecture on the meaning of the frontier. To Europeans, he explained, the frontier meant limits. You sowed seed up to the border and then you had to stop; you cut timber up to the border and then you had to stop; you journeyed across your country to the border and then you had to stop. In America, the frontier had exactly the opposite connotation: it was where things began. If you ran out of timber, you went to the frontier, where there was more; if you ran out of land, again, you went to the frontier for more. Whatever it was that you ran out of, you would find more if you kept pushing forward. That is our historical experience, and it is a key to the American character. We simply refuse to accept limits. Why should the provision of health care be an exception?
To see that it isn’t, all one need do is consider Americans’ infatuation with such notions as "death with dignity," which translates into death without dying, and "growing old gracefully," which on close inspection turns out to mean living a long time without aging. The only "death with dignity" that most American men seem willing to accept is to die in one’s sleep at the age of 92 after winning three sets of tennis from one’s 40-year-old grandson in the afternoon and making passionate love to one’s wife twice in the evening. This does indeed sound like a wonderful way to go—but it may not be entirely realistic to think that that is what lies in store for most of us.
During the past 25 years, health-care costs in the United States have risen from six percent of the gross national product to about 14 percent. If spending continues on its current trajectory, it will bankrupt the country. To my knowledge, there is no way to alter that trajectory except by limiting access to health care and by limiting the incessant expansion of the concept of health. There is absolutely no evidence that the costs of health-care services can be brought under control through improved management techniques alone. So-called managed care saves money, for the most part, by offering less—by covert allocation. Expensive, unprofitable operations such as burn centers, neonatal intensive care units, and emergency rooms are curtailed or eliminated (with the comforting, if perhaps unrealistic, thought that municipal and university hospitals will make up the difference).
Rationing, when done, should not be hidden; nor should it be left to the discretion of a relative handful of health-care managers. It requires open discussion and wide participation. When that which we are rationing is life itself, the decisions as to how, what, and when must be made by a consensus of the public at large through its elected and other representatives, in open debate.
What factors ought to be considered in weighing claims on scarce and expensive services? An obvious one is age. This suggestion is often met with violent abuse and accusations of "age-ism," or worse. But age is a factor. Surely, most of us would agree that, all other things being equal, a 75-year-old man (never mind a 92-year-old man) has less claim on certain scarce resources, such as an organ transplant, than a 32-year-old mother or a 16-year-old boy. But, of course, other things often are not equal. Suppose the 75-year-old man is president of the United States and the 32-year-old mother is a drug addict, or the 16-year-old boy is a high school dropout. We need, in as dispassionate and disinterested a way as possible, to consider what other factors besides age should be taken into account. Should political position count? Character? General health? Marital status? Number of dependents?
Rationing is already being done through market mechanisms, with access to kidney or liver transplants and other scarce and expensive procedures determined by such factors as how much money one has or how close one lives to a major health-care center. Power and celebrity can also play a role—which explains why politicians and professional athletes suddenly turn up at the top of waiting lists for donated organs. A fairer system is needed.
The painful but necessary decisions involved in explicit rationing are, obviously, not just medical matters—and they must not be left to physicians or health-care managers. Nor should they be left to philosophers designated as "bioethicists," though these may be helpful. The population at large will have to reach a consensus, through the messy—but noble—devices of democratic government. This will require legislation, as well as litigation and case law.
In the late 1980s, the state of Oregon began to face up to the necessity of rationing. The state legislature decided to extend Medicaid coverage to more poor people but to pay for the change by curbing Medicaid costs by explicitly rationing benefits. (Eventually, rationing was to be extended to virtually all Oregonians, but that part of the plan later ran afoul of federal regulations.) After hundreds of public hearings, a priority list of services was drawn up to guide the allocation of funds. As a result, dozens of services became difficult (but not impossible) for the poor to obtain through Medicaid. These range from psychotherapy for sexual dysfunctions and severe conduct disorder to medical therapy for chronic bronchitis and splints for TMJ disorder, a painful jaw condition. Although the idea of explicit rationing created a furor at first, most Oregonians came to accept it. Most other Americans will have to do the same.
Our nation has a health-care crisis, and rationing is the only solution. There is no honorable way that we Americans can duck this responsibility. Despite our historical reluctance to accept limits, we must finally acknowledge that they exist, in health care, as in life itself.