Mismanaged Care
Two years ago, the United States was caught up in a furious national debate over the future of its health-care system. That debate is over, with nothing substantial accomplished, and most Americans probably believe that its passing spelled the end of any significant change in the health-care system in the immediate future. Today, however, that system is changing right before our eyes. Only now there is little debate, and the driving forces are said to be beyond anybody’s control.
The signs of change are everywhere. Economy-minded employers are switching to lower-cost "managed-care" plans, and employees are being told to choose new doctors or forgo insurance reimbursement. More than two-thirds of all insured Americans now belong to health maintenance organizations, preferred provider organizations, or other managed-care plans. People who do not work for big corporations or other large employers, even healthy people, are finding it more and more difficult to obtain insurance. Those who fall seriously ill or leave their jobs are having trouble maintaining their insurance coverage. Patients are being discharged from hospitals quicker—and maybe sicker. Some new mothers now are sent home 24 hours after routine deliveries.
Physicians are also feeling the effects. Under the regime of managed care, they are being told by insurers to reduce their fees and adapt their practices to new guidelines, or else lose their patients. Many newly graduated specialists, carrying debts the size of home mortgages, cannot find permanent jobs because managed care has sharply limited referrals to expensive specialists. Tasks formerly performed only by doctors—such as simple surgery and routine anesthesia—are being turned over to less costly "physician extenders"—physicians’ assistants, nurse practitioners, and technicians. Yet the _Wall Street Journal_ notes that new health-care conglomerates are making more money than they can profitably invest.
Hospitals are being merged, sold, or closed. Last year, 664 U.S. hospitals (more than 10 percent of the total) were involved in mergers or acquisitions. Many nonprofit hospitals are being taken over by for-profit companies, and some hospitals are being shut down. In the last two years, Philadelphia alone has lost six hospitals and a medical school. Proud old teaching hospitals have been told by managed-care companies to bring their charges down to competitive levels or suffer the consequences. Two bastions of the American medical establishment, Harvard’s Brigham and Women’s Hospital and the Johns Hopkins Medical Institutions, are even advertising for patients.
Some of the seeds of today’s transformation were sown by the very success of American medicine during the past half-century. The rise of third-party health insurance and the triumph of modern technology, combined with the traditional fee-for-service structure of American medicine, are driving today’s historic changes.
American medicine has always been highly decentralized, rooted in close personal bonds between doctors and their patients. The doctor-patient relationship was considered essential to accurate diagnosis and a key to effective therapy, boosting the patient toward recovery—or helping him to accept failure. Even specialists operating out of hospitals tried to develop personal relationships with their patients. Each doctor was—and remains today—legally and morally responsible to the patient for the consequences of each decision he or she makes for that patient, and good doctors take that responsibility seriously.
The historical focus on the doctor-patient relationship had important economic consequences. With competition among physicians for business held in check by the American Medical Association, great economic power rested in the hands of the individual doctors. They alone decided whether, and where, a person should be hospitalized (albeit with the patient’s consent) and which expensive tests or treatments should be undertaken. Doctors, like most repairmen, generally charged separately for each service they performed and for each visit, a custom called fee-for-service billing. A major thrust of the managed-care revolution is to change that practice.
Traditionally, doctors have also claimed the right to set their own prices for their services. This practice has the potential for abuse, but it has also allowed physicians to charge wealthier patients more so that they might also offer services to the poor. Such cross-subsidies, not only for care of the poor but for research and education, are a characteristic feature of American medicine. From the days of the earliest colonial dispensaries and 19th-century charity wards right up to the present, moreover, there has been an abiding link between medical education and care of the poor. Young doctors have learned their profession by taking care, at virtually no charge, of those who could not afford a doctor on their own.
For all the apparent continuity in American medicine, many familiar features of the system are of quite recent origin. Not until the end of the last century, for example, did professionalized medical care become an important factor in the lives of ordinary people—often the difference between death and total recovery. Before then, medical science simply did not have much to offer most people. Only in the last 50 years have Americans ranked medicine as a necessity of life, along with food, clothing, and shelter, and a "right" to which everyone is entitled.
Health insurance is likewise of relatively recent vintage. Blue Cross (for hospital bills) was created in the 1930s, after hospital care became too costly for middle-class families to afford out of pocket, and Blue Shield (for doctors’ bills) was launched in the early ’40s. These were nonprofit plans created by the medical profession and the business community. Large commercial insurers, such as Prudential and Aetna, entered the market in force after World War II, and labor unions were instrumental in winning employer-subsidized health insurance as a benefit for many people. Today, more than 1,200 firms sell health insurance in the United States.
It was not entirely coincidental that this period also saw the rise of the wealthy doctor. Before World War II, physicians were respected members of the communities they served, but they were not usually rich. Only with America’s postwar prosperity did the practice of medicine become a reliable opportunity to do well by doing good. Today, the average physician enjoys an income of about $150,000, and some specialists, such as radiologists and certain surgeons, routinely earn in excess of $200,000.
The final postwar building block was the involvement of the federal government. For 200 years, the only real public contribution to medicine in the United States was the construction of municipal hospitals for the poor, state hospitals for the insane, and the provision of care to the military in war. Significant federal support for medical research and education dates only from the 1950s; federally sponsored health insurance for the elderly and the poor, with Medicare and Medicaid, respectively, began in 1965. Today, the federal government pays about 45 percent of the nation’s $1 trillion annual health-care bill.
Federally sponsored research and education have had a profound impact on the system. Federal dollars helped to build the downtown temples of medicine and to produce the specialists, researchers, and teachers who make American medicine in many ways the envy of the world. During the 1960s and ’70s, the boom years of American medicine, 40 new medical schools opened their doors; medical specialists now outnumber generalists nearly three to one. The National Institutes of Health, the primary overseer of the government’s research effort, was consolidated in 1930; its budget has grown from $200,000 in that year to just over $12 billion today. In 1971, President Richard Nixon declared war on cancer, calling it "the most significant action taken during my administration." Congress appropriated about $230 million for the effort that year. In 1995, despite new fiscal constraints, it gave the National Cancer Institute about $2.1 billion.
The results of the nation’s heavy investment in research and training came in a rush: widespread use of ventilators, the development of the intensive care unit and the computerized axial tomography (CAT) scanner, the introduction of cardiac bypass surgery, all in the 1970s; fiber-optic devices and magnetic resonance imagers (MRIs) in the 1980s, which made possible diagnoses that heretofore had required invasive surgery, along with recombinant DNA pharmaceuticals, and materials and techniques for total joint replacement; and finally, in the 1990s, laparoscopic surgery, which permits surgeons to perform major procedures such as gall bladder removal and chest lymph-node biopsy through a few inch-long slits, thus allowing the patient to go home the same day.
These new technologies are marvelous, but there is a catch: they are all very expensive.
By the late 1970s, policymakers were beginning to realize that Medicare, the crown jewel of the Great Society, might be turning into a budgetary disaster. Medicare spending started at $64 million in 1966, grew to $32 billion in 1980, reached $160 billion in 1994, and is still climbing.
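How fast is "still climbing"? A back-of-the-envelope compound annual growth rate, worked out here from the figures just cited, puts a number on the trajectory:

$$
\left(\frac{\$160\ \text{billion}}{\$32\ \text{billion}}\right)^{1/14} - 1 \;=\; 5^{1/14} - 1 \;\approx\; 0.122
$$

That is, roughly 12 percent per year from 1980 to 1994, consistent with the roughly 10 percent annual growth in overall medical costs noted below.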
Throughout the 1980s, medical costs grew faster than inflation, rising at annual rates of about 10 percent. The rate of growth has since subsided somewhat, but health-care cost increases still outpace increases for other items in the consumer’s market basket. By 1994, the United States was spending 14 percent of its gross domestic product (GDP) on health care, the highest percentage of any country in the world and more than double the share in 1960. (Next on the list of big spenders was Canada, at 10.2 percent of GDP. By comparison, in 1993 France and Germany spent 9.8 percent and 8.6 percent, respectively.)
As much as we spend, we still do not take care of everyone. Nearly 40 million Americans now lack health insurance. Some of these people choose to forgo insurance, and some get medical care at public facilities. Yet the existence of this big uninsured population is one of the most important reasons why, even though it spends a larger share of its national wealth on health care than any other nation in the world, the United States does not necessarily enjoy the best health in the world. America’s life expectancy and infant mortality rates, for example, are only in the middling range among Western industrialized nations.
Why does it cost so much to cover so few? The answer lies in the peculiar interaction between modern medicine and the marketplace.
As anyone who has ever been ill knows, obtaining health care is not like buying a car or some other product. Ordinarily, a consumer shopping for an expensive item actively searches out the merchant who will give him what he wants for the lowest price. The dealer will charge the highest price he can get without driving his customer to another store. By such transactions does the invisible hand of the free market produce efficiency: the most desired type and quantity of goods and services at the lowest cost.
Not so in medicine. When a doctor orders tests or treatments for a patient with insurance, that patient has no reason to try to shop for a lower price, even if he has the time and information to do so. This can be quite striking in practice: a patient who would cross town to take advantage of double coupons at the grocery store, or haggle for weeks over the price of a car, will enthusiastically accede to an expensive test without ever asking "How much will that be?" (or the related question, "Is it really necessary?"). Incentives to the providers, however, are unchanged: they want to sell as much as possible at the highest prices they can command. The insurance company, now the only one with an incentive to hold the line on costs, is not even a party to the initial transaction. It doesn’t find out about it until the bill arrives. These elements together are a prescription for soaring costs.
The asymmetry between buyer and seller, patient and provider, does not mean the end of competition. On the contrary, providers—doctors, laboratories, hospitals, and others—continue to compete fiercely for consumers’ business. But they often compete on the basis of quality rather than price: convenient facilities, attentive staff, good outcomes, whatever they think will attract their target market.
It is important to remember that not everything about this situation is bad. The knowledge that they would be rewarded for superior new technology, even if it was more expensive, doubtless encouraged manufacturers to push ahead with the development of CAT scanners and MRI machines, which are indispensable tools in modern medicine. The flip side, though, is that medical "arms races" developed in many cities, as hospital executives concluded they must have the latest equipment to attract doctors and patients. (At one time, it was said that there were more MRI machines in Boston than in all of Canada.)
The traditional structure of health insurance, modeled on commercial insurance, also helped push medicine toward high-cost, inpatient procedures. In general, insurers design policies to cover only unexpected, expensive losses. Routine, predictable costs—be they ordinary wear and tear on cars or routine outpatient visits for people—generally are not covered. That gives both patient and provider an incentive to shift treatment into one of the covered—and more costly—areas.
With strong pressures driving costs up and nothing pushing them down, the medical system now fondly remembered by so many doctors and patients was inherently unstable. There inevitably would come a time when those footing the bill—employers, insurers, and taxpayers—would tolerate it no more. That time arrived during the 1980s.
The federal government, paying open-ended "reasonable and customary" fees under Medicare (the pricing system organized medicine demanded in return for supporting the creation of Medicare in 1965), responded to the steadily rising costs of health care with price controls, first on hospitals, then on doctors, for Medicare reimbursements. As physicians increased the volume of their services to make up for the lost income, the government added a downward adjustment based on volume. And so it went, with escalating effort and ingenuity each round.
Many insurers responded to rising costs with their traditional weapon: they tightened their "underwriting," the practice of identifying and classifying risks and setting appropriate premiums. Since something like 10 percent of the population is responsible for 80 percent of medical costs in any given year, it behooves a prudent insurer to identify the sickly individuals and avoid them like the plague. This is called "cherry picking" by some policymakers. Tighter underwriting is the reason individuals are having more difficulty obtaining health insurance, especially at attractive "group" or "community" rates, and why insurers refuse to cover "pre-existing conditions."
Finally, under mounting financial pressure, private employers, together with their insurers, devised an innovative solution—"managed care." Much of the thinking was done by insurance company officials and corporate executives who met periodically in the late 1980s in Jackson Hole, Wyoming, under the tutelage of physician Paul Ellwood and Stanford University economist Alain Enthoven. The corporate managers took advantage of the power they understood best: market power.
Recall that in the "classical" medical transaction, the third-party payer is passive: the doctor decides what is best for the patient, the patient agrees, and the insurer gets the bill. Some insurers and employers realized that because they insured many patients, they had enormous power in what was in fact a highly competitive provider market, with too many hospital beds (particularly if patients were hospitalized only for conditions requiring hospitalization) and too many doctors (especially as research funds dwindled). Increasingly, insurers and employers demanded steep discounts for services rendered to the individuals they covered, secure in the knowledge that if a particular doctor or hospital refused, others would be happy to step in. Patients were told by insurers to see doctors only on an approved list.
Doctors complained, correctly, that this new insurance technique would destroy the doctor-patient relationship. Many were bitterly disappointed when patients they had served faithfully for years went off to the new, discount doctors with barely a whimper or a look back. Yet for the average—which is to say, healthy—patient, such a change is not necessarily a big deal. It is the chronically ill patient who suffers.
Insurers did not stop with discounts. They began to suspect that some doctors were ordering more tests and doing more procedures than were really "necessary" in order to make up for money trimmed elsewhere. Certainly it was difficult to explain why, for example, orthopedic surgeons replaced almost twice as many knees in Boston as in New Haven in 1982 despite the two cities’ having similar populations. Perhaps the New Haven doctors were doing too few knee replacements, but, considering differences in the compensation system, it seemed more likely to analysts that the Bostonians were doing too many. So in the late 1980s some insurers moved closer to truly "managing" care: they began to examine what care was ordered, not just how much it cost.
Their new initiative took a variety of forms—a requirement for second opinions, "preclearance" from the company for elective hospital admissions, and "utilization review," an after-the-fact check to make sure the service was medically indicated. Predictably—and appropriately—these techniques evoked howls of protest from the medical profession. Doctors complained they were being second-guessed by nurses or even clerks who knew little about medicine, were using secret protocols, and had never seen the patient. Doctors also complained that they were required to spend too much time on paperwork.
There was worse to come. Managed-care companies are increasingly finding that the best means of controlling costs lies with the doctor himself. In the most highly developed form of managed care, instead of paying a doctor for each visit or task ("fee for service"), the company pays him a flat fee per patient per month. If the patient stays healthy and needs nothing, the fee is all profit for the doctor; if the patient falls ill, the doctor must provide whatever care the patient needs, even if it costs more than the monthly fee. Under such a system, doctors become, in effect, insurers; they are at financial risk. This arrangement is called "capitation," and it is the hallmark of the emerging system of managed care.
Capitation reverses the incentives of fee-for-service medicine. Under the old system, the more a physician did, the more money he made. In the new regime, the less he does, the better off he is. Often the principle is extended to expensive services the doctor controls but does not necessarily perform himself. For example, the company may withhold certain sums from a physician’s compensation for referrals or hospitalizations in excess of an expected number. The company doesn’t inspect these cases individually. After all, he is the doctor. And if he makes an error under this cost-cutting pressure, only he is responsible.
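The arithmetic behind this reversal is easy to see. The sketch below uses purely hypothetical fees and costs (none of the dollar figures come from this article) to contrast a physician's revenue under the two payment schemes:

```python
# Hypothetical illustration of fee-for-service vs. capitated payment.
# All dollar amounts are invented for the example.

def fee_for_service_revenue(visits, fee_per_visit=60.0):
    """Under fee-for-service, revenue rises with every service billed."""
    return visits * fee_per_visit

def capitation_net(monthly_fee, annual_cost_of_care):
    """Under capitation, the doctor keeps a flat fee per patient per month,
    minus whatever that patient's care costs the practice to deliver."""
    return 12 * monthly_fee - annual_cost_of_care

# A healthy patient (2 visits a year) vs. a chronically ill one
# (15 visits, plus workups the practice must absorb).
print(fee_for_service_revenue(2), fee_for_service_revenue(15))    # 120.0 900.0
print(capitation_net(15.0, 100.0), capitation_net(15.0, 1400.0))  # 80.0 -1220.0
```

Under fee-for-service the sick patient is worth more to the practice; under capitation the same patient is a loss. That sign flip is the whole story of the changed incentives.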
For insurers and employers, capitation is the Holy Grail. By definition, it limits their costs. There is no need to second-guess experts in the field. They don’t have to risk alienating patients by denying claims. Their paperwork is simplified. More important, they can offer the kind of truly comprehensive coverage long sought by consumers; it is now in the doctor’s interest as well as the insurer’s to manage the patient with the least-expensive effective therapy. The doctor now has a stronger incentive, for example, to closely monitor chronic conditions such as asthma and diabetes in order to prevent costly hospitalizations or complications. The insurers can legitimately argue that they are shifting the emphasis in health care from curing disease to preventing it.
For doctors, however, capitation is a pact with the devil. The only way to survive financially under such a system is to sign up a large number of healthy patients and try to avoid the sick, which directly contradicts their training. There are also strong incentives to abandon solo practice for a group practice: a few severely ill people at the wrong time can spell disaster for the solo doctor—and perhaps for his patients too, as strains on his time and finances begin to affect the quality of their care.
Most troubling of all, however, is the effect of capitation on physicians’ medical decisions. Many medical calls are quite straightforward. A frail 80-year-old woman with diabetes, living alone in her own home, is hospitalized so that she can be given intravenous antibiotics for pneumococcal pneumonia; it would take a brave doctor to try to manage her as an outpatient. A 50-year-old male smoker with crushing substernal chest pain and certain electrocardiogram changes goes straight to the emergency room for clot-busting drugs if he can get there in less than six hours. (Even this case is not entirely straightforward: does the man get streptokinase at $300 per dose, or TPA, a slightly better drug for certain heart attacks, at $2,400?)
But what about the 45-year-old woman with chest pain and more subtle EKG abnormalities? The EKG is consistent with heart disease but also with other conditions. Do you send her home? Order an exercise stress test (about $1,200, and many false positives)? Refer her to a cardiologist (knowing referrals count against you)? Treat her with medication empirically "just in case," although every drug has side effects? Every doctor in practice knows that serious heart disease is not common among women in this category, but there are some exceptions. Is your patient one of those?
Of course, doctors have been making such decisions for a long time. However, managed care introduces a new element: the doctor’s own financial interest. It is sometimes said that under the old fee-for-service system, doctors also had a financial interest—in doing more: more tests, more procedures, more visits. But there is a significant difference. Doing more rarely means doing harm. Under managed care, doctors protect themselves by denying care that might help their patient (but also might not).
Some analysts say the solution is disclosure. The doctor says, "Yes, Mrs. Smith, you have locally invasive breast cancer, and I think a bone-marrow transplant might help you. But your insurance doesn’t cover it." The doctor has fulfilled his professional responsibility and is off the hook. The patient sues the insurance company to have her treatment paid for. That’s why many managed-care companies now include a "gag" clause in their contracts with physicians, threatening discharge for just such disclosures, or even the disclosure that a gag clause exists.
Capitation is more fiendish still. If the physician decides not to recommend the bone-marrow transplant because recovery is unlikely and the insurer will drop him if he goes ahead, the last thing he is going to do is tell the patient. Nor will a doctor tell a heart patient who has occasional chest pain but can still get around that he is not recommending bypass surgery (at a cost of $25,000) because the insurer penalizes doctors who recommend it: the research literature shows that surgery for the patient’s single-vessel disease improves the quality of life but not its length.
Ethically, of course, the decision about surgery should be the patient’s to make, but when recommended surgery is free to the patient, virtually everyone will choose it, and costs will soar. Between 1990 and ’93, for example, U.S. physicians performed four times as much bypass surgery on heart attack victims as their Canadian counterparts did, with only modest differences in ultimate outcomes.
Managed care is changing the entire health-care delivery system in the United States—who provides care, who receives it, and what care is given. The stated goal of managed care is efficiency. Its method is to bring to medicine, the last cottage industry in the United States, the techniques of mass production. It works on volume. It assumes that there are economies of scale to be achieved. It incorporates the latest information technology. It seeks to standardize care. This allows an employer to use less skilled (and lower paid) personnel. The cardiologist can tell the internists how to treat the heart attack victim; the internist can tell the nurse practitioners how to take care of diabetics. "Cookbook medicine," say the doctors. "Improved quality control," respond the managers.
To managed-care advocates, however, the crowning achievement of their system occurs at the next level up: the reintroduction of the market. If all managed care accomplished were a transfer of profits from physicians to managers, what would be gained? The savings to society only accrue when different managed-care companies compete with one another for customers. As competition drives down the price each company asks, the total spent on health care must necessarily decline.*
Managed care promises to reshape health care in America. It could very well alter the traditional doctor-patient relationship beyond recognition. More important, it provides an unsettling answer to the question of who should be making the important therapeutic decisions: the doctor, the patient, or the managed-care company.
The changes wrought by managed care will reverberate throughout the health-care system, touching important institutions that consumers rarely think about. Medical schools are already feeling the effects. While academics are vigilantly protecting their right to take on as many subspecialty fellows—doctors seeking advanced training in cardiology, orthopedic surgery, and the like—as they want, young physicians are voting with their feet. Applications for specialty residencies are already falling. No one in his or her mid-thirties is going to spend three or four years working 80 hours a week at a salary of $35,000 to get trained out of a job. At some point, senior faculty are going to have to put aside some of their research and other pursuits to take up the slack.
Nonetheless, it is heartening that, despite clear suggestions that doctors in the future will have less independence and lower incomes than physicians today, applications to medical schools reached an all-time high in 1994. There were 45,000 applicants, almost double the 1986 number, for about 16,000 slots. Maybe it is just the prospect of a secure job in an insecure time that explains this increase, but perhaps now that medicine’s material rewards are being scaled back, the field is attracting fewer people who are interested in the money and more whose chief goal is to help others feel better. The organized profession, in the meantime, is trying to improve its position vis-à-vis insurers by reducing the oversupply of physicians. It is cutting residencies, reducing medical-school class sizes, and trying to close doors to foreign medical graduates.
Medical research is also likely to be affected by the onslaught of managed care. Overall, there may well be less money going into research, particularly since insurers are intent on eliminating the higher fees that universities and specialists charge for ordinary care in order to subsidize research. The focus of research may also change, from seeking better medications or techniques that cost more to identifying those that cost less (or can be used effectively by workers with less training).
Hospitals are already changing. Community hospitals, unable to meet expenses in the new environment, are selling out to investor-owned chains. In return for financial support, the new owners may radically alter a hospital’s mission—closing an unprofitable emergency room, converting it from acute to convalescent care, or restricting uncompensated care to the minimum required by law. Big cities such as New York and Washington, D.C., are overhauling the aging municipal hospitals that have traditionally served the poor, laying off bureaucrats and medical staff alike. Nor are proud university hospitals exempt from the new managed-care regime. They also must transform themselves, reducing research and teaching in favor of patient care and shifting from cutting-edge, high-tech specialty care to inexpensive primary care.
Despite all of managed care’s pitfalls, Republicans and Democrats in Washington, who have reached near-total gridlock in other areas, seem to agree that it is the solution to the nation’s health-care problem—even though they disagree about what that problem is. Embarking on his health-care reform initiative in 1993, President Clinton said that the principal problem was access. The percentage of the population lacking medical insurance was on the rise, having increased from 12.5 percent in 1980 to about 15 percent in 1993. The only way to save enough money to pay the bill for covering these people, Clinton concluded, was to encourage everyone to choose managed care, in a system of managed competition. The administration attempted to overcome all the shortcomings of managed care with detailed government regulation, spelling out its vision in a 1,364-page plan. There is no need to remind you of the plan’s fate.
The Republicans took another route. In 1994, they warned that Medicare, the giant federal health-care program for the elderly, would be "bankrupt" by 2002. Their solution? Introduce managed care. Give seniors vouchers for private health insurance and allow private companies to compete for their business on the basis of price and, in theory, quality. No regulations were necessary. Health care for seniors would be back in the private sector where it belonged. Consumers would have more choices (of plans if not of providers), and by paying attention to the price of insurance, they would drive down the total cost of their health care to something the nation could afford. (Savings of $270 billion over seven years were promised.) And tempting prices would lead most of them to sign up for managed care. This bill, however, was the victim of a presidential veto during the budget battle of 1995.
Some conservatives, including House Speaker Newt Gingrich (R.-Ga.), were particularly taken with a variation on the voucher theme known as medical savings accounts (MSAs). Under this scenario, seniors use a portion of a government voucher worth perhaps $5,000 to buy "catastrophic" health insurance—coverage for medical expenses in excess of, say, $3,000. The remainder of the voucher goes into a savings account to cover checkups, medications, and other routine medical expenses. Any money that goes unspent ultimately winds up in the insured individual’s pocket.
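The mechanics of the split are simple enough to sketch. In the illustration below, the voucher and deductible figures come from the scenario above, while the $2,000 premium for the catastrophic policy is an assumed number, chosen only to make the arithmetic concrete:

```python
# Sketch of a medical savings account (MSA), using the $5,000 voucher and
# $3,000 deductible from the scenario above; the $2,000 premium is assumed.

VOUCHER = 5_000      # government voucher
PREMIUM = 2_000      # assumed price of the catastrophic policy
DEDUCTIBLE = 3_000   # expenses below this level are the patient's to pay

def msa_year(medical_spending):
    """Return (paid_from_account, kept_by_patient, paid_by_insurer)."""
    account = VOUCHER - PREMIUM                       # $3,000 into savings
    from_account = min(medical_spending, DEDUCTIBLE)  # routine costs first
    insurer_pays = max(medical_spending - DEDUCTIBLE, 0)
    kept = max(account - from_account, 0)
    return from_account, kept, insurer_pays

print(msa_year(400))    # (400, 2600, 0): a frugal year; patient pockets $2,600
print(msa_year(9_000))  # (3000, 0, 6000): serious illness; insurer takes over
```

The frugal patient keeps most of the account, which is precisely the intended incentive; the seriously ill patient blows through the deductible, at which point the old third-party-payer dynamic resumes, the very weakness discussed next.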
In theory, this encourages the prudent patient to shop carefully for doctors, drugs, and tests, and not to overuse routine services or go to the doctor too often. In other words, it is supposed to restore price competition to the market for health-care services and thus drive down costs. (This is one reason why Gingrich and others favor making MSAs of some kind more available not only to Medicare beneficiaries but to the population as a whole.) In practice, these accounts give patients an incentive to skimp on important preventive care. But MSAs have other significant drawbacks. At bottom, the difficulty is that they would return us to a model that doesn’t work anymore, the old fee-for-service system with a third-party payer. Any medical problem serious enough to require hospitalization or significant medical tests will put a patient over the deductible. If that happens, an insurance company will again be doling out checks to physicians, hospitals, and other providers. This is precisely the arrangement that paved the way for managed care in the first place.
Between 1988 and ’95, the proportion of workers and their families covered by managed care jumped from 29 to 70 percent. Some analysts predict that by 2000, this number will reach 90 percent.
One way or another, managed care will be incorporated into Medicaid and Medicare—already, about 10 percent of seniors nationwide have opted for managed-care programs.
Does managed care work? Is it providing more efficacious health care at lower cost? Is it at least providing the same health care for less money?
On quality, the jury will be out for a long time. Advocates of managed care say they have positive indications, but even they admit that these gauges—immunization and mammography rates and member satisfaction surveys—are crude measures. On cost, there are a few more straws in the wind. In California, where managed-care providers now dominate the market (covering 95 percent of the insured population in southern California alone), average insurance premiums fell for the first time in 1992. Nationwide, annual increases in medical costs have moderated in the last year or two. Some analysts attribute part of the improvement to the increased penetration of managed care. Those who have probed deeper into managed care’s impact ascribe the savings primarily to two factors: a decline in hospitalization (especially length of stay) and capitation of physicians. The savings from shorter hospital stays, they fear, are one-time reductions. And the success of capitation returns us to the all-important and still-unanswered question of what is happening to the quality of care Americans receive.
Whether or not managed care will lead us to medical utopia, do we have any choice? For reasons we are all too familiar with, it is apparent that we can no longer afford the present system, certainly not Medicaid and Medicare. Doubtless, fee-for-service medicine will survive as a niche market for the well-to-do and the health obsessed. Must managed care be the destiny of everybody else?
In virtually every other developed country, it is not. These countries have gone a different way. As Joseph White, a Brookings Institution analyst, points out in _Competing Solutions_ (1995), the United States is revolutionizing its health-care delivery systems in order to maintain its private financing structure. To one degree or another, Canada, Germany, France, England, Australia, and Japan have done the opposite: they have changed their finance systems and left their care-provider structures largely in place.
Each of those countries has enacted some form of national health insurance that is universal, mandatory, and comprehensive. The degree of individual choice in selecting doctors and treatments depends primarily on the historical practices in each country. Germans, for example, are able to select their own outpatient doctors, but, following the national tradition, generally get whoever is on call at the time if they need hospital care. In Canada, again following established practices, the family doctor remains the patient’s primary physician in and out of the hospital. In most countries financing is public, but health care provision remains in the private sector. Only in England are doctors and other medical personnel employees of the government.
However, in each single-payer country, the national government is directly or indirectly involved. Generally, it controls costs by negotiating overall "global" budgets with large groups of providers. The providers then allocate the money among themselves as they see fit, but total spending cannot exceed the budgeted amount. One way or another, the government also controls large capital expenses, such as hospital construction and major equipment purchases.
The single-payer approach does rein in costs, without any detectable increase in illness or mortality. At the same time, it extends at least some health care to everyone and avoids expenses caused by adverse selection, cost shifting, and multiple bureaucracies. It has already achieved some of the more desirable goals of managed care, such as a higher ratio of family doctors to specialists.
Of course, these systems are not perfect. Canada, whose experience is most relevant to our own, is also having difficulty keeping costs down. Ironically, the Canadians are now considering some managed-care techniques, including capitation. And the technique that might do the most to control expenses, requiring copayments (small fees paid by the patient for each service), seems to have been rejected as too politically unpopular. Still, Americans have much to learn from Canada and other countries.
The problems of American medicine, indeed of all Western medicine, are a direct result of its triumph. Our technology and understanding allow us to go to unprecedented lengths in pursuit of "health," and most patients expect the system to go to those lengths for them. Yet increasingly, we do not want to pay for the system that makes such benefits possible. Taxpayers do not want to pay more for the care of the elderly and the poor; employers and employees balk at paying higher insurance premiums.
The cost of health care must be trimmed, and that means that someone must decide who gets less than "everything." Traditionally in this country, the market has performed this rationing function, efficiently and invisibly, transaction by transaction. But in medicine this system is now failing us, and whatever their particular virtues, piecemeal reforms such as those proposed in the Kennedy-Kassebaum bill will not solve the fundamental problem. (The proposed law guarantees workers who leave their jobs the opportunity to retain some insurance, limits insurers’ ability to deny coverage for pre-existing conditions, and may make MSAs more attractive.) Leaders across the political spectrum, from Hillary Clinton to Newt Gingrich (despite his flirtation with MSAs), are opting instead for managed care. The consequences of this fateful decision are now beginning to be felt, and doctors in particular are waiting, some anxiously, some confidently, for patients to revolt. But it is not enough to criticize managed care. Those who fear its failings must be prepared to offer something better.
*One of the reasons President Clinton’s failed Health Security Act of 1994 grew to such gargantuan proportions was that its architects tried to remedy some of the shortcomings of managed care. To prevent monopolies from emerging (in, say, a town that can support only one hospital), the plan provided for "managed competition." To help consumers evaluate the quality, as well as the price, of competing health plans and to prevent companies from soliciting only healthy customers, it called for more government oversight.