Good and Bad Social Science Research: Medical Costs
-- Posted by Neil H. Buchanan
A new on-line legal periodical, whimsically titled Jotwell: The Journal of Things We Like (Lots), was launched last Fall. The idea behind the journal is to have law professors write short (500-1000 word) entries describing an important book, article, or work-in-progress in their field that they have read recently. The journal is, by all accounts, off to a very good start. The Tax Law section of Jotwell is edited by Allison Christians of Wisconsin and George Mundstock of Miami, and they were nice enough to invite me to be among the inaugural group of Contributing Editors.
My first entry, "Health Care Costs and Fiscal Infirmity," was posted last week. I discuss an article that appeared in June 2009 in The New Yorker, written by Harvard Medical School professor Atul Gawande: "The Cost Conundrum: What a Texas town can teach us about health care." I argue that tax policy is going to be driven in the next few decades by health care costs, and Gawande's article is an extremely good starting point to understand the issues involved in causing the U.S.'s disastrously high (and rising) health care costs.
I felt a bit awkward about reviewing an article from a non-academic journal, but I justified my decision in part by noting: "Although written for a non-academic audience, and although written by a medical doctor, Gawande’s article is in some ways among the best empirical social-science writing available." I pointed out that Gawande had found a "natural experiment," that is, a real-world comparison that can mimic the pure scientific method of laboratory experiments, with control groups and experimental groups. Specifically, Gawande identified two Texas cities, McAllen and El Paso, that were similar in every way that could be relevant to medical costs (age and ethnicity of the populations, diets, income levels, etc.) but that had wildly different medical care costs -- McAllen being twice as expensive as El Paso.
Looking at the data, Gawande then supplemented statistical analysis with old-fashioned interviewing: asking the relevant players in each city what they thought explained the enormous cost difference. He concluded that the difference between the two cities is that medical care providers in El Paso (the low-cost city) have not yet begun to respond to the economic incentives of the fee-for-service model, in which providers are reimbursed for "doing things to people," not for making them healthier.
Almost as a throwaway, I mentioned in my Jotwell piece Gawande's comment that McAllen is the second-highest-cost city in the country, and that Miami is the highest-cost city. Miami, however, "has much higher labor and living costs." Gawande concluded, correctly, that it would be ridiculous to compare Miami to El Paso, because the higher medical care costs in Miami might simply be a result of having to pay doctors more to cover the local cost of living.
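To see why such an adjustment matters, here is a minimal back-of-the-envelope sketch in Python. The numbers are invented purely for illustration; they are not figures from Gawande's article or from any actual cost data.

```python
# Invented numbers, purely to illustrate why raw cost comparisons across
# cities can mislead when local prices and wages differ.

el_paso_spending = 7_000   # hypothetical medical spending per enrollee
miami_spending = 10_500    # hypothetical spending per enrollee (50% higher)
miami_price_index = 1.40   # hypothetical local wage/cost-of-living index
                           # (El Paso normalized to 1.00)

# Deflate Miami's spending by its local price index before comparing.
miami_adjusted = miami_spending / miami_price_index

print(f"Raw ratio:      {miami_spending / el_paso_spending:.2f}")  # 1.50
print(f"Adjusted ratio: {miami_adjusted / el_paso_spending:.2f}")  # 1.07
```

Under these made-up numbers, most of the apparent 50 percent cost gap is simply the local price level, which is exactly why Gawande compared McAllen to El Paso rather than to Miami.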
As obvious as that observation might be, The New York Times published a truly amazing article last week showing that similar simple adjustments were not made in one of the most widely cited studies on health care costs in the country: The Dartmouth Atlas of Health Care. The article points out that the Atlas was extremely influential during the debate on the Democrats' health care bills in 2009 and 2010, and it was used repeatedly by the Obama administration to claim that there is a simple and straightforward way to lower Medicare costs. The Atlas maps hospital costs in different regions of the country, seeming to show that hospitals in some parts of the country are more efficient while other areas are the source of Medicare's cost problems. Members of Congress asked the Administration simply to cut reimbursements to the high-cost hospitals down to the levels at which the low-cost hospitals operate, on the theory that every hospital should be able to provide care as inexpensively as the least expensive hospitals already do.
This is an appealing prospect, because it purports to cut only "waste," not beneficial care. If a hospital in Mississippi, for example, costs 40% more than a hospital in Minnesota, then the Mississippi hospital should send its management team to learn the cost-cutting secrets of the Land of 10,000 Lakes. No danger of cutting necessary care. No death panels. What could be better?
The Atlas, however, has a major shortcoming: "Measures of the quality of care are not part of the formula." In other words, the Atlas merely shows who has low costs, for whatever reason. Its authors not only fail to control for patients' health; they do not even take local costs of living into account: "Neither patients’ health nor differences in prices are fully considered by the Dartmouth Atlas." The Times reporters offer the following interesting hypothetical: "[T]he atlas’s hospital rankings do not take into account care that prolongs or improves lives. If one hospital spends a lot on five patients and manages to keep four of them alive, while another spends less on each but all five die, the hospital that saved patients could rank lower because Dartmouth compares only costs before death."
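The reporters' hypothetical is easy to make concrete. The following sketch (again with invented numbers, chosen only to mirror their example) shows how a ranking that looks only at spending on patients who die penalizes the hospital with better outcomes:

```python
# Invented numbers mirroring the Times reporters' hypothetical:
# Hospital A spends more per patient but keeps four of five alive;
# Hospital B spends less per patient but loses all five.

hospitals = {
    "A": {"spend_per_patient": 100_000, "patients": 5, "deaths": 1},
    "B": {"spend_per_patient": 60_000, "patients": 5, "deaths": 5},
}

for name, h in hospitals.items():
    # A cost-only measure of the kind the article describes: average
    # spending on patients who die, with no credit for lives saved.
    cost_per_decedent = h["spend_per_patient"]  # spending assumed uniform
    lives_saved = h["patients"] - h["deaths"]
    print(f"Hospital {name}: ${cost_per_decedent:,} per decedent, "
          f"{lives_saved} of {h['patients']} patients survived")

# Hospital A looks "expensive" ($100,000 per decedent vs. $60,000),
# even though it saved four lives and Hospital B saved none.
```

Any metric that credited survival, even something as crude as spending per life saved, would reverse that ranking; the point is simply that a cost-only measure cannot tell waste apart from care that works.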
The authors of the Atlas are, unfortunately, responding more like advocates than like scientists. In response to hospital administrators' anger at being downgraded even if they have better outcomes, the article notes that "making these administrators uncomfortable is the point of the rankings," quoting one of the principal authors as saying: "When you name names, people start paying more attention. We never asserted and never claimed that we judged the quality of care at a hospital — only the cost." The article then points to examples showing that, in fact, the authors have made precisely such claims. (The author's response: He wasn't as careful as he should have been.)
Most jaw-dropping, I think, is their claim that "even if they adjusted more fully to reflect differences in regional costs and patients’ health, the overall effect on the atlas’s findings would be relatively small." If they know that to be true, then they must be able to demonstrate it. If they can, why don't they?
The article includes some other gems that I will leave readers to enjoy on their own. I will simply conclude by making the obvious point that good social science is evidence-based, and that the evidence must be gathered, distilled, and described carefully. Atul Gawande has, as far as I know, no training as a social scientist, while the Dartmouth study is co-authored by an economist. Yet Gawande's conclusions are much more reliable.