The Value of Medicaid: The Evolution of a Socially Undesirable Finding
In 2015, Amy Finkelstein, Nathaniel Hendren, and Erzo Luttmer released an NBER working paper called “The Value of Medicaid: Interpreting Results from the Oregon Health Insurance Experiment.” The paper’s results were a slap in the face of Social Desirability Bias — and the authors boldly advertised them right in the abstract:
Our baseline estimates of Medicaid's welfare benefit to recipients per dollar of government spending range from about $0.2 to $0.4, depending on the framework, with at least two-fifths – and as much as four-fifths – of the value of Medicaid coming from a transfer component, as opposed to its ability to move resources across states of the world.
Translation: Medicaid drastically fails a cost-benefit test by giving extremely expensive health care to recipients who barely appreciate it. Using standard economic logic, the poor themselves would be far better off if the government cut the Medicaid budget in half, then handed current recipients the remaining money.
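To see the arithmetic behind that translation, here is a minimal back-of-the-envelope sketch. The $5,000 per-recipient cost is a made-up round number; only the $0.2-$0.4 valuation range comes from the quoted abstract:

```python
# Back-of-the-envelope version of the 2015 abstract's claim.
# The per-recipient cost is an invented round number; only the
# $0.2-$0.4 valuation range comes from the quoted abstract.

gov_cost_per_recipient = 5_000   # hypothetical annual Medicaid cost ($)

for value_per_dollar in (0.2, 0.4):
    welfare_to_recipient = value_per_dollar * gov_cost_per_recipient
    # The thought experiment: cut the budget in half and hand recipients
    # the remaining money as cash, which they value dollar-for-dollar.
    cash_alternative = 0.5 * gov_cost_per_recipient
    print(f"at ${value_per_dollar}/dollar: Medicaid worth "
          f"${welfare_to_recipient:,.0f}, half-budget cash worth "
          f"${cash_alternative:,.0f}")
```

On these assumptions, the cash alternative beats the in-kind program at both ends of the 2015 range.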
By the time the paper was finally published in the Journal of Political Economy in 2019, however, its findings had shifted drastically in a socially desirable direction. The updated abstract:
Across different approaches, we estimate low-income uninsured adults’ willingness to pay for Medicaid between $0.5 and $1.2 per dollar of the resource cost of providing Medicaid; estimates of the expected transfer Medicaid provides to recipients are relatively stable across approaches, but estimates of its additional value from risk protection are more variable.
Notice: The new low estimate of the value of Medicaid exceeds the old high estimate. And while the midpoint of the estimates still says Medicaid is worth less than it costs, the high estimate labels it a resounding cost-benefit success.
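Checking those three claims is a few lines of arithmetic; the ranges below are copied from the two abstracts:

```python
# Valuation ranges (dollars of value per dollar of government cost),
# quoted from the 2015 and 2019 abstracts above.
range_2015 = (0.2, 0.4)
range_2019 = (0.5, 1.2)

print(f"new low exceeds old high: {range_2019[0] > range_2015[1]}")    # True
print(f"midpoint of 2019 range:   {sum(range_2019) / 2}")              # 0.85 < 1
print(f"high end passes cost-benefit: {range_2019[1] >= 1.0}")         # True
```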
What happened? The final version mentions the original paper only once, stating:
In the working-paper version (Finkelstein, Hendren, and Luttmer 2015), we also implemented the health-based optimization approach and found that we lacked the statistical power to credibly estimate heterogeneity in the return to medical spending, dh/dm, and hence the pure-insurance component (I; see eq. [20]).
So what happened? Googling revealed ideological objections to the conclusion, but no discussion of why the answer changed. So I turned to ChatGPT, asking:
Why did the 2019 version of Finkelstein et al. reach much more favorable estimates of the value of Medicaid than the original 2015 working paper?
GPT’s answer is below. To me, this very much sounds like the authors were desperately hunting for a way to remain pro-Medicaid. The cost of the program is now nearing $1 trillion annually; can we seriously believe that the typical recipient wouldn’t gladly have half the cash instead?
But please share your own assessment in the comments…
Ah, good question — you’re talking about Amy Finkelstein and co-authors’ work on the Oregon Health Insurance Experiment (OHIE), which has gone through several iterations: the original NBER working paper (~2015) and later the peer-reviewed published versions (~2019 in QJE and related outlets).
Here’s what happened and why the 2019 estimates looked much more favorable toward Medicaid’s value than the preliminary 2015 ones:
1. Change in Valuation Framework
2015 NBER working paper (with Nathaniel Hendren & Erzo F.P. Luttmer):
They introduced a willingness-to-pay (WTP) framework for valuing Medicaid, comparing the consumption-equivalent welfare gain from receiving Medicaid to the government’s cost of providing it.
Their baseline estimates suggested that recipients valued Medicaid at well below its fiscal cost (sometimes around 20–40¢ on the dollar).
This generated a lot of attention — and criticism — since it implied Medicaid was highly inefficient in pure welfare terms.
2019 published version (QJE, “The Value of Medicaid: Interpreting Results from the Oregon Health Insurance Experiment”):
The authors revised both methodology and interpretation. They emphasized heterogeneity in valuation and considered broader welfare components (e.g., risk reduction, utility from financial security, health improvements not captured in short-term consumption).
Their updated estimates suggested that Medicaid’s value to recipients could be much closer to its cost, in some cases nearly dollar-for-dollar.
2. Methodological Refinements
In the 2015 draft, they measured WTP narrowly by looking at changes in out-of-pocket spending and consumption patterns. That left out hard-to-observe welfare benefits.
By 2019, they incorporated:
Risk protection / consumption smoothing benefits (Medicaid insulates against catastrophic shocks; a toy calculation after this list illustrates the channel).
Better measures of heterogeneity — some groups (especially the sick) value insurance far more highly than the average.
Broader welfare metrics beyond immediate spending responses.
These adjustments raised the implied value substantially.
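To illustrate the risk-protection channel from the list above, here is a minimal sketch (not the paper's method) of the textbook risk-premium calculation under CRRA utility. The income, shock size, probability, and risk-aversion coefficient are all assumed for illustration:

```python
# Toy risk-premium calculation under CRRA utility. Every number here
# is an assumption for illustration, not an estimate from the paper.
GAMMA = 3.0                 # assumed coefficient of relative risk aversion

def crra_utility(c, gamma=GAMMA):
    """CRRA utility u(c) = c^(1 - gamma) / (1 - gamma)."""
    return c ** (1 - gamma) / (1 - gamma)

income = 20_000             # assumed annual consumption if healthy ($)
shock = 15_000              # assumed catastrophic medical bill ($)
p = 0.10                    # assumed probability of the shock

# Expected utility when uninsured: pay the full bill if the shock hits.
eu = p * crra_utility(income - shock) + (1 - p) * crra_utility(income)

# Certainty equivalent: the sure consumption giving the same utility.
ce = ((1 - GAMMA) * eu) ** (1 / (1 - GAMMA))

expected_loss = p * shock   # the actuarial ("transfer") value of coverage
risk_premium = (income - expected_loss) - ce

print(f"expected loss:   ${expected_loss:,.0f}")   # $1,500
print(f"certainty equiv: ${ce:,.0f}")              # ~$12,649
print(f"risk premium:    ${risk_premium:,.0f}")    # ~$5,851
```

On these invented numbers, full insurance is worth several times its actuarial cost to a risk-averse consumer, which is the kind of channel that can push a WTP estimate up substantially once it is modeled.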
3. Interpretive Shift
The 2015 draft framed its results as suggesting Medicaid recipients didn’t “value” the program very much, relative to cost.
After criticism (both academic and policy-oriented), the 2019 version stressed that efficiency should not be judged solely by average WTP, because insurance provides large welfare gains to the most vulnerable, which isn’t captured by a simple average (a toy illustration follows below).
They also highlighted that redistribution itself is part of Medicaid’s value proposition — which a purely WTP framework underweights.
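A toy illustration of that averaging point (every number here is invented): a sick minority can value coverage well above its cost while the healthy majority values it below, leaving the simple average under $1:

```python
# Hypothetical subgroups: (name, share of enrollees, WTP per dollar of cost).
# The numbers are invented to illustrate the averaging point, not estimates.
groups = [
    ("healthy", 0.8, 0.3),   # values Medicaid at 30 cents per dollar of cost
    ("sick",    0.2, 1.5),   # values it at 150 cents per dollar of cost
]

avg_wtp = sum(share * wtp for _, share, wtp in groups)
print(f"average WTP per dollar: {avg_wtp:.2f}")   # 0.54
for name, share, wtp in groups:
    print(f"  {name}: share {share:.0%}, WTP per dollar {wtp}")
```

On these invented numbers the program fails an average-WTP test even though it is a large net gain for the sick fifth, which is the distributional point the published version stresses.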
4. Response to Criticism
Economists pointed out that the 2015 approach might underestimate Medicaid’s value because it treated recipients’ observed consumption responses as the main channel of benefit, ignoring risk pooling and unmeasured health effects.
The later versions responded by explicitly modeling those dimensions, producing more favorable welfare conclusions.
✅ Bottom line:
The 2015 working paper’s framework made Medicaid look inefficient because it focused narrowly on immediate consumption value. By 2019, Finkelstein et al. incorporated risk protection, heterogeneity, and broader welfare effects, shifting the conclusion: Medicaid recipients do indeed derive substantial value, often approaching the government’s cost.

There is a pattern here that is hard for outsiders to discern, but it's there.
Early in the COVID-19 pandemic, Jay Bhattacharya and others tested for the presence of SARS-CoV-2 antibodies in Santa Clara County, home of Stanford University. The study's findings? The prevalence of antibodies implied that the number of infections was much higher than previously estimated. Further, since the number of deaths was known, a larger pool of infections implied that the disease was much less deadly than previously estimated.
These were not the results the "authorities" wanted, and both the study and its authors were viciously attacked. I talked to Jay during this time, and he was deeply affected.
The study's methodology was successfully defended, and Jay, a true scientist, stuck by the results. In hindsight, the study's results were correct. In an amazing turn of events, Jay went on to become the head of the NIH.
But the point is that science isn't objective and impartial. If you generate a result that isn't acceptable to the "authorities" and the "accepted way of thinking," you will be subjected to strong pressures to change or renounce your results.
Consider climate science, police killings of minorities, nuclear power, recycling, organic farming, minorities and academic success, etc.
GPT's explanation sounds to me like people pointed out that the authors had missed some of the benefits in their calculation, so they added them in. I don't see any indication in the post that the adjustment was politically motivated.