The Affordable Care Act encompasses a host of programs and provisions, which are the subject of much discussion and debate right now. This post offers a nonpartisan, genuinely curious exploration of one of the Affordable Care Act’s less frequently debated programs, the Shared Savings Program (SSP). The program’s objective is to reduce Medicare costs and increase the quality of care provided to Medicare patients. In this short sequence of posts, I explore whether the program looks to be meeting these objectives. Given that this topic may appeal to both non-technical and technical audiences, I’ve split the posts into a higher-level description of the findings (part I) and a more technical post with code (part II).
The Shared Savings Program
Under the Shared Savings Program, a group of Medicare providers voluntarily comes together to participate in the program as an Accountable Care Organization (ACO). Medicare generates a benchmark of what it would expect to pay the ACO in a given year, based upon historical data, the type of patients the ACO treats, expected changes in the cost of treatment, etc. If Medicare’s actual expenditures to the ACO for that year end up being meaningfully below the benchmark (there are specific criteria for what constitutes “meaningfully”) AND the ACO still offers good quality care (there are also specific criteria here), then Medicare will split some of what it saved with the ACO… hence the name, the Shared Savings Program.
For purposes of this post, it’s worth clarifying a few points:
First, SSP terminology is framed from Medicare’s perspective: thus, “expenditures” refer to what Medicare pays to providers, “savings” refer to the difference between what Medicare expected to pay providers and what it actually paid, etc. The picture would be somewhat different for providers, who are presumably invoicing for less than they normally would, but receiving some of the difference back from Medicare in the form of “shared savings.” (One of the primary ways the SSP hopes to reduce costs is by encouraging ACOs to embrace electronic health records and share information more efficiently among providers, thereby reducing redundant work.)
Second, there’s an important caveat that applies to all the analyses and results presented here. Generally, when you want to demonstrate causality (e.g., the Shared Savings Program resulted in savings or change in quality of care), you need a control condition, so you can ensure that any observed change wouldn’t have happened on its own or can’t be attributed to other factors that had nothing to do with the program (e.g., other changes in medical costs and billing, or how treatment is provided, or whatnot). Unfortunately, we don’t have a control group of ACOs that didn’t participate in the Shared Savings Program. To work around this limitation, instead of comparing a test group to a control group, in the models below I compare the ACOs to themselves at previous timepoints. This task, too, is somewhat limited because we also don’t have baseline data for the ACOs, before they entered the program, but we can look at whether they exhibit change in the program that is consistent with the program’s objectives. In looking at the program financials, I also examine whether the program is in the red or in the black. Once again, we can’t draw any conclusions about what the financial picture would look like without the SSP, but we can note whether the program appears to be financially self-sustaining or not.
Lastly, the Affordable Care Act activates strong feelings on both sides of the political spectrum, and it would be understandable for readers to wonder about any bias I might bring to these analyses. I approached this task with genuine curiosity. I wasn't looking to make the program look good or bad - my goal was to discern, as much as possible, what's happening. I think this is the role of any good data scientist. I include all my code in the technical accompaniment to this post (part II), so interested parties can replicate my work or build on it as they choose.
Spoiler alert: For any readers more interested in the bottom line than in how I got there, I’ll share it now. From 2013-2015, the Shared Savings Program did not generate sufficient savings to cover the cost of payouts to ACOs that earned them - in other words, it was not financially self-sustaining - but ACOs in the program do exhibit improved savings rates (their percentage under benchmark) over time and improvements in the quality of care they provide over time. Details below.
Aggregated Financial Impact by Year
I began with a high-level overview of the financial numbers for the program. (This isn't the multilevel modeling part of the analyses yet.) In press releases on their website, the Centers for Medicare and Medicaid Services (CMS) reports that the Shared Savings Program generated savings in 2013, 2014, and 2015. My analyses tell a slightly different story. I’m able to replicate most of the numbers CMS reports, but their reported total savings numbers don’t take into account: 1) the losses from ACOs that exceeded their benchmarks, and 2) payouts to the ACOs that earned them. For example, CMS reports that in 2013,
“58 Shared Savings Program ACOs held spending $705 million below their targets and earned performance payments of more than $315 million as their share of program savings… Total net savings to Medicare is about $383 million in shared savings” (Medicare Shared Savings Program Performance Year 1 Results)
I, too, found that 58 ACOs produced a savings of approximately $705 million and earned payouts of approximately $315 million (row 2 in the table below), but it also looks like 162 ACOs either didn’t meet their minimum savings rate or were over benchmark in 2013 and cumulatively produced a loss of $471 million (row 3 in the table). When the cumulative savings and losses for all ACOs in a given year are considered, along with the payouts to those ACOs who earned them, the SSP looks to have been operating in the red at $-78 million in 2013, $-49 million in 2014, and $-216 million in 2015.
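The 2013 arithmetic can be reproduced as a quick sanity check from the rounded figures quoted above (the small gap from the -$78 million computed on the full data reflects rounding in the published numbers):

```python
# Net 2013 impact to Medicare, using the rounded figures quoted above
# (all amounts in millions of dollars).
savings_under_benchmark = 705  # from the 58 ACOs that earned payouts
losses_over_benchmark = 471    # from the 162 ACOs over benchmark or under their minimum savings rate
payouts_to_acos = 315          # shared-savings payments to the earning ACOs

net_to_medicare = savings_under_benchmark - losses_over_benchmark - payouts_to_acos
print(net_to_medicare)  # -81 with these rounded inputs, vs. -78 from the unrounded data
```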
In the interest of full disclosure, I do want to note that my 2014 estimates differ from CMS’s 2014 numbers in a couple of places. Specifically, CMS reports that “Ninety-two Shared Savings Program ACOs held spending $806 million below their targets and earned performance payments of more than $341 million as their share of program savings”. Using the publicly available data, I found that 86 ACOs earned payouts and saved $777 million; very oddly, I do get $341 million as the amount spent on payouts to these ACOs. This $29 million discrepancy ($806 million - $777 million) isn’t enough to make a difference in covering the cost of the 2014 payouts, however. Regardless of which 2014 numbers are accurate, the publicly available data indicate that the program has not, thus far, generated enough savings to cover the payouts in any given year.
Change in Savings Rate over Time
Given this, a different question we might ask is whether individual ACOs tend to improve their savings rate (the amount they were under or over benchmark, expressed as a percentage of their benchmark) over time in the Shared Savings Program. In other words, the program may not be breaking even overall, but individual ACOs may still tend to improve (for example, the average savings rate may be getting less negative over time, or improvements may have been obscured by a fresh crop of first-year ACOs with negative savings rates each year).
To explore the possibility that ACOs improved their savings rate over time in the program, I estimated a series of multilevel models predicting the ACOs’ savings rates from their year in the program. (Detail about what this means and accompanying code can be found in Part II to this post.) The average ACO had a savings rate of about 0.3% their first year in the program, and increased their savings rate an additional 0.4% with each year in the program (so, the average savings rate in the second year was about 0.7% and about 1.1% in the third year). This relationship held even when I controlled for the number of patients (referred to as "beneficiaries" in the program documentation) who were assigned to the ACO each year. In other words, the improved savings rate over time isn't an artifact of changes in the number of patients served by the ACO over time. ACOs do appear to have better savings rates over time.
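The average trajectory can be illustrated with a small sketch. The intercept and slope below are the rounded fixed-effect estimates reported above; the actual models (with ACO-level random effects) are in the R code in Part II:

```python
# Average predicted savings rate (%) by year in the program, using the
# rounded fixed-effect estimates from the multilevel models described above.
FIRST_YEAR_RATE = 0.3     # average savings rate (%) in an ACO's first year
YEARLY_IMPROVEMENT = 0.4  # average additional savings rate (%) per year in the program

def predicted_savings_rate(year_in_program):
    """Average predicted savings rate (%) for a given year in the program."""
    return FIRST_YEAR_RATE + YEARLY_IMPROVEMENT * (year_in_program - 1)

for year in (1, 2, 3):
    print(f"Year {year}: {predicted_savings_rate(year):.1f}%")
# Year 1: 0.3%   Year 2: 0.7%   Year 3: 1.1%
```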
Change in Quality of Care over Time
The Shared Savings Program was created to lower growth in costs AND improve the quality of care provided to Medicare patients. The last set of analyses explored whether ACOs in the program exhibited changes in the quality of care they provided. Medicare collected data for the quality-of-care measures from a combination of patient or caregiver report, claims data, and a reporting portal for ACO providers. There were a total of 33 measures in each year, but eight of the measures changed in 2015. Thus, there are only 25 measures with data available for all three years to date. Each of the 25 quality-of-care measures was examined in a separate multilevel model.
The results indicate that ACOs in the program exhibited improvements in the quality of care they provided. On 18 measures, the average trajectory in the program was for ACOs to significantly improve on the measure over time. One additional measure exhibited improvement that wasn’t quite statistically significant, and two measures exhibited no change. Interestingly, performance on the remaining four measures significantly decreased over time in the program. These four measures were all aspects of the patient/caregiver experience (e.g., getting timely care, feeling that one’s doctor communicated well, patients’ ratings of their doctors, and access to specialists). They raise the fascinating possibility that although ACOs may do a better job over time of keeping patients healthy and screening them for risks, the pressure to do so may leave patients feeling their providers are less available and less communicative.
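For readers keeping count, the measure-level results tally as follows (a quick bookkeeping sketch, not part of the models themselves):

```python
# Breakdown of the 25 comparable quality-of-care measures by the direction
# of their average trajectory in the multilevel models.
trajectories = {
    "significant improvement": 18,
    "improvement short of significance": 1,
    "no change": 2,
    "significant decline": 4,
}
print(sum(trajectories.values()))  # 25 measures with data in all three years
```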
Conclusions and Recommendations
The available data indicate that the Shared Savings Program is not generating enough savings to cover the cost of the payouts that are part of the program. Nonetheless, ACOs in the program do improve their savings rates and the quality of care they provide to Medicare patients over time. The absence of a control condition, or data about Medicare expenditures to providers who are not part of the Shared Savings Program, limits the conclusions we can draw. It's possible that providers who are not part of the Shared Savings Program grossly exceed their benchmarks, such that Medicare is operating even more in the red with these providers than it is with those who are part of the Shared Savings Program. Similarly, ACOs tend to change in the program in ways that are consistent with the program's objectives, but without a comparison group, we can't confidently attribute this change to the Shared Savings Program.
In terms of recommendations, future work in this area might determine what differentiates an ACO that produces savings in the program from one that does not. Payouts from the program exceed savings because the bulk of the ACOs each year exceed their benchmarks, and produce losses. If these ACOs can be assisted - perhaps using strategies or lessons learned from successful ACOs - the program may become financially self-sustaining.
Technical details and code for this post can be found in Part II.
Complete code for this post can be found on GitHub at: https://github.com/kc001/Blog_code/blob/master/2017.01%20Multilevel%20modeling.R