Does public release of performance results improve quality of care? A systematic review

Paul G Shekelle, Yee-Wei Lim, Soeren Mattke, Cheryl Damberg
Southern California Evidence-based Practice Centre, RAND Corporation

QQUIP and the Quality Enhancing Interventions project

QQUIP (Quest for Quality and Improved Performance) is a five-year research initiative of The Health Foundation. QQUIP provides independent reports on a wide range of data about the quality of healthcare in the UK. It draws on the international evidence base to produce information on where healthcare resources are currently being spent, whether they provide value for money and how interventions in the UK and around the world have been used to improve healthcare quality. The Quality Enhancing Interventions component of the QQUIP initiative provides a series of structured evidence-based reviews of the effectiveness of a wide range of interventions designed to improve the quality of healthcare. The six main categories of Quality Enhancing Interventions for which evidence will be reviewed are shown below.

For more information visit www.health.org.uk/qquip

Acknowledgements

This study was produced as part of the Quest for Quality and Improved Performance (QQUIP), an initiative of The Health Foundation. This report is adapted from a RAND Corporation white paper, Does Public Release of Performance Results Improve Quality?, and a peer-reviewed paper, ‘The impact on quality of publicly-released performance data: a systematic review of the peer-reviewed literature’, published in the Annals of Internal Medicine (Fung et al, 2008). The report was adapted from work supported by RAND Health’s Comprehensive Assessment of Reform Options (COMPARE) initiative, which receives funding from a consortium of sources including RAND’s corporate endowment, contributions from individual donors, corporations, foundations and other organisations. A special thanks to Roberta Shanman for her assistance with the literature search and to Susan Chen, Carlo Tringale and Jason Carter at the Southern California Evidence-Based Practice Centre for their assistance retrieving the articles and preparing this manuscript. Constance Fung, MD, MSPH, authored the original white paper and journal article, and provided valuable comments on this adaptation. Thanks are due to anonymous reviewers.

Published by: The Health Foundation 90 Long Acre London WC2E 9RA Telephone: 020 7257 8000 Facsimile: 020 7257 8001 www.health.org.uk Registered charity number 286967 Registered company number 1714937 First published 2008 ISBN 978-1-906461-06-5 Copyright The Health Foundation All rights reserved, including the right of reproduction in whole or in part in any form. Every effort has been made to obtain permission from copyright holders to reproduce material. The publishers would be pleased to rectify any errors or omissions brought to their attention.

Contents

Executive summary
Introduction
Methods
Results
Overview
Impact on selection of providers
   Selection of health plans
   Selection of hospitals
   Selection of individual providers
Impact on quality improvement activity (change)
Impact on performance (quality of care)
   Effectiveness
   Patient safety
   Patient-centredness
Discussion
Evidence table: Summary of evidence from the literature search
References
Appendix A: Public release of performance results – search methodology for PubMed
Appendix B: Public release of performance results – search methodology for Wilson Business Periodicals Abstracts

Executive summary

Although recent reports on the quality of healthcare in the USA and England document improvement in a number of clinical areas, quality deficits remain and progress has been slow. The public release of performance results has been proposed as a mechanism for improving quality of care. In theory, disclosing performance results increases the accountability of healthcare providers as managers will be concerned about maintaining their public image and increasing market share. It also motivates quality improvement activities in healthcare organisations, especially by targeting underperforming areas identified by the performance results. The framework we used to guide this literature review was devised as follows: public reporting can improve performance (effectiveness of care, patient safety, and patient-centredness) through two pathways (selection and change), which are interconnected by a provider’s motivation to maintain or increase market share. Since the last major review of healthcare public reporting (Marshall et al, 2000), new studies provide additional support for the conclusion that the public release of performance data stimulates change at the level of the hospital. In addition, since 2000, studies of the impact of public reporting on consumer selection of health plans have been published. The articles we identified provide mixed evidence for an effect on this pathway, with no clear signals regarding the types of health practitioners or services, or the format of public reporting, most likely to influence consumers’ selection of providers. Most studies are about the same few public reporting systems. Some studies assessing the effects of major reporting systems in the USA and England on effectiveness, patient safety and patient-centredness have not been published in the peer-reviewed literature.
The empirical literature on using publicly-reported performance data to improve health outcomes is still scant (the existing literature focuses primarily on mortality and cardiac procedures), and there have been few assessments of its usefulness for improving patient safety and patient-centredness. In the absence of a better co-ordinated funding and research strategy, we can expect more studies limited to a few reporting systems and little progress in meeting the need for information about the effects of many of the public reporting systems currently in prominent use.

Introduction

Although recent reports on the quality of healthcare in the USA and England document improvement in a number of clinical areas (AHRQ, 2004; National Committee for Quality Assurance, 2005; Campbell et al, 2007), quality deficits remain and progress has been slow (McGlynn et al, 2003; Asch et al, 2004; Asch et al, 2006). Studies suggest that healthcare systems often fail to deliver on three areas that the US Institute of Medicine (IOM) has identified as being appropriate aims for a well-functioning healthcare system (Institute of Medicine):
• effectiveness (McGlynn et al, 2003)
• safety (Kuperman et al, 2001; Bates et al, 2001; Lannon et al, 2001; Handler et al, 2000)
• patient-centred care (Barry et al, 2000; Marvel et al, 1999).
There are many potential explanations for these failures. Lack of a transparent, explicit, systematic, data-driven performance measurement and feedback mechanism for healthcare providers has been considered a major contributor (Lansky, 2002). While many citizens may routinely seek information on the cost and quality of services and products such as schools, restaurants and cars, they have had limited access to information on healthcare providers, even though studies suggest they are interested in comparative information (Sofaer et al, 2005; RAND Health, 2001; The Kaiser Family Foundation, Agency for Healthcare Research and Quality, and Harvard School of Public Health, 2004). The public release of performance results has been proposed as a mechanism for improving quality of care (Leapfrog Group; Pacific Business Group on Health; US Department of Health and Human Services). In theory, disclosing performance results increases the accountability of healthcare providers because their managers will be concerned about maintaining their public image and increasing market share.
It could also motivate quality improvement activities in healthcare organisations, especially by targeting underperforming areas identified by the performance results (Lansky, 2002; Marshall et al, 2003b). Over the past decade reporting systems that summarise publicly-released performance results have proliferated (Office of Inspector General June Gibbs Brown). Several authors have systematically evaluated the evidence for continuing this practice. In 2000, Marshall and co-authors published a systematic review on the topic. Schauffler and Mordavsky (2001) then evaluated evidence on the impact of consumer report cards on the behaviour of consumers, providers and purchasers. However, since the publication of these two systematic reviews many empirical studies of publicly-reported performance data have been published. Our systematic review began with the Marshall et al (2000) study and updated this by examining the utility of publicly-released performance results as a mechanism for improving healthcare quality (effectiveness, patient safety and patient-centredness).


Methods

Conceptual model

We used a framework adapted from Berwick’s model (Berwick, James and Coye, 2003) to guide our literature search (see Figure 1). According to this framework, public reporting can improve performance (effectiveness of care, patient safety and patient-centredness) through two pathways (selection and change), which are interconnected by a provider’s motivation to maintain or increase market share.

Figure 1: Two pathways for improving performance through release of publicly-reported performance data (Berwick, James and Coye, 2003). [Diagram: publicly-reported performance data generate knowledge, which acts through two pathways – (1) selection and (2) change – linked by provider motivation, to improve performance: effectiveness of care, safety and patient-centredness.]

In the ‘selection pathway’ – which assumes that performance within any group (for example, physicians, hospitals, health plans) will vary – a consumer (patient, purchaser, regulator, contractor or referring clinician) obtains, compares and contrasts publicly-released performance data to try to obtain the best quality for the best price (value-based purchasing). The consumer then selects (or rewards/recognises/punishes/pays) a member of the group (for example, a surgeon, a hospital). In the ‘change pathway’, performance results help organisations understand and improve their care processes to improve their performance (Berwick, James and Coye, 2003). Change may occur through pressure to avoid being identified as a poor quality provider (damaged reputation) or in some cases by prompting poor quality providers to cease practising. The change pathway also includes external incentives such as government intervention to reward highly performing providers or sanction poorly performing ones.

Data sources and searches

Sources included Web of Science, MEDLINE, EconLit and Wilson Business Periodicals Abstracts. Our research team included a professional librarian at the Southern California Evidence-Based Practice Center to ensure that our search strategy captured as much of the pertinent literature as possible. We started with a ‘forward search’ of the Web of Science (Science and Social Science Citation Indexes) database on a Marshall et al (2000) article and on an article by Schneider and Lieberman (2001), because we believed that all or nearly all of the newer articles on this topic would refer to either or both of these seminal articles. Additional searches of our sources using keywords (see Appendices A and B) complemented this initial forward search strategy.


For completeness, we also searched three published review papers (Schauffler and Mordavsky, 2001; Hibbard, Stockard and Tusler, 2003; Marshall et al, 2003) and one unpublished working paper by experts in the field (Romano, Rainwater and Marcin, forthcoming) for additional references. Since the Marshall review included articles published between January 1986 and October 1999, we limited our database search to evaluations of healthcare reporting systems published between January 1999 and March 2006. However, we reviewed all of the empirical studies cited in the Marshall et al (2000) systematic review that addressed public reporting and quality.

Study selection

Three of the research team discussed and selected the inclusion criteria. Articles were included if they summarised the effects of publishing healthcare performance results. We excluded opinion pieces, review articles and articles unavailable in English. We also excluded articles that:
• described the theory or history of publicly-reported performance results
• identified controversies in performance measurement development
• examined only changes in inequity or access to care as a result of public reporting
• focused on whether consumers were aware of publicly-reported performance data
• focused solely on whether consumers, providers, payers or intermediaries use report cards, if the study did not also measure a change in quality improvement activity/behaviour, a selection of providers or our endpoints (effectiveness, patient safety or patient-centredness).

Data extraction

All studies were reviewed by at least two authors, who independently assessed the text of qualifying studies for study design, study sample, type of reporting system (for example, New York State Cardiac Surgery Reporting System), health conditions that were the subject of the reporting, reporting level (health plan, hospital, individual provider) and outcomes. Outcomes included changes in selection of provider or market share, changes in quality improvement activity, changes in performance (effectiveness of care, patient safety and patient-centredness) and unintended consequences. We used a standardised data abstraction form. We resolved disagreements about article content and quality through discussion.

Data synthesis

The data were too heterogeneous to support pooling, so we summarised data by narrative in groups according to our conceptual model: the two pathways (selection, change) and endpoints (effectiveness, patient safety and patient-centredness).

Results

Identification of evidence

For the pre-1999 period we identified and retrieved 31 empirical articles from the Marshall systematic review (Marshall et al, 2000). After reviewing the full text of these articles we selected 17 to include in our results section. The excluded articles assessed aspects of public reporting other than its effect on quality (for example, whether consumers read and understand public reports). For the 1999–2006 period we initially reviewed 704 titles. From the initial list we selected 143 articles to retrieve and review. After reviewing the full articles we identified 47 articles that provided relevant data (Figure 2). Three additional articles were identified during peer review.

Figure 2: Search results and article flow
• Title search on Marshall et al, 2000: N=31 (rejected: N=16)
• Forward citation search on Schneider and Lieberman, 2001 and Marshall et al, 2000: N=361
• Literature search with related terms: N=2,142
• Total articles from library searches: N=2,534 (rejected from the related articles and literature search: N=2,481)
• Articles identified outside of searches: N=6 (reference mining N=5, other N=1)
• Articles identified during peer review: N=3
• Total articles: N=50

Overview

In the sections that follow we describe the evidence published between 1986 and 2006. First, we discuss the impact of publicly-reported performance data on selection of providers. Many of these studies examined how publicly-reported performance results affected consumers’ selection of health plans, hospitals or individual providers. Next we describe evidence for the impact on quality improvement activity (change pathway). The subsequent sections focus on the effects of publicly-reported performance data on our endpoints: effectiveness of care, patient safety and patient-centredness. The evidence table section presents pertinent information on all included articles.


Impact on selection of providers

Selection of health plans

We identified ten studies that assessed the effect of public performance data on selection of health plans. All of these studies were published after Marshall’s systematic review in 2000. Three studies focused solely on the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey data, which describe patients’ experiences with ambulatory and facility-level care (for example, communication with doctors, access to care). CAHPS data are qualitatively different from the process or outcome data used in many public reporting systems, as they reflect patient experience rather than technical quality (Agency for Healthcare Research and Quality). Two randomised controlled trials (Farley et al, 2002a; Farley et al, 2002b), which compared the health plan choices of new Medicaid beneficiaries who received CAHPS data to those of a group that did not receive the information, found no effect of the data on health plan selection for the total sample. In a subgroup analysis, participants who read the report and did not choose the health maintenance organisation with dominant market share selected plans with higher CAHPS scores compared to an equivalent control group (Farley et al, 2002b). Spranca et al (2000) used an experimental design involving hypothetical health plan performance ratings and found that, in general, consumers preferred more expensive plans with better coverage. However, subjects were willing to accept plans with less generous coverage if the health plan had high CAHPS ratings. This suggested that providing consumers with information on health plan quality may affect their selection of health plans. Harris (2002) found that subjects were willing to trade access restrictions for higher quality. Several studies used longitudinal observational data to examine what happens before and after other types of health plan performance information are made available.
Beaulieu (2002) found that Harvard employees in plans with lower reported quality were more likely to switch plans than those enrolled in higher quality plans. In two separate studies, Wedig and Tai-Seale (2002) and Jin and Sorensen (2006) found that federal employees were more likely to choose plans with better reported quality. Wedig and Tai-Seale found that an increase of one standard deviation in a quality of care score was associated with an increase of more than 50 per cent in the likelihood of plan selection. In the Jin and Sorensen study, an increase of one standard deviation in a quality score was associated with a 2.63 per cent increase in the likelihood that the plan would be selected. They also found that new enrolees were more likely than existing ones to respond to information about plan quality. In Dafny and Dranove’s study (2005) of Medicare health maintenance organisation (HMO) enrolees, performance results induced enrolees to change to higher-quality plans. This study simulated the change in market share as a result of public reporting and estimated that over eight years the market share of high-quality plans would increase by 76 per cent, although the absolute share would increase only from 2.7 to 4.7 per cent. Consumer satisfaction, rather than technical quality of care (effectiveness) related to mammography, seemed to be the more important driver of change. In contrast, the study by Scanlon et al (2002) of employees in General Motors found that, in response to plan ratings, staff avoided plans with below-average ratings but were not strongly attracted to plans with superior ratings; the authors concluded that information on bad performers may be more valuable than information on superior performers. McCormick et al (2002) reported that low-scoring US commercial health plans were more likely to stop publicly disclosing their quality data than high-scoring ones.


Selection of hospitals

We identified ten publications on this topic area. Four were published at the time of the prior review (Marshall et al, 2000), and six are new. Early reports focused on one of the first experiments in publicly reporting performance data: the Health Care Financing Administration (HCFA) hospital-specific mortality rates. Mennemeyer et al (1997) reported that the release of HCFA hospital-specific mortality rates was associated with small but significant effects on utilisation (hospitals with mortality rates that were twice that expected by HCFA had less than one fewer discharge per week in the first year). However, press reports of single, unexpected deaths were associated with a relatively large effect (a 9 per cent reduction in hospital discharges within one year). Although Mennemeyer concluded that the public release of performance results does not provide an effective consumer tool, the study design was not able to identify other causes that may have contributed to the weak results. Vladeck et al (1988) reported that the release of HCFA data did not lead to significant differences in occupancy rates between high- and low-mortality hospitals. A number of papers described the impact of the New York State Cardiac Surgery Reporting System, which reports hospital mortality following coronary artery bypass graft (CABG) surgery, on the market share of hospitals. Both earlier and more recent analyses provide conflicting results. Mukamel and Mushlin (1998) found that providers with better outcomes had higher growth rates in market share than those with poorer outcomes. In contrast, Hannan et al (1994b) did not find a change in hospital surgical volume during roughly the same time period. Chassin (2002) compared the market share of hospitals identified as statistical outliers in the year before they were named as outliers with their share in the year after (1989 to 1995).
Changes were small; fewer than half the hospitals saw an increased market share for high performance or a decreased share for poor performance. Jha and Epstein (2006) also found no evidence that the New York State Cardiac Reporting System had a meaningful impact on hospitals’ or surgeons’ market share. Cutler et al (2004) compared the volume of CABG surgeries in low- and high-mortality hospitals and found that high-mortality hospitals experienced an initial decline after being designated as a ‘poor performer’ that could represent as much as 10 per cent of the hospital’s volume. However, this decline was not statistically significant one year after the initial report. The authors did not find a corresponding increase in volume among low-mortality hospitals. Using low- and high-risk subsamples, high-mortality hospitals had approximately 10–15 per cent fewer low-risk patients but the same number of high-risk patients. Studies of other hospital-level reporting systems also show mixed results. Baker et al (2003) examined the impact of the Cleveland Health Quality Choice Program. This programme reported a number of outcomes such as severity-adjusted mortality rates, caesarean section deliveries, patient satisfaction and length of stay. Among the five highest-mortality hospitals the decline in market share was about 0.6 per cent and was not statistically significant; lower-mortality hospitals did not gain market share. A similar lack of effect of public reporting on market share was identified by Hibbard, Stockard and Tusler (2005). Romano and Zhou (2004) examined the volume of acute myocardial infarction and lumbar discectomy-related conditions and procedures in New York State and California. The study did not find any significant volume changes related to the emergency diagnosis (acute myocardial infarction) among low-mortality and high-mortality outlier hospitals, and found only a slight increase in volume for low-complication outliers for lumbar discectomy.
In contrast, the study found a significant increase in CABG volume for low-mortality hospitals in New York within the first month of publication and a significant decrease in volume for high-mortality outliers in the second month after release of the information. The results seem to suggest the effect of public reporting is selective and short-term.


Selection of individual providers

We identified seven articles that examined the impact of publicly releasing performance data on consumers’ choice of individual providers; four have been published since Marshall’s systematic review. Focusing on the New York State Cardiac Surgery Reporting System data, Mukamel et al (2004/05) found that Medicare enrolees were less likely to select a surgeon with higher mortality after the release of cardiac surgeons’ risk-adjusted mortality rates in New York State. A prior Mukamel and Mushlin study (1998) also suggested that physicians with better outcomes had higher growth rates in their charges for CABG. In contrast, Hannan et al (1994b) did not find a change in individual providers’ volume of surgery. In a subsequent article, Hannan et al (1995) found that the decrease in risk-adjusted mortality was not due to shifts from low-volume to high-volume surgeons; rather, the low-volume surgeons stopped practising, and the surgeons who continued or entered the system (either new to the system or not consistently low-volume) performed better. Using multivariable logistic regression, Jha and Epstein (2006) reported that the odds of ceasing practice were statistically higher (OR 3.5, 95 per cent confidence interval [CI] 1.35 to 9.01) for surgeons performing in the bottom quartile. Eight surgeons who had left practice stated that public reporting had affected their decision to cease practice; four were in the bottom quartile prior to departure (31 cardiac surgeons left practice between 1989 and 1999). Several articles examined the impact of publicly-reported performance data on intermediaries’ selection (contracting) practices. Mukamel et al (2000) compared survey findings of managed care organisations in New York State to actual contracting patterns. Even though 64 per cent of managed care organisations (MCOs) indicated some knowledge of the New York State Cardiac Surgery Reports, only 20 per cent indicated that the reports were a major factor in their contracting decisions.
Analyses of actual contracting patterns show that, in general, systematic selection either for or against surgeons based on their reported mortality scores did not occur. In a later study, Mukamel et al (2002) examined contracting between MCOs and individual cardiac surgeons and found that cardiac surgeons with higher reported quality in New York State had an increased probability of having an MCO contract; however, this occurred only in certain parts of the state.


Impact on quality improvement activity (change)

We identified 11 studies assessing the effect of publicly releasing performance data on quality improvement activity; five have been published since the prior review. Three studies described the effects of publicly releasing CABG surgery mortality performance results on quality improvement activity. Through a series of interviews with selected hospital administrators and physicians in New York State, Chassin (2002) found that some hospitals with high mortality rates took steps to improve their cardiac surgery programmes (including activities such as instituting quality assurance programmes, retraining staff and redesigning the intensive care unit delivery system). In a case study of one New York hospital, Dziuban et al (1994) described the staff response to the publicly-released performance results. A mortality pattern analysis and changes in practice patterns occurred after the hospital was identified as having higher than expected mortality rates. Bentley and Nash (1998) reported additional marketing efforts and monitoring of clinicians’ performance as a result of the release of CABG mortality data in Pennsylvania and New Jersey. Two studies by Hibbard, Stockard and Tusler (2003, 2005) evaluated the impact on quality improvement efforts of QualityCounts, a programme that reported summary indices of adverse events related to care in a variety of areas, including cardiac care, obstetric care and hip/knee surgery. In the first study the authors compared the quantity of quality improvement activities in hospitals that were subject to public reporting to that in hospitals receiving confidential feedback or none at all. The study indicated that making performance information public stimulated quality improvement activities in the areas where performance was reported to be low.
In the second study the average number of quality improvement activities among the hospitals listed in the public report was significantly greater than among those hospitals that were given only a confidential report or none at all. Hospitals reported that concern for public image was a key motivator for their quality improvement efforts. Other papers described the effects of other types of publicly-reported performance data on quality improvement activity. Rosenthal et al (1998) reported changed practice in Cleveland hospitals following the release of hospital outcomes. In a survey, Tu and Cameron (2003) showed that more than half of the hospitals responded to the Canadian acute myocardial infarction hospital-specific quality of care report by implementing quality improvement activities. Longo et al (1997) reported implementation of process improvement efforts following the publication of obstetrics data. Approximately 50 per cent of hospitals that, prior to the report, did not have requirements that newborns go home in a car seat, formal transfer arrangements or nurse educators for breastfeeding had instituted or planned to institute these services. In contrast, Luce et al (1996) found that only 3 out of 17 acute care public hospitals in California initiated quality improvement activities following the California Office of Statewide Health Planning and Development public release of risk-adjusted monitoring of outcomes (California Hospital Outcomes Project or CHOP). Also describing results from the CHOP project, Rainwater et al (1998), in a survey of hospital leaders, found that the publicly-released performance results did not directly affect acute myocardial infarction processes of care, although some leaders reported studying the outcome data to identify areas for improvement (some said they changed processes because of poor ratings).
While many of the studies found favourable quality improvement activity, Mannion et al (2005) identified several instances in England where the public release of performance results could be a disincentive for improvement, although report card ratings did help align internal quality improvement objectives with national targets. Additional studies by Mannion and Goddard (2001, 2003) on the Scottish Clinical Research and Audit Group (CRAG) clinical outcome indicators found that they were rarely cited as a primary driver of quality improvement activities because of concerns about credibility, timeliness and lack of incentives or external accountability. They also reported (Mannion and Goddard, 2003; 2004) that while general practitioners (GPs) were aware of the data, they were almost never discussed with patients, colleagues, hospitals or local health boards, and that consumers were unaware of the data (Mannion and Goddard, 2003).

Impact on performance (quality of care)

Effectiveness

We identified 18 studies that assessed the effect of public performance data on quality of care, measured either as processes or outcomes. Of these, ten have been published since the prior review. Of note, all but one study assessed hospital-level performance, and we identified no published studies of the effect of public reporting on quality improvement among physicians or physician groups. Several studies, presented here in chronological order, have demonstrated a decline in mortality associated with the New York State Cardiac Surgery Reporting System, whereas others have failed to demonstrate such favourable results. Cardiac surgery mortality rates have been decreasing for years; the challenge has been attributing causality to the introduction of public reporting. Hannan et al (1994a) reported a reduction in mortality (risk-adjusted mortality decreased from 4.17 to 2.45 per cent) after the institution of the New York State Cardiac Surgery Reporting System. Likewise, Dziuban et al (1994) found that risk-adjusted mortality improved from 6.6 per cent in 1991 to 1.8 per cent in 1993. In an article published in the same year, Hannan et al (1994b) reported that the risk-adjusted cardiac mortality rate of all surgeons and hospitals improved, noting that providers with the highest initial mortality displayed the most improvement. Ghali et al (1997) found that observed mortality rates in Massachusetts (with no public reporting) decreased at a rate comparable to those observed in New York and northern New England (regions that had publicly-released CABG performance data), which calls into question the effectiveness of publicly releasing performance results. In contrast, Peterson et al (1998) reported a greater decline in 30-day CABG mortality in New York State compared to the national trend (33 per cent compared to 19 per cent).
A more recent study suggests that the benefits of public reporting may be smaller than previously reported. Cutler et al (2004) showed that high-mortality New York hospitals had a reduction in mortality of 1.2 per cent during the first 12 months after release of public reports, but the reductions were much smaller in subsequent years; no significant effect was found on low-mortality hospitals. Rosenthal et al (1997) reported a decline in risk-adjusted mortality for most conditions following the introduction of the Cleveland Health Quality Choice programme, but more recent studies failed to confirm a sustained benefit. Baker et al (2003) found that outlier hospitals with higher than expected mortality rates did not have significantly lower risk-adjusted 30-day mortality rates after the release of the reports. Clough et al (2002) compared Cleveland hospitals' mortality rate trends with those in the rest of Ohio and showed that the rate of decline in mortality in Cleveland hospitals was statistically indistinguishable from that in the rest of the state. Baker et al (2002) showed that increases in mortality after hospital discharge tended to offset the declines in in-hospital mortality, so that there was no net reduction in 30-day mortality. In a study unrelated to the New York State Cardiac Surgery Reporting System or the Cleveland Health Quality Choice programme, Longo et al (1997) found that a hospital-level obstetrics report card in Missouri improved several measures, including rates of high-risk infant transfer and very low birth-weight infants.
In a retrospective cohort study, Bost (2001) compared Healthcare Effectiveness Data and Information Set (HEDIS) measure scores for health plans that do and do not voluntarily report performance results publicly, and found that, overall, publicly reporting plans outperformed non-reporting plans. The health plans that publicly report performance results particularly excelled


in immunising adolescents, providing beta-blockers and completing diabetic eye exams. The problem of selection bias in this study limits the ability to attribute causality to public reporting (that is, poorly performing plans may be less likely to publicly report their results).

Several studies reported possible adverse or unintended effects of public reporting on quality. Dranove (2003) found that report cards in New York and Pennsylvania were associated with a shift in CABG use towards healthier patients and with worse cardiac outcomes, especially among sicker patients (defined as those with higher hospital expenditures and more days in hospital). Baker et al (2002) reported that the Cleveland Health Quality Choice programme was associated with a decrease in risk-adjusted in-hospital mortality but an increase in early post-discharge mortality for most conditions. Moscucci et al (2005) reported that the case mix of patients undergoing percutaneous coronary interventions differed significantly between Michigan and New York, speculating that there was a propensity in New York towards not intervening in high-risk patients. In a survey of physicians, Schneider and Epstein (1996) found that 63 per cent of cardiac surgeons in Pennsylvania reported less willingness to operate on severely ill patients in need of CABG. Burack et al (1999) reported similar findings in a survey of New York physicians. In another survey of New York physicians, Narins et al (2005) reported that 79 per cent agreed that public reporting had influenced their decision on whether to perform angioplasty on critically ill patients. Finally, Werner et al (2005) reported an association between New York report card implementation and a short-term increase in racial and ethnic disparities in CABG use.

Patient safety

Hibbard et al (2005) assessed hospital performance in the two years following the release of QualityCounts results. The study compared patient safety measures in hospitals that were subject to public reports under QualityCounts with those receiving confidential feedback or those receiving none at all. Among hospitals with low scores in obstetric care at baseline (adverse events including complications and death), those in the QualityCounts group were significantly more likely than those in the two comparison groups to have improved their scores. The authors attributed the changes to greater quality improvement efforts which began immediately after the report's release.

Patient-centredness

Bost's (2001) retrospective cohort study comparing performance of health plans that publicly report performance results voluntarily with those that do not found that CAHPS scores related to courtesy and customer service were higher for plans that voluntarily report performance results. Because the likelihood of self-selection is high, inferring a causal relationship between public reporting and higher CAHPS scores is not possible.


Discussion

We identified 33 new articles published since the prior review by Marshall et al (2000) that assessed the effects of publicly reporting performance results on quality. Most studies report data from the United States. The addition of these studies describing the effect of public reporting on selection, change, effectiveness, patient safety and patient-centredness doubles the amount of evidence that Marshall and co-authors reviewed on the topic. These new studies provide additional support for the conclusion that the public release of performance data stimulates change at hospital level.

Studies published since 2000 include those examining the impact of public reporting on consumer selection of health plans. The articles we identified provide mixed evidence for an effect on this pathway, with no clear signals regarding the types of health practitioners or services, or the format of public reporting, most likely to influence consumers' selection of providers. Additionally, Marshall et al (2000) noted how few of the current reporting systems had been the subject of published empirical evaluation. We found little improvement in this situation. Despite the development of public reporting systems over the past five years, many of the studies we identified continued to focus on those summarised by the prior systematic review, particularly the New York State Cardiac Surgery Reporting System. One bright spot is that several studies reported on the effects of public reporting systems that include CAHPS and HEDIS measures. However, studies assessing the effects on effectiveness, patient safety and patient-centredness of major reporting systems in America, such as Healthgrades (www.healthgrades.com) and Healthscope (www.healthscope.com), or in England, such as Dr Foster (www.drfoster.co.uk), have not been published in the peer-reviewed literature.
Lastly, the empirical literature on using publicly-reported performance data to improve health outcomes is still scant (the existing literature focuses primarily on mortality and cardiac procedures), with limited assessment of its usefulness in improving patient safety and patient-centredness. How can we explain the contrast between our results and the interest and resources being directed towards public reporting (Leapfrog Group; Pacific Business Group on Health; US Department of Health and Human Services; Institute of Medicine; US Congress)? In spite of its theoretical appeal, making public reporting work requires successfully addressing several challenges, most notably designing and implementing a reporting system that is appropriate for its purpose. We suspect that these 'upstream' design and implementation issues affected the more 'downstream' selection and change pathways and the endpoints we studied: effectiveness of care, patient safety and patient-centredness. Evidence suggests that poorly constructed report cards may impair consumers' comprehension and lead them to make decisions that are inconsistent with their healthcare goals (Hibbard et al, 2000). Earlier studies suggested that consumers, providers and group purchasers were not actually seeking out and using this information (Marshall et al, 2000; Hibbard et al, 1997; Hannan et al, 1997; Schneider and Epstein, 1996; Berwick and Wald, 1990; Robinson and Brodie, 1997), although recent pay-for-performance initiatives have increased use of performance results among purchasers (Grumbach et al, 1998; Henley, 2005; Rosenthal et al, 2005). It is possible that design and implementation issues, if sufficiently improved, could increase the effect of publicly-reported performance data on effectiveness, patient safety and patient-centred care.
In particular, the link to pay-for-performance schemes in both the USA and the UK is increasing providers' attention to how performance data are measured, analysed and used.


While our review focused on the effects of publicly-reported performance data on quality, the effects on another area identified by the IOM as important for a well-functioning healthcare system – equity – deserve some discussion. This much-debated topic was recently reviewed by Werner and Asch (2005). A number of studies that we identified in our review described a reluctance to care for high-risk patients after the New York State Cardiac Surgery Reporting System performance data were released (Werner and Asch, 2005; Moscucci et al, 2005; Narins et al, 2005). Werner et al (2005) examined whether physicians responded to public reporting by avoiding high-risk patients and whether this resulted in access problems for minorities. They found that, after the report card was released, New York State showed greater disparity between racial groups in CABG use than other states did.

Our study has several limitations. Although we reviewed the published empirical literature, additional studies in the 'grey' or trade literature on the effects of publicly-released performance results may be available. For example, a National Committee for Quality Assurance (NCQA) report (2006) suggests that report cards are associated with improvements in quality of care. Furthermore, virtually all the studies report on experiences in the USA. Our search was not restricted by country; since the US has the longest experience with public reporting, it is plausible that the vast majority of published research concerns this country's experience. Still, it is conceivable that we did not identify published or grey literature studies from other countries. Indeed, during the peer review of this paper three studies from the UK were identified that were not captured by our literature search. Because of the heterogeneity of the articles we were not able to perform meta-analyses to provide a better synthesis of the results. Finally, many of the early studies are now more than 10 or even 15 years old.
Improvements in report card design and implementation may limit the generalisability of these older data to current public reporting systems. The numerous systems that now summarise performance results for public use (for example, New York State Department of Health, New Jersey Department of Health and Senior Services, HealthGrades) create ample opportunities for research to fill the gaps in our knowledge.

Future research should focus on three lines of work. First, additional data points are needed: more of the existing reporting systems should be evaluated in terms of their effect on consumer and provider behaviour as well as on endpoints of care. This includes the effects of public reporting in other countries; notably absent are evaluations of public reporting in England, which has had systems in place for several years. Rigorous evaluation designs with a plausible comparison strategy should be employed to distinguish secular trends and bias from the effect of the intervention. Second, as with many interventions, reporting systems differ in design and implementation; more insight is needed into how these differences affect the impact of report cards. Third, it is important to investigate empirically the causal pathways through which public reporting influences quality of care. In the absence of a co-ordinated funding and research strategy, we can expect only more studies limited to a few reporting systems and little improvement in the availability of information about the effects of many of the public reporting systems currently in prominent use.


Evidence table: Summary of evidence from the literature search

Baker et al, 2002
Design: Time series
Data set: Cleveland Health Quality Choice (CHQC) database
Aim: To examine mortality trends associated with the CHQC programme
Setting and subjects: Cleveland hospitals; Medicare patients hospitalised with acute myocardial infarction, heart failure, gastrointestinal haemorrhage, obstructive pulmonary disease, pneumonia or stroke (1991 to 1997)
Key findings: Risk-adjusted in-hospital mortality declined for most conditions, but mortality in the early post-discharge period rose for most conditions; the 30-day mortality rate declined only for heart failure and obstructive pulmonary disease, and increased for stroke.

Baker et al, 2003
Design: Time series
Data set: Cleveland Health Quality Choice database
Aim: To examine hospitals' market share and risk-adjusted mortality from 1991 to 1997 at hospitals participating in CHQC
Setting and subjects: Cleveland hospitals, 1991 to 1997
Key findings: No overall relationship between higher than expected mortality rates and market share. Hospital outlier status was not related to changes in risk-adjusted 30-day mortality overall; one of three high-outlier hospitals did improve significantly.

Beaulieu, 2002
Design: Observational cohort
Data set: Enrolment and plan data – Harvard employees; quality of health plan from HEDIS measures
Aim: To analyse the effects of providing information about plan quality on consumers' health plan choices in a private employment setting
Setting and subjects: Harvard employees and their health plan choices between 1994 and 1997
Key findings: Provision of quality information had a small but statistically significant effect on health plan choices.

Bentley and Nash, 1998
Design: Survey
Data set: Pennsylvania Health Care Cost Containment Council; risk-adjusted CABG mortality
Aim: To determine whether the Pennsylvania Health Care Cost Containment Council's Consumer Guide to CABG, which compared in-hospital mortality rates, led to more changes in CABG policies and practices in Pennsylvania hospitals than in New Jersey hospitals, which were not required to publicly report performance results
Setting and subjects: Pennsylvania and New Jersey; hospitals providing CABG surgery in these states; key informants at the hospitals identified by their chief executive officers (1995–96)
Key findings: Responses in Pennsylvania hospitals included recruiting staff and starting continuous quality improvement programmes to improve CABG procedures. More changes in Pennsylvania than in New Jersey hospitals (no formal statistical testing because of small sample size).

Bost, 2001
Design: Observational cohort
Data set: (1) HEDIS and CAHPS; (2) NCQA's commercial health plan database
Aim: To compare HEDIS and CAHPS results for plans that publicly report data with those that do not, over a three-year period (1997–99)
Setting and subjects: Commercial health plans across the USA
Key findings: Technical performance measures and patient experience measures (except communication) were higher for health plans that publicly report data.

Burack et al, 1999
Design: Descriptive
Data set: New York Cardiac Surgery Reporting System
Aim: To examine the effects on the practice of cardiac surgery, as perceived by surgeons
Setting and subjects: New York; all New York cardiac surgeons performing CABG; 104 New York cardiac surgeons (69 per cent response rate) (1997)
Key findings: 62 per cent of cardiac surgeons refused to operate on at least one high-risk CABG patient over the prior year, primarily because of public reporting.

Clough et al, 2002
Design: Observational cohort
Data set: Cleveland Health Quality Choice database; Ohio Hospital Association inpatient discharge data, 1992–95
Aim: (1) To verify that a decline in mortality occurred after public reporting of hospital mortality data; (2) To better understand the possible relationship between reporting hospital mortality data through CHQC and inpatient mortality in Cleveland
Key findings: No statistical difference in the rate of decline in combined mortality in Cleveland compared to the rest of Ohio.

Chassin, 2002
Design: Case series
Data set: (1) New York State Cardiac Surgery Reports; (2) interviews with key physicians, hospital administrators and state officials in New York State
Aim: To study how physicians, hospitals and the market responded to the New York State Department of Health's release of annual data on risk-adjusted mortality following coronary artery bypass graft surgery
Setting and subjects: Interviews with select physicians, hospital administrators and state officials in New York State in spring 2001
Key findings: Small changes in market share, less than half the time in the expected direction; increase in quality improvement activity (for example, staffing policy changes, multidisciplinary approach to examining care processes, changes in operating room schedule).

Cutler, Huckman and Landrum, 2004
Design: Cross sectional
Data set: New York State Cardiac Surgery Reporting System
Aim: (1) To examine whether report cards affect the distribution of where patients go for bypass surgery; (2) To determine whether report cards lead to improved medical quality among hospitals identified as particularly bad or good performers
Setting and subjects: Patients who underwent bypass surgery in New York between 1991 and 1999
Key findings: Being identified as a high-mortality hospital was associated with a decline of approximately 4.9 bypass-surgery patients per month during the 12 months following that designation, due to decreases in the number of low-severity cases. Hospitals flagged as low-mortality showed little evidence of mortality changes.

Dafny and Dranove, 2005
Design: Time series
Data set: Medicare enrolment data from Medicare Managed Care Quarterly State/County Plan Data Files for December of each year from 1994–2002; quality scores extracted from Medicare HEDIS files and the Medicare Compare databases; CAHPS measures
Aim: To examine the relationship between Medicare HMO enrolment and quality before and after report cards were mailed to 40 million Medicare beneficiaries in 1999 and 2000
Setting and subjects: Medicare beneficiaries in the USA (1994–2002)
Key findings: Enrolees switched into higher quality plans independently of the report cards issued in 1999 and 2000. This market learning attenuated over time and was most pronounced in markets in which US News provided report cards and in which migration and prior HMO experience were relatively low. The Medicare report cards were associated with switching even after controlling for market learning. Consumer satisfaction, rather than quality measures such as mammography rate, affected enrolment. Report cards encouraged a substantial amount of switching among enrolees already in Medicare HMOs, but drew only a small fraction of enrolees in traditional Medicare into Medicare HMOs.

Dranove, 2003
Design: Observational cohort
Data set: New York Cardiac Surgery Reporting System; Pennsylvania Health Care Cost Containment Council
Aim: To study the effects of public reporting in New York and Pennsylvania
Setting and subjects: New York and Pennsylvania; all New York and Pennsylvania hospitals performing CABG; Medicare beneficiaries and hospitals found in a Medicare claims data set (not specified) and hospitals participating in the American Hospital Association annual survey (1987 to 1994)
Key findings: Report cards were associated with a shift in CABG use to healthier patients, leading to worse cardiac outcomes, especially among sicker patients (defined as those with higher hospital expenditures and days in hospital).


Dziuban et al, 1994
Design: Case study
Data set: New York Cardiac Surgery Reporting System; risk-adjusted mortality data
Aim: To document a hospital's response to being identified as a high risk-adjusted mortality outlier in the CSRS
Setting and subjects: New York; all New York hospitals providing CABG surgery; one outlier hospital (1992 to 1993)
Key findings: Quality improvement activity increased (change in timing and technique used for patients undergoing emergent CABG, change in hospital policies).

Farley et al, 2002a
Design: RCT
Data set: (1) CAHPS; (2) Iowa Medicaid beneficiary enrolee files
Aim: To examine whether CAHPS information on plan performance affected health plan choices by new beneficiaries in Iowa Medicaid
Setting and subjects: New Iowa Medicaid beneficiaries in select counties, 2000
Key findings: No effect on HMO choices overall.

Farley et al, 2002b
Design: RCT
Data set: (1) CAHPS; (2) New Jersey Medicaid office data file and consumer interviews
Aim: To assess the effects of CAHPS health plan performance information on plan choices and decision processes by New Jersey Medicaid beneficiaries
Setting and subjects: New Jersey Medicaid beneficiaries, 1998
Key findings: No effect on HMO choices overall. Participants who read the report card and did not select the dominant HMO chose the HMO with higher CAHPS scores.

Ghali et al, 1997
Design: Observational cohort
Data set: New York Cardiac Surgery Reporting System; risk-adjusted CABG mortality
Aim: To compare trends in CABG-related mortality in Massachusetts (a state without statewide public reporting of CABG outcomes) with New York (a state with public reporting) and northern New England
Setting and subjects: New York; all New York hospitals performing CABG; 12 Massachusetts hospitals performing cardiac surgery (except Veterans Affairs hospitals) and hospitals contained in the HCFA hospital 30-day unadjusted mortality dataset (1990, 1992 and 1994)
Key findings: Risk-adjusted mortality rate (RAMR) reductions in Massachusetts were comparable to mortality reductions in New York and northern New England; unadjusted mortality trends were similar in Massachusetts, New York, northern New England and the USA.


Hannan et al, 1994a
Design: Observational cohort
Data set: New York Cardiac Surgery Reporting System (CSRS); risk-adjusted CABG mortality data
Aim: To assess changes in in-hospital mortality rates of CABG patients following the publication of mortality data in the CSRS
Setting and subjects: New York; all New York hospitals performing CABG; 57,187 patients undergoing CABG (1989 to 1992)
Key findings: RAMR decreased from 4.17 per cent to 2.45 per cent.

Hannan et al, 1994b
Design: Observational cohort
Data set: New York Cardiac Surgery Reporting System; risk-adjusted CABG mortality data
Aim: To determine whether mortality rate outlier status was associated with overall improvement in risk-adjusted mortality and changes in provider volume of CABG operations performed following the implementation of the CSRS
Setting and subjects: New York; all New York hospitals performing CABG (1989 to 1992)
Key findings: No association overall between mortality rate outlier status and hospital volume.

Hannan et al, 1995
Design: Observational cohort
Data set: New York Cardiac Surgery Reporting System
Aim: To examine the longitudinal relationship between surgeon volume and in-hospital mortality for CABG surgery in New York and explain changes in mortality over time
Setting and subjects: New York; all New York cardiac surgeons performing CABG; 57,187 patients undergoing isolated CABG surgery in New York (1989 to 1992)
Key findings: The percentage of patients undergoing CABG surgery by low-volume surgeons decreased from 7.6 per cent in 1989 to 5.7 per cent in 1992.

Harris, 2002
Design: Experimental study
Data set: (1) CAHPS-like and HEDIS-like measures; (2) survey responses
Aim: (1) To investigate the impact of quality information on the willingness of consumers to enrol in health plans that restrict provider access; (2) To assess the relative impact of consumer- and expert-assessed quality information on willingness to enrol in restrictive plans
Setting and subjects: West Los Angeles (RAND, mall); ages 25–64, privately insured; 2000
Key findings: Provision of report cards with information about health plan quality reduced the importance of provider network features.


Hibbard, Stockard and Tusler, 2003
Design: Controlled trial
Data set: (1) Hospital safety report by the Alliance (a large employer-purchasing cooperative in Madison, Wisconsin) containing Wisconsin Bureau of Health Information data sets; (2) interviews
Aim: To evaluate the impact on quality improvement of reporting hospital performance publicly versus privately back to hospitals
Setting and subjects: Wisconsin, 2001–2002; CEOs/medical directors and quality improvement directors of each hospital
Key findings: QualityCounts hospitals did not engage in more quality improvement overall compared to confidential-reporting and no-report hospitals, but they did engage in a statistically higher number of quality improvement efforts specific to the areas included in the reports.

Hibbard, Stockard and Tusler, 2005
Design: Analysis of time trend
Data set: QualityCounts – 24 hospitals in south central Wisconsin; quality data for the report from Wisconsin Bureau of Health Information inpatient public use data sets
Aim: To assess the long-term impact of a public hospital performance report on consumers and hospitals
Setting and subjects: Hospitals in south central Wisconsin
Key findings: No changes in market share for hospitals with publicly-reported data (no results given for internal- or no-reporting groups). Out of seven possible activities, the mean number of quality improvement activities was 4.1 overall; 5.7 for hospitals with improved ratings; 2.6 for hospitals with no change in ratings; 4 for hospitals with a decrease in ratings (no formal statistical testing). Performance feedback, whether public or confidential, was associated with improved performance.


Jha and Epstein, 2006
Design: Time series (for market share analysis)
Data set: New York State Cardiac Surgery Reporting System
Aim: (1) To determine whether hospitals' or surgeons' performance affects market share; (2) To evaluate the association between a surgeon's performance and the likelihood of discontinuing his/her practice in the state; (3) To report whether surgeons who cease practising identify the reporting system as a factor in their decision to cease practising in the state
Setting and subjects: Patients who underwent bypass surgery in New York between 1989 and 2002
Key findings: No relationship between ranking and subsequent market share. Performance in the bottom quartile was associated with increased odds of ceasing practice (odds ratio for ceasing practice = 3.5; 95 per cent CI 1.35 to 9.01).

Jin and Sorensen, 2006
Design: Observational cohort
Data set: (1) Office of Personnel Management health plan enrolment records; public and nonpublic NCQA ratings; (2) health plan ratings
Aim: (1) To measure the impact of health plan ratings on individuals' choices (new enrolees and switching behaviour) by evaluating the correlation between plan ratings and unobserved plan quality for public versus nonpublic plans; (2) To use these estimates to calculate the value of the information to consumers
Setting and subjects: Federal government annuitants (federal employees, retirees and surviving family members of deceased federal employees), 1995–2000; analysis focused on the 86 counties with the greatest number of public and nonpublic plans operating simultaneously, 1998–99 (because HEDIS/CAHPS became widely available in 1997)
Key findings: Overall, inertia in health plan enrolment decisions. For individuals affected by performance ratings, better scores were associated with an increased likelihood of selecting the plan; a one standard deviation increase in the report card measure of quality increased the likelihood of plan selection by 2.63 percentage points.


Longo et al, 1997

Descriptive

Obstetrics consumer report

To examine the impact of Missouri Department of Health’s obstetrics consumer report, which provides structure, process, and outcomes measures, on quality improvement activity and clinical outcomes

Missouri; all hospitals providing obstetric care; key informant designated by hospital administrators at 82 hospitals (93 per cent response rate) (1994)

Hospitals instituted services (for example, hospital policy that infants ride in car seats upon discharge, formal neonatal transfer agreements) after the reports were published.

Luce et al, 1996

Descriptive

California Hospital Outcomes Project; risk-adjusted myocardial infarction mortality

To describe quality improvement activity following the California CHOP report featuring risk-adjusted outcomes

California; all California non-federal hospitals; 17 out of 22 public hospitals that are members of the California Association of Public Hospitals and Health Systems (1993 to 1994)

Minimal impact on quality improvement activity

Mannion and Goddard, 2001

Survey

Scottish clinical resource and audit group clinical outcome indicators

Assess the impact of publication of clinical outcomes on Scottish hospitals

Eight Scottish hospitals

Data were rarely cited as informing quality improvement activities; concerns raised about validity, timeliness and lack of external incentives.

Mannion and Goddard, 2003

Case studies, surveys

Scottish clinical resource and audit group clinical outcome indicators

Assess the impact of publication of clinical outcomes on several stakeholders

Eight Scottish hospitals, 150 primary care physicians, survey of Health Councils

Little use of data to stimulate quality improvement in hospitals, most doctors were aware of data but rarely used it, most Health Councils were unaware of the data.

Mannion and Goddard, 2004

Survey

Scottish clinical resource and audit group clinical outcome indicators

Assess the impact of publication of clinical outcomes on general practitioners

150 general practitioners

78 per cent of GPs were aware of the data, but the data were only rarely discussed with patients or others.


Mannion, Davies, and Marshall, 2005

Case series

(1) British National Health Service ‘star’ performance ratings; (2) Semi-structured interviews and documentary analysis

To explore impact of star performance ratings

Managers and senior clinicians in acute hospital trusts in England

Ratings transmitted important priorities from central government and helped direct and concentrate front-line resources. However, public reporting also led to tunnel vision, distortion of clinical priorities and a disincentive to improve performance among high-rated organisations.

McCormick et al, 2002

Observational cohort

NCQA HEDIS data

To assess the relationship between health plan performance and participation in public reporting programmes

US; commercial health plans (HMO only) (1997 to 1999)

Lower-scoring plans were more likely than higher-scoring plans to stop disclosing their quality data publicly. Plans in the lowest tertile in 1997 were 2.2 to 7.0 times more likely than other plans to withdraw from public reporting in 1998.

Mennemeyer, Morrisey and Howard, 1997

Observational cohort

HCFA hospital-specific mortality rates for Medicare patients

To assess the relationship between the release of HCFA hospital-specific mortality rates and utilisation (discharges), and to compare the impact on utilisation of publicly releasing HCFA mortality rates with that of press reports of unexpected deaths

US; community hospitals treating Medicare patients (1984 to 1992)

Hospitals with mortality rates twice those expected by HCFA had less than one fewer discharge per week in the first year; a press report of a single unexpected death was associated with a 9 per cent reduction in hospital discharges within one year.

Moscucci et al, 2005

Observational cohort

New York Cardiac Surgery Reporting System

To measure the effect of the New York State PCI report on case selection for percutaneous coronary intervention (PCI) by comparing Michigan’s and New York’s adjusted and unadjusted in-hospital mortality rates

New York; all New York hospitals performing CABG; 11,374 patients in a multicentre (eight-hospital) PCI database in Michigan and 69,048 patients in a statewide (34-hospital) PCI database in New York (1998 to 1999)

Unadjusted mortality rates were lower in New York than in Michigan (0.83 per cent v. 1.54 per cent), but adjusted mortality rates were not statistically different. Significant case-mix differences between PCI patients in Michigan and New York suggest a propensity in New York not to intervene on high-risk patients.

Mukamel and Mushlin, 1998

Observational cohort

New York State Cardiac Surgery Reporting System

To measure the relationship between provider ratings in the CSRS and rates of growth in fee-for-service market share

New York; all New York hospitals performing CABG (1990 to 1993)

Hospitals with better outcomes experienced higher rates of growth in market share.

Mukamel et al, 2000

Cross-sectional study

(1) New York State Cardiac Surgery Reports; (2) Telephone interviews with, and contracting data from, the majority of managed care organisations (MCOs) licensed in New York State

To answer two related questions: (1) Do managed care organisations (MCOs) in New York State consider quality when they choose cardiac surgeons? (2) Do they use information about risk-adjusted mortality rates (RAMR) provided in the New York State Cardiac Surgery Reports?

Interviews with decision makers within MCOs who are responsible for the selection of providers in New York State

20 per cent indicated that the CSRS reports were a major factor in their contracting decisions. Actual contracting patterns show that MCOs contract on the basis of a surgeon's designation as a high-quality outlier, but they do not make choices based on poor-quality outlier designation or actual RAMR.


Mukamel et al, 2002

Observational cohort

(1) New York State Cardiac Surgery Reports; (2) Data on MCOs’ panel composition with respect to hospitals and cardiac surgeons

To evaluate the association between MCOs' contracting practices with cardiac surgeons and the quality of those surgeons

Cardiac surgeons offering coronary artery bypass graft (CABG) surgery and 78 per cent of MCOs in New York State in 1998

Contract probability decreased with increasing RAMR and increased with high-quality outlier status in downstate New York.

Mukamel et al, 2004/05

Observational cohort

(1) New York State Cardiac Surgery Reports; (2) NYS Medicare enrolees files

(1) To compare selection of surgeons before and after report publication, to determine whether the reports influence the selection of cardiac surgeon and diminish the importance of surgeon experience (defined as years since medical school) and price (2) To determine whether this effect differs by race of the patient

NYS Medicare FFS enrolees who had CABG during 1991 and 1992

For the average patient, the CSRS influenced selection of cardiac surgeon and diminished the importance of surgeon experience and price as signals for quality. More affluent and more educated neighbourhoods were more likely to be treated by low RAMR surgeons in the post-report period. Patients from lower socioeconomic areas were more likely to be treated by high RAMR surgeons in the post-report period.

Narins et al, 2005

Descriptive

Percutaneous interventions in New York State report

To assess the influence of the NYS PCI report on the physicians being monitored

New York; all New York physicians and hospitals performing PCI; interventional cardiologists (120 respondents, 65 per cent response rate) (2003)

79 per cent of interventional cardiologists agreed or strongly agreed that public reporting had influenced their decisions on whether to perform angioplasty on individual patients and on critically ill patients with high expected mortality rates.

Omoigui et al, 1996

Observational cohort

New York cardiac surgery reporting system; Cleveland clinic CABG patients

To determine if dissemination of CSRS mortality data was associated with outmigration of high-risk patients to undergo treatment at the Cleveland Clinic

New York, Cleveland; all hospitals performing CABG in New York State; 9442 patients receiving CABG at the Cleveland Clinic (1989 to 1993)

Patients from New York State receiving CABG at the Cleveland Clinic had higher expected mortality than the New York statewide case mix, than patients from Ohio, and than patients from other states/countries.

Peterson et al, 1998

Observational cohort

New York cardiac surgery reporting system; riskadjusted CABG mortality

To examine the impact of the CSRS on in-hospital mortality rates by comparing unadjusted mortality rates in New York to other states

New York; all New York hospitals performing CABG; Medicare patients aged 65 or older who underwent CABG in a US hospital (1987 to 1992)

Both unadjusted and risk-adjusted mortality rates in New York declined more than in other states.

Rainwater, Romano and Antonius, 1998

Interviews

California Hospital Outcomes project; risk-adjusted myocardial infarction mortality data

To describe the impact of publicly reporting California’s CHOP risk-adjusted 30-day inpatient mortality rates for patients with acute myocardial infarction on quality improvement activity

California; California non-federal acute care hospitals; 39 key informants at a sample of acute care hospitals in California (1996 to 1997)

Minimal impact on quality improvement activity (two-thirds of respondents indicated no specific QI activity).

Romano and Zhou, 2004

Time series

(1) California Patient Discharge Data Set; (2) New York State Statewide Planning and Research Cooperative System discharge abstract data

To determine whether hospitals recognised as performance outliers experience volume changes after publication of a report card

Patients admitted to non-federal hospitals designated as outliers in reports on CABG mortality in New York, AMI mortality in California, and post-discectomy complication in California

No statistically significant AMI-related volume changes among outlier hospitals. Slight increase in lumbar discectomy-related volume for low-complication outliers. Transient increase in CABG volume for low-mortality hospitals and transient decrease in volume for high-mortality outliers.

Rosenthal, Quinn and Harper, 1997

Time series

Cleveland Health Quality Choice hospital mortality for selected conditions

To measure changes in hospital mortality following the implementation of the CHQC reporting initiative, which publicly released in-hospital mortality rates

Cleveland; Cleveland hospitals; 101,060 consecutive eligible discharges with 8 diagnoses (acute myocardial infarction, heart failure, obstructive airway disease, gastrointestinal haemorrhage, pneumonia, stroke, CABG, and lower bowel resection) from 30 north-eastern Ohio hospitals (1992 to 1993)

Risk-adjusted mortality for most conditions declined from 7.5 per cent to 6.8 per cent, 6.8 per cent and 6.5 per cent over the three periods following publication. Declines in mortality rates were statistically significant in weighted linear regression analyses for heart failure (0.50 per cent per period) and pneumonia (0.38 per cent per period).

Rosenthal et al, 1998

Case series

Cleveland Health Quality Choice hospital outcomes

To measure quality improvement activity following release of CHQC reports of mortality rates, length of stay, and caesarean section rates (all measures severity-adjusted)

Cleveland; Cleveland hospitals; one academic and three community hospitals of varying size

Quality improvement activities increased (for example, interdisciplinary process improvement teams, review of processes of care, development of practice guidelines).

Scanlon et al, 2002

Observational cohort

(1) HEDIS; patient satisfaction survey; GM consultants' operational performance ratings; (2) GM enrolment records

To determine how the release of health plan performance ratings influences employees' health plan choices

GM employees, 1996–97

Employees avoided plans with many below-average ratings and were willing to pay more to avoid plans with lower ratings ($41/month to avoid a plan with one extra below-average rating), but were not strongly attracted to plans with many superior ratings.


Schneider and Epstein, 1996

Descriptive

Pennsylvania Health Care Cost Containment Council

To assess the influence of the Pennsylvania Consumer Guide to CABG Surgery on cardiologists and cardiac surgeons

Pennsylvania; all Pennsylvania cardiac surgeons performing CABG; randomly selected cardiologists and cardiac surgeons practising in Pennsylvania (65 per cent overall response rate) (1995)

59 per cent of cardiologists reported increased difficulty finding surgeons willing to perform CABG in severely ill patients who required it. 63 per cent of cardiac surgeons reported less willingness to operate on such patients.

Spranca et al, 2000

Experimental study

CAHPS, experimental data of consumers’ health plan choices

To learn whether consumer reports of health plan quality can affect health plan selection

311 privately insured adults in Los Angeles

When plans had high CAHPS ratings, participants were willing to enrol in less expensive plans that restrict services.

Tu and Cameron, 2003

Descriptive (survey)

Survey data of hospitals in Ontario. Quality information from Cardiovascular Health and Services in Ontario

To determine the impact of Canada's first release of an AMI hospital-specific quality report on hospitals in Ontario

Survey of physicians/CEOs of hospitals in Ontario in 2000

54 per cent of respondents indicated that one or more changes were made at their hospital in response to public reporting.

Vladeck et al, 1988

Analysis of time trend

HCFA hospital-specific mortality rates for all Medicare patients

To examine the relationship between mortality-rate outlier status and hospital CABG volume/quality improvement activity following HCFA's release of hospital mortality rates

New York; all New York general acute hospitals serving Medicare patients (~1985 to ~1986)

No statistically significant effect on occupancy rates.

Wedig and Tai-Seale, 2002

Observational cohort

(1) Federal Employee Health Benefit guides 1995 and 1996; financial costs from Checkbook Guide to Health Insurance Plans for Federal employees; (2) Office of Personnel Management

(1) To describe effects of report card dissemination on consumers’ choice of health plan (2) To determine if new employees are more influenced by report cards (3) To determine if report cards affect the measured price elasticity of demand

Federal employees with single person HMO coverage residing in counties with 5 or fewer unique plans (new and existing employees), 1995–96

Dissemination of report cards influenced plan selection. Employees were more likely to select plans with better quality ratings. One standard deviation increase in report card measure of quality increased the likelihood of plan selection by more than 50 per cent.

Werner, Asch, and Polsky, 2005

Observational cohort

New York Cardiac Surgery Reporting System

To investigate the impact of the CSRS on racial and ethnic disparities in the use of CABG, PTCA and cardiac catheterisation in patients with acute myocardial infarction

New York; all New York hospitals and cardiac surgeons performing CABG; hospital discharges from the New York State Department of Health’s inpatient data files and hospital discharges in a group of comparison states in the Nationwide Inpatient Sample from the HCUP-3 (928,551 patients with acute myocardial infarction) (1988–95)

Racial and ethnic disparity in CABG use increased in New York immediately after implementation of the CSRS (by 2 to 3 percentage points), whereas disparities did not change in the comparison states. These disparities decreased to levels similar to report card pre-release levels over time. No differences in PTCA or cardiac catheterisation were seen after the CABG report card was released.


References

Agency for Healthcare Research and Quality. 'CAHPS overview'. Available at https://www.cahps.ahrq.gov/content/cahpsOverview/OVER_Intro.asp. Accessed 15 May 2006.
Agency for Healthcare Research and Quality (2004). 2004 National Healthcare Quality Report. US Department of Health and Human Services.
Asch SM, Kerr E, Keesey J, Adams JL, Setodji CM, Malik S and McGlynn EA (2006). 'Who is at greatest risk for receiving poor-quality health care?'. New England Journal of Medicine, vol 354, pp 1147–56.
Asch SM, McGlynn EA, Hogan MM, Hayward RA, Shekelle P, Rubenstein L, Keesey J, Adams J and Kerr EA (2004). 'Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample'. Annals of Internal Medicine, vol 141, pp 938–45.
Baker DW, Einstadter D, Thomas C, Husak S, Gordon NH and Cebul RD (2003). 'The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals'. Medical Care, vol 41, pp 729–40.
Baker DW, Einstadter D, Thomas CL, Husak SS, Gordon NH and Cebul RD (2002). 'Mortality trends during a program that publicly reported hospital performance'. Medical Care, vol 40, pp 879–90.
Barry CA, Bradley CP, Britten N, Stevenson FA and Barber N (2000). 'Patients' unvoiced agendas in general practice consultations: qualitative study'. British Medical Journal, vol 320, pp 1246–50.
Bates DW, Cohen M, Leape LL, Overhage JM, Shabot MM and Sheridan T (2001). 'Reducing the frequency of errors in medicine using information technology'. Journal of the American Medical Informatics Association, vol 8, pp 299–308.
Beaulieu ND (2002). 'Quality information and consumer health plan choices'. Journal of Health Economics, vol 21, pp 43–63.
Bentley JM and Nash DB (1998). 'How Pennsylvania hospitals have responded to publicly released reports on coronary artery bypass graft surgery'. The Joint Commission Journal on Quality Improvement, vol 24, pp 40–49.
Berwick DM, James B and Coye MJ (2003). 'Connections between quality measurement and improvement'. Medical Care, vol 41 (1 Suppl), pp I30–38.
Berwick DM and Wald DL (1990). 'Hospital leaders' opinions of the HCFA mortality data'. Journal of the American Medical Association, vol 263, pp 247–49.
Bost JE (2001). 'Managed care organisations publicly reporting three years of HEDIS measures'. Managed Care Interface, vol 14 (9), pp 50–54.


Burack JH, Impellizzeri P, Homel P and Cunningham JN Jr (1999). 'Public reporting of surgical mortality: a survey of New York State cardiothoracic surgeons'. Annals of Thoracic Surgery, vol 68, pp 1195–200; discussion pp 1201–02.
Campbell S, Reeves D, Kontopantelis E, Middleton E, Sibbald B and Roland M (2007). 'Quality of primary care in England with the introduction of pay for performance'. New England Journal of Medicine, vol 357, pp 181–90.
Chassin MR (2002). 'Achieving and sustaining improved quality: lessons from New York State and cardiac surgery'. Health Affairs (Millwood), vol 21 (4), pp 40–51.
Clough JD, Engler D, Snow R and Canuto PE (2002). 'Lack of relationship between the Cleveland Health Quality Choice project and decreased inpatient mortality in Cleveland'. American Journal of Medical Quality, vol 17, pp 47–55.
Cutler DM, Huckman RS and Landrum MB (2004). 'The role of information in medical markets: an analysis of publicly reported outcomes in cardiac surgery'. National Bureau of Economic Research Working Paper 10489. NBER.
Dafny L and Dranove D (2005). 'Do report cards tell consumers anything they don't already know? The case of Medicare HMOs'. National Bureau of Economic Research Working Paper 11420. NBER.
Dranove D, Kessler D, McClellan M and Satterthwaite M (2003). 'Is more information better? The effects of "report cards" on health care providers'. Journal of Political Economy, vol 111, pp 555–88.
Dziuban SW Jr, McIlduff JB, Miller SJ and Dal Col RH (1994). 'How a New York cardiac surgery program uses outcomes data'. Annals of Thoracic Surgery, vol 58, pp 1871–76.
Farley DO, Elliott MN, Short PF, Damiano P, Kanouse DE and Hays RD (2002a). 'Effect of CAHPS performance information on health plan choices by Iowa Medicaid beneficiaries'. Medical Care Research and Review, vol 59, pp 319–36.
Farley DO, Short PF, Elliott M, Kanouse D, Brown J and Hays R (2002b). 'Effects of CAHPS health plan performance information on plan choices by New Jersey Medicaid beneficiaries'. Health Services Research, vol 37, pp 985–1007.
Fung CH, Lim Y-W, Mattke S, Damberg C and Shekelle PG (2008). 'Systematic review: the evidence that publishing patient care performance data improves quality of care'. Annals of Internal Medicine, vol 148, pp 111–23.
Ghali WA, Ash AS, Hall RE and Moskowitz MA (1997). 'Statewide quality improvement initiatives and mortality after cardiac surgery'. Journal of the American Medical Association, vol 277, pp 379–82.
Grumbach K, Osmond D, Vranizan K, Jaffe D and Bindman A (1998). 'Primary care physicians' experience of financial incentives in managed-care systems'. New England Journal of Medicine, vol 339, pp 1516–21.
Handler JA, Gillam M, Sanders AB and Klasco R (2000). 'Defining, identifying and measuring error in emergency medicine'. Academic Emergency Medicine, vol 7, pp 1183–88.


Hannan EL, Kilburn H Jr, Racz M, Shields E and Chassin MR (1994a). 'Improving the outcomes of coronary artery bypass surgery in New York State'. Journal of the American Medical Association, vol 271, pp 761–66.
Hannan EL, Kumar D, Racz M, Siu AL and Chassin MR (1994b). 'New York State's Cardiac Surgery Reporting System: four years later'. Annals of Thoracic Surgery, vol 58, pp 1852–57.
Hannan EL, Siu AL, Kumar D, Kilburn H Jr and Chassin MR (1995). 'The decline in coronary artery bypass graft surgery mortality in New York State. The role of surgeon volume'. Journal of the American Medical Association, vol 273, pp 209–13.
Hannan EL, Stone CC, Biddle TL and DeBuono BA (1997). 'Public release of cardiac surgery outcomes data in New York: what do New York state cardiologists think of it?'. American Heart Journal, vol 134, pp 55–61.
Harris KM (2002). 'Can high quality overcome consumer resistance to restricted provider access? Evidence from a health plan choice experiment'. Health Services Research, vol 37, pp 551–71.
HealthGrades. Available at http://www.healthgrades.com. Accessed 10 May 2006.
Henley E (2005). 'Pay-for-performance: what can you expect?'. Journal of Family Practice, vol 54, pp 609–12.
Hibbard JH, Harris-Kojetin L, Mullin P, Lubalin J and Garfinkel S (2000). 'Increasing the impact of health plan report cards by addressing consumers' concerns'. Health Affairs (Millwood), vol 19 (5), pp 138–43.
Hibbard JH, Jewett JJ, Legnini MW and Tusler M (1997). 'Choosing a health plan: do large employers use the data?'. Health Affairs (Millwood), vol 16 (6), pp 172–80.
Hibbard JH, Stockard J and Tusler M (2005). 'Hospital performance reports: impact on quality, market share, and reputation'. Health Affairs (Millwood), vol 24 (4), pp 1150–60.
Hibbard JH, Stockard J and Tusler M (2003). 'Does publicizing hospital performance stimulate quality improvement efforts?'. Health Affairs (Millwood), vol 22 (2), pp 84–94.
Institute of Medicine. 'Crossing the quality chasm'. Available at http://darwin.nap.edu/books/0309072808/html/R1.html. Accessed 9 May 2006.
Jha AK and Epstein AM (2006). 'The predictive accuracy of the New York state coronary artery bypass surgery report-card system'. Health Affairs (Millwood), vol 25 (3), pp 844–55.
Jin GZ and Sorensen AT (2006). 'Information and consumer choice: the value of publicized health plan ratings'. Journal of Health Economics, vol 25, pp 248–75.
Kuperman GJ, Teich JM, Gandhi TK and Bates DW (2001). 'Patient safety and computerized medication ordering at Brigham and Women's Hospital'. Joint Commission Journal of Quality Improvement, vol 27, pp 509–21.


Lannon CM, Coven BJ, France FL, Hickson GB, Miles PV, Swanson JT, Takayama JI, Wood DL and Yamamoto L (2001). 'Principles of patient safety in pediatrics'. Pediatrics, vol 107, pp 1473–75.
Lansky D (2002). 'Improving quality through public disclosure of performance information'. Health Affairs (Millwood), vol 21 (4), pp 52–62.
Leapfrog Group. Available at www.leapfroggroup.org/cp. Accessed 10 May 2006.
Lohr K ed (1990). Medicare: A strategy for quality assurance. Institute of Medicine, National Academy Press.
Longo DR, Land G, Schramm W, Fraas J, Hoskins B and Howell V (1997). 'Consumer reports in health care. Do they make a difference in patient care?'. Journal of the American Medical Association, vol 278, pp 1579–84.
Luce JM, Thiel GD, Holland MR, Swig L, Currin SA and Luft HS (1996). 'Use of risk-adjusted outcome data for quality improvement by public hospitals'. Western Journal of Medicine, vol 164, pp 410–14.
Mannion R, Davies H and Marshall M (2005). 'Impact of star performance ratings in English acute hospital trusts'. Journal of Health Services Research and Policy, vol 10, pp 18–24.
Mannion R and Goddard M (2001). 'Impact of published clinical outcomes data: case study in NHS hospital trusts'. British Medical Journal, vol 323, pp 260–63.
Mannion R and Goddard M (2003). 'Public disclosure of comparative clinical performance data: lessons from the Scottish experience'. Journal of Evaluation in Clinical Practice, vol 9, pp 277–86.
Mannion R and Goddard M (2004). 'General practitioners' assessments of hospital quality and performance'. Clinical Governance: an international journal, vol 9, pp 42–47.
Marshall MN, Shekelle PG, Davies HTO and Smith PC (2003). 'Public reporting on quality in the United States and the United Kingdom'. Health Affairs (Millwood), vol 22 (3), pp 134–48.
Marshall MN, Shekelle PG, Leatherman S and Brook RH (2000). 'The public release of performance data: what do we expect to gain? A review of the evidence'. Journal of the American Medical Association, vol 283, pp 1866–74.
Marvel MK, Epstein RM, Flowers K and Beckman HB (1999). 'Soliciting the patients' agenda: have we improved?'. Journal of the American Medical Association, vol 281, pp 283–87.
McCormick D, Himmelstein DU, Woolhandler S, Wolfe SM and Bor DH (2002). 'Relationship between low quality-of-care scores and HMOs' subsequent public disclosure of quality-of-care scores'. Journal of the American Medical Association, vol 288, pp 1484–90.
McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A and Kerr EA (2003). 'The quality of health care delivered to adults in the United States'. New England Journal of Medicine, vol 348, pp 2635–45.


Mennemeyer ST, Morrisey MA and Howard LZ (1997). 'Death and reputation: how consumers acted upon HCFA mortality information'. Inquiry, vol 34, pp 117–28.
Moscucci M, Eagle KA, Share D, Smith D, De Franco AC, O'Donnell M, Kline-Rogers E, Jani SM and Brown DL (2005). 'Public reporting and case selection for percutaneous coronary interventions: an analysis from two large multicenter percutaneous coronary intervention databases'. Journal of the American College of Cardiology, vol 45, pp 1759–65.
Mukamel DB and Mushlin AI (1998). 'Quality of care information makes a difference: an analysis of market share and price changes after publication of the New York State Cardiac Surgery Mortality Reports'. Medical Care, vol 36, pp 945–54.
Mukamel DB, Mushlin AI, Weimer D, Zwanziger J, Parker T and Indridason I (2000). 'Do quality report cards play a role in HMOs' contracting practices? Evidence from New York State'. Health Services Research, vol 35 (1 Pt 2), pp 319–32.
Mukamel DB, Weimer DL and Zwanziger J (2004/05). 'Quality report cards, selection of cardiac surgeons, and racial disparities: a study of the publication of the New York State Cardiac Surgery Reports'. Inquiry, vol 41 (4), pp 435–46.
Mukamel DB, Weimer DL, Zwanziger J and Mushlin AI (2002). 'Quality of cardiac surgeons and managed care contracting practices'. Health Services Research, vol 37, pp 1129–44.
Narins CR, Dozier AM, Ling FS and Zareba W (2005). 'The influence of public reporting of outcome data on medical decision making by physicians'. Archives of Internal Medicine, vol 165, pp 83–87.
National Committee for Quality Assurance. 'Health plan report card'. Available at http://hprc.ncqa.org/. Accessed 5 May 2006.
National Committee for Quality Assurance (2005). The State of Healthcare Quality 2005. NCQA.
New Jersey Department of Health and Senior Services. '2005 HMO New Jersey performance report'. Available at http://web.doh.state.nj.us/hpr/. Accessed 5 May 2006.
New York State Department of Health. 'Heart disease'. Available at www.health.state.ny.us/nysdoh/heart/heart_disease.htm#cardiovascular. Accessed 5 May 2006.
Office of Inspector General June Gibbs Brown. 'The external review of hospital quality state initiatives'. Available at http://oig.hhs.gov/oei/reports/oei-01-97-00054.pdf. Accessed 5 May 2006.
Omoigui NA, Miller DP, Brown KJ, Annan K, Cosgrove D III, Lytle B, Loop F and Topol EJ (1996). 'Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes'. Circulation, vol 93, pp 27–33.
Pacific Business Group on Health. 'Value based purchasing'. Available at www.pbgh.org/programs/value_based_purchasing.asp. Accessed 5 May 2006.


Peterson ED, DeLong ER, Jollis JG, Muhlbaier LH and Mark DB (1998). 'The effects of New York's bypass surgery provider profiling on access to care and patient outcomes in the elderly'. Journal of the American College of Cardiology, vol 32, pp 993–99.
Rainwater JA, Romano PS and Antonius DM (1998). 'The California Hospital Outcomes Project: how useful is California's report card for quality improvement?'. The Joint Commission Journal on Quality Improvement, vol 24, pp 31–39.
RAND Health (2001). Consumers and Health Care Quality Information: Need, availability, utility. California Healthcare Foundation.
Robinson S and Brodie M (1997). 'Understanding the quality challenge for health consumers: the Kaiser/AHCPR survey'. The Joint Commission Journal on Quality Improvement, vol 23, pp 239–44.
Romano PS, Rainwater J and Marcin JP (forthcoming). 'Review of the literature on the impact of public reporting of CABG surgery outcomes'.
Romano PS and Zhou H (2004). 'Do well-publicized risk-adjusted outcomes reports affect hospital volume?'. Medical Care, vol 42, pp 367–77.
Rosenthal GE, Hammar PJ, Way LE, Shipley SA, Doner D, Wojtala B, Miller J and Harper DL (1998). 'Using hospital performance data in quality improvement: the Cleveland Health Quality Choice experience'. The Joint Commission Journal on Quality Improvement, vol 24 (7), pp 347–60.
Rosenthal GE, Quinn L and Harper DL (1997). 'Declines in hospital mortality associated with a regional initiative to measure hospital performance'. American Journal of Medical Quality, vol 12, pp 103–12.
Rosenthal MB, Frank RG, Li Z and Epstein AM (2005). 'Early experience with pay-for-performance: from concept to practice'. Journal of the American Medical Association, vol 294, pp 1788–93.
Scanlon DP, Chernew ME, McLaughlin C and Solon G (2002). 'The impact of health plan report cards on managed care enrollment'. Journal of Health Economics, vol 21, pp 19–41.
Schauffler HH and Mordavsky JK (2001). 'Consumer reports in health care: do they make a difference?'. Annual Review of Public Health, vol 22, pp 69–89.
Schneider EC and Epstein AM (1996). 'Influence of cardiac-surgery performance reports on referral practices and access to care. A survey of cardiovascular specialists'. New England Journal of Medicine, vol 335, pp 251–56.
Schneider EC and Lieberman T (2001). 'Publicly disclosed information about the quality of health care: response of the US public'. Quality in Health Care, vol 10, pp 96–103.
Sofaer S, Crofton C, Goldstein E, Hoy E and Crabb J (2005). 'What do consumers want to know about the quality of care in hospitals?'. Health Services Research, vol 40, pp 2018–36.
Spranca M, Kanouse DE, Elliott M, Short PF, Farley DO and Hays RD (2000). 'Do consumer reports of health plan quality affect health plan selection?'. Health Services Research, vol 35, pp 933–47.

Shekelle, Lim, Mattke, Damberg


The Kaiser Family Foundation, Agency for Healthcare Research and Quality, and Harvard School of Public Health (2004). The National Survey on Consumers’ Experiences with Patient Safety and Quality Information. The Kaiser Family Foundation.
Tu JV and Cameron C (2003). ‘Impact of an acute myocardial infarction report card in Ontario, Canada’. International Journal for Quality in Health Care, vol 15, pp 131–37.
US Congress Joint Economic Committee (2006). The Next Generation of Health Information Tools for Consumers. Joint Economic Committee Hearings, 109th Congress. US Government Printing Office.
US Department of Health and Human Services. ‘Hospital compare’. Available at http://www.hospitalcompare.hhs.gov/. Accessed 5 May 2006.
USNews.com. America’s Best Colleges 2006. Available at http://www.usnews.com/usnews/edu/college/rankings/rankindex_brief.php. Accessed 9 May 2006.
Vladeck BC, Goodwin EJ, Myers LP and Sinisi M (1988). ‘Consumers and hospital use: the HCFA “death list”’. Health Affairs (Millwood), vol 7 (1), pp 122–25.
Wedig GJ and Tai-Seale M (2002). ‘The effect of report cards on consumer choice in the health insurance market’. Journal of Health Economics, vol 21, pp 1031–48.
Werner RM and Asch DA (2005). ‘The unintended consequences of publicly reporting quality information’. Journal of the American Medical Association, vol 293, pp 1239–44.
Werner RM, Asch DA and Polsky D (2005). ‘Racial profiling: the unintended consequences of coronary artery bypass graft report cards’. Circulation, vol 111, pp 1257–63.
Zagat Survey. Available at http://zagat.com. Accessed 9 May 2006.


Appendix A: Public release of performance results – search methodology for PubMed

Database searched and time period covered: PubMed, 1999–2006
Other limiters: English; Human

Search strategies:

Search 1
Title search on the following citation: “The public release of performance data: what do we expect to gain? A review of the evidence.” Marshall MN, Shekelle PG, Leatherman S, Brook RH. JAMA. 2000 Apr 12;283(14):1866–74.
Number of items retrieved: 31

Search 2
“Related articles” search on the following citation: “The public release of performance data: what do we expect to gain? A review of the evidence.” Marshall MN, Shekelle PG, Leatherman S, Brook RH. JAMA. 2000 Apr 12;283(14):1866–74.
Number of items retrieved: 312

Search 3
“Related articles” search on the following citation: “Publicly disclosed information about the quality of health care: response of the US public.” Schneider EC, Lieberman T. Qual Health Care. 2001 Jun;10(2):67–68.
Number of items retrieved: 49


Search 4A
information dissemination OR information services OR disclos* OR data shar* OR report card* OR profil* OR disseminat*[tiab]
AND public opinion OR attitude of health personnel OR consumer participation OR benchmark* OR consumer*[tiab] OR public[tiab]
AND quality of health care[mj] OR hospitals/standards[mh:noexp] OR physicians/standards[mh:noexp] OR performance[tiab]
Number of items retrieved: 1691

Search 4B (addition of three terms to Search 4A)
public opinion OR attitude of health personnel OR consumer participation OR benchmark* OR consumer*[tiab] OR public[tiab]
AND quality of health care[mj] OR hospitals/standards[mh:noexp] OR physicians/standards[mh:noexp] OR performance[tiab]
AND transparen* OR scorecard* OR score card*
NOT results of Search 4A
Number of items retrieved: 67

Search 5A
Database searched and time period covered: ECONLIT, 1999–2006
Search strategy (note: “kw” = keyword, i.e. terms from title, abstract or subject heading; “de” = descriptor, i.e. subject heading term):
kw: health* or kw: medical or kw: doctor* or kw: physician* or kw: nurs*
AND kw: report* or kw: scorecard* or kw: profil* or kw: benchmark* or kw: inform*
AND kw: public* or kw: disclos* or kw: disseminat* or kw: releas* or kw: publish* or kw: share* or kw: sharing
AND kw: quality or kw: standard+
Number of items retrieved: 358


Search 5B
Database searched and time period covered: ECONLIT, 1999–2006
kw: health* or kw: medical or kw: doctor* or kw: physician* or kw: nurs*
AND de: information
AND de: quality or de: standard
NOT results of Search 5A
Number of items retrieved: 26
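As a rough illustration, the PubMed Boolean searches above can be reproduced programmatically through NCBI’s public E-utilities `esearch` endpoint. The sketch below only builds the request URL rather than fetching it; the helper function name and the exact term string are our own, while the `esearch.fcgi` parameters (`db`, `term`, `retmax`, `datetype`, `mindate`, `maxdate`) are part of the published E-utilities API. The explicit parentheses in the term string are also our addition, since PubMed otherwise applies its own operator precedence; the grouping should be checked against the original strategy before reuse.

```python
from urllib.parse import urlencode


def build_esearch_url(term, mindate, maxdate, db="pubmed", retmax=100):
    """Build an NCBI E-utilities esearch URL for a date-limited query.

    Only constructs the URL; requesting it (e.g. with urllib.request)
    returns an XML list of matching PubMed IDs.
    """
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {
        "db": db,
        "term": term,
        "retmax": retmax,
        "datetype": "pdat",  # restrict by publication date
        "mindate": mindate,
        "maxdate": maxdate,
    }
    return base + "?" + urlencode(params)


# Search 4A from this appendix, written as a single Boolean term string
search_4a = (
    "(information dissemination OR information services OR disclos* OR "
    "data shar* OR report card* OR profil* OR disseminat*[tiab]) AND "
    "(public opinion OR attitude of health personnel OR consumer participation OR "
    "benchmark* OR consumer*[tiab] OR public[tiab]) AND "
    "(quality of health care[mj] OR hospitals/standards[mh:noexp] OR "
    "physicians/standards[mh:noexp] OR performance[tiab])"
)

url = build_esearch_url(search_4a, mindate="1999", maxdate="2006")
print(url)
```

Note that re-running the search today would return counts different from those reported above, since the database has grown since the 1999–2006 search window.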


Appendix B: Public release of performance results – search methodology for Wilson Business Periodicals Abstracts

Database searched and time period covered: Wilson Business Periodicals Abstracts, 1999–2006

Search strategy:
kw: health* or kw: medical or kw: doctor* or kw: physician* or kw: nurs*
AND kw: report* or kw: scorecard* or kw: profil* or kw: benchmark* or kw: inform*
AND kw: public* or kw: disclos* or kw: disseminat* or kw: releas* or kw: publish* or kw: share* or kw: sharing
AND de: quality or de: standard+ or de: ranking or de: rating
Number of items retrieved: 200
