A Clinician's Guide to Correct Cost-Effectiveness Analysis: Think Incremental Not Average

Review Paper

Jeffrey S Hoch, PhD1; Carolyn S Dewa, MPH, PhD2

Objective: To explain how to correctly report the results from a cost-effectiveness analysis (CEA).

Methods: Results were used from a hypothetical clinical trial to illustrate how different ways of reporting economic results affect both presentation of findings and formulation of conclusions. To provide context, we reviewed some high-profile exchanges in the scientific literature.

Results: The critical issue with which decision makers must grapple involves the trade-offs introduced by a new treatment or intervention. Specifically, are decision makers willing to pay the additional cost for the additional outcomes? This question cannot be considered without estimates of the additional cost and additional outcomes. Correct cost-effectiveness measures, such as the incremental cost-effectiveness ratio or the incremental net benefit, address this issue.

Conclusions: As decision makers face the challenge of balancing increasing health care demand with cost containment, it will be crucial to identify cost-effective ways of providing care. Health care providers and other decision makers should not be misled by the results of improperly reported CEAs. Decisions around adoption of pharmaceuticals or implementation of new programs or interventions may be affected by which cost-effectiveness summary measure is reported. Thus consumers of CEA must have a basic understanding of why different methods give different results, and how the results should be interpreted.

Can J Psychiatry 2008;53(4):267–274

Clinical Implications
· As decision makers face the challenge of balancing increasing health care demand with cost containment, it will be crucial to identify cost-effective methods of providing care.
· Health care providers and other decision makers should not be misled by the results of improperly reported CEAs.
· Incremental cost-effectiveness ratios and incremental net benefits are proper ways of reporting the results from a CEA. Both measures consider extra cost in relation to extra effect.

Limitations
· In general, the results of a CEA do not indicate whether a new treatment is cost-effective; the results simply provide an estimate of the extra cost for one additional patient outcome.
· Economic evaluations such as CEA do not consider the decision maker's budget. As a result, a decision maker might deem a new treatment to be cost-effective but too expensive to approve.
· Calculations in health economics sometimes incorporate value judgments that are not always explicitly acknowledged.

Key Words: cost-effectiveness analysis, incremental cost-effectiveness ratio, net benefit regression, mental health economics




Tensions from the interplay of constrained budgets and insatiable demand are forcing health care providers and other decision makers to consider whether a new treatment or intervention is cost-effective.1,2 In fact, in Canada it is becoming standard practice to approve treatments based not only on clinical effectiveness but also on cost considerations. In turn, researchers have responded by drawing on economic evaluation techniques that translate patient outcome and cost data into useful information for such decisions. However, consumers of this research must be careful, given that economic summary measures that have been proposed and published in the scientific literature may provide misleading clinical and policy implications.

There are many different types of economic evaluations, among them cost-benefit analysis, CEA, cost-utility analysis, and cost-minimization analysis.3 An easy way to distinguish one from another is by examining the analyst's choice of patient outcome and asking, "How is it measured?" In cost-benefit analyses, there are typically many outcomes and all outcomes are valued in dollars; this type of analysis is not very popular in health care. In CEA, a single outcome is analyzed. Commonly, the outcome is measured in clinical units such as symptom-free days or outpatient visits. A second form of CEA is cost-utility analysis, which values patient outcomes in QALYs, equal to the number of life years remaining multiplied by a factor reflecting quality of life. In cost-minimization analysis, only costs are compared, as patient outcomes are assumed to be identical.

The most popular economic evaluation method in health care is CEA.3 The goal of CEA is to quantify the trade-off between resources used and outcomes gained. CEA determines the relative efficiency of a treatment alternative, while cost-benefit analysis assesses whether a treatment is worthwhile.

Abbreviations used in this article
ΔE      additional effect
ΔC      additional cost
ACER    average cost-effectiveness ratio
CCOHTA  Canadian Coordinating Office for Health Technology Assessment
CEA     cost-effectiveness analysis
CEAC    cost-effectiveness acceptability curve
H2RA    H2-receptor antagonist
ICER    incremental cost-effectiveness ratio
INB     incremental net benefit
QALY    quality-adjusted life year

The attractiveness of CEA lies in its simplicity: one patient outcome is expressed in its natural units (for example, depression-free days) and compared with the resources used. Typically, resources used are measured in dollars, so the summary measure of a CEA involves cost and a patient outcome (for example, $50 for one more depression-free day). As CEA has been applied to health care, researchers have predominantly used 2 methods of calculating the summary measure: the ACER and the ICER. The ACER captures the average cost per effect (for example, C/E). In contrast, the ICER reports the ratio of the change in cost to the change in effect (for example, ΔC/ΔE). For example, when buying one's favourite beverage, one often has a choice of which size to order. The pricing might be $2.40 for a 12-oz beverage and $4.00 for a 40-oz beverage. The ACERs are 20 cents per ounce and 10 cents per ounce for the smaller and larger drink, respectively; the ICER is about 6 cents per additional ounce.

At first glance, the difference between the ACER and the ICER may seem trivial; however, in practice, the difference can be highly consequential, with the potential to affect patient care. In this paper, we review some high-profile exchanges in the scientific literature about how CEAs should be reported. We use hypothetical results from a clinical trial to illustrate the points raised in the exchanges and how different methods affect both presentation of findings and formulation of conclusions. We conclude by describing promising new CEA statistics.
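As a concrete illustration of the arithmetic behind the two ratios, the short Python sketch below recomputes the beverage example; it is our own illustration rather than anything from the paper, and the function names are ours.

```python
# A minimal sketch contrasting the ACER and the ICER, using the
# beverage-pricing example from the text (not the paper's own code).

def acer(cost, effect):
    """Average cost-effectiveness ratio: total cost per unit of effect (C/E)."""
    return cost / effect

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# A 12-oz drink for $2.40 versus a 40-oz drink for $4.00
print(acer(2.40, 12))            # 0.20  -> 20 cents per ounce
print(acer(4.00, 40))            # 0.10  -> 10 cents per ounce
print(icer(4.00, 40, 2.40, 12))  # ~0.057 -> about 6 cents per additional ounce
```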

Lessons From the Literature

Applied Economic Evaluations

In spite of published claims to the contrary, "currently there is no clear evidence that atypical antipsychotics generate cost savings or are cost-effective in general use among all schizophrenia patients."4, p 2054 This was the conclusion of a recent review of CEAs of second-generation antipsychotics. The studies' claims of cost-effectiveness were invalidated by methodological problems. Better methods may foster greater confidence in the results of CEAs, but so, too, may better use of existing methods. Methodological commentators have suggested that "some of the limitations in [application of CEA] methods may be owing to the fact that most of the advances in design and statistical techniques for the analysis of cost and cost-effectiveness are published in highly technical economics or biostatistical journals."4, p 2054 This highlights the importance of developing a general understanding of why different methods give different results and how the results should be interpreted. The call for a wider appreciation of how economic evaluation concepts are applied in mental health is not new.


In 1997, Evers et al5 reviewed the quality of economic evaluation in the field of mental health care and found that "few good full economic evaluation studies have been undertaken in the domain of mental health care."p 161 Their detailed review initially identified 113 articles published from 1966 to 1995; however, 8 of them, despite what was claimed in the abstract or title, were not actually mental health economic evaluations. After applying various other exclusion criteria, Evers et al5 assembled 91 mental health economic evaluation studies. Of the 91 studies, only 6 reported additional costs and effects (consistent with the ICER approach); and, looking solely at the 27 randomized controlled trials, only 3 employed an incremental approach. Summarizing their results and the results of others,6–9 Evers et al concluded that "only a few of the articles reviewed were, in our opinion, good examples of economic evaluation."5, p 171 Further, "the poor quality of the (economic evaluation) studies is not unique to mental health care: other authors have also found that very few studies adhere to the basic principles of economic evaluation."5, p 171 A review of all economic evaluations published in 2003 found "a substantial number of clinical trial-based economic studies using statistical methods of poor quality."10, p 338

As a first step, the quality of CEAs could be improved by clarifying which CEA statistic is the correct one to report. It is not clear that the answer is universally known. An early disagreement about whether to use the ACER or the ICER occurred when reporting the results of the first controlled study to show that esophagitis healing with medical antireflux therapy could affect the natural history of peptic stricture disease. Researchers estimated that the average cost to heal a single patient with omeprazole was $1440, compared with $2546 to heal a patient with an H2RA.11 Because the ACER estimate for omeprazole was less than the ACER estimate for the H2RA, the researchers claimed that omeprazole was more cost-effective. This drew a critical response claiming that the ICER (and not the ACER) was the relevant measure that should have been calculated and considered.12 In their rejoinder, the authors dismissed the ICER, asserting that it "would have been superfluous and potentially misleading."13, p 304 They argued the ICER was superfluous because, from the ACER data, the ICER could be calculated if necessary. In addition, because the ICER did not consider a per-unit cost, there was no guarantee that a new treatment with an economically attractive ICER was affordable. Debates such as this one occurred in the context of a renewed interest in improving the quality of economic evaluations reported in the general medical literature.6,14–18

The Health Economics Literature

Until 1997, the health economics literature had recommended that ICERs be reported,19–22 as had health economists writing for clinical audiences.23 Then 2 articles by Laska et al24,25 appeared in the journal Health Economics supporting the usefulness of ACERs.

At the same time, though recommending the use of an ICER,26 the CCOHTA also encouraged readers to explore the ACER methods presented in the article by Laska et al.p 47 Laska et al's24 first article, which ignited a debate, began with the assertion that the ICER and the ACER play an important part in the assessment of competing mental health interventions or treatments. They claimed decision makers could make decisions based on either ACERs or ICERs.25 In a vigorous refutation, Briggs and Fenn argued that a comparison of ACERs offers little guidance for the efficient choice of treatment, which should instead be based on ICERs.27 Laska and colleagues25 had the last word in the exchange, concluding that the ACER "is a useful summary parameter that characterizes a treatment independent of its comparators . . . Thus (ACERs) play an important role in the evaluation of the cost-effectiveness of treatments."p 503

The exchange helped to emphasize the importance of identifying the main objective of CEA. While seemingly esoteric, the methodological debate is not only important to health economics but also holds important implications for health care decision makers. To the extent that adoption of technologies or implementation of programs relies on the demonstration of cost-effectiveness, these decisions may be swayed by whether an ACER or an ICER is reported. Thus it becomes crucial for clinicians and other decision makers to understand how the ACER and the ICER differ, and what this means for decision making.

Decision Making

To illustrate, consider a hypothetical study (see Table 1) comparing an exercise intervention with a new treatment for mild depression (with depression-free weeks used as the effectiveness measure). The decision about whether the new treatment's cost-for-effect trade-off is worth it depends on whether one uses the ACER or the ICER to summarize the study results. In this example, the ACERs are $47 ($2000/42.5 weeks) and $250 ($10 750/43 weeks) per depression-free week for exercise and the new treatment, respectively. They are the average costs (per unit of effect) over the entire span of the study. The ICER equals the ratio of the cost difference ($10 750 - $2000 = $8750) to the effect difference (43 - 42.5 = 0.5 weeks) and is $17 500 per additional depression-free week ($8750/0.5 weeks); it focuses only on the increased depression-free time. Based on the ACER, the new treatment appears to provide a depression-free day for about $36 ($250/7 days). However, according to the ICER, the new treatment appears to provide an extra depression-free day for $2500 ($17 500/7 days). Which is the right answer: $36 or $2500? Is the new treatment cost-effective or not?
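The arithmetic in this paragraph can be reproduced directly; the sketch below (our own illustration, not the authors' analysis code) recomputes the ACERs, the ICER, and the per-day figures from the hypothetical trial data.

```python
# Hypothetical trial data from the text; variable names are ours.
cost_ex, eff_ex = 2000.0, 42.5      # exercise program: cost, depression-free weeks
cost_new, eff_new = 10750.0, 43.0   # new treatment

acer_ex = cost_ex / eff_ex          # ~ $47.06 per depression-free week
acer_new = cost_new / eff_new       # = $250.00 per depression-free week

delta_c = cost_new - cost_ex        # $8750 extra cost
delta_e = eff_new - eff_ex          # 0.5 extra depression-free weeks
icer = delta_c / delta_e            # $17 500 per extra depression-free week

print(acer_new / 7)                 # ~ $36 per depression-free day (average)
print(icer / 7)                     # = $2500 per extra depression-free day (incremental)
```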


Table 1  A comparison of methods for reporting the results of cost-effectiveness analysis, using hypothetical data for 2 methods of treating mild depression (with depression-free weeks as a measure of effectiveness)

Treatment alternative   Costs        Effect (depression-free weeks)   Economic summary measures I   Monetary value of the effect   Economic summary measures II
Exercise program        $2000.00     42.50                            ACER = $47.06                 λE = $743 750.00               NB = $741 750.00
New treatment           $10 750.00   43.00                            ACER = $250.00                λE = $752 500.00               NB = $741 750.00
Difference (Δ)          $8750.00     0.50                             ICER = $17 500.00             na                             INB^a = $0.00

λE = the monetary value of the effect (λE = λ × E); in this example, societal willingness to pay for the patient outcome (λ) is assumed to equal $17 500 per depression-free week. NB = net benefit (that is, monetary value of the effect − cost). na = not applicable.
^a INB is calculated with an assumed willingness to pay of $17 500 in this example (that is, effect difference × $17 500 − cost difference).

Given that health care decisions are increasingly linked to CEA results, the answers to these questions are critical to making the right decision. Assume a decision making authority asks for your opinion about whether the new treatment should be covered for mild depression instead of exercise (an example of a Canadian decision making advisory body that requires clinicians to evaluate new treatments based in part on CEA is the Canadian Expert Drug Advisory Committee,28 which is part of the Common Drug Review process29). How can the ACER and ICER approaches help?

To begin with, it is important to note that the ACER and ICER approaches differ in terms of what questions they answer. Both are focused on the trade-off between costs and effects; however, the ACER represents an overall measure, whereas the ICER offers a marginal or piecewise measure. The ACER approach seeks to estimate 2 separate average costs and then tests whether they are different. In this 2-step process, step 1 is calculating the ACER estimates: $47 for exercise and $250 for the new treatment. Step 2 is testing whether the ACERs are statistically significantly different. The reasoning is that if the statistical test rejects the hypothesis that the 2 treatment options have equal ACERs, one should choose the treatment with the lower ACER; otherwise, one should be indifferent as to which treatment alternative to use. Basically, one should prefer the treatment alternative with the lower cost per outcome (unless there is no statistical difference).

The ICER approach answers the following question: Given what is currently being accomplished, what more do we get from a new treatment? In this example, there is more cost and more effectiveness. With exercise, 42.5 depression-free weeks are obtained. With the new treatment, 0.5 more depression-free weeks (or 3.5 more depression-free days) are obtained. The ICER gives the extra cost of an extra unit of effect. If the unit of effect is a depression-free week, then the ICER is $17 500 per extra depression-free week. The decision making rationale is that one should select the new treatment if one feels an extra depression-free week is worth $17 500 or more.
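The two decision rules just described can be summarized in a few lines; the sketch below is our own hedged illustration (it ignores the statistical-testing step, and the function names are hypothetical), not a published algorithm.

```python
# Contrasting the two decision rules described above (statistical testing omitted).

def choose_by_acer(acer_old, acer_new):
    """ACER rule: prefer the alternative with the lower average cost per outcome."""
    return "new treatment" if acer_new < acer_old else "exercise"

def choose_by_icer(icer, wtp):
    """ICER rule: adopt the new treatment if the willingness to pay (wtp)
    per extra unit of effect is at least the ICER."""
    return "new treatment" if wtp >= icer else "exercise"

print(choose_by_acer(47.06, 250.0))    # exercise (lower average cost per week)
print(choose_by_icer(17_500, 10_000))  # exercise (an extra week is not worth $17 500 here)
print(choose_by_icer(17_500, 20_000))  # new treatment
```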

The reason the ACER and ICER approaches give different answers is that the ACER reports the average cost (overall efficiency) while the ICER reports the extra cost per extra unit gained (the incremental efficiency). According to economics, the main issue when deciding whether to buy something is whether what is gained at that point is worth more than what is given up. Within this framework, it is irrelevant that a new treatment's average cost per patient outcome is $250 (where the cost to produce the first depression-free week is assumed to equal the cost to produce the 40th depression-free week). The relevant parts of the decision are: What is the extra cost? What is the extra patient outcome (effect)? And is the trade-off worth it? In our example, the extra cost is $8750 and the extra outcome is 0.5 depression-free weeks. The trade-off as reported by the ICER is their ratio. Whether the trade-off is economically attractive is in the eye (wallet) of the beholder. For ICERs, the trade-off involves looking at the additional gain the patient receives (in this case, 3.5 days). Because ACERs spread the cost over the entire quantity of patient outcome while ICERs look at only the extra cost of the extra effect, ACERs can quite frequently obscure what ICERs illuminate.

One of the most famous published examples of how ACERs can hide from decision makers what ICERs lay bare comes from a study30 later adapted as a textbook example.31 At the time, 6 sequential stool guaiac tests had been advocated for colon cancer screening. The ACERs were $10 550 and $11 600 per case detected for a 5- and 6-test battery, respectively.31 However, using the ICER approach, the sixth test cost $123 456 790 per extra case detected.31 This was because the fifth test detected 719.9928 cases and the sixth test detected 719.99928 cases, a marginal gain of 0.00648 cases detected. The extra cost was $800 000. Thus the extra cost of detecting one more case was $800 000 per 0.00648 cases detected, or $123 456 790 per one extra case detected. While it is true that using a 6-test battery detects each case at an average cost of $11 600, the improvements come at a steadily increasing cost (not a constant average one).


For example, the 5-test battery detected 0.0648 more cases than the 4-test battery at an additional cost of $900 000. For the fifth test, the ICER is $13 888 889 ($900 000/0.0648). For the sixth test, the ICER is about 10 times more. Thus one is buying extra patient outcomes at an increasing price. The ACER hides this increasing price by smoothing costs over all patient outcomes that have been achieved (not just the patient outcomes uniquely contributed by the treatment option under consideration). The lesson from the sixth stool guaiac test is that the improvement in patient outcome over what is already achieved with a 5-test battery is infinitesimal. Paying $800 000 extra for this minuscule gain does not appear to represent a good use of scarce health care dollars.
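For readers who want to verify the figures, the sketch below (our own illustration, using only the numbers quoted in the text rather than the original study's data) recomputes the incremental ratios for the fifth and sixth tests.

```python
# Stool guaiac calculation summarized above; figures are taken from the text.

cases_5, cases_6 = 719.9928, 719.99928   # cases detected by 5- and 6-test batteries
extra_cost_6 = 800_000.0                 # extra cost of adding the sixth test

icer_6 = extra_cost_6 / (cases_6 - cases_5)
print(icer_6)                            # ~ $123 456 790 per extra case detected

# For comparison, the fifth test: 0.0648 extra cases for an extra $900 000
icer_5 = 900_000.0 / 0.0648
print(icer_5)                            # ~ $13 888 889 per extra case detected
```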

It may be difficult to see how economists arguing about stool tests might affect decisions. However, empirical studies have found that physicians' decision making can be affected by whether the ACER or the ICER is reported. Noting that "physicians are increasingly asked to use cost-effectiveness information when evaluating alternative health care interventions,"32, p 81 Hershey et al presented physicians with ACER or ICER summary measures of 2 screening options in various settings and asked which option they would recommend to their patients. In settings where the clinical scenario of interest was familiar (for example, cervical cancer screening), the presentation of an ACER or an ICER had no effect on physician choices. However, with unfamiliar situations, presenting ICER information significantly reduced selection of the more expensive screening strategy. A key conclusion from this work is that

the importance of thinking in incremental terms is not widely understood in the evaluation of clinical alternatives, as demonstrated by the lack of its use in reported studies. If those who report cost-effectiveness results fail to report and defend results based on their incremental consequences, it should not be surprising that physicians who are asked to use these results would not have learned the importance of thinking in incremental terms. The concern this raises is that average-cost information will lead people to underestimate true incremental costs in cases of diminishing returns to scale, and may in turn cause physicians to recommend procedures that are much more costly, and not much more effective, than available alternatives.32, p 87–88

In other words, the use of the ACER may lead to bad decisions on the part of physicians and other health care decision makers.

Resolution Through Evolution

The focus of CEA should be on estimating the trade-off between extra cost and extra effect. A decision based on CEA must consider whether a new treatment's trade-off is worth it. The ICER quantifies the trade-off of interest; however, it does not indicate whether the trade-off is worth it. The net benefit approach seeks to address this dilemma.33,34 The net benefit approach takes CEA one step further by reframing the fundamental economic question. An INB calculation determines whether the net benefit of a new treatment surpasses that of usual care. In general, the INB is calculated by valuing ΔE in dollars and then subtracting the associated ΔC. For example,

INB = (ΔE × λ) − ΔC

where λ is society's willingness to pay for a 1-unit gain in effect. The INB equation computes the net value of the patient outcome gained, in dollars. When the INB is positive, the value of a new treatment's extra benefits (ΔE × λ) outweighs its extra costs (ΔC); society values the extra effect more than the extra cost (for example, ΔE × λ > ΔC). Conversely, when the INB is less than 0, society does not consider the extra benefit worth the extra cost.

Using the hypothetical data reported in Table 1, we explore whether the new treatment provides value for money by calculating the INB as

(0.5 weeks × λ dollars per depression-free week) − $8750

where a value for λ must be assumed. What should the analyst use for society's willingness to pay for an additional depression-free week? It is easy to show that unless λ > $17 500, the INB in this case will be negative (the extra cost outweighs the value of the extra effects). When λ = $17 500, the INB = 0 (see Table 1). As shown elsewhere,35, p 419 the INB can also be calculated as the difference in net benefits for each treatment option; in other words, the INB equals the net benefit of the new treatment minus the net benefit of exercise. The net benefits for exercise and the new treatment are both $741 750 when λ = $17 500 (see Table 1).

If the willingness to pay for one more week free from depression is less than $17 500, the new treatment is not cost-effective. In our example, the new treatment is probably not cost-effective because it produces a week free from mild depression at an extra $17 500; at this price, a year free from mild depression would cost about 52 times this amount, or $912 500. If we assume that someone in full health has twice the quality of life of someone with depression, then the new treatment's extra cost per QALY is about $1 825 000. It is likely that society's willingness to pay (λ) is not this high.
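The INB calculation above is easy to script. The minimal sketch below uses the Table 1 values; the specific willingness-to-pay values other than $17 500 are arbitrary illustrations of ours, chosen only to show where the sign of the INB flips.

```python
# INB = delta_e * wtp - delta_c, using the Table 1 values.
delta_e = 0.5      # extra depression-free weeks
delta_c = 8750.0   # extra cost, dollars

def inb(wtp):
    """Incremental net benefit at willingness to pay `wtp` per depression-free week."""
    return delta_e * wtp - delta_c

for wtp in (10_000, 17_500, 25_000):
    print(wtp, inb(wtp))   # negative below $17 500, zero at $17 500, positive above
```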


[Figure 1. Cost-effectiveness acceptability curve: the probability that the new treatment is cost-effective (vertical axis, 0 to 100%) plotted against willingness to pay, λ (horizontal axis, $1 to $200).]

The heavy reliance on the value of λ may raise some concerns. Primarily, how does one ever obtain the value of λ? Herein lies the strength of the net benefit approach: it forces decision makers to directly consider the issue of valuing additional patient outcomes.33 The INB can be computed for various values of λ and analyzed using multiple regression techniques.36 How sensitive the results are to the assumed value of λ can be gauged using a CEAC37,38 (for a hypothetical example, see Figure 1). The CEAC shows the probability that a new treatment is cost-effective for different values of λ (there are different statistical perspectives from which the CEAC can be interpreted; interested readers are referred elsewhere39–41). In this hypothetical example (which differs from the previous example), Figure 1 illustrates that the probability that the new treatment is cost-effective is quite sensitive to some λ values. For example, at λ = $10, the probability that the new treatment is cost-effective is nearly 70%; at λ = $1, it is about 25%. In contrast, when λ = $50, the probability of cost-effectiveness is nearly 95%. Figure 1 illustrates the region in which results are sensitive to assumptions about λ. The most dramatic gains in the height of the curve (from 25% to 95%) occur between λ = $1 and $50. While we may never know the real value of λ, if it is assumed to be at or above $50, there appears to be a very good chance that the new treatment is cost-effective. The CEAC incorporates the statistical uncertainty of the estimate into its presentation. As a result, it allows the decision maker to consider this variability when viewing the economic results. New research is seeking to provide decision makers with even greater flexibility in interpreting evidence from economic evaluations.42
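In practice, a CEAC is often traced out from patient-level data, for example by bootstrapping the cost and effect observations and computing, at each willingness-to-pay value, the proportion of resamples in which the INB is positive. The sketch below is a hedged illustration of that general idea with simulated data; it is not the data or code behind Figure 1, and the sample size, means, and standard deviations are invented for demonstration only.

```python
# Sketch: building a CEAC by bootstrapping simulated patient-level data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Simulated per-patient costs and effects for a control and a new-treatment arm.
cost_ctrl = rng.normal(2000, 500, n)
eff_ctrl = rng.normal(42.5, 5, n)
cost_new = rng.normal(10750, 2000, n)
eff_new = rng.normal(43.0, 5, n)

def ceac_point(wtp, n_boot=2000):
    """Proportion of bootstrap replicates in which the new treatment has INB > 0 at `wtp`."""
    wins = 0
    for _ in range(n_boot):
        i = rng.integers(0, n, n)   # resample patients with replacement, per arm
        j = rng.integers(0, n, n)
        d_e = eff_new[i].mean() - eff_ctrl[j].mean()
        d_c = cost_new[i].mean() - cost_ctrl[j].mean()
        if d_e * wtp - d_c > 0:
            wins += 1
    return wins / n_boot

# Evaluate the curve at a few willingness-to-pay values (arbitrary choices).
for wtp in (1_000, 10_000, 17_500, 25_000, 50_000):
    print(wtp, ceac_point(wtp))
```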

Discussion and Conclusion

In 1992, Udvarhelyi et al6 found that more than 2 out of every 3 economic evaluation articles they reviewed expressed results in average terms. Their paper has been referenced over 260 times; however, ACERs are still reported in economic evaluations generally43–46 and in mental health.47–49 Current guidelines for economic evaluation continue to promote use of the ICER instead of the ACER,50–52 and reference to the work of Laska and colleagues has been removed from the most recent guidelines by the Canadian Agency for Drugs and Technologies in Health (formerly CCOHTA).53

This paper reviewed exchanges about ACERs and ICERs in the medical and health economics literature. We focused our discussion on the heart of the disagreement: the question that should be addressed by CEA. The ACER and the ICER answer different questions: the ACER is an overall measure, the ICER an incremental measure. The overall measure is misleading because the ACER distributes the ΔCs over all subjects and assumes all outcomes are produced at equivalent cost.54 This is only appropriate when the comparator treatment alternative has no cost and no patient outcome (in this case, the ICER = ΔC/ΔE = C/E = the ACER). Using a hypothetical example, we demonstrated how reporting the ACER, compared with the ICER, could lead to different conclusions.


We discussed the clear advantages the ICER provides and why it is considered the standard summary measure in an economic evaluation. We concluded with a brief discussion of net benefit methods and the CEAC. These recent analytical improvements represent important advances in economic evaluation, first and foremost because the net benefit reframes the cost-effectiveness question to emphasize more dramatically the need to consider society's values in any decision making process.

Acknowledgements

The authors appreciate the thorough, helpful comments and suggestions by Dr David Streiner on earlier versions of this manuscript. Any remaining errors are the sole responsibility of the authors.

References

1. Eddy DM. Oregon's methods. Did cost-effectiveness analysis fail? JAMA. 1991;266:2135–2141.
2. Laupacis A. Incorporating economic evaluations into decision-making: the Ontario experience. Med Care. 2005;43:15–19.
3. Hoch JS, Dewa CS. An introduction to economic evaluation: what's in a name? Can J Psychiatry. 2005;50:159–166.
4. Polsky D, Doshi JA, Bauer MS, et al. Clinical trial-based cost-effectiveness analyses of antipsychotic use. Am J Psychiatry. 2006;163:2047–2056.
5. Evers SM, Van Wijk AS, Ament AJ. Economic evaluation of mental health care interventions. A review. Health Econ. 1997;6:161–177.
6. Udvarhelyi IS, Colditz GA, Rai A, et al. Cost-effectiveness and cost-benefit analyses in the medical literature. Are the methods being used correctly? Ann Intern Med. 1992;116:238–244.
7. Gerard K. Cost-utility in practice: a policy maker's guide to the state of the art. Health Policy. 1992;21:249–279.
8. Molken MP, Van Doorslaer EK, Rutten FF. Economic appraisal of asthma and COPD care: a literature review 1980–1991. Soc Sci Med. 1992;35:161–175.
9. Ganiats TG, Wong AF. Evaluation of cost-effectiveness research: a survey of recent publications. Fam Med. 1991;23:457–462.
10. Doshi JA, Glick HA, Polsky D. Analyses of cost data in economic evaluations conducted alongside randomized controlled trials. Value Health. 2006;9:334–340.
11. Marks RD, Richter JE, Rizzo J, et al. Omeprazole versus H2-receptor antagonists in treating patients with peptic stricture and esophagitis. Gastroenterology. 1994;106:907–915.
12. Harris RA, Nease RF. Economic heartburn: average cost-effectiveness and gastroesophageal reflux disease. Gastroenterology. 1995;108:303–304.
13. Marks RD, Richter JE, Rizzo J. Reply to: omeprazole versus H2-receptor antagonists in treating patients with peptic stricture and esophagitis. Gastroenterology. 1995;108:304.
14. Doubilet P, Weinstein MC, McNeil BJ. Use and misuse of the term "cost effective" in medicine. N Engl J Med. 1986;314:253–256.
15. Provenzale D, Lipscomb J. Cost-effectiveness: definitions and use in the gastroenterology literature. Am J Gastroenterol. 1996;91:1488–1493.
16. Zarnke KB, Levine MA, O'Brien BJ. Cost-benefit analyses in the health-care literature: don't judge a study by its label. J Clin Epidemiol. 1997;50:813–822.
17. Marshall JK, O'Brien BJ. Cost and cost-effectiveness: consistent terminology is needed. Am J Gastroenterol. 1999;94:1108–1109.
18. Neumann PJ, Stone PW, Chapman RH, et al. The quality of reporting in published cost-utility analyses, 1976–1997. Ann Intern Med. 2000;132:964–972.
19. Petitti DB. Meta-analysis, decision analysis, and cost-effectiveness analysis: methods for quantitative synthesis in medicine. New York (NY): Oxford University Press; 1994.
20. Drummond MF. Methods for the economic evaluation of health care programmes. New York (NY): Oxford University Press; 2005.
21. Gold MR, Siegel JE, Russell LB, et al. Cost-effectiveness in health and medicine. New York (NY): Oxford University Press; 1996.
22. Sloan FA. Valuing health care: costs, benefits, and effectiveness of pharmaceuticals and other medical technologies. New York (NY): Cambridge University Press; 1995.
23. Detsky AS, Naglie IG. A clinician's guide to cost-effectiveness analysis. Ann Intern Med. 1990;113:147–154.
24. Laska EM, Meisner M, Siegel C. Statistical inference for cost-effectiveness ratios. Health Econ. 1997;6:229–242.
25. Laska EM, Meisner M, Siegel C. The usefulness of average cost-effective ratios. Health Econ. 1997;6:497–504.
26. Canadian Coordinating Office for Health Technology Assessment. Guidelines for economic evaluation of pharmaceuticals, Canada. Ottawa (ON): Canadian Coordinating Office for Health Technology Assessment; 1997.
27. Briggs A, Fenn P. Trying to do better than average: a commentary on 'statistical inference for cost-effectiveness ratios.' Health Econ. 1997;6:491–495.
28. Canadian Expert Drug Advisory Committee [Internet]. Ottawa (ON): Canadian Agency for Drugs and Technologies in Health [cited 2007 Jun 8]. Available from: http://www.cadth.ca/index.php/en/cdr/committees/cedac.
29. Common Drug Review [Internet]. Ottawa (ON): Canadian Agency for Drugs and Technologies in Health [cited 2007 Jun 8]. Available from: http://www.cadth.ca/index.php/en/cdr.
30. Neuhauser D, Lewicki AM. What do we gain from the sixth stool guaiac? N Engl J Med. 1975;293:226–228.
31. Getzen TE. Health economics: fundamentals and flow of funds. New York (NY): John Wiley & Sons; 1997.
32. Hershey JC, Asch DA, Jepson C, et al. Incremental and average cost-effectiveness ratios: will physicians make a distinction? Risk Anal. 2003;23:81–89.
33. Stinnett AA, Mullahy J. Net health benefits: a new framework for the analysis of uncertainty in cost-effectiveness analysis. Med Decis Making. 1998;18:S68–80.
34. Tambour M, Zethraeus N, Johannesson M. A note on confidence intervals in cost-effectiveness analysis. Int J Technol Assess Health Care. 1998;14:467–471.
35. Hoch JS, Briggs AH, Willan AR. Something old, something new, something borrowed, something blue: a framework for the marriage of health econometrics and cost-effectiveness analysis. Health Econ. 2002;11:415–430.
36. Hoch JS, Smith MW. A guide to economic evaluation: methods for cost-effectiveness analysis of person-level data. J Trauma Stress. 2006;19:787–797.
37. Hoch JS, Rockx MA, Krahn AD. Using the net benefit regression framework to construct cost-effectiveness acceptability curves: an example using data from a trial of external loop recorders versus Holter monitoring for ambulatory monitoring of "community acquired" syncope. BMC Health Serv Res. 2006;6:68.
38. Fenwick E, Marshall DA, Levy AR, et al. Using and interpreting cost-effectiveness acceptability curves: an example using data from a trial of management strategies for atrial fibrillation. BMC Health Serv Res. 2006;6:52.
39. van Hout BA, Al MJ, Gordon GS, et al. Costs, effects and C/E-ratios alongside a clinical trial. Health Econ. 1994;3:309–319.
40. Lothgren M, Zethraeus N. Definition, interpretation and calculation of cost-effectiveness acceptability curves. Health Econ. 2000;9:623–630.
41. Fenwick E, Claxton K, Sculpher M. Representing uncertainty: the role of cost-effectiveness acceptability curves. Health Econ. 2001;10:779–787.
42. Hoch JS, Blume JD. Measuring and illustrating statistical evidence in a cost-effectiveness analysis. J Health Econ. Forthcoming.
43. Han SH, Martin P, Edelstein M, et al. Conversion from intravenous to intramuscular hepatitis B immune globulin in combination with lamivudine is safe and cost-effective in patients receiving long-term prophylaxis to prevent hepatitis B recurrence after liver transplantation. Liver Transpl. 2003;9:182–187.
44. Wilkins JJ, Folb PI, Valentine N, et al. An economic comparison of chloroquine and sulfadoxine-pyrimethamine as first-line treatment for malaria in South Africa: development of a model for estimating recurrent direct costs. Trans R Soc Trop Med Hyg. 2002;96:85–90.
45. Han SH, Ofman J, Holt C, et al. An efficacy and cost-effectiveness analysis of combination hepatitis B immune globulin and lamivudine to prevent recurrent hepatitis B after orthotopic liver transplantation compared with hepatitis B immune globulin monotherapy. Liver Transpl. 2000;6:741–748.
46. Vergnenegre A, Perol M, Pham E. Cost analysis of hospital treatment—two chemotherapic regimens for non-surgical non-small cell lung cancer. GFPC (Groupe Francais Pneumo Cancerologie). Lung Cancer. 1996;14:31–44.
47. Lehman AF, Dixon L, Hoch JS, et al. Cost-effectiveness of assertive community treatment for homeless persons with severe mental illness. Br J Psychiatry. 1999;174:346–352.
48. Sheidow AJ, Bradford WD, Henggeler SW, et al. Treatment costs for youths receiving multisystemic therapy or hospitalization after a psychiatric crisis. Psychiatr Serv. 2004;55:548–554.
49. Lin EC, Yin TJ, Kuo BI, et al. A comparison of effectiveness and cost between two models of care for individuals with schizophrenia living in Taiwan. Arch Psychiatr Nurs. 2001;15:272–278.
50. Ramsey S, Willke R, Briggs A, et al. Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA task force report. Value Health. 2005;8:521–533.
51. Drummond M, Sculpher M. Common methodological flaws in economic evaluations. Med Care. 2005;43:5–14.
52. Torgerson DJ, Spencer A. Marginal costs and benefits. BMJ. 1996;312(7022):35–36.
53. Canadian Agency for Drugs and Technologies in Health. Guidelines for the economic evaluation of health technologies: Canada. Ottawa (ON): Canadian Agency for Drugs and Technologies in Health; 2006.
54. Blades CA, Culyer AJ, Walker A. Health service efficiency: appraising the appraisers—a critical review of economic appraisal in practice. Soc Sci Med. 1987;25:461–472.

Manuscript received May 2007, revised, and accepted August 2007.
1 Associate Professor, Department of Health Policy, Management and Evaluation, Centre for Research on Inner City Health, The Keenan Research Centre, Li Ka Shing Knowledge Institute, St Michael's Hospital, University of Toronto, Toronto, Ontario.
2 Associate Professor, Department of Psychiatry, Department of Health Policy, Management and Evaluation, Centre for Addiction and Mental Health, Health Systems Research and Consulting Unit, University of Toronto, Toronto, Ontario.
Address for correspondence: Dr JS Hoch, Centre for Research on Inner City Health, 30 Bond Street, Toronto, ON M5B 1W8; [email protected]

Résumé: A Clinician's Guide to Correcting Cost-Effectiveness Analysis: Average and Incremental Cost-Effectiveness Ratios, and Incremental Net Benefits

Objective: To explain the need to correctly report the results of a cost-effectiveness analysis (CEA).

Methods: Results from a hypothetical clinical trial were used to illustrate how different ways of reporting economic results influence the presentation of findings and the formulation of conclusions. To provide context, we examined prominent exchanges in the scientific literature.

Results: The essential question with which decision makers grapple concerns the trade-offs introduced by a new product or intervention. Specifically, are decision makers willing to pay the additional cost for the additional outcomes? This question cannot be examined without an estimate of the additional cost per additional outcome. Correct cost-effectiveness measures, such as the incremental cost-effectiveness ratio or the incremental net benefit, address this question.

Conclusions: As decision makers face the challenge of reconciling growing demand for health care with cost containment, it will be essential to identify cost-effective ways of delivering care. Health care providers and other decision makers should not be misled by the results of poorly conducted CEAs. Decisions about the adoption of pharmaceuticals or the implementation of new programs or interventions may be affected by the choice of cost-effectiveness summary measure. Thus, consumers of CEA must have a basic understanding of why different methods give different results, and of how the results should be interpreted.


