APM Perspectives

The Association of Professors of Medicine (APM) is the national organization of departments of internal medicine at US medical schools and numerous affiliated teaching hospitals, as represented by chairs and appointed leaders. As the official sponsor of The American Journal of Medicine, the association invites authors to publish commentaries on issues concerning academic internal medicine. For the latest information about departments of internal medicine, please visit APM’s website at www.im.org/APM.

Outcomes-based Evaluation in Resident Education: Creating Systems and Structured Portfolios

Eric S. Holmboe, MD,a William Rodak, PhD,b Glenn Mills, MD,c Michael J. McFarlane, MD,d Henry J. Schultz, MDe

Accreditation Council for Graduate Medical Education and American Board of Medical Specialties Quadrad Committee for Internal Medicine, Chicago, Ill; aAmerican Board of Internal Medicine, Philadelphia, Pa; bAccreditation Council for Graduate Medical Education, Chicago, Ill; cLouisiana State University Health Science Center, Shreveport, La; dCase Western Reserve University, Cleveland, Ohio; eMayo Clinic, Rochester, Minn.

In 2001, the Accreditation Council for Graduate Medical Education (ACGME) instituted the 6 general competencies (patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice) for residency training.1 During Phase 2 of the ACGME Outcome Project, effective July 1, 2002, residency programs were expected to begin defining the key elements of the 6 competencies for their disciplines and to begin implementing appropriate assessment tools for these competencies.1 Starting in July 2006, residency programs are expected to fully integrate the 6 competencies and their assessments with residents’ learning and clinical performance, while subspecialty fellowship programs will begin the initial process of developing and implementing the general competencies.1

No single evaluation method is sufficient to adequately assess the competence of a trainee.2 Furthermore, the results of the evaluations must be synthesized into an accurate, reliable summary of a trainee’s performance. Given the heterogeneity of training programs and the multiplicity of evaluation methods, educators responsible for evaluation of the general competencies need a framework to facilitate development of an effective outcomes-based evaluation system with respect to competencies. The objective of this article is to provide guidance to educators for the development of an outcomes-based evaluation system and a practical framework for compliance with program requirements for competency-based evaluation.

Portions of this work were presented at the ACGME/ABMS Extreme Makeover Quadrad Meeting, Chicago, Ill, May 16, 2005. Reprint requests should be addressed to Eric S. Holmboe, MD, American Board of Internal Medicine, 510 Walnut Street, Suite 1700, Philadelphia, PA 19106. E-mail address: [email protected].

0002-9343/$ -see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.amjmed.2006.05.031

WHAT IS AN EVALUATION SYSTEM?
An evaluation system consists of a group of individuals who work together on a regular basis to provide evaluation and feedback to a population of physicians-in-training. This definition is adapted from that of a high-functioning clinical microsystem.3 In an effective evaluation system, evaluators share common educational aims and outcomes, share linked processes and information about resident performance, and produce a physician truly competent to enter independent medical practice or fellowship at the end of training. The evaluation system has a structure (faculty and resources) to carry out processes (teaching, evaluation, and feedback) to produce an outcome (a competent physician at the end of residency).4 This system must provide both summative and formative evaluation for trainees and, at a minimum, summative evaluation for the profession and the public.

FORMATIVE AND SUMMATIVE EVALUATION

Summative evaluation refers to the final judgment of competence and performance at the end of an educational activity, such as the end of internship or even the end of a specific rotation. Formative evaluation refers to the judgments made during an educational activity.

What are the essential elements of evaluation? First, evaluation must represent a composite and comprehensive view of a trainee’s competence. No single evaluation method will be sufficient, including the common “global” faculty evaluation forms.5,6 Second, the evaluation should include self-assessment and must stimulate reflection on the part of the trainee.7 Self-assessment and reflection are necessary skills for lifelong learning; as many have noted, residency training at a minimum should make an individual ready for practice, but it cannot make residents fully “practice ready.” Much learning will occur after formal training, and the evaluation of this new knowledge will be performed independently by the physician, stimulated by practice needs and reflection. Third, the evaluation system must be feasible to execute and maintain. However, rigor and quality must not be sacrificed solely to improve feasibility. Effective evaluation requires faculty skill and hard work. If graduate medical education is to truly embrace accountability and professionalism, substantial work in faculty development is needed to improve summative evaluations.8 For example, the public expects and takes for granted a high level of competence (proficiency) in airline pilots; the medical community must strive for no less. Every physician graduating from residency and fellowship training should have documented, rigorous evidence that they are truly ready to enter independent practice. The use of multiple evaluation tools, as described below, provides the mechanism for creating a more rigorous evaluation process.

Formative evaluation is equally important. Formative evaluation places greater emphasis on descriptive evaluation and is critically important for facilitating high quality, meaningful feedback for the resident.9-11 Such specificity and detail are essential to help the trainee identify both strengths and weaknesses and to learn the skill of reflective practice. Trainees find that receiving evaluations only as scores and numbers is insufficient and frustrating. Formative evaluation should be embedded in everyday activities, such as patient admissions and work rounds, morning report, and even conferences.

ATTRIBUTES OF AN EFFECTIVE EVALUATION SYSTEM

Leadership
The leader of an effective evaluation system, such as a program director, should possess several characteristics. First, the leader must be willing to do whatever he or she asks others to do. Research from the quality improvement world suggests that “leading by doing” promotes change and gives credibility to the leadership process.12-15 Second, the leader of an evaluation system must be knowledgeable about evaluation and feedback methodologies. Third, the leader needs to interact collaboratively not only with other faculty and residents but also with nurses, administrators, and other staff responsible for comprehensive evaluations of the residents’ competence.13 Educating all these groups promotes a better assessment environment. Fourth, the leader needs to take negative evaluations seriously. Failure to do so can have substantial untoward consequences for the evaluator who had the courage to submit a negative evaluation and, therefore, may squelch further negative evaluations from that individual and others.

Communication of Goals
The rationale and goals of the general competencies should be vetted and discussed with all groups to facilitate acceptance of the tools and methods used in competency-based evaluation. Furthermore, groups that share goals and a common vision are more likely to provide meaningful and specific information on evaluations. Shared goals provide a purpose for evaluation and can enable faculty and others to see the evaluation process as helping the resident attain competence rather than as just another task at the end of a specified rotation.

Transparency
The evaluation system should be transparent to the residents, faculty, other evaluators, and anyone else involved in the evaluation system. Transparency means that residents should be aware of all the methods being used to evaluate them, who performs the evaluations, how the results will be used, and the consequences of a negative evaluation. Residents should have as much access as feasible to their evaluation file.

Continuous Quality Improvement
Evaluation is a process that is never perfect or complete, and any system will need constant refinement. It is important to embed systematic review at regular intervals by all parties involved to ensure the evaluation system is meeting the goal of identifying individual competencies and deficiencies at the appropriate developmental stages of training. Furthermore, an evaluation instrument is only as good as the individual using it. Faculty development will be a crucial component of an evaluation system grounded in quality improvement.

Evaluation Committees
No single individual can manage all aspects of a successful evaluation system. Evaluation committees, often referred to as clinical competency committees, can be an effective and efficient mechanism to detect deficiencies early, provide “real time” faculty development responses, and promote positive changes in the evaluation culture. However, members must be committed and must not view this activity as simply another onerous task. The committee should not be a “rubber stamp” but should collect and review meaningful evaluation information to help all trainees progress and improve. A negative or disinterested climate on these committees, or a lack of trainee confidentiality regarding information shared at these meetings, can have a harmful effect on the entire evaluation program.

Importance of Accessible Information
An effective evaluation system requires a robust mechanism for the collection, analysis, summation, and archiving of evaluation information. Electronic or web-based products can potentially make the evaluation process more efficient and are particularly effective in allowing the synthesis of multiple formative evaluations into a meaningful summative evaluation. Many web-based programs allow program directors and others to generate reports quickly and easily. However, almost all programs will need an assistant or coordinator to manage the data collection process, and many programs will continue to depend on some form of paper-based evaluation.

Several factors must be present for information to flow efficiently and effectively through an evaluation system. Evaluators should be clear as to when, to whom, and by what method to communicate information about a resident. Program directors, evaluation committee members, chief residents, and others with special roles in the evaluation process should be prepared to receive the information and act on it in a timely manner. Honesty and a strong commitment to evaluation from all involved are crucial to ensuring optimal information flow.

THE “STRUCTURED PORTFOLIO”: A FRAMEWORK FOR OUTCOMES-BASED EVALUATION
Portfolios have long been used in education to document the activities and progress of the learner. Portfolios are usually collections of work, evaluations, and products of the learner over time.16-20 In contrast to the traditional “learning portfolio,” in which the trainee determines the contents, the contents of a “structured portfolio” are defined by both the training program and the trainee to maximize outcomes-based evaluation. A portfolio does not function as a single evaluation “tool” but represents a framework and process for collecting, analyzing, and documenting the successful acquisition of the general competencies. This process requires active engagement of the trainee in his or her own evaluation process.7

The principal characteristics of a structured portfolio are that it:

● Employs a multifaceted approach to evaluation. Research has shown repeatedly that an evaluation system heavily weighted toward global faculty evaluations overestimates resident competency.21-23 Global evaluations often suffer from poor validity and reliability and fail to accurately assess domains such as communication skills, practice-based learning, and systems-based practice.
● Uses the principle of “triangulation.” Evaluation methods, if used effectively and properly, can be used to evaluate more than one general competency.
● Is longitudinal and comprehensive in scope and truly represents a composite of a trainee’s competence and performance.
● Includes evidence of trainee self-assessment and reflection.
● Contains trainee contributions to the structured portfolio, demonstrating evidence of professional growth and performance. Keep in mind that the resident should have full access to the portfolio.
● Evaluates all 6 general competencies by more than one method per competency (see the sketch after the lists below).

The minimal components for a structured portfolio would include at least one method of evaluation from each of the following 4 broad evaluation methodologies:

● Foundational Evaluations: The longitudinal global ratings and monthly evaluations by faculty. These evaluations should represent a robust composite of multiple assessments by faculty.
● Direct Observations: Observation of the trainee’s clinical, communication, and interpersonal skills.
● Practice and Data-Based Learning: Active application by the trainee of personal performance data and system information to improve his or her practice. One example would be a medical record audit of patients accompanied by self-assessment, reflection, and a quality improvement plan.
● Multi-Source Evaluations: The perspective of patients and nonphysician health care providers should be included as part of the portfolio.
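Viewed as data, these two lists amount to a simple coverage rule: every tool belongs to one of the 4 methodological categories and maps to one or more of the 6 competencies, and a defensible portfolio plan should draw on all 4 categories while covering each competency by more than one method. The short Python sketch below is purely illustrative; the tool names, category labels, and competency mappings are hypothetical placeholders modeled loosely on the Table that follows, not a prescribed or official configuration.

```python
# Illustrative sketch only: checks a hypothetical portfolio plan against the
# structured-portfolio principles described above (all 4 methodological
# categories represented; every competency covered by more than one method).

COMPETENCIES = {
    "patient care",
    "medical knowledge",
    "practice-based learning and improvement",
    "interpersonal and communication skills",
    "professionalism",
    "systems-based practice",
}

REQUIRED_CATEGORIES = {"foundational", "direct observation",
                       "practice and data-based", "nonfaculty perspective"}

# Hypothetical plan: tool -> (methodological category, competencies assessed)
PORTFOLIO_PLAN = {
    "monthly faculty evaluations": ("foundational",
        {"patient care", "medical knowledge", "professionalism"}),
    "in-training examination": ("foundational", {"medical knowledge"}),
    "mini-CEX": ("direct observation",
        {"patient care", "interpersonal and communication skills", "professionalism"}),
    "medical record audit + QI plan": ("practice and data-based",
        {"practice-based learning and improvement", "systems-based practice", "patient care"}),
    "EBM clinical question log": ("practice and data-based",
        {"practice-based learning and improvement"}),
    "patient and nurse surveys": ("nonfaculty perspective",
        {"interpersonal and communication skills", "professionalism", "systems-based practice"}),
}


def review_plan(plan):
    """Report missing categories and competencies covered by fewer than 2 methods."""
    categories_used = {category for category, _ in plan.values()}
    missing_categories = REQUIRED_CATEGORIES - categories_used

    methods_per_competency = {c: 0 for c in COMPETENCIES}
    for _, competencies in plan.values():
        for c in competencies:
            methods_per_competency[c] += 1

    under_covered = {c: n for c, n in methods_per_competency.items() if n < 2}
    return missing_categories, under_covered


if __name__ == "__main__":
    missing, thin = review_plan(PORTFOLIO_PLAN)
    print("Missing methodological categories:", missing or "none")
    print("Competencies with fewer than 2 methods:", thin or "none")
```

On this hypothetical plan the check reports no gaps; removing the patient and nurse surveys, for example, would flag the missing nonfaculty perspective and leave interpersonal and communication skills and systems-based practice covered by only one method.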

The Table lists the various evaluations that would fall into each of these 4 evaluation methodologies. The evaluation “foundation” would be the global rating scales/monthly evaluation forms completed by faculty. Despite known problems with such forms, they can provide invaluable information if used properly by faculty.24,25 Better training of faculty in evaluation is needed more than new evaluation forms.21,24,26 With this “foundation,” programs can implement various tools from the other 3 components that are feasible and effective in specific training settings.

Table. Framework for an Outcomes-Based Evaluation System

Methodological category: Foundational
Evaluation tools: Global rating scales/evaluation forms; in-training examination
Competencies: Potentially all; best for patient care, professionalism, and medical knowledge

Methodological category: Direct observation
Evaluation tools: Mini-clinical evaluation exercise; observation checklists (eg, SEGUE); standardized patients/OSCE; simulations
Competencies: Patient care; interpersonal and communication skills; professionalism

Methodological category: Practice and data-based
Evaluation tools: Clinical care audits (medical record/claims/surveys); chart stimulated recall; practice improvement modules (ABIM); clinical vignettes; clinical care logs; procedure logs; EBM/clinical question logs
Competencies: Practice-based learning and improvement; systems-based practice; patient care

Methodological category: Nonfaculty perspective
Evaluation tools: Patient surveys; nursing surveys; ancillary personnel; peer; self
Competencies: Patient care; interpersonal and communication skills; professionalism; systems-based practice

To further illustrate the evaluation components of the structured portfolio framework, Figure 1 exemplifies a basic approach by which a program using existing multifaceted evaluation methods could meet the minimum requirements to evaluate the 6 general competencies. Note how one tool can be used to evaluate more than one competency. Figure 1 shows how the use of existing tools, namely monthly evaluation forms, the mini clinical evaluation exercise (mini-CEX; direct observation), patient and nursing surveys (multi-source/360 evaluations), the yearly In-Training Examination (ITE) (global evaluation of medical knowledge), a medical record audit, and a log of clinical questions using evidence-based medicine (practice and data-based learning), could address each general competency in multiple ways.

Figure 1. Example of tools for a structured portfolio. The figure depicts a structured portfolio drawing on monthly faculty evaluations, the mini-CEX (4-6/year), the ITE (1/year), a medical record audit and QI project (1/year), an EBM/clinical question log, and patient plus nurse or peer surveys (twice/year), mapped to the 6 general competencies.

Figure 2. Effective use of the structured portfolio. The figure links the structured portfolio (ITE, monthly evaluations, mini-CEX, medical record audit/QI project, clinical question log, patient surveys, nursing evaluations, and trainee contributions such as a research project) with the trainee (review the portfolio, reflect on its contents, develop a yearly personal improvement plan, and review it with a faculty advisor with self-assessment), the program director or faculty advisor (review the portfolio periodically and systematically, develop an early warning system, and encourage reflection and self-assessment), the clinical competency committee (periodic review, professional growth opportunities for all, and early warning systems), a summative assessment process, and regulatory bodies (American Board of Internal Medicine certification, state medical board licensure, and health care organization credentialing).

An evaluator should start with the monthly global evaluation forms, which are best suited to evaluate patient care, medical knowledge, and professionalism. The ITE, given just once a year, is an evidence-based, effective method to measure a trainee’s global medical knowledge.27 While explicitly allowed only for formative uses by its creators, the ITE has good predictive validity for the American Board of Internal Medicine (ABIM) certification examination. Finally, one study found that faculty were actually poor judges of a trainee’s global knowledge, making the ITE a valuable tool for programs.28

The mini-CEX is a validated, reliable tool for assessing clinical skills through direct observation.29,30 If programs have faculty perform just 4 mini-CEXs in the ambulatory clinic and one in each inpatient rotation, the majority of trainees would receive more than 10 observations per year. Research suggests that 10-12 mini-CEXs provide a high degree of reliability; just 4 provide sufficient reliability for “pass-fail” determinations.29 However, other research emphasizes the need to train faculty to be more accurate, rigorous observers of clinical skills.31

A patient survey and an assessment by the nursing staff in selected locations or rotations (multi-source/360 evaluations) are an efficient and effective way to include the patient and nonphysician perspective in the trainee’s evaluations.32-34 Multi-source evaluations often provide important and different perspectives on the competencies of a trainee.35,36 For example, peers and nurses are often in a better position to judge work ethic, professionalism, and interpersonal communication skills.37 Patient surveys can provide meaningful formative evaluation around communication skills.

Development of competency in practice-based learning and systems-based practice can be substantially facilitated by self-directed activities. For example, residents can perform a medical record audit of their own practice, an excellent way to teach practice-based learning and improvement and systems-based practice.38 A resident’s self-audit of practice, combined with a structured curriculum, has been shown to improve patient care and change resident behavior.39 Other studies have shown how residents can be involved in quality improvement activities and skill building.40-42 Residents also can keep logs of clinical questions and the resources they used to answer them, another effective technique to teach evidence-based medicine and evaluate practice-based learning and improvement.43 Finally, ABIM has made its practice improvement modules (PIMs) available to residency programs at low cost. PIMs combine a medical record audit, patient surveys, and a practice systems survey into a web-based tool that provides programs and trainees with detailed information about the quality of care and systems in their practice.44 In addition, PIMs facilitate the development of quality improvement plans with educational tools available from the Association of Program Directors in Internal Medicine. Faculty time commitment for all of these methods can be quite reasonable.

When program directors look at the sum total of the evaluations and other inputs into the portfolio, they must be able to determine that the resident has attained, at a minimum, competence in the 6 general competencies by the end of training. Equally important, residents also must demonstrate, through the reflective input of the portfolio, that they have assumed responsibility and accountability for their performance and continuous professional development. The clinical competency committee can be a useful group to review and integrate the contents of the portfolio. This committee approach is a common method in other programs.16

Figure 2 demonstrates how program directors and training programs could effectively use a structured portfolio as part of an effective evaluation system. Note that active trainee involvement and reflection is a vital component of both the evaluation system and the structured portfolio. As Figure 2 shows, the trainee should interact with and contribute to the portfolio on an ongoing basis. This contribution will encourage trainees to take personal responsibility for their own professional growth.

The use of the structured portfolio in postgraduate medical training is in its infancy. The most successful examples are found at the undergraduate medical education level,16 and portfolio use in graduate medical education is only now being reported.18 While the various components of the portfolio are grounded in empiric educational and evaluation science,16-20 future research is needed to determine the optimal approach to implementing and operating a portfolio-based evaluation system at the residency program level. One of the important issues will be how best to operationalize the collection of data from the various evaluation tools and the resident’s interactions and reflections. Web-based technology should enable this process. Current vendor-based evaluation products already allow programs to customize the evaluations used in the program, the types of reports generated, and how the information can be shared among users. These existing systems could be further modified to allow greater resident interaction and input.45 ACGME is currently developing a web-based portfolio prototype for pilot testing in 2007 (ACGME staff, personal communication). For portfolios to be feasible in residency programs, a web-based interface most likely will be needed. As web-based portfolios are implemented by training programs, the educational community should collaborate to investigate best practices around effectiveness and efficiency, starting with lessons learned in medical schools.16
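Much of the operational work described above is simply collecting scattered formative entries and rolling them up so a program director or clinical competency committee can review them per competency. The Python sketch below is a minimal illustration of that rollup under assumed, simplified data; the record fields, rating scale, and sample entries are hypothetical and do not reflect any particular vendor product or an ACGME schema.

```python
# Minimal illustration: rolling up individual evaluation entries by competency
# for periodic review. Fields and sample data are hypothetical.

from collections import defaultdict
from statistics import mean

# Each entry: (tool, competency, rating on a 1-9 scale, narrative comment)
entries = [
    ("mini-CEX", "patient care", 6, "Thorough focused exam; missed medication reconciliation."),
    ("mini-CEX", "interpersonal and communication skills", 7, "Clear counseling, checked understanding."),
    ("monthly evaluation", "patient care", 5, "Needs more efficient care plans on busy services."),
    ("patient survey", "professionalism", 8, "Patients consistently felt respected."),
    ("record audit", "practice-based learning and improvement", 6,
     "Audit of 10 charts; QI plan targets diabetic foot exams."),
]


def rollup(evaluations):
    """Group entries by competency and summarize counts, mean rating, and comments."""
    by_competency = defaultdict(list)
    for tool, competency, rating, comment in evaluations:
        by_competency[competency].append((tool, rating, comment))

    summary = {}
    for competency, items in by_competency.items():
        summary[competency] = {
            "n_observations": len(items),
            "mean_rating": round(mean(rating for _, rating, _ in items), 1),
            "comments": [f"{tool}: {comment}" for tool, _, comment in items],
        }
    return summary


if __name__ == "__main__":
    for competency, data in rollup(entries).items():
        print(competency, data["n_observations"], data["mean_rating"])
        for comment in data["comments"]:
            print("  -", comment)
```

A fuller system would also attach the trainee’s reflections to each competency and flag competencies with few observations, supporting the early warning role shown in Figure 2.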

CONCLUSIONS
ACGME and the Residency Review Committee for Internal Medicine expect residency programs (and fellowship programs in 2006) to demonstrate at least 2 methods of evaluation for each of the 6 general competencies. This evaluation can be both rigorous and efficiently administered by carefully selecting evaluation tools from the 4 methodological categories above, by training evaluators, and by using these tools in the settings with the highest yield (such as patient evaluations in continuity clinic).


By carefully selecting the best tools for the local educational environment and maximizing the use of these tools through training, programs can create an effective evaluation system that benefits the residents, the program, and the public. A structured portfolio is one systematic approach to encourage trainees to interact continually with the evaluation system. Assessment drives learning, and evaluation provides the judgment and rigor to determine competence.

References
1. Accreditation Council for Graduate Medical Education. The Outcome Project. Available at: http://www.acgme.org/Outcome/. Accessed September 30, 2005.
2. Holmboe ES, Hawkins RE. Evaluating the clinical competence of residents: a review. Ann Intern Med. 1998;129:42-48.
3. Nelson EC, Batalden PB, Huber TP, et al. Microsystems in health care: Part 1. Learning from high-performing front-line clinical units. Jt Comm J Qual Improv. 2002;28:472-493.
4. Donabedian A. Explorations in Quality Assessment and Monitoring. Vol 1. The Definition of Quality and Approaches to Its Assessment. Ann Arbor, MI: Health Administration Press; 1980.
5. Swing SR. Assessing the ACGME general competencies: general considerations and assessment methods. Acad Emerg Med. 2002;9:1278-1288.
6. Gray JD. Global rating scales in residency education. Acad Med. 1996;71:S55-S63.
7. Schon DA. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books; 1983.
8. Holmboe ES, Bowen JD, Green M, et al. Reforming internal medicine residency training: a report from the Society of General Internal Medicine's task force for residency reform. J Gen Intern Med. 2005;20:1165-1172.
9. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777-781.
10. Holmboe ES, Williams F, Yepes M, Huot S. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19:558-561.
11. Hemmer PA, Hawkins R, Jackson JL, Pangaro LN. Assessing how well three evaluation methods detect deficiencies in medical students' professionalism in two settings of an internal medicine clerkship. Acad Med. 2000;75:167-173.
12. Hiss RG, MacDonald R, David WR. Identification of physician educational influentials in small community hospitals. Res Med Educ. 1978;17:283-288.
13. Holmboe ES, Bradley ES, Mattera JA, et al. Characteristics of physician leaders working to improve the quality of care in acute myocardial infarction: a qualitative study. Jt Comm J Qual Saf. 2003;29:289-296.
14. Brennan TA. Physicians' responsibility to improve the quality of care. Acad Med. 2002;77:973-980.
15. Gruen RL, Pearson SD, Brennan TA. Physician-citizens: public roles and professional obligations. JAMA. 2004;291:94-98.
16. Friedman Ben David M, Davis MH, Harden RM, et al. AMEE Medical Education Guide No. 24: portfolios as a method of student assessment. Med Teach. 2001;23:535-551.
17. Challis M. AMEE Medical Education Guide No. 11 (revised): portfolio-based learning and assessment in medical education. Med Teach. 1999;21:370-386.
18. Carraccio C, Englander R. Evaluating competence using a portfolio: a literature review and web-based application to the ACGME competencies. Teach Learn Med. 2004;16:381-387.
19. Campbell CM, Parboosingh JT, Gondocz ST, et al. Study of physicians' use of a software program to create a portfolio of their self-directed learning. Acad Med. 1996;71:S49-S51.
20. Wilkinson TJ, Challis M, Hobma SO, et al. The use of portfolios for assessment of the competence and performance of doctors in practice. Med Educ. 2002;36:918-924.

21. Turnbull J, Gray J, MacFayden J. Improving in-training evaluation programs. J Gen Intern Med. 1998;13:317-323.
22. Silber CG, Nasca TJ, Paskin DL, et al. Do global rating forms enable program directors to assess the ACGME competencies? Acad Med. 2004;79:549-556.
23. Schwind CJ, Williams RG, Boehler ML, Dunnington GL. Do individual attendings' post-rotation performance ratings detect residents' clinical performance deficiencies? Acad Med. 2004;79:453-457.
24. Holmboe ES, Fiebach NH, Galaty LA, Huot S. Effectiveness of a focused educational intervention on resident evaluations from faculty: a randomized control trial. J Gen Intern Med. 2001;16:427-434.
25. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74:1203-1207.
26. Holmboe ES. Faculty and the observation of trainees' clinical skills: problems and opportunities. Acad Med. 2004;79:16-22.
27. Garibaldi RA, Subhiyah R, Moore ME, Waxman H. The in-training examination in internal medicine: an analysis of resident performance over time. Ann Intern Med. 2002;137:505-510.
28. Hawkins RE, Sumption KF, Gaglione MM, Holmboe ES. The in-training examination in internal medicine: resident perceptions and correlation between resident scores and faculty predictions of resident performance. Am J Med. 1999;106:206-210.
29. Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: a method for assessing clinical skills. Ann Intern Med. 2003;138:476-481.
30. Holmboe ES, Huot S, Hawkins RE. Construct validity of the mini clinical evaluation exercise. Acad Med. 2003;78:826-830.
31. Holmboe ES, Hawkins RE, Huot SJ. Direct observation of competence training: a randomized controlled trial. Ann Intern Med. 2004;140:874-881.
32. Matthews DA, Feinstein AR. A new instrument for patients' ratings of physician performance in the hospital setting. J Gen Intern Med. 1989;4:14-22.
33. Butterfield PS, Mazzaferri EL. A new rating form for use by nurses in assessing residents' humanistic behavior. J Gen Intern Med. 1991;6:155-161.
34. Lipner RS, Blank LL, Leas BF, Fortna GS. The value of patient and peer ratings in recertification. Acad Med. 2002;77:S64-S66.
35. Violato C, Lockyer J, Fidler H. Multisource feedback: a method of assessing surgical practice. BMJ. 2003;326:546-548.
36. Lockyer J. Multisource feedback in the assessment of physician competencies. J Contin Educ Health Prof. 2003;23:4-12.
37. Norcini JJ. Peer assessment of competence. Med Educ. 2003;37:539-543.
38. Jamtvedt G, Young JM, Kristoffersen JT, Thomson O'Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2003;(3):CD000259.
39. Holmboe ES, Prince L, Green ML. Teaching and improving quality of care in a residency clinic. Acad Med. 2005;80:571-577.
40. Ogrinc G, Headrick LA, Mutha S, et al. A framework for teaching medical students and residents about practice-based learning and improvement, synthesized from a literature review. Acad Med. 2003;78:748-756.
41. Djuricich AM, Ciccarelli M, Swigonski NL. A continuous quality improvement curriculum for residents: addressing core competencies, improving systems. Acad Med. 2004;79:S65-S67.
42. Ogrinc G, Headrick LA, Morrison LJ, Foster T. Teaching and assessing resident competence in practice-based learning and improvement. J Gen Intern Med. 2004;19:496-500.
43. Wong J, Holmboe ES, Huot S. Teaching and learning in an 80-hour work week: a novel day-float rotation for medical residents. J Gen Intern Med. 2004;19:519-523.
44. American Board of Internal Medicine. Practice improvement modules. Available at: https://www.abim.org/online/pim/DEFAULT.ASPX. Accessed September 30, 2005.
45. E*value Residency Evaluation System. Available at: http://www.advancedinformatics.com/?id=3. Accessed March 15, 2006.

33. Butterfield PS, Mazzaferri EL. A new rating form for use by nurses in assessing residents’ humanistic behavior. J Gen Intern Med. 1991;6:155-161. 34. Lipner RS, Blank LL, Leas BF, Fortna GS. The value of patient and peer ratings in recertification. Acad Med. 2002;77:S64-S66. 35. Violato C, Lockyer J, Fidler H. Multisource feedback: a method of assessing surgical practice. BMJ. 2003;326:546-548. 36. Lockyer J. Multisource feedback in the assessment of physician competencies. J Contin Educ Health Prof. 2003;23:4-12. 37. Norcini JJ. Peer assessment of competence. Med Educ. 2003;37: 539-543. 38. Jamtvedt G, Young JM, Kristoffersen JT, Thomson O’Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2003;(3):CD000259. 39. Holmboe ES, Prince L, Green ML. Teaching and improving quality of care in a residency clinic. Acad Med. 2005;80:571-577. 40. Ogrinc G, Headrick LA, Mutha S, et al. A framework for teaching medical students and residents about practice-based learning and improvement, synthesized from a literature review. Acad Med. 2003;78:748-756. 41. Djuricich AM, Ciccarelli M, Swigonski NL. A continuous quality improvement curriculum for residents: addressing core competencies, improving systems. Acad Med. 2004:79:S65-S67. 42. Ogrinc G, Headrick LA, Morrison LJ, Foster T. Teaching and assessing resident competence in practice-based learning and improvement. J Gen Intern Med. 2004;19:496-500. 43. Wong J, Holmboe ES, Huot S. Teaching and learning in an 80-hour work week: a novel day-float rotation for medical residents. J Gen Intern Med. 2004;19:519-523. 44. American Board of Internal Medicine. Practice improvement modules. Available at: https://www.abim.org/online/pim/ DEFAULT.ASPX. Accessed September 30, 2005. 45. E*value Residency Evaluation System. Available at: http:// www.advancedinformatics.com/?id⫽3. Accessed March 15, 2006.