Validation of Whole Slide Imaging for Primary Diagnosis in Surgical Pathology

Thomas W. Bauer, MD, PhD; Lynn Schoenfield, MD; Renee J. Slaw, MBA; Lisa Yerian, MD; Zhiyuan Sun, MS; Walter H. Henricks, MD

Context.—High-resolution scanning technology provides an opportunity for pathologists to make diagnoses directly from whole slide images (WSIs), but few studies have attempted to validate the diagnoses so obtained.

Objective.—To compare WSI versus microscope slide diagnoses of previously interpreted cases after a 1-year delayed re-review ("wash-out") period.

Design.—An a priori power study estimated that 450 cases might be needed to demonstrate noninferiority, based on a null hypothesis: "The true difference in major discrepancies between WSI and microscope slide review is greater than 4%." Slides of consecutive cases interpreted by 2 pathologists 1 year prior were retrieved from files, and alternate cases were scanned at an original magnification of ×20. Each pathologist reviewed his or her cases using either a microscope or the imaging application. Independent pathologists identified and classified discrepancies; an independent statistician calculated major and minor discrepancy rates for both WSI and microscope slide review of the previously interpreted cases.

Results.—The 607 cases reviewed reflected the subspecialty interests of the 2 pathologists. Study limitations include the lack of cytopathology, hematopathology, or lymphoid cases; the case mix was not enriched with difficult cases; and both pathologists had interpreted several hundred WSI cases before the study to minimize the learning curve. The major and minor discrepancy rates for WSI were 1.65% and 2.31%, whereas rates for microscope slide reviews were 0.99% and 4.93%.

Conclusions.—Based on our assumptions and study design, diagnostic review by WSI was not inferior to microscope slide review (P < .001).

(Arch Pathol Lab Med. 2013;137:518–524; doi: 10.5858/arpa.2011-0678-OA)

Although radiologists have been using digital imaging technology for diagnostic purposes for years, technology has only recently advanced to the point that interpretation of scanned images of whole microscope slides for primary diagnosis is feasible. This technology is evolving rapidly, and several manufacturers now offer instruments that can scan slides relatively rapidly and create high-resolution digital images for education, research, archiving, quality assurance, image analysis, and diagnosis. Several previous studies have compared the interpretation of microscope slides with whole slide imaging (WSI) (also known as virtual microscopy or digital microscopy; Table 1) for primary diagnosis, consultation, or interpretation of frozen sections,1–14 but many of those studies have been limited in scope, and few have measured the baseline intraobserver variability for glass microscope slide interpretation to which WSI intraobserver variability should be compared.

A committee of the College of American Pathologists temporarily posted for comment 13 draft guidelines for validation of digital imaging for clinical use. Although not readily available and not yet accepted by the College of American Pathologists, those guidelines generally recommended testing intraobserver variability by comparing WSI with glass microscope slide diagnosis after an appropriate "wash-out" interval between interpretations. The purpose of this article is to describe a prospective, intraobserver variability study, approved by the Institutional Review Board, in which we compared whole slide imaging review of previously diagnosed cases with microscope slide review after a minimum 1-year delayed re-review ("wash-out") period from the original diagnosis.

The Cleveland Clinic Health System is composed of a large, main hospital (Cleveland, Ohio) with a group of affiliated regional and distant hospitals. There are approximately 42 pathologists at the main hospital, where in 2010 more than 109 000 surgical pathology cases were interpreted, amounting to more than 650 000 microscope slides. Pathologists at the main hospital subspecialize, and most pathologists have one or two subspecialty interests. Pathologists at the affiliated regional hospitals do not subspecialize as much and have practices similar to community pathologists elsewhere. Our validation study was designed to support our intention to potentially use WSI for primary diagnosis in both subspecialty and general surgical pathology practices by testing the hypothesis that interpreting with WSI is not inferior to interpreting with microscope slides.

Accepted for publication June 1, 2012. Published as an Early Online Release January 16, 2013.
From the Departments of Anatomic Pathology (Drs Bauer, Schoenfield, Yerian, and Henricks and Ms Slaw) and Quantitative Health Sciences (Mr Sun), The Cleveland Clinic, Cleveland, Ohio. Dr Bauer is also affiliated with the Departments of Orthopaedic Surgery, The Spine Center, and the Center for Orthopaedic Research, Cleveland Clinic.
The authors have no relevant financial interest in the products or companies described in this article.
Presented in part at the Digital Pathology Association meeting; October 31, 2011; San Diego, California.
Reprints: Thomas W. Bauer, MD, PhD, Department of Anatomic Pathology, Cleveland Clinic, 9500 Euclid Ave, Cleveland, OH 44106 (e-mail: [email protected]).

Table 1. Definitions

Term                 Definition
Whole slide image    Also referred to as virtual microscopy, digital microscopy, etc
Concordance          Essentially complete agreement between 2 diagnoses
Major discrepancy    A difference in diagnosis that would be associated with a difference in patient care
Minor discrepancy    A difference in diagnosis that would not be associated with a difference in patient care
Primary diagnosis    The original diagnosis (based on review of microscope slides) used for patient care
Wash-out period      The interval between the primary diagnosis and review as part of the validation study

MATERIALS AND METHODS

Hypothesis
We hypothesized that WSI review was noninferior to microscope slide review. It would be considered noninferior if major discrepancies with WSI review were not more than 4% when compared with microscope slide review. Put another way, the null hypothesis that we hoped to reject stated that the true difference in major discrepancies between WSI and microscope slide reviews was greater than 4%.
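Stated formally, in our notation rather than the authors', the 1-sided noninferiority hypotheses are:

```latex
% p_WSI and p_slide are the true major discrepancy probabilities on
% re-review; Delta = 0.04 is the prespecified noninferiority margin.
\[
  H_0:\; p_{\mathrm{WSI}} - p_{\mathrm{slide}} \ge \Delta
  \quad\text{vs}\quad
  H_1:\; p_{\mathrm{WSI}} - p_{\mathrm{slide}} < \Delta ,
  \qquad \Delta = 0.04 .
\]
```

Rejecting the null hypothesis supports the claim that WSI review is, at worst, within 4 percentage points of microscope slide review.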

A Priori Power Study
Before starting the study, we sought to estimate the number of cases we would need to review to test the hypothesis that interpreting with WSI is not inferior to interpreting with microscope slides. That power study required an estimation of the approximate intraobserver variability we would expect based on previous experience or the literature. Few published studies document true intraobserver variability in surgical pathology. Some studies have reported "error rates," or discrepancies between frozen and permanent sections, but many of those studies primarily reflect interobserver variability. For example, Raab and coworkers15 reviewed self-reported discrepancies from 75 laboratories participating in the College of American Pathologists Q-Probe program. The overall discrepancy rate was about 7%, with 5.3% major discrepancies. Similarly, a retrospective review16 of 715 second-opinion slide reviews obtained for patients being treated at the Sun Yat-Sen Cancer Center yielded a major discrepancy rate of 6%. Another review cited a wide range of "error rates" but concluded that "[c]urrently, pathology appears to be operating at about a 2.0% error rate."17(p624) Although this gives us some information about existing variability in surgical pathology, it is of marginal relevance to validating WSI.

Several previous studies, however, have described experiences comparing the use of telepathology and/or WSI with microscope slides. For example, Evans and coworkers2 have described the use of telepathology, and subsequently WSI ("virtual slides"), to interpret frozen sections throughout a health network system in Toronto, Canada. With an experience of more than 350 interpretations by robotic microscope and 630 by WSI, the deferral rate was 7.7%, and the accuracy rate was 98% for both modalities. Molnar9 compared microscope slide and WSI diagnoses of 61 routine gastric biopsies and 42 routine colon biopsies, and reported a final concordance rate of 95.1%. Jukic et al8 evaluated intraobserver discrepancies among 3 pathologists after each reviewed glass slides and digital images of 101 cases. After excluding highly unusual or difficult cases and those that needed high magnification, the rate of significant discrepancies was 4.4%.

Each of the above studies has limitations but still provides some guidance to the approximate number of discrepancies that might be expected in a prospective validation study. We shared those and other studies with a professional statistician, who then performed a sample-size calculation. Based in part on the literature noted above, we assumed the major discrepancy rate between the original diagnosis and the microscope slide review diagnosis would be 3%. We set the noninferiority margin for WSI review at 4%. A 1-sided binomial test was used for comparison; the power to be achieved was 80%, and the significance level was .05. Based on those assumptions, we calculated that 450 cases (225 per group) would need to be reviewed to establish noninferiority.
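The stated assumptions pin down the sample size via the standard normal-approximation formula for comparing two proportions against a noninferiority margin. A minimal sketch that reproduces the 225-per-group figure (our reconstruction, not the authors' calculation; the function name is ours):

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p_wsi, p_slide, margin, alpha=0.05, power=0.80):
    """Per-group n for a 1-sided noninferiority test of two proportions,
    using the normal approximation."""
    z_alpha = norm.ppf(1 - alpha)        # 1.645 for a 1-sided .05 test
    z_beta = norm.ppf(power)             # 0.842 for 80% power
    var = p_wsi * (1 - p_wsi) + p_slide * (1 - p_slide)
    effect = margin - (p_wsi - p_slide)  # distance of the assumed truth from the margin
    return ceil((z_alpha + z_beta) ** 2 * var / effect ** 2)

# Assumed 3% major discrepancy rate in both arms and a 4% margin (from the text).
print(n_per_group(0.03, 0.03, 0.04))  # -> 225, ie, 450 cases in total
```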

Study Design
There were 2 primary pathologists. One pathologist reviewed cases typical of a subspecialty practice at our main hospital (mostly orthopedic and gastrointestinal cases), whereas the second pathologist reviewed a broader spectrum of cases similar to a community practice. Both pathologists performed several hundred side-by-side reviews of WSI and matched microscope slides to gain experience before the study started. During that training interval, both pathologists easily recognized digital images of previously reviewed glass microscope slides, so for the study we chose to use a delayed re-review ("wash-out") period of at least 1 year after glass slide primary diagnosis.

Our overall study design is shown in Figure 1. The study coordinator first identified consecutive cases that each of the 2 pathologists had interpreted beginning on November 1, 2009. Outside cases for which microscope slides had been returned and cases with gross-only diagnoses were excluded. The original microscope slides were retrieved from file, and sequentially alternating cases were scanned at an original magnification of ×20 using a whole slide imaging system (ScanScope XT, Aperio, Vista, California). At the same time, working drafts of the original surgical pathology reports were reproduced and censored with respect to the original 2009 diagnosis. The working drafts were not censored for other clinical information; the intention of using working drafts was to provide each pathologist with the same amount of background information at the time of review as was available at the time of primary diagnosis (eg, specimen site, gross description, among others).

Figure 1. Flow chart indicating the overall experimental design. Abbreviation: WSI, whole slide image. (Aperio XT and ImageScope, Aperio, Vista, California.)

The cases (either microscope slides and paperwork, or paperwork only and notification of a scanned image in queue) were distributed to each pathologist. Each pathologist reviewed the cases, using either a microscope or whole slide image viewer software (ImageScope, Aperio) on a 24-inch monitor (Dell, Round Rock, Texas), and diagnoses were entered into a spreadsheet. At the completion of the review process, the study coordinator distributed the spreadsheets to an independent, referee pathologist, who determined which cases were concordant and which appeared to be discordant. Once the potentially discordant cases were identified, the original microscope slides and all the data for each case were grouped according to subspecialty and were then passed on to the directors of the appropriate subspecialty section. Each director then reviewed all the available material to establish an independent, gold standard diagnosis and, thereby, determine whether the case was concordant, a minor discrepancy, or a major discrepancy. In the case of discrepant diagnoses, the subspecialty director identified which of the 2 diagnoses (ie, the 2009 primary diagnosis or the review diagnosis, either glass slide or WSI) was most correct. Finally, the entire spreadsheet was passed on to an independent, professional statistician for analysis.

As described above, the study involved a number of critical participants. There were 2 primary study pathologists; a study coordinator, who was responsible for case selection, deidentification, and maintenance of the database; imaging technicians, for scanning slides and uploading files; and an information technology specialist to coordinate and maintain informatics. In addition, an independent, referee pathologist screened the diagnoses for possible discrepancies; subspecialty directors reviewed each potentially discrepant case; and an independent, professional statistician performed the initial a priori power study as well as the final analysis of the results.
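The sequential alternation of cases between arms amounts to a simple split; a minimal sketch of that allocation scheme as described above (hypothetical accession numbers; not the authors' code):

```python
# Alternate consecutive, eligible cases between the two arms: every other
# case is scanned for WSI review, the rest are re-reviewed as glass slides.
def assign_arms(case_ids):
    return case_ids[0::2], case_ids[1::2]  # (WSI arm, microscope arm)

# Hypothetical accession numbers standing in for the 607 study cases.
cases = [f"S09-{i:04d}" for i in range(1, 608)]
wsi_arm, slide_arm = assign_arms(cases)
print(len(wsi_arm), len(slide_arm))  # 304 303 (the study's arms were 303 and 304)
```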

Cases Versus Parts
The distinction between "cases" and "parts" is another consideration in calculating the number of diagnoses necessary to establish noninferiority, as well as the number of discrepant diagnoses. For example, a prostate specimen (case) might contain 6 or more individual biopsies (parts). The interpretation of any one of those could influence patient care, depending on the findings in the remaining biopsies. On the other hand, patient care decisions are usually based on interpreting the combination of all of the parts together. For purposes of our validation study, we chose to set a low threshold for defining discrepancies, and we chose to review a large number of diagnoses so that either cases or parts could be used in subsequent statistical analyses.

We used a low threshold for defining discrepancies. For example, our subspecialty director determined that the difference between a total Gleason score 6 versus a total Gleason score 7 would be a major discrepancy. Interpreting the case illustrated in Table 2 by parts, there are 2 major and 1 minor discrepancies. However, the subspecialty director reviewed the microscope slides from the case and determined that one of the 2009 diagnoses was better than the subsequent WSI diagnosis (part A), but the WSI review diagnosis of part C was better than the original microscope slide diagnosis. Therefore, if we adjust the discrepancy rate by subtracting the part for which the WSI diagnosis was better, we derive an "adjusted major discrepancy rate" of 1 (part) for this case. The results to be described below will be reported with respect to both parts and cases, as well as with respect to raw and adjusted major and minor discrepancy rates.

Table 2. One Prostate Biopsy With Multiple Parts(a)

Part   2009 Diagnosis   WSI Review Diagnosis   Expert Diagnosis   Discrepancy by Part   Adjusted Discrepancy
A      3+3              3+4                    3+3                Major                 Major
B      Benign           Benign                 Benign             None                  None
C      3+3              3+4                    3+4                Major                 None
D      Benign           Focal PIN              Benign             Minor                 Minor

Abbreviations: PIN, prostatic intraepithelial neoplasia; WSI, whole slide image.
(a) The difference between adenocarcinoma, Gleason score 3+3, versus adenocarcinoma, Gleason score 3+4, was considered by our subspecialty director to be a major discrepancy. There were 4 parts of this case, with 2 major (parts A and C) and 1 minor (part D) discrepancies. However, 1 of the WSI diagnoses (part C) was interpreted by the independent expert as a better diagnosis than the original microscope slide diagnosis, so the adjusted discrepancy count indicates 1 major discrepancy (part A) and 1 minor discrepancy.
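The by-part bookkeeping in Table 2 reduces to a simple counting rule: tally discrepancies per part, then drop those in which the expert judged the WSI diagnosis better. A minimal sketch of that rule as we read it (the data structure is ours, not the paper's):

```python
from collections import Counter

# Each part: (discrepancy class or None, which diagnosis the expert judged
# better: "original", "wsi", or None). Encoded from Table 2 above.
parts = [("major", "original"),  # part A
         (None, None),           # part B
         ("major", "wsi"),       # part C
         ("minor", "original")]  # part D

raw = Counter(cls for cls, _ in parts if cls)
# Adjusted counts drop discrepancies in which the WSI review was better.
adjusted = Counter(cls for cls, better in parts if cls and better != "wsi")

print(raw)       # Counter({'major': 2, 'minor': 1})
print(adjusted)  # Counter({'major': 1, 'minor': 1})
```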


Table 3. Discrepancies by Case (Data Are No. (%))

Review Group                                    Deferred    Major      Major: WSI Better   Minor       Minor: WSI Review       None
                                                                       Than Original                   Better Than Original
WSI review of original glass slides, n = 303   5 (1.65)    5 (1.65)   4 (1.32)            7 (2.31)    11 (3.63)               271 (89.44)
Microscope slide review, n = 304               3 (0.99)    3 (0.99)   NA                  15 (4.93)   NA                      283 (93.09)
Total, N = 607                                 8           8          4                   22          11                      554

Abbreviations: NA, not applicable; WSI, whole slide image.

Statistics
The a priori power study to determine the minimum sample size is described above. At the completion of the study, a noninferiority test (1-sided binomial test) was performed to compare the major discrepancy rate and the minor discrepancy rate between the WSI rereview and the microscope slide rereview. The noninferiority margin was chosen as 4%. Unless otherwise specified, all tests were performed at a significance level of .05. All analyses were performed using SAS software (version 9.2, SAS Institute, Cary, North Carolina).
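The article reports only the SAS results; the following is a normal-approximation re-creation of the 1-sided noninferiority test using the by-case major discrepancy counts reported below (our sketch, not the authors' SAS code):

```python
from math import sqrt
from scipy.stats import norm

def noninferiority_test(x_wsi, n_wsi, x_slide, n_slide, margin=0.04):
    """Test H0: p_wsi - p_slide >= margin (1-sided, normal approximation).
    A small P value supports noninferiority of WSI review."""
    p_wsi, p_slide = x_wsi / n_wsi, x_slide / n_slide
    se = sqrt(p_wsi * (1 - p_wsi) / n_wsi + p_slide * (1 - p_slide) / n_slide)
    z = (margin - (p_wsi - p_slide)) / se
    return z, norm.sf(z)  # upper-tail P value

# Major discrepancies by case: WSI 5/303 (1.65%), slides 3/304 (0.99%).
z, p = noninferiority_test(5, 303, 3, 304)
print(f"z = {z:.2f}, P = {p:.5f}")  # z = 3.60, P = 0.00016 (P < .001)
```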

RESULTS

As noted above, our a priori power study indicated that, based on the assumptions described, we needed at least 450 diagnoses (225 in each group) to prove noninferiority of WSI when compared with microscope slide review. We actually reviewed 607 cases, composed of 1025 parts. The results are summarized in Tables 3 through 7.

For the cases, there was complete concordance in 554 of 607 cases (91.3%); 5 cases of WSI reviews (1.7%) and 3 cases of microscope slide reviews (0.99%) were deferred (see below). Of the 304 cases in the microscope slide review group, there were 3 major discrepancies (0.99%). There were a total of 9 major discrepancies in the WSI group (3.0%), but the WSI review diagnosis was better than the original microscope slide diagnosis in 4 of those cases (1.3% of all WSI cases; 44% of the major discrepancies), whereas the original microscope slide interpretation was better than the WSI review in 5 cases (56% of the major discrepancies), for an adjusted major discrepancy rate for WSI review of 1.65%. The baseline minor discrepancy rate for the microscope slide review group was 4.93% (15 of 304 cases). There were 18 cases (5.9%) with minor discrepancies among the WSI cases, but the WSI review diagnosis was better than the original in 11 of those cases (61% of the minor discrepancies), for an adjusted minor discrepancy rate for WSI review of 2.31% (7 of 303 cases).

With respect to individual pathologists, the major case discrepancy rate for pathologist 1 was 0.3% (1 of 334 cases). That one case was a WSI review of a previous microscope slide diagnosis. Pathologist 1 had 2 digital review cases (1.2% of 167 cases) with minor discrepancies in which the original microscope slide interpretation was better and 6 minor discrepancy cases (3.59% of 167 cases) in which the WSI review diagnosis was better than the original microscope slide interpretation. Pathologist 1 also had 5 minor discrepancies (3% of 167 cases) in the microscope slide rereview group. Pathologist 2 had 8 of 136 cases (5.9%) in which there was a major discrepancy between the WSI and original microscope slide diagnosis. Expert review determined that in 4 of those cases (50% of discrepant cases; 2.9% of all cases), the original microscope slide diagnosis was best, and in the other 4 cases, the WSI review diagnosis was best, for an adjusted major discrepancy rate of 2.94% (4 of 136 cases). Pathologist 2 had 5 digital review cases (3.68% of 136 cases) with minor discrepancies in which the original microscope slide interpretation was better and 5 minor discrepancy cases (3.68% of 136 cases) in which the WSI review diagnosis was better than the original microscope slide interpretation. Pathologist 2 had 3 major discrepancies (2.20%) and 10 minor discrepancies (7.3%) in the microscope slide rereview group.

The 607 cases were composed of 1025 parts (Table 4). There was complete concordance in 92% of those parts; 10 were deferred (5 in the WSI and 5 in the microscope slide review groups). The baseline minor discrepancy rate based on microscope slide review was 4.58% (24 of 524 parts). There were 25 minor discrepancies among the 501 parts (4.99%) in the WSI review group. Of those minor discrepancies, the WSI review diagnosis was better than the original interpretation in 14 parts (2.79% of all parts; 56% of minor discrepancies), whereas the original diagnosis was better than the WSI review diagnosis in 11 parts, for an adjusted minor discrepancy rate of 2.20%.

As noted above, 8 cases in the study were deferred. Three of those were glass microscope slide reviews, whereas 5 were WSI reviews. The 3 microscope slide reviews were deferred only because they were difficult diagnoses: one was dysplasia in Barrett esophagus, and the second was dysplasia in a patient with ulcerative colitis. Each of those cases had also been deferred to a subspecialty consensus group at the time of original diagnosis in 2009. The other deferred microscope slide review required correlation with radiographic imaging to rule out melorheostosis. Five of the digital image reviews were deferred. Three of those (60%) had also been deferred to a consensus group at the time of original diagnosis: (1) dysplasia in a patient with inflammatory bowel disease, (2) an inflammatory cloacogenic polyp with prolapse changes, and (3) a difficult uterine stromal proliferation. The fourth deferred WSI case required polarization to exclude uric acid crystals. The only WSI case in the study that was deferred because of unsatisfactory imaging was a retinal membrane in an ocular pathology slide. For that case, focus points could not be satisfactorily identified because the hematoxylin-eosin stain had faded over time. Although no other cases were deferred because of image quality, both pathologists requested that some cases be rescanned at an original magnification of ×40, instead of the default ×20, usually to better identify inflammatory cells or microorganisms.

The major discrepancies in the WSI group are listed in Table 5. In retrospect, the case of reflux esophagitis that was interpreted as eosinophilic esophagitis might have benefited from review at an original magnification of ×40 instead of ×20. A gastric biopsy was originally correctly interpreted as intramucosal adenocarcinoma but was interpreted as chronic gastritis with foveolar hyperplasia and reactive changes on WSI review of an image scanned at an original magnification of ×20 (Figure 2, A). Expert review found the case to be difficult using either slides or digital images but concluded that the original interpretation was correct. Recent rescanning at an original magnification of ×40 improved the resolution, but the case was still difficult (Figure 2, B).

Figure 2. A, Whole slide image (WSI), scanned at original magnification of ×20, of a gastric biopsy that was originally correctly interpreted as adenocarcinoma but was interpreted as "foveolar hyperplasia, favor reactive" on WSI review. The subspecialty director found the case difficult but concluded that the original diagnosis was correct. B, A repeat scan at original magnification of ×40 (displayed at ×20) yields a slightly better image, but diagnosis is difficult by either microscope slide or digital image (hematoxylin-eosin [A and B]).

All 11 parts with minor discrepancies in which the original microscope slide diagnosis was better than the WSI review diagnosis are listed in Table 6. Finally, the discrepancy rates for microscope slide and WSI review are summarized in Table 7. Whether based on cases or the number of individual diagnostic parts, review by WSI was not inferior to microscope slide review (P < .001).

Table 4. Discrepancies by Part (Data Are No. (%))

Review Group                                    Deferred   Major      Major: WSI Review      Minor       Minor: WSI Review      None
                                                                      Better Than Original               Better Than Original
WSI review of original glass slides, n = 501   5 (1.0)    6 (1.20)   9 (1.80)               11 (2.20)   14 (2.79)              456 (91.02)
Microscope slide rereview, n = 524             5 (1.0)    9 (1.72)   NA                     24 (4.58)   NA                     486 (92.75)
Total, N = 1025                                10         15         9                      35          14                     942

Abbreviations: NA, not applicable; WSI, whole slide image.

Table 5. Major Discrepancies by Whole Slide Imaging (WSI)(a)

Case No.   2009 Diagnosis                                WSI Review Diagnosis                                          Expert Diagnosis
1          Changes consistent with reflux esophagitis    Eosinophilic esophagitis                                      Reflux esophagitis
2          Intramucosal adenocarcinoma (stomach)         Chronic gastritis with foveolar hyperplasia, favor reactive   Intramucosal adenocarcinoma
3          Adenocarcinoma, Gleason 3+3 (prostate)        Adenocarcinoma, Gleason 3+4                                   Adenocarcinoma, Gleason 3+3
4          Chronic active colitis                        Normal                                                        Focal active colitis

(a) These were cases in which expert review determined that the original, microscope slide diagnosis was better than the review diagnosis from the digital image.


Table 6. Minor Discrepancies by Whole Slide Imaging (WSI)(a)

Case No.   2009 Diagnosis                               WSI Review Diagnosis                          Expert Diagnosis
1          Focal, active colitis                        Normal colonic mucosa                         Focal, active colitis
2          Active, chronic colitis                      Quiescent chronic colitis                     Focally active, chronic colitis
3          Mild, reactive gastropathy changes           Normal gastric mucosa                         Mild, reactive gastropathy changes
4          Salpingitis follicularis                     Salpingitis follicularis and endometriosis    Salpingitis follicularis
5          Benign prostate tissue                       Focal high-grade PIN                          Benign prostate tissue(b)
6          Focal high-grade PIN                         Benign prostate tissue                        Focal high-grade PIN(b)
7          Benign prostate tissue                       Focal high-grade PIN                          Benign prostate tissue(b)
8          Normal squamous mucosa                       Mild squamous dysplasia (CIN I)               Normal squamous mucosa
9          Prostate adenocarcinoma, Gleason score 3+4   Prostate adenocarcinoma, Gleason score 3+5    Prostate adenocarcinoma, Gleason score 3+4(b)
10         Prostate adenocarcinoma, Gleason score 3+4   Prostate adenocarcinoma, Gleason score 4+5    Prostate adenocarcinoma, Gleason score 3+4(b)
11         Complex, atypical endometrial hyperplasia    Endometrial adenocarcinoma, FIGO I            Complex, atypical endometrial hyperplasia

Abbreviations: CIN I, cervical intraepithelial neoplasia, grade 1; FIGO I, Fédération Internationale de Gynécologie et d'Obstétrique, stage I; PIN, prostatic intraepithelial neoplasia.
(a) These were parts of cases in which expert review determined that the original, microscope slide diagnosis was better than the review diagnosis from the digital image.
(b) This discrepancy is classified as minor because it resulted in no change in patient care, based in part on other parts of the same specimen.

Table 7. Differences in Discrepancy Rates(a)

Discrepancy Type   Discrepancy Rate for WSI, %   Discrepancy Rate for Slides, %   Difference, % (90% CI)   P Value
Major by case      1.65                          0.99                             0.66 (−0.86, 2.19)       <.001
Minor by case      2.31                          4.93                             −2.62 (−5.11, −0.14)     <.001
Major by part      1.20                          1.72                             −0.52 (−1.75, 0.71)      <.001
Minor by part      2.20                          4.58                             −2.38 (−4.23, −0.54)     <.001

Abbreviations: 90% CI, 90% confidence interval; WSI, whole slide image.
(a) WSI review is not inferior to microscope slide review.
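The differences and 90% CIs in Table 7 follow from a standard two-proportion (Wald) calculation; a quick check (our reconstruction from the counts in Tables 3 and 4, not the authors' SAS output):

```python
from math import sqrt
from scipy.stats import norm

def diff_ci(x1, n1, x2, n2, level=0.90):
    """Difference in discrepancy proportions (WSI minus slides), in percent,
    with a Wald confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = norm.ppf(1 - (1 - level) / 2)  # 1.645 for a 90% CI
    d = p1 - p2
    return 100 * d, 100 * (d - z * se), 100 * (d + z * se)

# (label, WSI events, WSI n, slide events, slide n), by case and by part.
rows = [("Major by case", 5, 303, 3, 304),
        ("Minor by case", 7, 303, 15, 304),
        ("Major by part", 6, 501, 9, 524),
        ("Minor by part", 11, 501, 24, 524)]
for label, x1, n1, x2, n2 in rows:
    d, lo, hi = diff_ci(x1, n1, x2, n2)
    print(f"{label}: {d:.2f} ({lo:.2f}, {hi:.2f})")
# Major by case: 0.66 (-0.86, 2.19) ... Minor by part: -2.38 (-4.23, -0.54)
```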

COMMENT

Whether or not a test is cleared by the US Food and Drug Administration, it is still appropriate for a laboratory to validate the use of a new diagnostic test, especially one directly related to patient safety. For WSI, the process of validation consists of 2 major parts. The first is the scanner (including a built-in microscope), computer, application software, and monitor, whereas the second is the actual interpretation of the scanned images for diagnosis. Many of the WSI systems are closed, such that verification of the process whereby the scanned image accurately reproduces the content of the microscope slide is controlled and performed by the manufacturer. The second part of validating the interpretation of WSI systems is to demonstrate with a reasonable degree of confidence that patient safety would not be compromised by using the system for patient care. That process necessarily involves comparing diagnoses rendered using the WSI system with diagnoses achieved using microscopes.

Several previous studies have reported on the "accuracy" of telepathology and/or WSI technology for diagnosis. For example, with the goal of validating the use of WSI to interpret frozen sections, Fallon and coworkers3 prepared digital slides of 52 consecutive frozen sections that had been obtained to diagnose ovarian lesions. Two pathologists reviewed the images and reported a correlation between WSI and frozen section of 96% with respect to benign versus malignant, borderline, or uncertain. A recent study by Chargari et al1 actually suggested possible superiority with respect to Gleason grading using WSI when compared with conventional glass microscope slides. In a study of 96 cases of skin tumors and tumorlike changes examined by 4 pathologists, Nielsen and colleagues10 reported an accuracy of 89.2% for WSI and 92.7% for conventional microscopy when diagnoses were compared with an expert diagnosis. They concluded that there was no significant difference in accuracy between WSI and glass slide review, although it is not clear whether this was a statistical determination. A recent study12 that examined 67 frozen section cases using WSI and a mobile viewing device found 89% accuracy compared with glass slide review. The authors concluded that the use of mobile viewing devices to interpret WSIs is feasible for frozen sections, based, in part, on their finding that this accuracy rate compares favorably with other studies using WSI for frozen section interpretation.

The level of "concordance," "agreement," or "accuracy" among diagnoses based on WSI versus glass slides that should be considered acceptable for use of WSI for routine diagnoses remains an open question. For instance, in a study looking at the use of WSI for subspecialty consultation, Wilbur and colleagues14 at Massachusetts General Hospital reported an overall concordance rate between WSI and glass slide interpretations of 91%, based on review of 53 consultation-level cases. The authors concluded that WSI-based interpretation is feasible but also cautioned that there is room for improvement because the WSI interpretation was incorrect in 9% of cases. In a study9 of 103 gastrointestinal biopsies, concordances of WSI and glass slides compared with consensus diagnosis were both high, although glass slide concordance was higher (98%) than WSI concordance (95%). The results of the aforementioned studies have generally demonstrated a high level of accuracy or concordance and generally concluded that the use of WSI is feasible for the use contemplated in the study; however, no clear basis for what constitutes an acceptable level of agreement has emerged. In other words, how is it determined that, say, a 90% concordance rate is acceptable for clinical practice? Our study aims to answer the key question implicit in such a determination by demonstrating whether or not WSI is inferior to glass slide review when baseline intraobserver variability for both WSI and glass slide review is taken into account.

Although the results of the studies noted above are encouraging, to our knowledge, no previous study performed an a priori power study to estimate the sample size needed to determine noninferiority, and based on our own power study, most previous studies have not included enough cases to statistically demonstrate noninferiority of the WSI diagnosis. For example, one validation study evaluated only 12 cases.5 Other studies have evaluated between 24 and 52 cases,3,6,7 although no specific rationale was provided for selecting those sample sizes.

A critical component of documenting noninferiority is recognizing baseline intraobserver variability for reviewing microscope slides. Without defining that baseline variability, the measured intraobserver variability of WSI review is of limited value. Our intraobserver major discrepancy rate for glass microscope slide review was 0.99% for cases and 1.72% for the many individual parts that composed our cases. The corresponding minor discrepancy rates were 4.93% and 4.58%. These data should be of help to future investigators who may want to perform their own validation studies of similar design.

A committee of the College of American Pathologists briefly posted for comment draft recommendations for laboratories to consider when validating WSI for primary diagnosis. Among those draft recommendations was a recommended wash-out period of approximately 2 to 3 weeks between microscope slide and WSI diagnosis. Little data are available to support that time interval, although the rationale for limiting the wash-out to several weeks rather than 1 year is that, even among experienced pathologists, diagnostic criteria may evolve during 1 year. We initiated our study before the draft recommendations were available. One of our major discrepancies was a case in which a hyperplastic colonic polyp was rediagnosed as a sessile serrated polyp on WSI review. Undoubtedly, our criteria for diagnosing sessile serrated polyp have evolved since the first diagnosis, supporting the premise behind the College of American Pathologists recommendation.

One of our major WSI discrepancies was a case in which reflux esophagitis, correctly diagnosed in 2009, was interpreted as "eosinophilic esophagitis" from the digital image. The color balance on the digital image may have rendered the granules in the neutrophils a slightly different color than is customarily seen in routine microscope slides. On the other hand, this study, as well as other experiences with WSI, has suggested to us that certain types of cases may be better reviewed from scans obtained at an original magnification of ×40 rather than ×20. Those cases often involve detecting inflammatory cells, especially neutrophils and/or plasma cells, or microorganisms. We agree with previous investigators13 that correctly classifying some inflammatory conditions may be relatively difficult by WSI and may require higher-magnification scans than are needed to diagnose most malignancies. On the other hand, we interpreted many inflammatory conditions in this study (eg, many cases of "rule out infection" during revision total joint arthroplasty and many gastrointestinal biopsies to evaluate inflammatory bowel disease) at an original magnification of ×20 without discrepancies.

Our study has several limitations. First, we did not include cytopathology, hematopathology, or lymphoid lesions because those cases often require high magnifications and may need numerous immunohistochemical stains. Second, we did not intentionally enrich our study set with difficult cases. Recognizing that some inflammatory conditions in particular may benefit from higher-magnification scans, we anticipate that enriching our study set with difficult cases would increase the intraobserver variability for both microscope slide review and WSI review, but our experience to date suggests that a case that is difficult on an imaging review is also difficult on microscope slide review. As another limitation, we did not attempt to quantify other aspects of workflow (eg, the time it takes to interpret WSI cases) in this study. Finally, both of the study pathologists had interpreted several hundred WSI cases with the matched microscope slides before the validation study. This experience helped them recognize, for example, the importance of requesting ×40 scans in selected cases and helped minimize the potential effect of a learning curve on WSI validation.

Our study had several attributes, including the a priori power calculation to determine sample size and the a priori focus on intended use and patient population. We also established a baseline intraobserver variability rate to which the WSI variability could be compared. Finally, we used an independent referee pathologist to screen diagnoses and identify potential discrepancies, we used independent subspecialty experts to determine the ultimate correct diagnoses of all potentially discrepant cases, and we used a professional statistician to compile the results and calculate the statistics. Representing perhaps the most rigorous WSI validation study published to date, our results add to the growing body of literature that helps support the safety and efficacy of WSI for primary diagnosis.

We greatly appreciate the support of Imaging Systems Analyst Scott Mackie, CET, and Imaging and Histology Technologists Sol Crisostomo, HTL (ASCP), MT (AMT), and Pamela Suydam, MT/HTL (ASCP); and Subspecialty Expert Pathologists Ana Bennett, MD, Steven Billings, MD, Charles Biscotti, MD, Jennifer Brainard, MD, John Goldblum, MD, Christina Magi-Galluzzi, MD, PhD, and Carmela Tan, MD, without whose help this study would not have been possible.

References

1. Chargari C, Comperat E, Magné N, et al. Prostate needle biopsy examination by means of virtual microscopy. Pathol Res Pract. 2011;207(6):366–369.
2. Evans AJ, Chetty R, Clarke BA, et al. Primary frozen section diagnosis by robotic microscopy and virtual slide telepathology: the University Health Network experience. Hum Pathol. 2009;40(8):1070–1081.
3. Fallon MA, Wilbur DC, Prasad M. Ovarian frozen section diagnosis: use of whole-slide imaging shows excellent correlation between virtual slide and original interpretations in a large series of cases. Arch Pathol Lab Med. 2010;134(7):1020–1023.
4. Fine JL, Grzybicki DM, Silowash R, et al. Evaluation of whole slide image immunohistochemistry interpretation in challenging prostate needle biopsies. Hum Pathol. 2008;39(4):564–572.
5. Furness P. A randomized controlled trial of the diagnostic accuracy of Internet-based telepathology compared with conventional microscopy. Histopathology. 2007;50(2):266–273.
6. Gilbertson JR, Ho J, Anthony L, Jukic DM, Yagi Y, Parwani AV. Primary histologic diagnosis using automated whole slide imaging: a validation study. BMC Clin Pathol. 2006;6:4–19. doi:10.1186/1472-6890-6-4.
7. Ho J, Parwani AV, Jukic DM, Yagi Y, Anthony L, Gilbertson JR. Use of whole slide imaging in surgical pathology quality assurance: design and pilot validation studies. Hum Pathol. 2006;37(3):322–331.
8. Jukic DM, Drogowski LM, Martina J, Parwani AV. Clinical examination and validation of primary diagnosis in anatomic pathology using whole slide digital images. Arch Pathol Lab Med. 2011;135(3):372–378.
9. Molnar B, Berczi L, Diczhazy C, et al. Digital slide and virtual microscopy based routine and telepathology evaluation of routine gastrointestinal biopsy specimens. J Clin Pathol. 2003;56(6):433–438.
10. Nielsen PS, Lindebjerg J, Rasmussen J, Starklint H, Waldstrom M, Nielsen B. Virtual microscopy: an evaluation of its validity and diagnostic performance in routine histologic diagnosis of skin tumors. Hum Pathol. 2010;41(12):1770–1776.
11. Ozluk Y, Blanco PL, Mengel M, Solez K, Halloran PF, Sis B. Superiority of virtual microscopy versus light microscopy in transplantation pathology [published online ahead of print September 29, 2011]. Clin Transplant. 2012;26(2):336–344. doi:10.1111/j.1399-0012.2011.01506.x.
12. Ramey J, Fung K, Hassell LA. Use of mobile high-resolution device for remote frozen section evaluation of whole slide images. J Pathol Inform. 2011;2:235–238. doi:10.4103/2153-3539.84276.
13. Slodkowska J, Pankowski J, Seimiatkowska K, Chyczewski L. Use of the virtual slide and the dynamic real-time telepathology systems for a consultation and the frozen section intra-operative diagnosis in thoracic/pulmonary pathology. Folia Histochem Cytobiol. 2009;47(4):679–684.
14. Wilbur DC, Madi K, Colvin RB, et al. Whole-slide imaging digital pathology as a platform for teleconsultation: a pilot study using paired subspecialist correlations. Arch Pathol Lab Med. 2009;133(12):1949–1953.
15. Raab SS, Nakhleh RE, Ruby SG. Patient safety in anatomic pathology: measuring discrepancy frequencies and causes. Arch Pathol Lab Med. 2005;129(4):459–466.
16. Tsung JSH. Institutional pathology consultation. Am J Surg Pathol. 2004;28(3):399–402.
17. Frable WJ. Surgical pathology—second reviews, institutional reviews, audits, and correlations: what's out there?: error or diagnostic variation? Arch Pathol Lab Med. 2006;130(5):620–625.
