Automatic Stitching of Digital Radiographies using Image Interpretation Andr´e Gooßena∗, Thomas Pralowb, Rolf-Rainer Grigata aVision Systems, Hamburg University of Technology, 21079 Hamburg, Germany bGeneral X-Ray, Philips Medical Systems, 22335 Hamburg, Germany

Abstract. In digital radiography, oversized images have to be assembled from multiple exposures. As the patient may move between subsequent exposures, an external feature is exposed together with the anatomy. The exposures typically have a very small overlap, which complicates the registration. We present an algorithm for fast automatic registration featuring robustness against noise, feature masking and feature displacement. Pivotal for this algorithm is an actual interpretation of the external stitching feature instead of a simple detection. The proposed method has been evaluated on 1900 pairs of clinical radiographs.

1 Introduction

When imaging long parts of the human body, e.g. legs or spine, with conventional screen-film technique, special cassettes and films of extended length are utilized. The migration to digital radiography limits the image size to the sensitive area of the flat-panel detector. In order to reproduce the behaviour of conventional radiography, a large image is assembled from multiple exposures with a small spatial overlap. This technique is commonly referred to as stitching. Due to the high rate of examinations it is necessary to reduce manual interaction of the operator to a minimum and to stitch radiographs automatically. The patient dose is directly connected to the size of the overlap between subsequent exposures. Hence it is desirable to reduce overlaps to a minimum size. However, this also reduces the image content contributing to the registration. As the detector has to be moved to the next position between subsequent exposures, there is a risk that patient movement or breathing produces inconsistent content. Refer to Figure 1 for a depiction of an acquisition. The ruler has to be easily removable for non-stitching exposures, and different examinations demand different ruler positions, e.g. standing/lying patient, centred for leg and lateral for spine stitches. The ruler therefore cannot be fixed with respect to the detector and may appear displaced and even rotated within the images.

[Figure 1: (a) Stitching radiograph acquisition, with X-ray tube, X-ray ruler and detector labelled; (b) exposures and composite oversized radiograph]

Figure 1: (a) Multiple radiograph acquisition. The patient stands in front of the flat-panel detector. While the detector moves up and down to reach the different exposure positions, the X-ray tube is panned around its rotational axis to expose the detector. The constant focal point avoids distortions at the image borders. An X-ray ruler brought between patient and detector serves as the feature for later composition. (b) The oversized radiograph is assembled from multiple overlapping exposures. ∗ [email protected]

2 State-of-the-Art

There are various known techniques for automatic registration of images, most of them targeting the creation of panoramic images [1]. For the given problem of image registration in DR, however, the presumptions of these techniques generally do not hold. While in photography there are three colour channels, in radiographic imaging the information is stored in only one intensity channel. Moreover, the influence of noise is negligible in photographic images, but inherent in radiography due to the effort to keep applied radiation doses as low as possible. The methods for registration of medical images either register target images that are completely contained within the reference image [2], low-noise images recorded by CCD cameras [3], or images containing considerable and rigid structure within the anatomy [4]. In radiography stitching, images only have a very small overlapping area, are noisy and might not contain any relevant structure within the overlapping area.

Two methods for stitching of multiple cassette images in computed radiography (CR) have been published. The method presented in [5] relies on a visible grid that is detected and registered; however, it has not yet been evaluated on a large number of images. In [6] the authors introduce a series of similarity measures, but do not achieve acceptable performance in terms of failure rate. Moreover, CR acquisition features two advantages compared to DR: the films are exposed in one single shot and the images have a nearly constant overlap. DR image acquisition, in contrast, allows the patient to move in between the exposures, and the images usually have varying overlap sizes caused by the limited accuracy of the hardware. Although applicable in CR, feature-matching algorithms are not transferable because of this possible movement. Plain image similarity, on the other hand, has to deal with ambiguities of the similarity measure within the region of possible solutions, e.g. caused by the recurring structure of spine and ribs or the smooth structure of thigh and shank. Even external registration features, e.g. X-ray rulers, are typically periodic and thus do not solve this problem. The spatial overlap of subsequent radiographs might vary by a few centimetres; every image-similarity measure therefore requires a qualified a priori estimation within the large space of possible overlaps. Landmark-based algorithms depend on known content or extractable features. Hence they are promising for the registration of an external feature, but fail for arbitrary anatomic content, which is considered extremely difficult or impossible to model [7]. Furthermore, the anatomy within the small overlap might not contain enough rigid or characteristic structure to extract features and define landmarks.

The proposed method overcomes these problems by combining feature-based registration and similarity measurement. Instead of computing inter-image correspondences, our method derives correspondences between image coordinates and real-world coordinates by not only detecting but also interpreting the feature, i.e. metering its information. This procedure operates even with very small overlaps or when the feature is invisible within the overlapping area.

3 Methods and Materials

Our method consists of two complementing stages. The ruler recognition algorithm locates the feature, i.e. the X-ray ruler. It interprets markers and digits on this feature to extract their global meaning and computes a feature-based estimation. The content-based registration refines the translation for subsequent images to match the anatomy. Refer to Figure 2 for a flow chart of the algorithm.

[Figure 2 flow: exposures Rl and Rl+1 → Ruler Recognition → Feature-Based Registration → Content-Based Registration → composite O = Rl ∪ Rl+1]

Figure 2: Complementing steps of the proposed stitching algorithm. Two exposures Rl and Rl+1 are combined to an oversized radiograph O via feature-based registration followed by content-based registration.

(a) Interest operator

(b) Processed radiograph

(c) Extracted regions of interest

(d) Detected ruler geometry

Figure 3: Interest Operator. (a) For the values of two opposed pixels lying on a circle of radius r exceeding the centre pixel value and an additional threshold, the centre pixel is considered belonging to a high contrast object. (b) An examination of pelvis and thighs containing an X-ray ruler. (c) Regions of interest that have been extracted by the proposed operator. (d) Detected ruler geometry. Masked ruler geometry is automatically extrapolated by the algorithm.

3.1 Feature-Based Registration

The ruler is recognized and interpreted by performing the following steps: image segmentation, character recognition, and feature interpretation. The objective of the image segmentation stage is to locate the ruler within the radiograph as well as the positions of the scale markers and the corresponding scale labels. To this end, regions of interest (ROI) are extracted from the radiograph. This is achieved by applying a very simple but nonetheless efficient interest operator that steps pixel-wise through the image I. For each pixel it is checked whether the recorded radiation dose increases by a sufficient level ∆I when moving to opposed neighbouring pixels of the current pixel (cp. Figure 3). If the dose exceeds the threshold for both neighbours of at least one opposed pair, the current pixel is considered to belong to a high-contrast object and contributes to the ROI:

ROI(x, y) = 1 if  ( I(x−r, y) > t ∧ I(x+r, y) > t )
                ∨ ( I(x, y−r) > t ∧ I(x, y+r) > t )
                ∨ ( I(x−r, y−r) > t ∧ I(x+r, y+r) > t )
                ∨ ( I(x−r, y+r) > t ∧ I(x+r, y−r) > t ),
ROI(x, y) = 0 otherwise,   with t = I(x, y) + ∆I   (1)

The ruler does not necessarily have to be the most radio-opaque object within the image. With the radius r of the neighbours adapted to the ruler’s structure size, this operator primarily marks pixels belonging to the X-ray ruler. Contributions due to noise and artefacts are removed by subsequently applying a morphological opening ROI ◦ S = (ROI ⊖ S) ⊕ S, with S denoting a stripe-shaped structuring element oriented vertically to preserve the ruler structure. To locate the ruler line, a discrete Radon transform [8] is computed on the ROI image:

R(d, ρ) = ∫∫ ROI(x, y) · δ( d − x·cos(ρ) − y·sin(ρ) ) dx dy   (2)

with both integrals running from −∞ to ∞. The maximum of the Radon transform R(d, ρ) determines the position of the ruler line. With the angle ρ and the distance d to the origin known, a sub-part containing the ruler can be deskewed and cropped out of the source image. The resulting image is invariant against translation and rotation of the X-ray ruler as well as spatially varying image intensity, and forms the input for the subsequent processing stages. With the ruler detected, a profile along the ruler line is generated by projection onto the vertical axis, P(y) = Σ_x I(x, y). The recurring scale markers and labels produce periodic maxima within this profile; the period λ can hence be determined by autocorrelation of the profile, P(y) ⋆ P(y + λ). The initial phase is determined by a subsequent hit-or-miss transformation that also reveals masked or covered scale markers.
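To make the interest operator concrete, here is a minimal vectorized sketch of Eq. (1); the function name, the NumPy formulation and the border handling are illustrative choices, not part of the paper:

```python
import numpy as np

def interest_operator(img, r, delta):
    """Mark pixels whose opposed neighbours at radius r both exceed the
    centre value by at least delta (a sketch of Eq. (1))."""
    t = img + delta                                  # per-pixel threshold t = I(x, y) + delta
    roi = np.zeros(img.shape, dtype=bool)
    # one offset per opposed-neighbour pair: horizontal, vertical, both diagonals
    for dy, dx in [(0, r), (r, 0), (r, r), (r, -r)]:
        a = np.roll(img, (-dy, -dx), axis=(0, 1))    # neighbour I(x + dx, y + dy)
        b = np.roll(img, (dy, dx), axis=(0, 1))      # opposed neighbour I(x - dx, y - dy)
        roi |= (a > t) & (b > t)
    # invalidate the r-wide border where np.roll wraps around
    roi[:r, :] = False
    roi[-r:, :] = False
    roi[:, :r] = False
    roi[:, -r:] = False
    return roi
```

In the full pipeline the resulting mask would additionally be cleaned by the vertical-stripe morphological opening described above before computing the Radon transform.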

The positions of these markers are reconstructed by linear regression over the detected markers. This reconstruction allows assembly of images even when no feature information is visible within the image overlap. To detect numbers at the refined marker locations, a search for connected pixels, so-called blobs, is performed in each row containing a marker. The resulting groups of connected pixels serve as candidates for digit recognition. The bounding-box size of these blobs forms a criterion for filtering out noise (e.g. very small blobs) and invalid geometry (e.g. very large blobs, wrong aspect ratios, etc.) and for selecting the candidates for optical character recognition. The second stage performs optical character recognition using a dedicated template matching algorithm [9]. Detected blobs are scaled to the size of digit templates and compared pixel-wise. The template with maximum congruence determines the classified digit. In order to restore missing digits and correct false recognitions, a virtual ruler is moved along the detected digits in the interpretation stage. Matching digits increase the score for a certain position, mismatched digits degrade it. Even for very weak contrast, a high noise level and thus only few detected digits, the geometry is recovered using this technique. Due to the high redundancy, the displacement of maximum congruence determines the positioning of the ruler, i.e. the ruler “snaps in” at the correct position, yielding a set of global correspondences between image coordinates and real-world coordinates. These correspondences are utilized in the content-based registration stage. The algorithm is invariant against ruler translation, rotation and scaling as well as the scale periodicity. It operates on the three different ruler types in current systems and is robust against changes to the scale font. Missing geometry is extrapolated automatically to restore information in covered overlap areas.
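The “snap-in” interpretation stage can be illustrated with a small sketch: a virtual ruler is slid over the recognised digits, and the offset with the best agreement wins, which also restores missing or misread labels. The scoring weights, the label step of 5 and the candidate range are assumptions for illustration, not values from the paper:

```python
def snap_in(detected, n_markers, step=5):
    """Slide a virtual ruler over digits read at marker rows and pick the
    start label with the best agreement (sketch of the interpretation stage).

    detected: dict mapping marker index -> recognised label, or None when
    the OCR stage produced nothing for that marker."""
    best_offset, best_score = None, float("-inf")
    for offset in range(0, 200, step):               # candidate start labels
        score = 0
        for idx, label in detected.items():
            if label is None:
                continue                             # missing digit: no vote
            expected = offset + idx * step
            score += 1 if label == expected else -1  # match rewards, mismatch penalises
        if score > best_score:
            best_offset, best_score = offset, score
    # reconstruct all labels, including missing and misread ones
    return {i: best_offset + i * step for i in range(n_markers)}
```

Because every correctly read digit votes for the same offset while misreads scatter their penalties, the redundancy makes the correct position win even when only a few digits are detected.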

3.2 Content-Based Registration

In this stage the results of the ruler recognition for subsequent images are utilized to determine the shift between the rulers and hence create an a priori estimation of the transformation between the images. A subsequent gradient correlation [10, 11] refines the horizontal translation and compensates patient movement relative to the X-ray ruler.
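A gradient-correlation refinement along the lines of [10, 11] might be sketched as follows; the search window, the normalised-correlation score and the restriction to a single translational component are simplifying assumptions for illustration:

```python
import numpy as np

def refine_shift(overlap_a, overlap_b, search=5):
    """Refine the feature-based estimate by gradient correlation: compare
    gradient images of the two overlap strips over a small search window
    around the a priori shift and return the best offset (in pixels)."""
    ga = np.gradient(overlap_a.astype(float), axis=0)
    gb = np.gradient(overlap_b.astype(float), axis=0)
    best_dy, best_corr = 0, float("-inf")
    for dy in range(-search, search + 1):
        # overlapping parts of the two gradient strips for candidate shift dy
        a = ga[max(dy, 0): ga.shape[0] + min(dy, 0)]
        b = gb[max(-dy, 0): gb.shape[0] + min(-dy, 0)]
        # normalised cross-correlation of the shifted gradient strips
        corr = np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if corr > best_corr:
            best_dy, best_corr = dy, corr
    return best_dy
```

Correlating gradients rather than raw intensities makes the score insensitive to the slowly varying dose differences between the two exposures.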

4 Results

In [12] we present the results of the proposed method compared to an inter-observer study. Manual references form the ground truth for the evaluation. 1611 out of 1814 image pairs are registered with a deviation of less than 1 mm. The translations for 99.3% of all valid image pairs lie within the tolerance range of 3 mm deviation. Thus the failure rate of the proposed algorithm amounts to 0.7%. Figure 4 depicts the results of the evaluation. The average processing time is 158 ms per computed translation for an image pair on an Intel Pentium D 2.8 GHz. Figure 5 contains three stitching results pointing out the capabilities of the proposed method. The first image pair contains a heavily masked ruler within the overlapping area; while conventional methods fail, image interpretation extracts the information necessary for correct registration. The second image pair has a very small overlapping area; with the proposed method it is even possible to register images without an overlap. The third image pair demonstrates invariance against ruler type and position and contains moved anatomy.

[Figure 4 data: (a) acceptance and rejection rates over 1900 image pairs — True Match 94.8%, False Match 0.7%, True Reject 3.9%, False Reject 0.6%; (b) automatic vs. manual stitching accuracy — occurrence [%] per deviation bin (0-1, 1-2, 2-3, 3-5, >5 mm) for the inter-observer study and the proposed method]

Figure 4: Results of automatic stitching for 1900 image pairs. (a) 86 pairs have been rejected by the algorithm. The true rejection rate of 3.9% corresponds to operational errors introduced by the medical staff, e.g. missing, flipped or heavily tilted rulers. (b) Deviations for the remaining 1814 image pairs against the manual reference are compared to the inter-observer deviations. The error rate sums up to 0.7%.

(a) masked feature, ∆ = 0.1 mm

(b) small overlap, ∆ = 0.2 mm

(c) ruler variant, ∆ = 0.2 mm

Figure 5: Three results of the proposed algorithm. Correct registration of an image pair with (a) masked X-ray ruler within the overlap area and (b) very small overlap area < 15 mm. (c) Ruler variant with right-side scale in a spine examination. Patient movement and breathing have been compensated by the proposed method.

5 Discussion

The proposed method combines two classical techniques, feature-based registration and similarity measures, to achieve a high accuracy and automation level for medical radiograph stitching. It yields a failure rate of 0.7% for automatic stitches. As is apparent from the results, the performance comes close to the lower boundary formed by the accuracy of the manual stitching references. The low processing time allows real-time application of the proposed algorithm. The high robustness has been proven by processing a large number of clinical images. User interaction and manual stitching are strongly reduced by the proposed method, and the risk of false automatic stitches, possibly resulting in erroneous medical treatment, is minimized. The few remaining failures were identified as resulting from noticeable patient movement causing ambiguous registration maxima. To avoid these failures it would be necessary either to introduce non-rigid transformations or to provide a priori knowledge about the position of diagnostically relevant anatomy.

References

1. C.-Y. Chen & R. Klette. “Image stitching - comparisons and new techniques.” In CAIP ’99: Proceedings of the 8th International Conference on Computer Analysis of Images and Patterns, pp. 615–622. Springer-Verlag, London, UK, 1999.
2. V. Kaynig, B. Fischer, R. Wepf et al. “Fully automatic registration of electron microscopy images with high and low resolution.” Microscopy and Microanalysis 13(2), pp. 198–9, 2007.
3. V. Rankov, R. Locke, R. Edens et al. “An algorithm for image stitching and blending.” Proc SPIE 5701, pp. 190–9, 2005.
4. B. Ma, T. Zimmermann, M. Rohde et al. “Use of autostitch for automatic stitching of microscope images.” Micron 38(5), pp. 492–9, 2007.
5. M. Čapek, R. Wegenkittl & P. Felkel. “A fully automatic stitching of 2D medical data sets.” BIOSIGNAL 16, pp. 326–8, 2002.
6. M. Gramer, W. Bohlken, B. Lundt et al. “An algorithm for automatic stitching of CR X-ray images.” Advances in Medical Engineering 114, pp. 193–8, 2007.
7. T. Lehmann. “Medizinische Bildverarbeitung.” In R. Kramme (editor), Medizintechnik: Verfahren, Systeme, Informationsverarbeitung, pp. 588–612. Springer, second edition, 1997.
8. J. Radon. “Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten.” Ber. Ver. Sächs. Akad. Wiss. Leipzig, Math.-Phys. Kl. 69, pp. 262–7, 1917.
9. L. Harmon. “Automatic recognition of print and script.” Proc IEEE 60(10), pp. 1165–76, 1972.
10. C. Heipke. “Overview of image matching techniques.” OEEPE Official Publications 33, pp. 173–89, 1996.
11. B. Zitová & J. Flusser. “Image registration methods: a survey.” Image and Vision Computing 21(11), pp. 977–1000, 2003.
12. A. Gooßen, M. Hensel, M. Schlüter et al. “Ruler-based automatic stitching of spatially overlapping radiographs.” Bildverarbeitung für die Medizin 2008: Algorithmen, Systeme, Anwendungen, pp. 192–6, 2008.
