Applications of image processing in image-guided radiation therapy

Investigations and research

Applications of image processing in image-guided radiation therapy D.A. Jaffray K.K. Brock R. Ferrari V. Pekar

Department of Radiation Physics, Radiation Medicine Program, Princess Margaret Hospital, Toronto, Ontario, Canada. Philips Research North America, Markham, Ontario, Canada.

Targeted radiation therapy (RT) is a central component of cancer treatment in both the radical and palliative settings. A central and compelling feature of RT is its capacity to deliver a therapeutic effect to cancerous targets that lie deep within the body. Full exploitation of this effect has been limited by clinical uncertainties in identifying the geometric boundaries of the target and by the lack of precision in delivering radiation dose to these internal targets over the many fractions (typically 20-40, delivered once per day) of a treatment regimen. This lack of precision has required the use of large safety margins to ensure coverage of the disease. Unfortunately, the surrounding normal tissues then receive excessive radiation, which in turn prevents escalation of the dose to the tumor.

Figure 1. In broad terms, the treatment process in radiation oncology includes establishment of clinical intent, development of a disease and normal-tissue model, and continual revision of the model during therapy delivery. The wealth of non-invasive information provided through imaging is allowing this process to become more dynamic, with the potential for rational adaptation of the intervention. Handling the quantity of image-based information is a significant challenge in this transition to image-guided intervention and image-based assessment. The move to model-based approaches for handling these rich datasets is a requirement if the time-varying nature of the underlying biological processes is to be captured and exploited.

MEDICAMUNDI 52/1 2008/07

Recent advances in imaging technology and the integration of these technologies into the radiation oncology workflow have resulted in an aggressive restructuring of the RT process to employ these images both for increased accuracy in target and normal-tissue identification and for greater precision in the delivery of the planned radiation fields at the time of treatment.

The past decade has seen the entire field transition from employing a set of radiographs, a grease marker, and hand-placed lead shielding blocks to using hundreds of Computed Tomography (CT), Magnetic Resonance (MR), and Positron Emission Tomography (PET) slices, computerized dose design, and on-line image-based targeting with robotic adjustment. In parallel with these developments, the field has embraced image-processing methodologies to accelerate the integration of this additional image information into the workflow, thereby improving patient care and lowering the workload associated with its use. The progression from simple geometric targeting to a more comprehensive process is illustrated in Figure 1. The frequency with which imaging is applied throughout the treatment process highlights the dynamic nature of the disease during intervention. As a result, the field is being driven towards model-based approaches that employ images to populate patient-specific models that can evolve with the changing anatomy over the course of intervention. The workflow and workload challenges associated with the integration of such dynamic image information are significant. The development of effective and efficient image processing approaches that integrate with these models promises to be an exciting and fruitful area of research in the years to come. In this review, applications of image processing technology in radiation oncology are addressed with an eye towards model-based approaches.

Image segmentation

Accurate organ delineation is a very important procedure in the RT planning process. The aim is to define the cancerous target volume accurately in order to deliver the maximum radiation dose to the tumor while sparing the surrounding healthy tissues [1]. The traditional manual slice-by-slice contouring of three-dimensional (3D) images using simple drawing tools is extremely labor intensive and can take many valuable hours of a clinician's time. In addition, recent advances in RT, such as the transition from conformal methods to intensity-modulated RT and the introduction of 4D CT and adaptive radiotherapy, have further amplified the burden of organ delineation [2]. The development of robust and reliable automatic or semi-automatic segmentation techniques has the potential to substantially facilitate the planning process and significantly increase patient throughput in the clinic [3]. In terms of technology, the current trend in image segmentation is moving from low-level intensity-based techniques towards advanced model-based approaches that exploit prior knowledge about anatomical shape, appearance and the spatial relationships between structures of interest (Figure 2). A brief description of the most common segmentation techniques applied in RT is given below.

Intensity-based segmentation

Segmentation techniques using thresholds and grayscale information, such as region growing, are common examples of intensity-based techniques [4-6]. Thresholding approaches are simple and fast: they segment scalar images by creating a binary partitioning of the image intensities. Region growing extracts an image region identified by pre-defined criteria, such as intensity or edges. Starting from a seed point, which can be selected manually or automatically, the algorithm iteratively connects adjacent similar pixels. The main drawback of intensity-based methods is that they do not take into account global properties of the region of interest (ROI) being segmented, for example its shape. They are therefore very sensitive to image noise and sampling artifacts, which are commonly present in medical images. As a result, these techniques generally either fail completely or require some form of post-processing to correct invalid object boundaries in the segmentation results. The clinical use of intensity-based techniques is thus typically limited to very simple homogeneous structures with well-defined boundaries.
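As an illustration, seeded region growing of the kind described above can be sketched in a few lines. This is a simplified 2D version with a hypothetical fixed-tolerance similarity criterion, not a clinical implementation:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity lies within `tol` of the seed intensity.

    Illustrative sketch only; clinical implementations add 3D
    connectivity, adaptive criteria, and post-processing.
    """
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Toy example: a bright 4x4 square on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 100
print(region_grow(img, (3, 3), tol=5).sum())  # 16 pixels in the region
```

Thresholding is the degenerate case in which the local criterion `abs(intensity - seed_val) <= tol` is replaced by a global intensity cutoff applied to every pixel, with no connectivity constraint at all.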

Knowledge-based segmentation

Knowledge-based methods [7, 8] make use of prior information to differentiate one class (a tissue class or anatomical structure) from another. In general, two sources of information are used to distinguish between the different classes inside an image: image data information, such as pixel intensities and edges, and information from the anatomy space, including the expected shapes and placement of specific tissues in an image. The easily identifiable tissues are located first and the results then refined incrementally, instead of attempting the final segmentation at once. Segmented regions that are deemed undesirable are removed from further consideration, allowing the method to "focus" on the remaining image regions, where subtle trends may become clearer.

 Figure 2. Evolution of segmentation approaches in RT.

Probabilistic models

Statistical models [9-11], such as Gaussian mixture models and Hidden Markov models, have been intensively used in the segmentation


of medical images. The basic idea behind these methods is to associate with a given incomplete-data problem (i.e., image intensities without corresponding labels) a completed one (i.e., pixel intensities associated with meaningful class labels), which can be solved iteratively using the expectation-maximization algorithm. At the first stage, the model parameters are initialized randomly or using some prior knowledge of the problem. The class labels of the pixels are then determined by maximizing the a posteriori probability of the segmentation, given the image data. In the next stage, the model is improved by computing the maximum likelihood estimates of the model parameters. The process continues iteratively until a convergence criterion is reached. Difficulties associated with this approach are the proper initialization of the model parameters, which can lead to unstable results, and the selection of the number of cluster classes. The computational cost of these methods is another drawback and a limiting factor when they are used in clinical applications.

Figure 3. Comprehensive deformable atlas of the risk structures in the head and neck region. Courtesy M. Kaus (Philips Healthcare), and J. Kim (Princess Margaret Hospital, Toronto, Canada).

Figure 4. Image registration framework for application in RT.

Deformable model-based segmentation

Active shape models [12-17] are computer-generated curves or surfaces that can move within an image under the influence of internal forces, which are defined within the curve or surface itself, and external forces, which are derived from the image data. Through these two force components, the method incorporates both local image information (described by the external forces, e.g. image gradient, edge flow) and global information (formulated in the internal forces, e.g. organ shape) into the segmentation. The main advantages of deformable models are their ability to generate closed parameterized curves or surfaces directly from images, and their incorporation of a smoothness constraint that provides robustness to noise and spurious edges. Their main drawbacks are their dependency on accurate initialization and on the choice of deformation parameters. Reducing sensitivity to initialization, for example by using automatically detected image landmarks to initialize the deformable models, has been an active topic of research and has demonstrated excellent success [18].

Atlas-based segmentation

Atlas-based methods [19-22] rely on the existence of a reference image volume (the atlas), which is generated by compiling information on the structures of interest that have been segmented manually, generally by an expert radiologist. The standard atlas-guided approach treats segmentation as a registration problem, which is addressed in more detail in the following section. The segmentation starts by finding a one-to-one transformation that maps a pre-segmented atlas image to the new dataset. This transformation is then used to project the labels previously assigned to the structures in the atlas onto the image volume to be segmented. The performance of these methods depends greatly on the registration accuracy. One particular difficulty with this approach arises when pathologies are present in the image: because these structures have no equivalent in the atlas, they usually lead to alignment errors.

Hybrid approaches

The most advanced current approaches combine different strategies to improve robustness and accuracy. For instance, incorporating prior information about the appearance of the anatomical structures of interest into deformable shape models [15, 16], together with knowledge about the spatial relationships between different anatomical structures and atlas-guided initialization [18], creates a powerful instrument for addressing the most complex segmentation tasks in RT planning (Figure 3).
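The atlas-guided label propagation described earlier — register the atlas to the new image, then project the atlas labels through the resulting transformation — can be sketched as follows. The transformation is assumed to come from a separate registration step and is replaced here by a toy translation:

```python
import numpy as np

def propagate_labels(atlas_labels, transform):
    """Propagate atlas labels onto a new image grid.

    `transform` maps a voxel coordinate in the new image back to the
    corresponding atlas coordinate (the registration result). Labels
    are pulled with nearest-neighbour lookup so label values are
    preserved. Illustrative 2D sketch only.
    """
    h, w = atlas_labels.shape
    out = np.zeros_like(atlas_labels)
    for y in range(h):
        for x in range(w):
            ay, ax = transform(y, x)
            ay, ax = int(round(ay)), int(round(ax))
            if 0 <= ay < h and 0 <= ax < w:
                out[y, x] = atlas_labels[ay, ax]
    return out

# Toy atlas with one labelled organ, "registered" by a pure translation
atlas = np.zeros((6, 6), dtype=int)
atlas[1:3, 1:3] = 1                        # organ label "1"
shift = lambda y, x: (y - 2, x - 2)        # patient voxel -> atlas voxel
patient_labels = propagate_labels(atlas, shift)
print(np.argwhere(patient_labels == 1).min(axis=0))  # organ now at (3, 3)
```

Pulling labels backwards through the inverse mapping, rather than pushing atlas voxels forwards, guarantees that every output voxel receives exactly one label, which is why registration toolkits resample in this direction.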

Image registration and fusion

Image registration aims to establish voxel-to-voxel correspondences between images by applying geometric transformations to the spatial voxel locations. The transformation types vary from rather simple rigid and affine transformations to complex non-linear transformations, which often cannot be adequately described by a limited number of parameters [23]. The principal aims of applying image registration in RT (Figure 4) are:
• to improve the definition of the target and risk structures through the use of complementary information from different modalities in a fused environment
• to facilitate the cumbersome process of contouring by transferring pre-segmented anatomical structures from a registered atlas, and
• to reduce uncertainties in radiation delivery due to the changes in patient set-up and organ motion taking place during the treatment period. This is done through temporal tracking of the deforming anatomy, which is used in turn to modify the delivered dose patterns.

The use of rigid transformation models is typically limited to compensating for global changes, e.g. changes in patient set-up with respect to the planning dataset, or for global motion, e.g. in multi-modal registration of the head. Modeling internal organ motion and other factors that influence the patient's anatomy throughout the treatment, for example weight loss, requires methods based on more complex geometric transformations, usually referred to as "non-linear" or "deformable".
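For concreteness, a 2D rigid transformation has only a rotation angle and a translation vector. The illustrative sketch below shows why such a low-parameter model can capture global set-up changes but cannot represent internal deformation, which requires far more degrees of freedom:

```python
import numpy as np

def rigid_transform(points, angle_deg, translation):
    """Apply a 2D rigid transform (rotation about the origin, then
    translation) to an array of (x, y) points.

    Three parameters in 2D (six in 3D): enough for set-up corrections,
    but every point moves coherently, so organ deformation is out of reach.
    """
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return points @ rot.T + np.asarray(translation)

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
moved = rigid_transform(pts, 90.0, [2.0, 0.0])
print(np.round(moved, 3))  # [[2. 1.] [1. 0.]]
```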

 Atlas-based segmentation can lead to misalignment when pathologies are present.

The mathematical foundations of the non-linear transformations used in medical image registration are quite varied. Generally, one can distinguish between methods based on purely geometric transformations, e.g. various spline models [24-27], and methods based on principles derived from continuum mechanics, the so-called physics-based registration methods [28-30]. In addition to the transformation model, registration methods can also be classified according to the way image similarity is evaluated. Here, in parallel with image segmentation, technological developments are moving from purely intensity-based methods towards complex model-based approaches, in which the inclusion of prior knowledge of the application domain plays an increasingly important role.

Intensity-based methods

Intensity-based registration methods [30-33] typically employ an iterative optimization strategy to estimate the transformation minimizing the gray-value differences between the images to be registered. Their principal advantage is that they provide an unsupervised approach where no additional pre-processing is required. On the other hand, the assumption of image content invariance, which is essential in this case, is often violated, e.g. in multi-modal registration or after surgical interventions. Especially in the context of deformable registration, the use of intensity-based methods is typically restricted to mono-modal registration, e.g. registration of intra- and inter-fractional image data.
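A minimal sketch of this strategy, assuming mono-modal images and restricting the search to integer translations, is the exhaustive sum-of-squared-differences (SSD) minimization below. Practical systems use gradient-based optimizers, multi-resolution schemes, sub-voxel interpolation, and richer similarity measures such as mutual information:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=5):
    """Find the integer translation minimizing the sum of squared
    gray-value differences (SSD) between two mono-modal images.

    Brute-force sketch of intensity-based registration; real systems
    optimize iteratively over continuous transformation parameters.
    """
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum((fixed - shifted) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

fixed = np.zeros((16, 16)); fixed[6:10, 6:10] = 1.0
moving = np.roll(np.roll(fixed, -2, axis=0), 3, axis=1)  # known offset
print(register_translation(fixed, moving))  # (2, -3) recovers the shift
```

Note that the method needs no pre-processing or segmentation, which is exactly the unsupervised advantage described above; it fails just as described when the gray values of corresponding structures differ between the two images.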

 Model-based registration uses explicit knowledge of the corresponding structures.

Model-based methods

Model-based registration methods [34-38] make use of explicit knowledge about the corresponding structures, e.g. their shape, surface correspondences and, possibly, biomechanical properties. A necessary pre-processing step for model-based registration is 3D segmentation of the corresponding ROIs. The segmentation can be carried out manually or (semi-)automatically; model-based segmentation methods are advantageous here since they also automatically provide the correspondences between the segmented shapes [38]. The non-linear transformations used in model-based registration can, for instance, be based on spline models [38], where the discrete point correspondences serve as the control points. Alternatively, the transformations can be derived using discretizations of the partial differential equations of continuum mechanics, e.g. with the finite element method, where point correspondences are integrated as specific boundary conditions [38].

Figure 5. Deformable image registration methods allow dose accumulation in the context of deforming anatomy.


Model-based registration methods are particularly attractive for clinical use, since they operate in an interactive setting, where the user can control the registration outcome by manipulating the corresponding ROIs (Figure 5). This makes them applicable, for instance, in multi-modality registration problems, where intensity-based evaluation of similarity is often infeasible. Compared to intensity-based registration methods, model-based approaches are computationally more efficient. Hence, the user can obtain visual feedback on registration results at interactive speed and revise the procedure, if necessary [38].
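As a sketch of the spline-based variant, the classical thin-plate-spline interpolant can turn a handful of point correspondences (the control points) into a smooth non-linear transformation. This 2D toy version omits regularization and the biomechanical (finite-element) alternative:

```python
import numpy as np

def tps_warp(src_pts, dst_pts, query):
    """Thin-plate-spline transformation implied by point correspondences
    (src -> dst), evaluated at `query` points.

    Shows how discrete shape correspondences can drive a smooth
    non-linear transformation; production code adds regularization
    and 3D kernels.
    """
    def kernel(r):  # TPS radial basis U(r) = r^2 log r, with U(0) = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            k = r**2 * np.log(r)
        return np.nan_to_num(k)

    n = len(src_pts)
    d = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), src_pts])       # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = kernel(d), P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst_pts
    coef = np.linalg.solve(A, b)                    # spline + affine weights
    dq = np.linalg.norm(query[:, None] - src_pts[None, :], axis=-1)
    Pq = np.hstack([np.ones((len(query), 1)), query])
    return kernel(dq) @ coef[:n] + Pq @ coef[n:]

src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src + [0.5, 0.0]                       # landmarks shifted to the right
print(tps_warp(src, dst, np.array([[0.5, 0.5]])))  # ≈ [[1.0, 0.5]]
```

Because the affine terms are solved jointly with the spline weights, a pure translation of the landmarks is reproduced exactly everywhere, while non-uniform correspondences bend the space smoothly between the control points.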

Validation strategies

Validation of deformable registration algorithms is an important step prior to the integration of these techniques into clinical practice. Several qualitative approaches exist, such as color image overlays, difference images, and checkerboard or sliding-window views of the predicted and actual images. Quantitative analysis is more challenging. Three main approaches exist:
• identifying naturally occurring or implanted fiducials in the images
• applying a known deformation, and
• using deformable phantoms with known deformations.
Deformable phantoms can be designed to closely resemble human anatomy; however, remaining differences may compromise the ability of these phantoms to validate physics-based registration algorithms [39]. Implanted fiducials, which can be masked in the images, can provide quantitative validation of deformable registration. Applying a known deformation to an image and attempting to recover that exact deformation permits quantitative validation at all points in the image; however, this may represent a best-case scenario, as the intensities in both images are identical. Detection of naturally occurring or implanted fiducials in clinical images provides a quantitative measure of accuracy using the true images, although accuracy can only be assessed in areas where fiducials can be identified. Reproducibility in the detection of these fiducials has been shown to be within the voxel resolution [31, 32].
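Fiducial-based validation reduces, at its core, to computing the residual distance between each true fiducial position and the position predicted by the registration, for example:

```python
import numpy as np

def target_registration_error(fiducials_true, fiducials_mapped):
    """Per-fiducial Euclidean error between the true fiducial positions
    and the positions predicted by the registration.

    Illustrates the fiducial-based quantitative validation described
    above; accuracy can only be reported where fiducials exist.
    """
    return np.linalg.norm(fiducials_true - fiducials_mapped, axis=1)

# Hypothetical fiducial positions (mm) and registration predictions
true_pos = np.array([[10., 20., 30.], [40., 50., 60.]])
mapped = true_pos + [[1., 0., 0.], [0., 3., 4.]]
err = target_registration_error(true_pos, mapped)
print(err)          # [1. 5.] mm per fiducial
print(err.mean())   # 3.0 mm mean error
```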

Outcome assessment

Assessment of RT treatment outcome for cancer patients is traditionally performed by measuring changes in tumor size several months after therapy has been administered [40]. The ability to use non-invasive imaging during the early stages of fractionated therapy to determine whether a particular treatment is effective would provide an opportunity to optimize individual patient management and to adjust or even switch to a more beneficial treatment, if necessary. In this context, image processing can greatly help in monitoring disease progression and in implementing on-line plan adjustments. Simulation of tumor expansion and shrinkage, for instance, has been investigated using statistical analysis and machine-learning techniques [40, 41]. Besides optimizing the spatio-temporal treatment, such techniques can also enhance the understanding of cancer behavior.
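As a baseline for such size-based assessment, the relative volume change between two binary segmentation masks can be computed directly; functional biomarkers such as the diffusion map of [40] go beyond this simple measure:

```python
import numpy as np

def relative_volume_change(mask_before, mask_after, voxel_volume=1.0):
    """Relative tumor volume change between two binary segmentation
    masks acquired at different time points.

    A simple size-based response measure; multiply voxel counts by the
    voxel volume to report absolute volumes.
    """
    v0 = mask_before.sum() * voxel_volume
    v1 = mask_after.sum() * voxel_volume
    return (v1 - v0) / v0

# Toy masks: tumor shrinks from 36 to 16 voxels between time points
before = np.zeros((10, 10), dtype=bool); before[2:8, 2:8] = True
after = np.zeros((10, 10), dtype=bool);  after[3:7, 3:7] = True
print(f"{relative_volume_change(before, after):+.0%}")  # -56%
```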

Online planning for adaptive radiation therapy

Atlas-based and deformable-model image segmentation has also been extensively investigated for on-line plan adjustments in image-guided RT [17, 42-44]. In this case, the planning CT images and ROI contours serve as the atlas. Using non-rigid (deformable) registration techniques, the ROIs defined on the planning images are mapped onto the daily images. Deformable models are then used to deform the planning images to accommodate daily changes in the ROIs. In this way, temporal changes of the ROIs can be accounted for during fractionated RT.
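The ROI mapping step can be sketched as displacing each planning contour point by the displacement field produced by deformable registration; the constant toy field below stands in for a real registration output:

```python
import numpy as np

def propagate_contour(contour, displacement_field):
    """Map planning-image contour points onto a daily image by adding
    the displacement predicted by deformable registration at each point.

    `displacement_field` returns the (dy, dx) displacement at a point;
    here it is a stand-in for the output of a registration step.
    """
    return np.array([p + displacement_field(p) for p in contour])

plan_contour = np.array([[10., 10.], [10., 20.], [20., 20.], [20., 10.]])
field = lambda p: np.array([0.0, 2.0])   # toy field: 2-voxel lateral shift
daily_contour = propagate_contour(plan_contour, field)
print(daily_contour[0])  # [10. 12.]
```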

Discussion and summary

The integration and expansion of image processing tools in radiation oncology is necessary if advances in imaging are to have their full impact on the field. Significant effort has been invested in the development of registration algorithms to address rigid transformations between datasets. These tools are routinely available, including as commercial products, and have been extremely valuable for the field. The extension of these tools to non-rigid geometry has been underway for almost a decade [45] and is finally starting to make its way from the experimental into the commercial domain. The advent of intensity-modulated radiation therapy (IMRT) has placed significant pressure on clinicians to provide 3D segmentation for the inverse planning task; this is currently one of the most pressing issues in radiation oncology. The development of automated methods promises savings in workload as well as the potential for more consistent segmentation results. Inter-observer variability in segmentation remains a major issue, and automated methods may be a significant advance in this regard.

 Intensity-modulated radiation therapy requires 3D segmentation for the inverse planning task.

In terms of automated segmentation and tracking, medicine has lagged far behind the machine vision field in bringing robust algorithms into practice. This is somewhat expected, given the additional challenge of dealing with highly heterogeneous subject matter. It is even more problematic in the context of cancer, in which neoplasms can alter the configuration of the normal anatomy and present morphologically unique forms. The introduction of multi-modality datasets opens new opportunities for auto-segmentation. For example, the development of time-dependent or 4D PET imaging sequences promises to provide new information for automated classification of tissues [44]. Similar methods are being employed in functional CT and dynamic contrast-enhanced MR studies. It can be expected that these higher-dimensional datasets will allow automated algorithms to perform at higher levels. Furthermore, such datasets will take the field into a new area in which the human observer does not necessarily provide the ground truth: the human observer exploits the visual system and prior knowledge to contribute to the segmentation process, but temporal analyses will employ fitting methods with sensitivities that the human observer will not necessarily exceed. The growing availability of prior imaging data in the patient population provides significant opportunities for the community to exploit priors in image processing tasks, and the repetitive nature of the fractionated radiation treatment course makes it an ideal setting for approaches that rely heavily on priors.

 Fast, interactive segmentation and registration may provide the opportunity for semi-automated planning.


Fast, interactive segmentation and registration methods may provide a new opportunity to establish hybrid, semi-automated planning schemes. Such schemes should not be underestimated, given the need for humans to evaluate segmentation results regardless of the method.

Conclusion

In conclusion, RT has experienced dramatic developments in recent years, in which image processing has helped to increase the quality of treatment. It will certainly continue to play a crucial role in future advances in RT.

References

[1] Xing L et al. Overview of Image-guided Radiation Therapy. Medical Dosimetry 2006; 31(2): 91-112.
[2] Mageras GS, Machalakos J. Planning in the IGRT Context: Closing the Loop. Semin Radiat Oncol 2007; 17: 268-277.
[3] Xing L, Siebers J, Keall P. Computational Challenges for Image-guided Radiation Therapy: Framework and Current Research. Semin Radiat Oncol 2007; 17: 245-257.
[4] Drever L et al. Comparison of Three Image Segmentation Techniques for Target Volume Delineation in Positron Emission Tomography. J Appl Clin Med Phys 2007; 8(2): 93-109.
[5] Mazonakis M et al. Image Segmentation in Treatment Planning for Prostate Cancer using the Region Growing Technique. Br J Radiol 2001; 74: 243-248.
[6] Mancas M, Gosselin B, Macq B. Segmentation using a Region Growing Thresholding. In: Proc of SPIE Med Imaging, San Diego, CA, USA.
[7] Archip A et al. A Knowledge-based Approach to Automatic Detection of the Spinal Cord in CT Images. IEEE Trans Med Imaging 2002; 21(12): 1504-1516.
[8] Clark MC et al. Automatic Tumor Segmentation using Knowledge-based Techniques. IEEE Trans Med Imaging 1998; 17(2): 187-201.
[9] Zhang Y, Brady M, Smith S. Segmentation of Brain MR Images through a Hidden Markov Random Field Model and the Expectation Maximization Algorithm. IEEE Trans Med Imaging 2001; 20(1): 45-57.
[10] Solomon J, Butman JA, Sood A. Segmentation of Brain Tumors in 4D MR Images using the Hidden Markov Model. Comput Methods Programs Biomed 2006; 84(2-3): 76-85.
[11] Hatt M et al. Fuzzy Hidden Markov Chains Segmentation for Volume Determination and Quantification in PET. Phys Med Biol 2007; 52: 3467-3491.
[12] Pasquier D et al. Automatic Segmentation of Pelvic Structures from Magnetic Resonance Images for Prostate Cancer Radiotherapy. Int J Radiat Oncol Biol Phys 2007; 68(2): 592-600.
[13] Price G, Moore C. Comparative Evaluation of a Novel 3D Segmentation Algorithm on In-treatment Radiotherapy Cone Beam CT Images. In: Proc of SPIE Med Imaging, San Diego, CA, USA.
[14] McInerney T, Terzopoulos D. Deformable Models in Medical Image Analysis: A Survey. Med Image Anal 1996; 1(2): 91-108.
[15] Pekar V, McNutt TR, Kaus MR. Automated Model-based Organ Delineation for Radiotherapy Planning in Prostatic Region. Int J Radiat Oncol Biol Phys 2004; 60(3): 973-980.
[16] McNutt TR, Kaus MR, Spies L. Advances in External Beam Radiation Therapy. In: Advances in Healthcare Technology. G Spekowius and T Wendler, Eds. Springer Netherlands 2006; 201-216.
[17] Zhang T et al. Automatic Delineation of On-line Head-and-Neck Computed Tomography Images: Toward On-line Adaptive Radiotherapy. Int J Radiat Oncol Biol Phys 2007; 68(2): 522-530.
[18] Leavens C et al. Validation of Automatic Landmark Identification for Atlas-based Segmentation for Radiation Treatment Planning of the Head-and-Neck Region. In: Proc of SPIE Med Imaging, San Diego, CA, USA 2008.
[19] Stefanescu R et al. Non-rigid Atlas to Subject Registration with Pathologies for Conformal Brain Radiotherapy. In: Proc of Int Conf Med Image Comput and Comput Assist Intervention - MICCAI 2004, Saint-Malo, France 2004.
[20] D'Haese PFD et al. Automatic Segmentation of Brain Structures for Radiation Therapy Planning. In: Proc of SPIE Med Imaging, San Diego, CA, USA 2003.
[21] Parraga A et al. Non-rigid Registration Methods Assessment of 3D CT Images for Head-Neck Radiotherapy. In: Proc of SPIE Med Imaging, San Diego, CA, USA 2007.
[22] Cuadra MB et al. Dense Deformation Field Estimation for Atlas-based Segmentation of Pathological MR Brain Images. Comput Methods Programs Biomed 2006; 84(2): 66-75.
[23] Modersitzki J. Numerical Methods for Image Registration. Oxford University Press 2004.
[24] Bookstein F. Principal Warps: Thin-plate Splines and the Decomposition of Deformations. IEEE Trans Pattern Anal Mach Intell 1989; 11: 567-585.
[25] Fornefett M, Rohr K, Stiehl H-S. Radial Basis Functions with Compact Support for Elastic Registration of Medical Images. Image Vis Comput 2001; 19: 87-96.
[26] Kohlrausch J, Rohr K, Stiehl H-S. A New Class of Elastic Body Splines for Nonrigid Registration of Medical Images. J Math Imag Vis 2005; 23: 253-280.
[27] Rueckert D et al. Nonrigid Registration using Free-form Deformations: Application to Breast MR Images. IEEE Trans Med Imaging 1999; 18(8): 712-721.
[28] Bajcsy R, Kovacic S. Multi-Resolution Elastic Matching. Comput Vis Graph Image Proc 1989; 46: 1-21.
[29] Christensen GE, Rabbitt RD, Miller MI. Deformable Templates using Large Deformation Kinematics. IEEE Trans Image Process 1996; 5(10): 1435-1447.
[30] Pekar V, Gladilin E, Rohr K. An Adaptive Irregular Grid Approach for 3D Deformable Image Registration. Phys Med Biol 2006; 51(2): 361-377.
[31] Brock KM et al. Automated Generation of a Four-dimensional Model of the Liver using Warping and Mutual Information. Med Phys 2003; 30(6): 1128-1133.
[32] Coselmon MM et al. Mutual Information Based CT Registration of the Lung at Exhale and Inhale Breathing States using Thin-plate Splines. Med Phys 2004; 31(11): 2942-2948.
[33] Lu W et al. Fast Free-form Deformable Registration via Calculus of Variations. Phys Med Biol 2004; 49(14): 3067-3087.
[34] Foskey M et al. Large Deformation Three-dimensional Image Registration in Image-guided Radiation Therapy. Phys Med Biol 2005; 50(24): 5869-5892.
[35] Zhang T et al. Technical Note: A Novel Boundary Condition using Contact Elements for Finite Element Based Deformable Image Registration. Med Phys 2004; 31(9): 2412-2415.
[36] Liang J, Yan D. Reducing Uncertainties in Volumetric Image based Deformable Organ Registration. Med Phys 2003; 30(8): 2116-2122.
[37] Brock KK et al. Feasibility of a Novel Deformable Image Registration Technique to Facilitate Classification, Targeting, and Monitoring of Tumor and Normal Tissue. Int J Radiat Oncol Biol Phys 2006; 64(4): 1245-1254.
[38] Kaus MR et al. Assessment of a Model-based Deformable Image Registration Approach for Radiation Therapy Planning. Int J Radiat Oncol Biol Phys 2007; 68(2): 572-580.
[39] Kashani R et al. Technical Note: A Physical Phantom for Assessment of Accuracy of Deformable Alignment Algorithms. Med Phys 2007; 34(7): 2785-2788.
[40] Moffat BA et al. Functional Diffusion Map: A Noninvasive MRI Biomarker for Early Stratification of Clinical Brain Tumor Response. Proc Natl Acad Sci USA 2005; 102(15): 5524-5529.
[41] Marianne M et al. A Classification-based Glioma Diffusion Model using MRI Data. In: Advances in Artificial Intelligence. Springer Berlin/Heidelberg 2006; 98-109.
[42] Yan D. Image-guided Adaptive Radiotherapy. In: New Technologies in Radiation Oncology. W Schlegel, T Bortfeld and AL Grosu, Eds. Springer 2006; 321-336.
[43] Montgomery DWG, Amira A, Zaidi H. Fully Automated Segmentation of Oncological PET Volumes using a Combined Multiscale and Statistical Model. Med Phys 2007; 34(2): 722-736.
[44] Kim J et al. Segmentation of VOI from Multidimensional Dynamic PET Images by Integrating Spatial and Temporal Features. IEEE Trans Inf Technol Biomed 2006; 10(4): 637-646.
[45] Yan D, Jaffray DA, Wong JW. A Model to Accumulate Fractionated Dose in a Deforming Organ. Int J Radiat Oncol Biol Phys 1999; 44(3): 665-675.
