Development of blind image deconvolution and its applications

Journal of X-Ray Science and Technology 11 (2003) 13–19 IOS Press

Ming Jiang a,∗ and Ge Wang b

a School of Mathematical Sciences, Peking University, Beijing 100871, China
b CT/Micro-CT Laboratory, Department of Radiology, University of Iowa, Iowa City, IA 52242, USA
Abstract. This paper is a supplement and update to the reviews by Kundur and Hatzinakos [7,8] on blind image deconvolution. Most of the methods reviewed in [7,8] require that the PSF and the original image be irreducible. However, this irreducibility assumption does not hold in some important types of applications, for example when the PSF is Gaussian, which is a good model for many imaging systems. After a brief summary of existing blind deconvolution methods, we report recent developments in this field with an emphasis on Gaussian blind deconvolution and its clinical applications.

1. Introduction

The following convolution model is widely used in many imaging applications:

g(x) = p(x) ⊗ λ(x)    (1)

where g(x) is the observed data, p(x) characterizes the imaging system and is called the point spread function (PSF), and λ(x) is the original image. Usually, the observed data are corrupted by noise and can be modeled as a collection of random variables indexed by the spatial variable x with mean values p(x) ⊗ λ(x), which are the observed data in the ideal noiseless case. Image deconvolution/deblurring/restoration is to recover the original image from the observed degraded data. There are numerous image deconvolution methods [1,2,16]. Image deconvolution typically assumes that the PSF p(x) is known. An estimate v(x) of the original image λ(x) is called a deconvolved, deblurred or restored image; p ⊗ v(x) gives an estimate of the mean value, i.e., of the ideal observed data. Blind deconvolution/restoration/deblurring is to recover the original image from the degraded data without prior determination of the associated system PSF, or using merely partial information about the PSF. Blind deconvolution was recently reviewed in [7,8]. This paper reviews several important methods discussed in [7,8] and emphasizes methods published after, or not included in, [7,8].

As summarized in [7], there are several motivating reasons for the use of blind deconvolution in imaging applications, most of which arise in medical imaging:

A. “In practice, it is often costly, dangerous, or physically impossible to obtain a priori information about the image scene. In addition, the degradation from blurring cannot always be accurately specified.”

∗ Ming Jiang, PhD, School of Mathematical Sciences, Peking University, 5 Summer Palace Street, Beijing 100871, China. Tel.: +86 10 627 56 797; Fax: +86 10 627 51 801; E-mail: [email protected]; URL: http://ct.radiology.uiowa.edu/˜jiangml.

0895-3996/03/$8.00 © 2003 – IOS Press. All rights reserved

B. “In real-time image processing, such as medical video-conferencing, the parameter of the PSF cannot be pre-determined to instantaneously deblur images. Moreover, on-line identification techniques used to estimate the degradation may result in significant error, which can create artifacts in the restored image.”

C. “In other applications, the physical requirements for improved image quality are unrealizable. In X-ray imaging, improved image quality occurs with increased incident X-ray beam intensity, which is hazardous to a patient’s health. Thus blurring is unavoidable.”

D. “Finally, for applications such as astronomy, adaptive-optics systems may be used to compensate for blurring degradation, although the high cost of these systems makes imaging impractical for some observational facilities. Use of less expensive, partially compensating systems may result in phase errors. In either situation, post-processing such as blind deblurring is required for improved image quality.”

In all these cases, blind deconvolution is a viable alternative for improving image quality without requiring complicated calibration methods. Obviously, not all the blurring causes can be simultaneously captured in an efficient computable model. However, by the central limit theorem (though the approximation is not rigorously valid in general), the net result of the complex interplay among these independent random factors can often be approximated by a Gaussian PSF with standard deviation parameter σ:

G_σ(x) = (1 / (2πσ²)) exp(−|x|² / (2σ²)).    (2)

The term “blind deconvolution” was first coined in [19]. The general concept of blind deconvolution dates back to the work in [15]. Over the years, many methods have been developed for this goal. However, blind deconvolution is a difficult ill-posed problem that is still not fully understood, and most algorithms are not satisfactory in terms of stability, robustness, uniqueness and convergence. Nevertheless, this is a fast developing field because of its importance and wide applications.
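The blurring model of Eqs. (1) and (2) can be sketched in a few lines of Python (a minimal illustration with an arbitrary synthetic image; the choice of Poisson noise for the random variables with mean p ⊗ λ is ours, not prescribed by the text):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# A synthetic "original" image lambda(x): a bright square on a dark background.
lam = np.zeros((64, 64))
lam[24:40, 24:40] = 100.0

sigma = 2.0  # standard deviation of the Gaussian PSF, Eq. (2)

# Ideal observed data: g = G_sigma (x) lambda, Eq. (1);
# gaussian_filter applies a normalized Gaussian PSF.
g_ideal = gaussian_filter(lam, sigma)

# Observed data: random variables with mean values p (x) lambda,
# here modeled as Poisson counts, as is common in photon-limited imaging.
g = rng.poisson(g_ideal).astype(float)
```

Because the Gaussian kernel is normalized, the blur preserves the total intensity while lowering the peak and smearing the edges of the square.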
In principle, it is closely related to system identification, independent component analysis, and source separation problems in other applications.

The paper is organized as follows. In Section 2, we first discuss some methods reviewed in [7,8] and several methods not covered there. Then we report recent methods for blind deconvolution and their clinical applications. In Section 3, we discuss relevant issues and future research possibilities.

2. Methods

Important blind deconvolution methods reviewed in [7,8] include iterative blind deconvolution (IBD), simulated annealing (SA), and nonnegativity and support constraints recursive inverse filtering (NAS-RIF). These methods require that the image support be known a priori and that the PSF and the original image be irreducible: an irreducible signal cannot be exactly expressed as the convolution of two or more component signals of the same family, on the understanding that the two-dimensional delta function is not a component signal. Existing techniques suffer from poor convergence properties, a lack of reliability, and strong assumptions about the image and PSF: “The major drawback of the IBD method is its lack of reliability. The uniqueness and convergence properties are, as yet, uncertain.” “The major disadvantage [of SA] is the convergence to the global minimum of the cost function is slow.” “The NAS-RIF algorithm shows some noise amplification at low SNRs, but premature termination of the algorithm may be employed to prevent this.” [7]. Improvements of the NAS-RIF algorithm to overcome some of its limitations are reported in [14]: “it is unable to deal robustly with variations in gray-level ranges,
compromises accuracy of restoration with speed of convergence and requires an accurate estimate of the support of the object of interest.”

One popular technique not covered in [7,8] is the following double iteration algorithm developed by Holmes et al. [9], based on the EM algorithm. The EM algorithm is an efficient image restoration algorithm and has been widely used in many applications under different names, such as the “Lucy” and “Richardson” algorithms [18]. For the linear space-invariant imaging model Eq. (1), the EM algorithm iterates as follows:

λ_{k+1}(x) = λ_k(x) · [ p(−x) ⊗ ( g(x) / (p ⊗ λ_k)(x) ) ].    (3)

Since the original image and the PSF play symmetric roles in Eq. (1), one obtains the following double iteration algorithm by exchanging the image and the PSF:

λ_{k+1}(x) = λ_k(x) · [ p_k(−x) ⊗ ( g(x) / (p_k ⊗ λ_k)(x) ) ],
p_{k+1}(x) = (p_k(x) / Λ) · [ λ_k(−x) ⊗ ( g(x) / (p_k ⊗ λ_k)(x) ) ],    (4)

where Λ = Σ_x g(x). Although the convergence of the EM algorithm is well established [18], the convergence of the double iteration algorithm is not guaranteed. In [9], heuristic constraints, such as symmetry and band-limit constraints, were imposed after each iteration to incorporate prior information about the PSF into the algorithm. However, enforcing these constraints may destroy the monotonic increase of the underlying likelihood function and ruin the convergence of the EM algorithm. The work by Fish et al. demonstrated that such post-processing after each iteration has no advantage over the pure double iteration scheme [6].

For a PSF with a known parametric form, e.g., the Gaussian PSF with an unknown variance parameter, Fish et al. proposed a semi-blind algorithm based on the double iteration algorithm. In their algorithm, a number of blind iterations are performed, then a least-squares fit of the PSF parameters is performed. A new PSF is created with the fitted parameters, and another cycle of iterations is performed with this new PSF as the starting point. This procedure is repeated a specified number of times. “Although the PSF was fitted in each iteration with a Gaussian of the correct size, the result was not good; in fact the pure double iteration results were better.” [6].

The case of a Gaussian PSF may be one of the most difficult cases in blind deconvolution, because a Gaussian function is reducible: G_σ = G_{σ1} ⊗ G_{σ2} if σ² = σ₁² + σ₂², even though the Gaussian PSF has a known parametric form with only one parameter to be estimated. Even in the noiseless case, the problem is well known as the notorious ill-posed inverse problem of heat transfer, which has drawn the attention of researchers for years. Existing algorithms are not very successful for Gaussian blind deconvolution. The major disadvantage of blind deconvolution of the double iteration kind is that the solution depends on the initial values.
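The double iteration of Eq. (4) can be sketched as follows (a minimal illustration under periodic boundary conditions, not Holmes’ original implementation; the FFT-based circular convolution and the eps guard are our choices):

```python
import numpy as np

def conv(a, b):
    # Circular convolution via FFT; stands in for the convolution in Eq. (4).
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def flip(a):
    # a(-x) on the periodic grid: index negation modulo the array size.
    return np.roll(a[::-1, ::-1], 1, axis=(0, 1))

def double_iteration(g, lam0, p0, n_iter, eps=1e-12):
    # Blind double iteration, Eq. (4): update image and PSF from the
    # current estimates; Lambda = sum_x g(x) normalizes the PSF update.
    lam, p = lam0.copy(), p0.copy()
    Lam = g.sum()
    for _ in range(n_iter):
        ratio = g / (conv(p, lam) + eps)
        new_lam = lam * conv(flip(p), ratio)
        new_p = (p / Lam) * conv(flip(lam), ratio)
        lam, p = new_lam, new_p
    return lam, p
```

Running this from different initial guesses (λ_0, p_0) illustrates the initial-value dependence discussed above.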
In the Gaussian blurring case, given an observed image g(x) and any σ_0, we can construct a pseudo-original image λ_0 such that

g(x) = G_{σ_0} ⊗ λ_0(x).    (5)

Then it is easy to see that λ_k(x) = λ_0(x) and p_k(x) = G_{σ_0}(x) is an iterative solution of Eq. (4). Hence, the solution of the double-iteration approach strongly depends on the initial guesses λ_0(x) and
p_0(x), and there are infinitely many solutions to the double iteration scheme without any rule to select the initial values. Markham and Conchello reported the same problem with the double iteration scheme [13]. Worse still, with an inappropriate initial guess, the PSF p_k(x) may converge to the δ function, so that the restored image is just the observed image g(x) [20]! An improvement, the multiple blind deconvolution (MBD) method, does not solve this problem, though it may work for specific images with some manual interactive adjustment [20].

There are many publications using the Bayesian approach for blind deconvolution, which utilizes prior information about the image and the PSF. However, the following questions arise: How should the hyper-parameters be determined? Which priors should be used for the image and the PSF, respectively? The total variation blind deconvolution approach [4] is formulated as the minimization problem

min_{λ,p}  (1/2) ‖p ⊗ λ − g‖² + α_1 ∫ |∇λ| + α_2 ∫ |∇p|,    (6)

that is, in addition to minimizing the mean square error between the observed data and its mean values, the result is regularized so that the total variation of the image and of the PSF is also minimized. This method works well for blocky images, i.e., images with large constant regions. However, this approach introduces the new problem of estimating the hyper-parameters α_1 and α_2. A heuristic method to adjust α_1 and α_2 was discussed in [4]. The general method for estimating these parameters based on Bayesian statistical inference is computationally expensive. In [4], an alternating minimization approach over the image and the PSF is used for Eq. (6), but its convergence is an open question. In [22,23], other regularization priors were used.

In [3], a spectral-domain approach was studied for blind deconvolution with a Gaussian or Lorentzian PSF. For such blurring, the PSF is detected from a one-dimensional Fourier analysis of a selected one-dimensional section of a blurred image. A non-iterative image deconvolution technique called “slow evolution constraint backwards (SECB)”, which minimizes an appropriate energy functional, uses this detected PSF to restore the image; it is a Wiener-like filtration but with different control parameters. This approach works for a special class of images, but it requires much interactive work to adjust the parameters.

The edge-to-noise ratio principle recently introduced in [10–12] is based on the classical work on the EM algorithm [17] and is developed for PSFs of known parametric form. The noise and edge effects, namely the “noise artifacts” in flat regions and the “overshoot at edges”, were analyzed in [17]. In [10–12], an axiomatic discrepancy measure theory is used to quantify these effects. For two non-negative distributions u(x) and v(x), the discrepancy measure consistent with Csiszár’s axioms [5] is the I-divergence

I(u, v) = Σ_x u(x) log( u(x) / v(x) ) − Σ_x [u(x) − v(x)].    (7)
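Equation (7) translates directly into code; a minimal sketch (the eps guard against log(0) and the 0·log 0 = 0 convention are implementation choices, not part of the paper):

```python
import numpy as np

def i_divergence(u, v, eps=1e-12):
    # Csiszar I-divergence of Eq. (7) for non-negative arrays u and v,
    # with the usual convention 0 log 0 = 0.
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    log_term = np.where(u > 0, u * np.log((u + eps) / (v + eps)), 0.0)
    return log_term.sum() - (u - v).sum()
```

Note that I(u, v) ≥ 0 with equality exactly when u = v, which is what makes it usable as a discrepancy measure below.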

We will describe this algorithm only for the Gaussian PSF in the following; the reader can easily generalize it to other PSFs of known parametric form. The central problem is how to find an estimate of the blurring parameter σ. The noise effect is measured as the discrepancy between the observed data and an estimate of its mean value. The noise effect for a given σ used with the EM algorithm after n iterations is quantified by

N(σ, n) = I(g, G_σ ⊗ λ_EM(g, n, σ))    (8)
where λ_EM(g, n, σ) is the image deblurred by the EM algorithm with n iterations and parameter σ. The edge effect, or more precisely the measure of deblurring, is measured as the discrepancy between the deblurred image λ_EM(g, n, σ) and the estimated mean value of the observed data:

E(σ, n) = I(λ_EM(g, n, σ), G_σ ⊗ λ_EM(g, n, σ)).    (9)

E(σ, n) measures not only the edge effect but also the noise effect. In other words, the edge artifacts as measured by E(σ, n) include the noise effect as measured by N(σ, n). Hence, the net edge enhancement may be approximated by E(σ, n) − k · N(σ, n), where k · N(σ, n) represents a certain amount of the noise effect and k is a positive weighting constant. Choosing σ by simply maximizing E(σ, n) − k · N(σ, n) may result in a restored image with exaggerated edges and unacceptable noise, since the noise is not controlled in the objective function. The following “edge-to-noise” ratio (in the same spirit as the well-known signal-to-noise ratio) is introduced to balance the edge improvement against the noise effect:

ENR(σ, n) = [ E(σ, n) − k · N(σ, n) ] / N(σ, n).    (10)

Since maximizing ENR(σ, n) is equivalent to maximizing E(σ, n) / N(σ, n), one can simply use the latter to estimate the parameter. Let

DNR(σ, n) = E(σ, n) / N(σ, n),    (11)

which is termed the deblurring-to-noise ratio. The criterion for choosing the parameter σ is:

DNR Principle: Given n, σ should be chosen such that the deblurred image maximizes DNR(σ, n).

The search for the optimal deblurring value σ can be formulated as a one-dimensional maximization problem, for which sophisticated algorithms exist. Since the definition of the DNR depends on the iteration number, the choice of the iteration number n is important for finding an optimal estimate of σ. The optimal iteration number depends on the image content and can be found for a class of images through simulation and/or experiments with an appropriately designed phantom, as demonstrated in [10–12]. The major disadvantages of this method are (1) it requires knowledge of the class of images being processed, which is represented by the phantom; and (2) extensive simulation with the phantom is needed to find the optimal iteration number. However, that iteration number can then be used for the whole class of images, so completely blind deconvolution without user interaction can be achieved.

In [11,21], spiral CT imaging is shown to be a spatially invariant linear system with an approximately isotropic 3D Gaussian PSF. As a result, an arbitrary oblique cross-section in an image volume can be modeled as in Eq. (1). The above DNR principle was applied to enhance spiral CT images for cochlear implantation [10–12]. Figure 1 shows reconstructed and blindly deconvolved spiral CT images of the temporal bone using this DNR method. It can be clearly observed that, after fully automatic blind deblurring, anatomical features are substantially refined.
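The DNR principle can be sketched end-to-end as follows. This is a hypothetical illustration, not the implementation of [10–12]: em_deblur is a plain Richardson–Lucy/EM stand-in for λ_EM, i_divergence repeats Eq. (7), and a coarse grid search replaces the more sophisticated one-dimensional optimizers mentioned above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def i_divergence(u, v, eps=1e-12):
    # Csiszar I-divergence, Eq. (7), with the 0 log 0 = 0 convention.
    log_term = np.where(u > 0, u * np.log((u + eps) / (v + eps)), 0.0)
    return log_term.sum() - (u - v).sum()

def em_deblur(g, sigma, n_iter):
    # Plain EM/Richardson-Lucy iteration, Eq. (3), for a Gaussian PSF;
    # the Gaussian is symmetric, so p(-x) = p(x) = gaussian_filter.
    lam = np.full_like(g, g.mean())
    for _ in range(n_iter):
        blurred = gaussian_filter(lam, sigma)
        lam = lam * gaussian_filter(g / (blurred + 1e-12), sigma)
    return lam

def dnr(g, sigma, n_iter):
    # DNR(sigma, n) = E(sigma, n) / N(sigma, n), Eqs. (8), (9), (11).
    lam = em_deblur(g, sigma, n_iter)
    mean_est = gaussian_filter(lam, sigma)
    noise = i_divergence(g, mean_est)   # N(sigma, n), Eq. (8)
    edge = i_divergence(lam, mean_est)  # E(sigma, n), Eq. (9)
    return edge / (noise + 1e-12)

# Blur a synthetic image with a known sigma, then pick the sigma that
# maximizes the DNR over a coarse grid (a stand-in for a 1-D optimizer).
rng = np.random.default_rng(1)
truth = np.zeros((48, 48))
truth[16:32, 16:32] = 100.0
g = rng.poisson(gaussian_filter(truth, 2.0)).astype(float) + 1e-3
grid = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
best = max(grid, key=lambda s: dnr(g, s, n_iter=20))
```

The iteration number n_iter is held fixed during the search, mirroring the point above that the optimal n must be determined beforehand for the image class at hand.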


Fig. 1. Blind deblurring example: original spiral CT image (left) and blind deblurred image (right).

3. Discussion and conclusion

The problem of image restoration by blind deconvolution is ill-posed. To produce a unique solution, a priori information must be utilized, such as the supports of the image and PSF, nonnegativity of the image, the parametric form of the PSF, and other properties of the image and PSF. The a priori information can be formulated as explicit constraints on the solution or implicitly incorporated into the objective function to be optimized. Although instability is inherent to this ill-posed problem, a priori information can be used to remedy it, as is done in many other inverse problems. In medical imaging applications, this is as important as, and closely related to, reliability and repeatability. Structural information should help establish precise a priori information and lead to robust algorithms. However, existing algorithms do not pursue this direction far enough. The edge-to-noise ratio is a preliminary attempt along this direction.

All the algorithms are either computationally expensive or require much preparatory work. Some of them rely on an expert user to run the program intelligently or need significant interaction for algorithmic calibration, neither of which is desirable or practical in medical applications. Algorithms that are completely blind/automatic are most welcome in real applications. Another direction is to develop parallel versions of the algorithms where possible.

In conclusion, several important blind deconvolution methods have been developed for different situations. Due to the ill-posed nature of the problem, these algorithms often lack stability, robustness, uniqueness and convergence guarantees. With its various application backgrounds, blind deconvolution remains a fascinating and challenging field. As far as theoretical development is concerned, we believe that the creative use of knowledge is a major key.
Given ever-improving PC techniques, we predict that there will be more application software packages for blind deblurring in the near future.

Acknowledgements

This work is supported by an NIH grant (R01 DC03590).

References

[1] H.C. Andrews and B.R. Hunt, Digital Image Restoration, Prentice-Hall, Englewood Cliffs, NJ, 1977.
[2] M.R. Banham and A.K. Katsaggelos, Digital image restoration, IEEE Signal Processing Magazine 14(2) (March 1997), 24–41.
[3] A.S. Carasso, Direct blind deconvolution, Internal Report NISTR 6428, Mathematical and Computational Sciences Division, National Institute of Standards and Technology, US Department of Commerce, 1999.
[4] T.F. Chan and C. Wong, Total variation blind deconvolution, IEEE Transactions on Image Processing 7(3) (March 1998), 370–375.
[5] I. Csiszár, Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems, The Annals of Statistics 19(4) (1991), 2032–2066.
[6] D.A. Fish, A.M. Brinicombe and E.R. Pike, Blind deconvolution by means of the Richardson-Lucy algorithm, J. Opt. Soc. Am. A 12(1) (January 1995), 58–65.
[7] D. Kundur and D. Hatzinakos, Blind image deconvolution, IEEE Signal Processing Magazine 13(3) (May 1996), 43–64.
[8] D. Kundur and D. Hatzinakos, Blind image deconvolution revisited, IEEE Signal Processing Magazine 13(6) (November 1996), 61–63.
[9] T.J. Holmes, Blind deconvolution of quantum-limited incoherent imagery: maximum-likelihood approach, J. Opt. Soc. Am. A 9 (1992), 1052–1061.
[10] M. Jiang, G. Wang, M.W. Skinner, J.T. Rubinstein and M.W. Vannier, Blind deblurring of spiral CT images, Proceedings of the 35th Asilomar Conference on Signals, Systems, and Computers, 2001, pp. 1692–1696.
[11] M. Jiang, G. Wang, M.W. Skinner, J.T. Rubinstein and M.W. Vannier, Blind deblurring of spiral CT images, IEEE Transactions on Medical Imaging (2001), (in press).
[12] M. Jiang, G. Wang, M.W. Skinner, J.T. Rubinstein and M.W. Vannier, Blind deblurring of spiral CT images – study of different ratios, Medical Physics 29(5) (May 2002), 821–829.
[13] J. Markham and J.-A. Conchello, Parametric blind deconvolution: a robust method for the simultaneous estimation of image and blur, J. Opt. Soc. Am. A 16(10) (October 1999), 2377–2391.
[14] C.A. Ong and J.A. Chambers, An enhanced NAS-RIF algorithm for blind image deconvolution, IEEE Transactions on Image Processing 8(7) (July 1999).
[15] A.V. Oppenheim, R.W. Schafer and T.G. Stockham, Nonlinear filtering of multiplied and convolved signals, Proc. IEEE 56 (1968), 1264–1291.
[16] J.A. O'Sullivan, R.E. Blahut and D.L. Snyder, Information theoretic image formation, IEEE Transactions on Information Theory 44(6) (October 1998), 2094–2123.
[17] D.L. Snyder, M.I. Miller, L.J. Thomas and D.G. Politte, Noise and edge artifacts in maximum-likelihood reconstructions for emission tomography, IEEE Transactions on Medical Imaging 6 (1987), 228–238.
[18] D.L. Snyder, T.J. Schulz and J.A. O'Sullivan, Deblurring subject to nonnegativity constraints, IEEE Transactions on Signal Processing 40 (1992), 1143–1150.
[19] T.G. Stockham Jr., T.M. Cannon and R.B. Ingebresten, Blind deconvolution through digital signal processing, Proc. IEEE 63 (April 1975), 678–692.
[20] F. Tsumuraya, N. Miura and N. Baba, Iterative blind deconvolution method using Lucy's algorithm, Astronomy and Astrophysics 282 (1994), 699–708.
[21] G. Wang, M.W. Vannier, M.W. Skinner, M.G.P. Cavalcanti and G. Harding, Spiral CT image deblurring for cochlear implantation, IEEE Transactions on Medical Imaging 17 (1998), 251–262.
[22] Y.L. You and M. Kaveh, A regularization approach to joint blur identification and image restoration, IEEE Transactions on Image Processing 5(3) (March 1996), 416–427.
[23] Y.L. You and M. Kaveh, Blind image restoration by anisotropic regularization, IEEE Transactions on Image Processing 8(3) (March 1999), 396–407.
