Wavelet Domain Image Separation

Ali Mohammad-Djafari and Mahieddine Ichir

Laboratoire des Signaux et Systèmes, Supélec, Plateau de Moulon, 91192 Gif-sur-Yvette, France

Abstract. In this paper, we consider the problem of blind signal and image separation using a sparse representation of the images in the wavelet domain. We work in a Bayesian estimation framework, exploiting the fact that the distribution of the wavelet coefficients of real-world images can naturally be modelled by an exponential power probability density function. The Bayesian approach, which has been used with success in blind source separation, also gives the possibility of including any prior information we may have on the mixing matrix elements as well as on the hyperparameters (the parameters of the prior laws of the noise and the sources). To our knowledge, although the Bayesian approach has been used for blind source separation in the time and Fourier domains, it has not yet been used in the wavelet domain. We consider two cases: first, the case where the wavelet coefficients are assumed to be i.i.d., and second, the case where the correlation between the coefficients of two adjacent scales is modelled by a first-order Markov chain. The estimation computations are carried out via a Markov Chain Monte Carlo (MCMC) procedure. Simulations illustrate the performance of the proposed method.
INTRODUCTION

Blind source separation is an active area of research in signal and image processing. Different approaches have been proposed: principal component analysis (PCA) [1], independent factor analysis (IFA) [2, 3, 4], independent component analysis (ICA) [5, 6, 7], maximum likelihood estimation [8, 9, 10, 11, 12, 13, 14, 15] and Bayesian estimation [16, 17, 18, 19, 20, 21, 22]. All these methods generally exploit the independence, sparsity and diversity of the sources either in the time or in the Fourier domain.

Wavelets have also been used in many fields of signal and image processing. Donoho [23] and Wan and Nowak [24] used multiscale representations of signals to solve inverse problems. Antoniadis, Leporini and Pesquet [25] used wavelets for signal denoising and established a close connection between the maximum a posteriori estimation approach and wavelet thresholding. Recently, some authors considered the diversity and sparsity (atomicity) of the wavelet-domain coefficients of the sources for blind source separation [26].

In this paper, we transport the problem of image separation to the wavelet domain and propose to use the Bayesian estimation framework. This approach arises naturally from the fact that the distribution of the wavelet coefficients of a large class of real-world images can be modelled by an exponential power probability density function (pdf). Thus independence, sparsity and diversity, which are the main hypotheses of all source separation techniques, are required not for the sources themselves but for their wavelet coefficients. The Bayesian approach, which has been used with success in blind source separation, also gives the possibility of including any prior information we may have on the mixing matrix elements as well as on the hyperparameters (the parameters of the prior laws of the noise and the sources). To our knowledge, although the Bayesian approach has been used in the time and Fourier domains, it has not yet been used in the wavelet domain. In this work, we make use of the fast wavelet transform developed by Mallat [27] to obtain a non-redundant multiscale representation.

This paper is organized as follows: in section 2, we present the general source separation problem using notation that covers the 1D, 2D and m-D cases, then write the same problem in the wavelet domain and make explicit our hypotheses about the prior distributions of the noise and of the wavelet coefficients. In section 3, we present the Bayesian approach and give the main expressions of the prior and posterior probability density functions. In section 4, we give the basics of the MCMC algorithm and apply it to our case. In section 5, we present simulation results illustrating the performance of the proposed method, together with comparisons with classical approaches. Finally, in section 6, we present our conclusions.
PROBLEM FORMULATION

Blind image separation consists of estimating unknown sources from a set of their linear mixtures.
The observations $\boldsymbol{x}$ are instantaneous linear mixtures of $n$ unknown sources $\boldsymbol{s}$, possibly corrupted by additive noise $\boldsymbol{\epsilon}$:
$$\boldsymbol{x} = A\,\boldsymbol{s} + \boldsymbol{\epsilon}, \qquad A \in \mathbb{R}^{m \times n} \qquad (1)$$
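As a concrete illustration of model (1), here is a minimal simulation sketch; the sizes, the mixing matrix values and the Laplace source model are hypothetical choices for illustration, not the settings used later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, T = 2, 3, 1000               # sources, observations, samples
S = rng.laplace(size=(n, T))       # sparse-ish sources (illustrative)
A = rng.normal(size=(m, n))        # hypothetical mixing matrix
E = 0.1 * rng.normal(size=(m, T))  # additive white Gaussian noise

X = A @ S + E                      # eq. (1): instantaneous linear mixtures
print(X.shape)                     # (3, 1000)
```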
where $A$ is the $(m \times n)$ mixing matrix. To be able to consider 1D, 2D or even m-D signals, we assume that $\boldsymbol{x}$, $\boldsymbol{s}$ and $\boldsymbol{\epsilon}$ each contain $T$ samples, representing either $T$ samples of a time series, $T$ pixels of an image or, more generally, $T$ voxels of an m-D signal. Thus $\boldsymbol{s}$ is an $(n \times T)$ matrix and $\boldsymbol{x}$ and $\boldsymbol{\epsilon}$ are $(m \times T)$ matrices. The blind source separation problem is to estimate both the mixing matrix $A$ and the sources $\boldsymbol{s}$ from the data, some assumptions about the noise distribution and some prior knowledge of the source distributions.

Different approaches have been proposed: principal component analysis (PCA) [28, 29] mainly considers the problem without noise and with Gaussian distributions for the sources; independent component analysis (ICA) [28, 30] and maximum likelihood estimation [29] again assume the problem without noise but with different non-Gaussian distributions for the sources; factor analysis (FA) methods take account of the noise, but assume Gaussian priors both for the noise and for the sources. The Bayesian approach is a generalisation of FA allowing any non-Gaussian priors for the noise and the sources, as well as the possibility of accounting for any prior knowledge on the elements of the mixing matrix and on the hyperparameters of the problem. In addition, it allows us to jointly estimate the sources $\boldsymbol{s}$, the mixing matrix $A$ and even the hyperparameters $\boldsymbol{\theta}$ of the problem through the posterior:
$$p(\boldsymbol{s}, A, \boldsymbol{\theta} \mid \boldsymbol{x}) \propto p(\boldsymbol{x} \mid \boldsymbol{s}, A, \boldsymbol{\theta})\; p(\boldsymbol{s} \mid \boldsymbol{\theta})\; p(A \mid \boldsymbol{\theta})\; p(\boldsymbol{\theta}) \qquad (2)$$
We have used this approach before with different priors $p(\boldsymbol{s} \mid \boldsymbol{\theta})$ such as Gaussian [31] and mixture of Gaussians [32, 33]. We have also used this approach in multispectral image separation in astronomy, for separating the cosmological microwave background (CMB) from other cosmological activities [34, 35, 36, 37, 38, 39]. In this paper, we use the same Bayesian approach, but perform the separation using the independence and diversity of the wavelet-domain coefficients of the sources.

Denoting by $\boldsymbol{s}$ the vector of the $T$ samples of one of the sources, by $W$ the discrete wavelet transform matrix, and by $\boldsymbol{w}$ the complete wavelet coefficients of that signal, we have
$$\boldsymbol{s} = W\,\boldsymbol{w} \qquad (3)$$
Now, using the fact that the complete discrete wavelet transform is a linear and unitary operator ($W^{t} W = W W^{t} = I$), the source separation problem can easily be transported to the wavelet domain and written as:
$$\boldsymbol{x}_w = A\,\boldsymbol{s}_w + \boldsymbol{\epsilon}_w \qquad (4)$$
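The transport to the wavelet domain relies only on the linearity and unitarity of $W$. A small sketch (a one-level Haar transform built explicitly as an orthonormal matrix; this is illustrative, not the fast transform of [27], and the mixing matrix is hypothetical) checks that the mixing equation holds identically on the coefficients:

```python
import numpy as np

def haar_matrix(T):
    """One-level orthonormal Haar analysis matrix (T even)."""
    W = np.zeros((T, T))
    for i in range(T // 2):
        W[i, 2*i:2*i+2] = [1, 1]          # approximation rows
        W[T//2 + i, 2*i:2*i+2] = [1, -1]  # detail rows
    return W / np.sqrt(2)

rng = np.random.default_rng(1)
T = 8
W = haar_matrix(T)
assert np.allclose(W @ W.T, np.eye(T))    # unitary: W W^t = I

A = np.array([[0.9, 0.4], [0.3, 0.8]])    # hypothetical 2x2 mixing matrix
S = rng.normal(size=(2, T))
X = A @ S                                 # noiseless mixing, eq. (1)

Sw = S @ W.T                              # wavelet coefficients of the sources
Xw = X @ W.T                              # wavelet coefficients of the data
assert np.allclose(Xw, A @ Sw)            # eq. (4): x_w = A s_w
print("mixing commutes with the DWT")
```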
The main advantage of using this last equation in place of the original source separation problem is that we can more easily assign simple prior laws to $\boldsymbol{s}_w$ than to $\boldsymbol{s}$ itself. For example, even when $\boldsymbol{s}$ contains discontinuities or is nonstationary, the distribution of its wavelet coefficients can still be modeled by a simple generalized exponential (GE) probability density function (pdf), while it is much harder to model the distribution of the signal samples themselves by a simple pdf. Indeed, it has been reported by many authors that the distribution of the wavelet coefficients of real-world images is well modeled by a GE pdf:
$$p(w \mid \alpha, \beta) = \mathrm{GE}(w \mid \alpha, \beta) = \frac{\beta\,\alpha^{1/\beta}}{2\,\Gamma(1/\beta)}\,\exp\big\{-\alpha\,|w|^{\beta}\big\} \qquad (5)$$
Note that $\beta = 1$ gives an exponential (Laplace) pdf and $\beta = 2$ corresponds to a Gaussian pdf. We are going to use this prior probability law in our Bayesian estimation framework.
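A quick numerical check of the GE density (5), as reconstructed above: it integrates to one for any $\beta > 0$, and for $\beta = 2$ with $\alpha = 1/(2\sigma^2)$ it reduces to the Gaussian pdf. The parameter values below are arbitrary illustrations:

```python
import math
import numpy as np

def ge_pdf(w, alpha, beta):
    """Generalized exponential pdf of eq. (5)."""
    norm = beta * alpha**(1.0/beta) / (2.0 * math.gamma(1.0/beta))
    return norm * np.exp(-alpha * np.abs(w)**beta)

w = np.linspace(-20, 20, 200001)
dw = w[1] - w[0]
for beta in (1.0, 1.5, 2.0):
    mass = np.sum(ge_pdf(w, alpha=0.5, beta=beta)) * dw
    assert abs(mass - 1.0) < 1e-3        # integrates to one (up to truncation)

# beta = 2 with alpha = 1/(2 sigma^2) recovers the Gaussian pdf
sigma = 1.3
gauss = np.exp(-w**2 / (2*sigma**2)) / (sigma * np.sqrt(2*np.pi))
assert np.allclose(ge_pdf(w, 1/(2*sigma**2), 2.0), gauss)
print("GE(alpha, 2) is Gaussian")
```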
BAYESIAN FORMULATION

In a first step, we assume that the wavelet coefficients of the sources and of the noise are i.i.d. Thus, to simplify the notation, we denote respectively by $\boldsymbol{x}(k)$, $\boldsymbol{s}(k)$ and $\boldsymbol{\epsilon}(k)$ the vectors containing the wavelet coefficients of the data, the sources and the noise for a given index $k$, so that $\boldsymbol{x}(k) = A\,\boldsymbol{s}(k) + \boldsymbol{\epsilon}(k)$. Hereafter, we omit the index $k$ and write it only when needed. To proceed with the Bayesian approach, we have to assign the prior laws. In the following we assume:

• The noise wavelet coefficients $\epsilon_i$ are assumed independent with $p(\epsilon_i) = \mathrm{GE}(\epsilon_i \mid \alpha_{b_i}, \beta_{b_i})$. Then
$$p(\boldsymbol{x} \mid A, \boldsymbol{s}, \boldsymbol{\alpha}_b, \boldsymbol{\beta}_b) = \prod_{i=1}^{m} \frac{\beta_{b_i}\,\alpha_{b_i}^{1/\beta_{b_i}}}{2\,\Gamma(1/\beta_{b_i})}\; \exp\Big\{-\sum_{i=1}^{m} \alpha_{b_i}\,\big|x_i - [A\boldsymbol{s}]_i\big|^{\beta_{b_i}}\Big\} \qquad (6)$$

• The wavelet coefficients of the sources are also assumed independent with $p(s_j) = \mathrm{GE}(s_j \mid \alpha_j, \beta_j)$. Then
$$p(\boldsymbol{s} \mid \boldsymbol{\alpha}, \boldsymbol{\beta}) = \prod_{j=1}^{n} \frac{\beta_j\,\alpha_j^{1/\beta_j}}{2\,\Gamma(1/\beta_j)}\; \exp\Big\{-\sum_{j=1}^{n} \alpha_j\,|s_j|^{\beta_j}\Big\} \qquad (7)$$

• The elements $A_{ij}$ of the mixing matrix $A$ are assumed i.i.d. and Gaussian with mean values $A_{0_{ij}}$ and variances $\sigma_{ij}^2$:
$$p(A_{ij}) \propto \exp\Big\{-\frac{1}{2\sigma_{ij}^2}\,\big(A_{ij} - A_{0_{ij}}\big)^2\Big\} \qquad (8)$$
Therefore, we may write
$$p(A \mid A_0, \Sigma_A) \propto \exp\Big\{-\frac{1}{2}\,\big(\mathrm{Vect}(A - A_0)\big)^{t}\,\Sigma_A^{-1}\,\big(\mathrm{Vect}(A - A_0)\big)\Big\} \qquad (9)$$
where $\mathrm{Vect}(\cdot)$ denotes the vector containing the elements of its matrix argument and $\Sigma_A = \mathrm{diag}(\sigma_{11}^2, \dots, \sigma_{mn}^2)$.

• All the hyperparameters $(\alpha_{b_i}, \beta_{b_i}, \alpha_j, \beta_j)$ are assumed independent and assigned standard Gamma prior distributions, with fixed shape and rate $(a, b)$:
$$p(\gamma) = \mathcal{G}(\gamma \mid a, b) \propto \gamma^{a-1}\,\exp(-b\gamma) \qquad (10)$$

The joint a posteriori law of the source coefficients $\boldsymbol{s}$, the mixing matrix $A$ and the hyperparameters $\boldsymbol{\theta}$ is then given by:
$$p(\boldsymbol{s}, A, \boldsymbol{\theta} \mid \boldsymbol{x}) \propto p(\boldsymbol{x} \mid \boldsymbol{s}, A, \boldsymbol{\theta})\; p(\boldsymbol{s} \mid \boldsymbol{\theta})\; p(A \mid \boldsymbol{\theta})\; p(\boldsymbol{\theta}) \qquad (11)$$
where $\boldsymbol{\theta}$ denotes all the hyperparameters $\{\alpha_{b_i}, \beta_{b_i}, \alpha_j, \beta_j\}$.
MCMC IMPLEMENTATION

Once the expression of the joint a posteriori law $p(\boldsymbol{s}, A, \boldsymbol{\theta} \mid \boldsymbol{x})$ of all the unknowns has been derived, we can use it to infer them. However, in general, the computation of the normalization factor requires a huge-dimensional integration. When MAP estimation is chosen, this normalization factor is not needed, but it is formally needed for other estimation rules such as the posterior mean. MCMC algorithms are then the basic tools to generate samples from the posterior law. The main idea is to generate successively the samples from the posterior laws $p(\boldsymbol{s} \mid A^{(t)}, \boldsymbol{\theta}^{(t)}, \boldsymbol{x})$, $p(A \mid \boldsymbol{s}^{(t+1)}, \boldsymbol{\theta}^{(t)}, \boldsymbol{x})$ and $p(\boldsymbol{\theta} \mid \boldsymbol{s}^{(t+1)}, A^{(t+1)}, \boldsymbol{x})$, and then estimate their expected values by averaging these samples. We use the Hastings-Metropolis algorithm combined with a Gibbs sampler to obtain an ergodic chain, and then approximate the ensemble expectation of any quantity $\phi$ by its empirical mean:
$$\mathbb{E}\{\phi\} \simeq \frac{1}{M - M_0}\,\sum_{t = M_0 + 1}^{M} \phi\big(X^{(t)}\big)$$
where the $X^{(t)}$ are samples drawn from $p(X \mid \cdot)$ and $M_0$ is the number of burn-in iterations.
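The posterior-mean approximation above can be illustrated with a generic random-walk Metropolis sampler on a toy one-dimensional target (everything here, including the target density and proposal scale, is an illustrative stand-in, not the separation model):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(x):
    """Toy log-posterior: standard Gaussian, true mean 0."""
    return -0.5 * x**2

M, M0 = 20000, 2000                 # chain length and burn-in
chain = np.empty(M)
x = 5.0                             # deliberately bad starting point
for t in range(M):
    prop = x + rng.normal(scale=1.0)                 # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x = prop                                     # Metropolis accept
    chain[t] = x

post_mean = chain[M0:].mean()       # empirical mean after burn-in
print(post_mean)                    # close to the true posterior mean 0
```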
Noting that when $\beta_b = 2$ and $\beta_j = 2$ the posterior laws for the sources and for the elements of the mixing matrix are Gaussian, we can use these Gaussians as the trial (or instrumental) pdfs. Thus, to simplify the presentation of the proposed algorithm, we give here the expressions of these Gaussian posterior laws (for a common noise parameter $\alpha_b$):

• The trial posterior pdf of the sources is Gaussian, $q(\boldsymbol{s} \mid \boldsymbol{\theta}, \boldsymbol{x}) = \mathcal{N}(\bar{\boldsymbol{s}}, \bar{\Sigma})$, with
$$\bar{\boldsymbol{s}} = 2\alpha_b\,\bar{\Sigma}\,A^{t}\boldsymbol{x} \qquad (12)$$
$$\bar{\Sigma} = \big(2\alpha_b\,A^{t}A + \Sigma_s^{-1}\big)^{-1} \qquad (13)$$
where
$$\Sigma_s^{-1} = \mathrm{diag}(2\alpha_1, \dots, 2\alpha_n) \qquad (14)$$

• The trial posterior pdf of the mixing matrix elements is Gaussian, $q(\mathrm{Vect}(A) \mid \boldsymbol{\theta}, \boldsymbol{x}) = \mathcal{N}(\mathrm{Vect}(\hat{A}), \hat{\Sigma})$, with
$$\mathrm{Vect}(\hat{A}) = \hat{\Sigma}\,\Big[\,2\alpha_b\,\mathrm{Vect}\Big(\sum_{k} \boldsymbol{x}(k)\,\boldsymbol{s}^{t}(k)\Big) + \Sigma_A^{-1}\,\mathrm{Vect}(A_0)\Big] \qquad (15)$$
$$\hat{\Sigma}^{-1} = 2\alpha_b \sum_{k} \big(\boldsymbol{s}(k)\,\boldsymbol{s}^{t}(k)\big) \otimes I_m + \Sigma_A^{-1} \qquad (16)$$
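For this Gaussian case, the trial posterior of eqs. (12)–(14) is the standard conjugate Gaussian linear-model posterior and can be computed directly. A minimal sketch, with hypothetical sizes and parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)

m, n = 3, 2
A = rng.normal(size=(m, n))            # current mixing matrix sample
alpha_b = 5.0                          # noise: GE(alpha_b, 2) = N(0, 1/(2 alpha_b))
alpha_s = np.array([2.0, 0.5])         # source parameters: GE(alpha_j, 2)

x = rng.normal(size=m)                 # one vector of data coefficients

# eqs. (13)-(14): posterior covariance
Sigma_bar = np.linalg.inv(2*alpha_b * A.T @ A + np.diag(2*alpha_s))
# eq. (12): posterior mean
s_bar = 2*alpha_b * Sigma_bar @ A.T @ x

# one draw from the trial pdf q(s | theta, x) = N(s_bar, Sigma_bar)
s_tilde = rng.multivariate_normal(s_bar, Sigma_bar)
print(s_bar.shape, s_tilde.shape)      # (2,) (2,)
```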
The proposed MCMC algorithm is then the following:

Initialize $(\boldsymbol{s}, A, \boldsymbol{\theta})$ to $(\boldsymbol{s}^{(0)}, A^{(0)}, \boldsymbol{\theta}^{(0)})$ and repeat the following steps until convergence:

• Sampling $\boldsymbol{s}(k)$, for $k = 1, \dots, K$:
$$\tilde{\boldsymbol{s}} \sim q(\boldsymbol{s} \mid \boldsymbol{\theta}, \boldsymbol{x}) = \mathcal{N}(\bar{\boldsymbol{s}}, \bar{\Sigma})$$
where $\bar{\boldsymbol{s}}$ and $\bar{\Sigma}$ are given, respectively, by eq. (12) and eq. (13), and
$$\boldsymbol{s}^{(t+1)}(k) = \begin{cases} \tilde{\boldsymbol{s}} & \text{with probability } \rho \\ \boldsymbol{s}^{(t)}(k) & \text{with probability } 1 - \rho \end{cases}$$
with
$$\rho = \min\left\{ \frac{p(\tilde{\boldsymbol{s}} \mid \boldsymbol{x}(k), A, \boldsymbol{\theta})\,/\,q(\tilde{\boldsymbol{s}})}{p(\boldsymbol{s}^{(t)}(k) \mid \boldsymbol{x}(k), A, \boldsymbol{\theta})\,/\,q(\boldsymbol{s}^{(t)}(k))},\; 1 \right\}$$
where $p(\boldsymbol{s} \mid \boldsymbol{x}(k), A, \boldsymbol{\theta}) \propto p(\boldsymbol{x}(k) \mid \boldsymbol{s}, A, \boldsymbol{\theta})\,p(\boldsymbol{s} \mid \boldsymbol{\theta})$, with the prior $p(\boldsymbol{s} \mid \boldsymbol{\theta})$ given by eq. (7).

• Sampling $A$:
$$\tilde{\boldsymbol{a}} \sim q(\mathrm{Vect}(A) \mid \boldsymbol{\theta}, \boldsymbol{x}) = \mathcal{N}(\mathrm{Vect}(\hat{A}), \hat{\Sigma})$$
where $\hat{A}$ and $\hat{\Sigma}$ are given, respectively, by eq. (15) and eq. (16), and
$$A^{(t+1)} = \begin{cases} \mathrm{Mat}(\tilde{\boldsymbol{a}}) & \text{with probability } \rho \\ A^{(t)} & \text{with probability } 1 - \rho \end{cases}$$
with
$$\rho = \min\left\{ \frac{p(\tilde{\boldsymbol{a}} \mid \boldsymbol{x}, \boldsymbol{s}, \boldsymbol{\theta})\,/\,q(\tilde{\boldsymbol{a}})}{p(\mathrm{Vect}(A^{(t)}) \mid \boldsymbol{x}, \boldsymbol{s}, \boldsymbol{\theta})\,/\,q(\mathrm{Vect}(A^{(t)}))},\; 1 \right\}$$
where the prior on $A$ is given by eq. (9).

• Sampling the hyperparameters $\alpha_{b_i}$ and $\alpha_j$:
$$\alpha_{b_i}^{(t+1)} \sim \mathcal{G}(a', b') \quad \text{with} \quad a' = a + \frac{K}{\beta_{b_i}}, \quad b' = b + \sum_{k} \big|x_i(k) - [A\boldsymbol{s}(k)]_i\big|^{\beta_{b_i}}$$
and
$$\alpha_j^{(t+1)} \sim \mathcal{G}(a', b') \quad \text{with} \quad a' = a + \frac{K}{\beta_j}, \quad b' = b + \sum_{k} |s_j(k)|^{\beta_j}$$
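The sampler above can be sketched compactly in the fully Gaussian case $\beta_b = \beta_j = 2$, where the trial pdfs coincide with the exact conditionals, every Metropolis move is accepted, and the scheme reduces to a plain Gibbs sampler. The sketch below additionally takes a flat prior on $A$ (i.e. $\Sigma_A^{-1} \to 0$ in eqs. 15–16) and holds the hyperparameters fixed at oracle values; all sizes and numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# --- synthetic problem: x(k) = A s(k) + eps(k) for K coefficient vectors ---
m, n, K = 3, 2, 2000
A_true = np.array([[1.0, 0.5], [0.4, 1.0], [0.6, 0.7]])
S_true = rng.normal(size=(n, K)) * np.array([[2.0], [1.0]])
sigma_eps = 0.05
X = A_true @ S_true + sigma_eps * rng.normal(size=(m, K))

alpha_b = 1.0 / (2 * sigma_eps**2)          # noise parameter, held fixed
alpha_s = 1.0 / (2 * S_true.var(axis=1))    # source parameters, held fixed

A = rng.normal(size=(m, n))                 # arbitrary initialization
n_iter, burn = 200, 100
A_samples, S_samples = [], []
for it in range(n_iter):
    # sample s(k) | A for all k at once (eqs. 12-13; same covariance for every k)
    Sigma = np.linalg.inv(2*alpha_b * A.T @ A + np.diag(2*alpha_s))
    S = (2*alpha_b * Sigma @ A.T @ X
         + np.linalg.cholesky(Sigma) @ rng.normal(size=(n, K)))
    # sample A | s (eqs. 15-16, flat prior on A): each row of A is Gaussian
    G = np.linalg.inv(2*alpha_b * S @ S.T)  # per-row posterior covariance
    A = (2*alpha_b * (X @ S.T) @ G
         + rng.normal(size=(m, n)) @ np.linalg.cholesky(G).T)
    if it >= burn:
        A_samples.append(A)
        S_samples.append(S)

A_hat = np.mean(A_samples, axis=0)          # posterior-mean estimates
S_hat = np.mean(S_samples, axis=0)
print(np.round(A_hat, 2))
```

Note that, as in any blind separation method, $A$ and the sources are only recovered up to scale and permutation ambiguities, which here are pinned down by the fixed source parameters.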
SIMULATION RESULTS

To illustrate the proposed method, we mixed two images (the pictures of Lena and Cameraman presented in figure (1)) with a $3 \times 2$ mixing matrix $A$, and added white Gaussian noise of zero mean and variance $\sigma_\epsilon^2$. Figure (2) shows the mixed images obtained.
FIGURE 1. The original source images
FIGURE 2. The mixed images with a white Gaussian noise
First, we applied the proposed method directly to the mixed images, assuming the noises and the original images to be i.i.d. and Gaussian. Figure 3 shows the separated images. Then, we accounted for the local correlation between neighboring pixels of the original images. Figure 4 shows the separated images. Finally, we applied the method in the wavelet domain. What follows gives a more detailed description of the numerical experiments.
FIGURE 3. Estimated source images obtained directly assuming i.i.d. Gaussian priors for noise and images
FIGURE 4. Estimated source images obtained directly accounting for local spatial correlations
We apply the MCMC algorithm described above to obtain ergodic chains of samples $\boldsymbol{s}^{(t)}$, $A^{(t)}$ and $\boldsymbol{\theta}^{(t)}$, starting from fixed initial values $(\boldsymbol{s}^{(0)}, A^{(0)}, \boldsymbol{\theta}^{(0)})$. The final estimates $\hat{A}$, $\hat{\alpha}_b$, $\hat{\alpha}_1$ and $\hat{\alpha}_2$ were obtained by averaging the last 100 samples after 500 iterations; they are close both to the true mixing matrix and to the values of $\alpha_1$ and $\alpha_2$ estimated directly from the original images. Figure (5) shows the estimated images, figure (6) shows the convergence of the elements of the matrix $A$, and figure (7) shows the convergence of the hyperparameters.
FIGURE 5. Estimated source images obtained in the wavelet domain
FIGURE 6. Convergence of the elements of $A$
FIGURE 7. Convergence of the hyperparameters $\boldsymbol{\theta}$. Left: $\alpha_b$; right: $\alpha_1$ and $\alpha_2$
Figure (8) shows the histograms of the original, mixed and estimated images, and figure (9) shows the histograms of the wavelet coefficients of the original, mixed and estimated images.
FIGURE 8. The histogram of the original source images, the mixed images and the estimated images
FIGURE 9. The histogram of the wavelet coefficients of the original source images, the mixed images and the estimated images
CONCLUSIONS

In this contribution we proposed an approach to jointly estimate the mixing matrix and the original source images. We transported the problem to the wavelet domain using a Bayesian approach in which the wavelet coefficients of real-world images are naturally modeled by generalized exponential distributions. Independence of the wavelet coefficients of signals is more realistic than independence of the signals themselves. In a first step, we assumed all the wavelet coefficients to be independent and identically distributed. However, inter-scale correlation can be modeled in the prior laws of the wavelet coefficients of the signals, and the above algorithm can be extended and the equations rewritten for this case.
REFERENCES

1. Tipping, M. E., and Bishop, C. M., Neural Computation, 11, 443–482 (1999).
2. Attias, H., Neural Computation, 11, 803–851 (1999).
3. Press, S. J., Applied Multivariate Analysis: Using Bayesian and Frequentist Methods of Inference, Robert E. Krieger Publishing Company, Malabar, Florida, 1982.
4. Press, S. J., and Shigemasu, K., "Bayesian Inference in Factor Analysis", in Contributions to Probability and Statistics, Springer-Verlag, 1989, chap. 15.
5. Cardoso, J.-F., Proceedings of the IEEE, special issue on blind identification and estimation, pp. 2009–2025 (1998).
6. Cardoso, J.-F., Neural Computation, 11, 157–192 (1999).
7. Cardoso, J.-F., and Comon, P., "Independent Component Analysis, a survey of some algebraic methods", in Proc. ISCAS'96, 1996, vol. 2, pp. 93–96.
8. Ziskind, I., and Wax, M., IEEE Trans. Acoust., Speech, Signal Processing, ASSP-36, 1553–1560 (1988).
9. Stoica, P., Ottersten, B., Viberg, M., and Moses, R. L., Signal Processing, 44, 96–105 (1996).
10. Wax, M., IEEE Trans. Signal Processing, 39, 2450–2456 (1991).
11. Cardoso, J.-F., IEEE Letters on Signal Processing, 4, 112–114 (1997).
12. Lacoume, J.-L., "A survey of source separation", in Proc. First International Conference on Independent Component Analysis and Blind Source Separation ICA'99, Aussois, France, 1999, pp. 1–6.
13. Oja, E., "Nonlinear PCA criterion and maximum likelihood in independent component analysis", in Proc. First International Conference on Independent Component Analysis and Blind Source Separation ICA'99, Aussois, France, 1999, pp. 143–148.
14. MacLeod, R. B., and Tufts, D. W., "Fast maximum likelihood estimation for independent component analysis", in Proc. First International Conference on Independent Component Analysis and Blind Source Separation ICA'99, Aussois, France, 1999, pp. 319–324.
15. Bermond, O., and Cardoso, J.-F., "Approximate likelihood for noisy mixtures", in Proc. First International Conference on Independent Component Analysis and Blind Source Separation ICA'99, Aussois, France, 1999, pp. 325–330.
16. Rajan, J. J., and Rayner, P. J. W., IEE Proceedings - Vision, Image, and Signal Processing, 144, 116–123 (1997).
17. Knuth, K., "Bayesian source separation and localization", in SPIE'98 Proceedings: Bayesian Inference for Inverse Problems, San Diego, CA, edited by A. Mohammad-Djafari, 1998, pp. 147–158.
18. Knuth, K., and Vaughan Jr., H., "Convergent Bayesian formulation of blind source separation and electromagnetic source estimation", in MaxEnt 98 Proceedings: Int. Workshop on Maximum Entropy and Bayesian Methods, Garching, Germany, edited by W. von der Linden, V. Dose, R. Fischer and R. Preuss, 1998, in press.
19. Lee, S. E., and Press, S. J., Communications in Statistics – Theory and Methods, 27 (1998).
20. Roberts, S. J., IEE Proceedings - Vision, Image, and Signal Processing, 145 (1998).
21. Knuth, K., "A Bayesian approach to source separation", in Proceedings of the First International Workshop on Independent Component Analysis and Signal Separation: ICA'99, Aussois, France, edited by J.-F. Cardoso, C. Jutten and P. Loubaton, 1999, pp. 283–288.
22. Lee, T., Lewicki, M., Girolami, M., and Sejnowski, T., IEEE Signal Processing Letters, in press (1999).
23. Donoho, D. L., IEEE Transactions on Signal Processing (1992).
24. Wan, Y., and Nowak, R. D., IEEE Transactions on Image Processing (2001).
25. Antoniadis, A., Leporini, D., and Pesquet, J., IEEE Transactions on Signal Processing (1996).
26. Zibulevsky, M., and Pearlmutter, B. A., Blind Source Separation by Sparse Decomposition, Tech. rep., Computer Science Dept, FEC 313, University of New Mexico, Albuquerque, NM 87131 USA (1999).
27. Mallat, S., A Wavelet Tour of Signal Processing, Academic Press, 1999.
28. Hyvärinen, A., Karhunen, J., and Oja, E., Independent Component Analysis, John Wiley & Sons Inc., 2001.
29. Mohammad-Djafari, A., "A Bayesian Approach to Source Separation", 19th Int. Workshop on Bayesian and Maximum Entropy Methods, MaxEnt, Boise, Idaho, USA, 1999.
30. Lee, T.-W., Independent Component Analysis: Theory and Applications, Kluwer Academic Publishers, Boston, Dordrecht, London, 1998.
31. Mohammad-Djafari, A., "Bayesian Inference and Maximum Entropy Methods", in Bayesian Inference and Maximum Entropy Methods, edited by A. Mohammad-Djafari, MaxEnt Workshops, Gif-sur-Yvette, 2000.
32. Snoussi, H., and Mohammad-Djafari, A., "Bayesian source separation with mixture of Gaussians prior for sources and Gaussian prior for mixture coefficients", in Bayesian Inference and Maximum Entropy Methods, edited by A. Mohammad-Djafari, Proc. of MaxEnt, Gif-sur-Yvette, 2000, pp. 388–406.
33. Snoussi, H., and Mohammad-Djafari, A., Approche bayésienne pour la séparation de sources, Rapport de stage de DEA – ATS, GPI – L2S (2000).
34. Snoussi, H., and Mohammad-Djafari, A., Dégénérescences des estimateurs MV en séparation de sources, Technical report RI-S0010, GPI – L2S (2001).
35. Snoussi, H., and Mohammad-Djafari, A., "Séparation de sources par une approche bayésienne hiérarchique", in 18e colloque GRETSI, Toulouse, 2001.
36. Snoussi, H., and Mohammad-Djafari, A., "Penalized maximum likelihood for multivariate Gaussian mixture", in Bayesian Inference and Maximum Entropy Methods, MaxEnt Workshops, 2001.
37. Snoussi, H., and Mohammad-Djafari, A., "Bayesian separation of HMM sources", in Bayesian Inference and Maximum Entropy Methods, MaxEnt Workshops, 2001.
38. Snoussi, H., Patanchon, G., Macías-Pérez, J., Mohammad-Djafari, A., and Delabrouille, J., "Bayesian blind component separation for cosmic microwave background observations", in Bayesian Inference and Maximum Entropy Methods, MaxEnt Workshops, 2001.
39. Snoussi, H., and Mohammad-Djafari, A., "Unsupervised learning for source separation with mixture of Gaussians prior for sources and Gaussian prior for mixture coefficients", in Neural Networks for Signal Processing XI, edited by D. J. Miller, IEEE workshop, 2001, pp. 293–302.