The Wonham Filter With Random Parameters: Rate of Convergence and Error Bounds

X. Guo and G. Yin

Abstract—Let $\alpha(t)$ be a finite-state continuous-time Markov chain with generator $Q = (q^{ij})$ and state space $\mathcal{M} = \{z_1, \ldots, z_m\}$, where $z_i$, $i = 1, \ldots, m$, are distinct real numbers. When the state space and the generator are known a priori, the best estimator of $\alpha(t)$ (in terms of mean square error) under noisy observation is the classical Wonham filter. This note addresses the estimation issue when the values of the state space or of the generator are unknown a priori. In each case, we propose a (suboptimal) filter and prove its convergence to the desired Wonham filter under simple conditions. Moreover, we obtain the rate of convergence using both mean square and higher moment error bounds.

Index Terms—Approximation, error bounds, rate of convergence, Wonham filter.

Manuscript received December 10, 2003; revised May 21, 2005. Recommended by Associate Editor L. Gerencser. The work of X. Guo was supported in part by the National Security Agency under Grant H98230-05-1-0084. The work of G. Yin was supported by the National Science Foundation under Grant DMS-0304928. X. Guo is with the School of ORIE, Cornell University, Ithaca, NY 14853 USA (e-mail: [email protected]). G. Yin is with the Department of Mathematics, Wayne State University, Detroit, MI 48202 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TAC.2005.864192

I. INTRODUCTION

Given a probability space $(\Omega, \mathcal{F}, P)$ and $t \in [0, T]$ for some $T > 0$, suppose that $\alpha(t)$ is a finite-state continuous-time Markov process with generator $Q = (q^{ij}) \in \mathbb{R}^{m \times m}$ and state space $\mathcal{M} = \{z_1, \ldots, z_m\}$, where $z_i$, $i \le m$, are distinct real numbers, so that the transition probabilities are

P^{ij}(h) = P(\alpha(t+h) = z_j \mid \alpha(t) = z_i) = q^{ij} h + o(h),  i \ne j,  h \to 0        (1)

P^{ii}(h) = 1 - q^{i} h + o(h),  h \to 0        (2)

where $q^{i} = \sum_{j \ne i} q^{ij}$. Assume that the Markov process $\alpha(t)$ is observed through the observation process $y(t)$ such that

dy(t) = \alpha(t)\,dt + \sigma(t)\,dw(t),  y(0) = 0 \ \text{w.p.}\ 1        (3)

where $w(\cdot)$ is a standard one-dimensional Brownian motion that is independent of $\alpha(\cdot)$, and $\sigma(\cdot): [0, T] \to \mathbb{R}$, with $\sigma(t) \ge c$ for all $t \in [0, T]$ and some $c > 0$, is a continuously differentiable function. Note that (3) indicates that the basic observation model has the form of "signal plus noise," and that the distribution of $y(t)$ is non-Gaussian but is a Gaussian mixture due to the jump process $\alpha(t)$.

In this framework, one of the classical results, known as the Wonham filter, concerns estimating $\alpha(t)$ based on the observation $y(\cdot)$. When the values of the states $z_1, \ldots, z_m$ and the generator $Q$ are known a priori and fixed, the Wonham filter [9] provides the optimal filter in the sense of mean square error. It was the first finite-dimensional filter for non-Gaussian processes, and it remains one of the very few known finite-dimensional filters to date. (The first rigorous development of nonlinear filters for diffusion-type processes was [5]. For a more detailed treatment of filtering problems, see [7, Vols. 1, 2]; for a more recent reference on filtering, see [2]; for the use of the Wonham filter in adaptive control and estimation, see [1] and the references therein; for the use of Clark's transformation, see [8]; for some recent references on filtering and estimation, see [10] and [12].)

It is natural to take a step further and consider the estimation problem when there are additional uncertainties in the values of the state space $z_1, \ldots, z_m$ or of the generator $(q^{ij})$. For instance, one may consider the case when the $z_i$'s are unknown a priori and their values are known only up to a certain distribution. This problem arises frequently in, for example, Bayesian statistics, where priors may be drawn from a distribution. One possible solution is to apply a Monte Carlo approach, which entails replacing the states in the Wonham filter by their simulated or approximated values. Such an approach raises important questions such as the rate of convergence and error estimates. To some extent, many of these problems may be viewed as robustness issues; this is the central topic of this note.

Our first main result is the construction of approximating filters when only noisy or simulated values of the $z_i$ or of $(q^{ij})$ are available. We then prove that these suboptimal filters converge to the desired Wonham filter under simple ergodicity conditions. We also evaluate the accuracy of the approximations and derive the approximation error bounds. These bounds, including both mean square (or $L^2$) bounds and bounds based on higher moments, provide results on the rate of convergence and enable one to assess the quality of the approximations.

The rest of the note is organized as follows. Section II contains the main results. Proofs of the main theorems are in Section III. Section IV concludes the note with some remarks.
II. MAIN RESULTS

A. Wonham Filter

Let $\alpha(t)$ be a finite-state continuous-time Markov process with state space $\mathcal{M} = \{z_1, \ldots, z_m\}$ and generator $Q = (q^{ij}) \in \mathbb{R}^{m \times m}$, as defined earlier. Given (3), define

p(t) = (p_1(t), \ldots, p_m(t)) \in \mathbb{R}^{1 \times m},  p_i(t) = P(\alpha(t) = z_i \mid y(s), 0 \le s \le t),  p_i(0) = p_{0i},  i = 1, \ldots, m.

It was proved in [9] that this conditional density satisfies the following system of stochastic differential equations:

dp_i(t) = \sum_{j=1}^{m} p_j(t) q^{ji}\,dt - \sigma^{-2}(t)\hat\alpha(t)[z_i - \hat\alpha(t)]p_i(t)\,dt + \sigma^{-2}(t)[z_i - \hat\alpha(t)]p_i(t)\,dy(t),  i = 1, \ldots, m        (4)

where

\hat\alpha(t) = \langle p(t), z\rangle \stackrel{\mathrm{def}}{=} \sum_{i=1}^{m} z_i p_i(t),  z = (z_1, \ldots, z_m)'

and $v'$ denotes the transpose of $v$. Adopting vector notation, define

A(t) \stackrel{\mathrm{def}}{=} \mathrm{diag}(z_1 - \hat\alpha(t), \ldots, z_m - \hat\alpha(t)).

Then the Wonham filter can be rewritten as

dp(t) = p(t)Q\,dt - \sigma^{-2}(t)\hat\alpha(t)p(t)A(t)\,dt + \sigma^{-2}(t)p(t)A(t)\,dy(t),  p(0) = p_0.        (5)
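For readers who want to experiment numerically, the following is a minimal sketch (not part of the note) of an Euler–Maruyama discretization of (5). The step size, the uniform initial condition, and the clip-and-renormalize step that keeps $p(t)$ on the probability simplex are illustrative assumptions; the note itself works in continuous time.

```python
# A minimal sketch (not from the note): Euler--Maruyama discretization of the
# Wonham filter (5).  The state values z, generator Q, noise function sigma,
# step size dt, uniform initial condition, and the clip-and-renormalize step
# are all illustrative assumptions.
import numpy as np

def wonham_filter(y, dt, z, Q, sigma):
    """Propagate the filter along observation increments dy_k = y[k+1] - y[k]."""
    z = np.asarray(z, dtype=float)
    Q = np.asarray(Q, dtype=float)
    p = np.full(len(z), 1.0 / len(z))        # p(0) = p_0, taken uniform here
    out = [p.copy()]
    for k in range(len(y) - 1):
        t = k * dt
        dy = y[k + 1] - y[k]
        s2 = sigma(t) ** (-2.0)              # sigma(t) >= c > 0, as in (3)
        alpha_hat = p @ z                    # hat(alpha)(t) = <p(t), z>
        pA = p * (z - alpha_hat)             # p(t) A(t), with A(t) = diag(z_i - hat(alpha)(t))
        p = p + p @ Q * dt - s2 * alpha_hat * pA * dt + s2 * pA * dy
        p = np.clip(p, 0.0, None)
        p = p / p.sum()                      # renormalization (assumption; cf. [9])
        out.append(p.copy())
    return np.array(out)
```

In practice, one would drive `wonham_filter` with increments of a path $y(t)$ simulated from (3) along a simulated path of $\alpha(t)$.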

B. Approximate Wonham Filter Using $\{\hat z_n\}$

Now, let us assume that the $z_i$'s are not available, and that only their noise-corrupted measurements, observations, or distributional information are at our disposal. We assume further that $(q^{ij})$ remains unchanged and known a priori. In particular, we assume that a sequence of observations of the form

\hat z_n = (\hat z_n^1, \ldots, \hat z_n^m)' \in \mathbb{R}^{m \times 1}  such that  E\hat z_n = z

can be obtained. For example, $\hat z_n = z + \zeta_n$, where $\{\zeta_n\}$ is a sequence of $\mathbb{R}^m$-valued zero-mean observation noise satisfying appropriate conditions. This essentially means that, although the information about the states of the Markov chain is inaccurate, one can repeat many experiments to obtain samples of the states. For instance, such a sequence may be obtained by simulation using Markov chain Monte Carlo (MCMC). Based on this assumption, we proceed to construct the approximate filter. First, define

\bar z_n = \frac{1}{n}\sum_{j=1}^{n} \hat z_j.        (6)

Then, in lieu of (5), we have a sequence of approximations $p_n(t)$ given by

dp_n(t) = p_n(t)Q\,dt - \sigma^{-2}(t)\hat\alpha_n(t)p_n(t)A_n(t)\,dt + \sigma^{-2}(t)p_n(t)A_n(t)\,dy(t),  p_n(0) = p_0        (7)

where

\hat\alpha_n(t) = \langle p_n(t), \bar z_n\rangle,  A_n(t) = \mathrm{diag}(\bar z_n^1 - \hat\alpha_n(t), \ldots, \bar z_n^m - \hat\alpha_n(t)).
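As a concrete illustration of (6) and (7), the sketch below (again illustrative, under the same assumptions as the previous block) averages $n$ noisy state samples into $\bar z_n$ and takes one discretized step of the approximate filter; the noise law in the example is an assumption.

```python
# A minimal sketch of (6)-(7) under the same assumptions as the previous block:
# average n noisy state samples into bar(z)_n and take one Euler--Maruyama step
# of the approximate filter (7).
import numpy as np

def z_bar(z_hat_samples):
    """bar(z)_n = (1/n) * sum of the samples hat(z)_1, ..., hat(z)_n, eq. (6)."""
    return np.mean(np.asarray(z_hat_samples, dtype=float), axis=0)

def approx_wonham_step(p, dy, t, dt, z_bar_n, Q, sigma):
    """One Euler--Maruyama step of (7); p is the current probability row vector."""
    s2 = sigma(t) ** (-2.0)
    alpha_n = p @ z_bar_n                    # hat(alpha)_n(t) = <p_n(t), bar(z)_n>
    pA = p * (z_bar_n - alpha_n)             # p_n(t) A_n(t), with A_n(t) diagonal
    p_new = p + p @ Q * dt - s2 * alpha_n * pA * dt + s2 * pA * dy
    p_new = np.clip(p_new, 0.0, None)
    return p_new / p_new.sum()               # renormalization (assumption)

# Example: hat(z)_j = z + zeta_j with bounded, zero-mean, i.i.d. noise (cf. A1) below).
rng = np.random.default_rng(0)
z_true = np.array([-1.0, 0.0, 2.0])
samples = z_true + rng.uniform(-0.5, 0.5, size=(200, 3))
print(z_bar(samples))                        # close to z_true for large n, cf. Remark 1
```

The same recursion, with $\bar z_n$ frozen over the horizon $[0, T]$, is what the results below compare against the exact filter (5).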

To obtain the desired limit result, we impose the following condition.

A1) $\{\hat z_n\}$ is a stationary ergodic sequence that satisfies $E\hat z_n = z$ and is uniformly bounded. The sequence $\{\hat z_n\}$ is independent of the Markov chain $\alpha(\cdot)$ and the Brownian motion $w(\cdot)$.

Remark 1: The uniform boundedness of the sequence $\{\hat z_n\}$ is not a restriction, since one may use, for example, truncated normal distributions. Although no independence is assumed for the sequence $\{\hat z_n\}$, in simulation one often uses i.i.d. sequences for simplicity. Ergodicity implies that $\bar z_n \to z$ w.p.1. The conditions cover a large class of processes. Examples of interest include the case when $\{\hat z_n\}$ is a stationary mixing sequence, which in turn is ergodic (see [4, pp. 488-489]); that is, $\bar z_n \to z$ w.p.1.

Since $\{p_n(t)\}$ is a sequence of approximations of the posterior density $p(t)$, we may appropriately normalize it to ensure its boundedness [9]. Define $e_n(t) = p_n(t) - p(t)$. Then, $e_n(t)$ satisfies

de_n(t) = e_n(t)Q\,dt - \sigma^{-2}(t)[\hat\alpha_n(t) - \hat\alpha(t)]p_n(t)A_n(t)\,dt - \sigma^{-2}(t)\hat\alpha(t)e_n(t)A_n(t)\,dt - \sigma^{-2}(t)\hat\alpha(t)p(t)[A_n(t) - A(t)]\,dt + \sigma^{-2}(t)e_n(t)A_n(t)\,dy(t) + \sigma^{-2}(t)p(t)[A_n(t) - A(t)]\,dy(t),  e_n(0) = 0.        (8)

Theorem 2: Under assumption A1), $\sup_{0 \le t \le T} E|e_n(t)|^2 \to 0$ as $n \to \infty$.

It is well known that convergence in $L^2$ implies convergence in probability. Thus, the following is immediate.

Corollary 3: Under assumption A1), for any $\delta > 0$, $\lim_{n \to \infty} P(|e_n(t)| \ge \delta) = 0$.

Next, define $e_n^{\varepsilon}(t) = n^{\varepsilon}e_n(t)$ for any $0 < \varepsilon \le 1/2$. Then, the following estimates hold.

Theorem 4: Under assumption A1), as $n \to \infty$,

\sup_{0 \le t \le T} E|e_n^{\varepsilon}(t)|^2 = \begin{cases} o(1), & 0 < \varepsilon < 1/2 \\ O(1), & \varepsilon = 1/2. \end{cases}        (9)

Here, $o(1)$ means that $E|e_n^{\varepsilon}(t)|^2 \to 0$ uniformly in $t$, whereas $O(1)$ indicates that $E|e_n^{\varepsilon}(t)|^2$ is bounded uniformly. To proceed, we obtain higher moment bounds.

Theorem 5: Assume A1).

i) For any positive integer $\ell > 1$, as $n \to \infty$,

\sup_{0 \le t \le T} E|e_n^{\varepsilon}(t)|^{2\ell} = \begin{cases} o(1), & 0 < \varepsilon < 1/2 \\ O(1), & \varepsilon = 1/2. \end{cases}

ii) For $\varepsilon = 1/2$, denote $\tilde e_n(t) = n^{1/2}e_n(t)$. Then, as $n \to \infty$,

\sup_{0 \le t \le T} E\exp(|\tilde e_n(t)|) = O(1).

C. Approximate Wonham Filter Using $\{\hat Q_n\}$

Now, we assume uncertainty in $(q^{ij})$, with a sequence of noise-corrupted observations $\hat Q_n = (\hat q_n^{ij}) \in \mathbb{R}^{m \times m}$. As in the previous section, the approximate filter can be built accordingly by defining

\bar Q_n = \frac{1}{n}\sum_{l=1}^{n} \hat Q_l        (10)

and by redefining the sequence of approximations $p_n(t)$ via

dp_n(t) = p_n(t)\bar Q_n\,dt - \sigma^{-2}(t)\hat\alpha_n(t)p_n(t)A_n(t)\,dt + \sigma^{-2}(t)p_n(t)A_n(t)\,dy(t),  p_n(0) = p_0        (11)

where

\hat\alpha_n(t) = \langle p_n(t), z\rangle,  A_n(t) = \mathrm{diag}(z_1 - \hat\alpha_n(t), \ldots, z_m - \hat\alpha_n(t)).
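A minimal sketch of (10) follows; it is not from the note, and the uniform perturbation model for the generator samples is an illustrative assumption. Note that assumption A2) below only requires the samples to be uniformly bounded with mean $Q$; the individual samples need not themselves be generator matrices.

```python
# A minimal sketch of (10) (not from the note): average noisy generator samples
# into bar(Q)_n; the uniform perturbation model below is an assumption.
import numpy as np

def q_bar(q_hat_samples):
    """bar(Q)_n = (1/n) * sum of hat(Q)_1, ..., hat(Q)_n, eq. (10)."""
    return np.mean(np.asarray(q_hat_samples, dtype=float), axis=0)

rng = np.random.default_rng(1)
Q_true = np.array([[-0.8, 0.5, 0.3],
                   [0.2, -0.6, 0.4],
                   [0.1, 0.7, -0.8]])
samples = Q_true + rng.uniform(-0.2, 0.2, size=(500,) + Q_true.shape)
print(q_bar(samples))                        # close to Q_true, since the noise has mean zero
```

The resulting matrix would simply replace $Q$ in the recursion of the first sketch, with $z$ and $A(t)$ kept as in (11).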

Let $e_n(t) \stackrel{\mathrm{def}}{=} p_n(t) - p(t)$. Then

de_n(t) = e_n(t)\bar Q_n\,dt + p(t)(\bar Q_n - Q)\,dt - \sigma^{-2}(t)[\hat\alpha_n(t) - \hat\alpha(t)]p_n(t)A_n(t)\,dt - \sigma^{-2}(t)\hat\alpha(t)e_n(t)A_n(t)\,dt - \sigma^{-2}(t)\hat\alpha(t)p(t)[A_n(t) - A(t)]\,dt + \sigma^{-2}(t)e_n(t)A_n(t)\,dy(t) + \sigma^{-2}(t)p(t)[A_n(t) - A(t)]\,dy(t),  e_n(0) = 0.        (12)

To proceed, we impose the following condition.

A2) $\{\hat Q_n\}$ is a stationary ergodic sequence that is uniformly bounded, $E\hat Q_n = Q$, and $\{\hat Q_n\}$ is independent of the Markov chain $\alpha(\cdot)$ and the Brownian motion $w(\cdot)$.

In a similar fashion, we can derive the following results.

Theorem 6: Assume A2). Then, $\sup_{0 \le t \le T} E|e_n(t)|^2 \to 0$ as $n \to \infty$.

Remark 7: For ease of exposition, assumption A2) is stronger than needed. For instance, Theorem 6 holds when the uniform boundedness of $\{\hat Q_n\}$ is replaced by $E|\hat Q_n|^2 < \infty$. This remark applies to the following theorems as well, when the uniform boundedness is replaced by appropriate higher moment conditions.

Theorem 8: Assume A2).

i) For any positive integer $\ell > 1$, as $n \to \infty$,

\sup_{0 \le t \le T} E|e_n^{\varepsilon}(t)|^{2\ell} = \begin{cases} o(1), & 0 < \varepsilon < 1/2 \\ O(1), & \varepsilon = 1/2. \end{cases}        (13)

ii) For $\varepsilon = 1/2$, denote $\tilde e_n(t) = n^{1/2}e_n(t)$. Then, as $n \to \infty$,

\sup_{0 \le t \le T} E\exp(|\tilde e_n(t)|) = O(1).

Remark 9: Theorems 4 and 8 provide a rate of convergence result: the convergence speed is of the order $n^{1/2}$; for any $\varepsilon < 1/2$, a trivial limit is obtained. Note that the convergence is in the sense of mean square convergence, so Theorems 4 and 8 tell us how the mean square error depends on the sample size.

Finally, the same techniques can be used to build Wonham filters using both $\{\hat z_n\}$ and $\{\hat Q_n\}$. Formally, the approximate filters so designed have the same form as (11). Assuming A1) and A2), and the independence of $\{\hat z_n\}$ and $\{\hat Q_n\}$, the conclusions of Theorem 8 continue to hold.
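To see the $n^{1/2}$ rate of Remark 9 in a calculation, note that the driving quantity in the proofs is $E|\bar z_n - z|^2$ (through $\rho_n$ in the proof of Theorem 2 below). The short sketch that follows is illustrative only; the i.i.d. uniform noise model, sample sizes, and replication count are assumptions.

```python
# Illustrative check (not part of the note): for i.i.d. bounded samples the mean
# square error E|bar(z)_n - z|^2 decays like 1/n, which is what feeds the n^{1/2}
# rate in Remark 9 via rho_n in the proof of Theorem 2.
import numpy as np

rng = np.random.default_rng(2)
z_true = np.array([-1.0, 0.0, 2.0])

def mse_of_z_bar(n, reps=2000):
    """Monte Carlo estimate of E|bar(z)_n - z|^2 for i.i.d. uniform noise."""
    noise = rng.uniform(-0.5, 0.5, size=(reps, n, z_true.size))
    z_bar = z_true + noise.mean(axis=1)
    return float(np.mean(np.sum((z_bar - z_true) ** 2, axis=1)))

for n in (10, 100, 1000):
    mse = mse_of_z_bar(n)
    print(n, mse, n * mse)                   # n * MSE stays roughly constant, i.e. MSE = O(1/n)
```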

III. PROOFS

This section presents the proofs of the results. Since the proof of Theorem 4 uses techniques similar to those in the proof of Theorem 5, and in order not to dwell on it, the proof of Theorem 5 precedes that of Theorem 4.

A. Notation

Throughout the rest of this note, we use $K$, $K_T$, and $K_{T,\ell}$ to denote generic positive constants, in which the subscripts highlight their dependence on the indicated quantities. These constants are generic in the sense that their values may differ at different occurrences; that is, the conventions $K + K = K$ and $KK = K$ are used. Before going through the proofs, here are some basic inequalities used throughout.

B. Review of Some Inequalities

Lemma 10: Suppose that $f(t)$ is $\mathcal{F}_t$-measurable and satisfies $\int_0^T E|f(t)|^2\,dt < \infty$, where $\mathcal{F}_t$ is the $\sigma$-algebra generated by $\{w(s), \alpha(s): s \le t\}$. Then, with $y(s)$ given by (3),

E\left|\int_0^t f(s)\,dy(s)\right|^2 \le K_T \int_0^t E|f(s)|^2\,ds.

Proof: It is straightforward that

E\left|\int_0^t f(s)\,dy(s)\right|^2 = E\left|\int_0^t f(s)[\alpha(s)\,ds + \sigma(s)\,dw(s)]\right|^2
  \le K E\left|\int_0^t f(s)\alpha(s)\,ds\right|^2 + K E\left|\int_0^t f(s)\sigma(s)\,dw(s)\right|^2
  \le K E\int_0^t |f(s)|^2\,ds + K \int_0^t E|f(s)\sigma(s)|^2\,ds
  \le K_T \int_0^t E|f(s)|^2\,ds.

The lemma follows.

Lemma 11 (Gronwall's Inequality): Let $u(t)$ and $g(t)$ be nonnegative continuous functions on $[0, T]$ for which the inequality

u(t) \le C + \int_0^t g(s)u(s)\,ds,  t \in [0, T]

holds, where $C$ is a nonnegative constant. Then

u(t) \le C\exp\left(\int_0^t g(s)\,ds\right),  t \in [0, T].

Proof: See [3, p. 36].

Lemma 12: For any real numbers $J_1, J_2, J_3, J_4, J_5$ and $r \ge 1$,

|J_1 + J_2 + J_3 + J_4 + J_5|^r \le K_r(|J_1|^r + |J_2|^r + |J_3|^r + |J_4|^r + |J_5|^r)

where $K_r$ is a positive constant depending on $r$.

Proof: This is obtained via repeated applications of the following inequality: for any real numbers $a$, $b$ and $r \ge 1$, $|a + b|^r \le 2^{r-1}(|a|^r + |b|^r)$.
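To make the constant $K_r$ concrete (this bookkeeping is an addition here and is not needed in the sequel), repeated application of the two-term inequality gives

|J_1 + \cdots + J_5|^r \le 2^{r-1}|J_1|^r + 2^{r-1}|J_2 + \cdots + J_5|^r
  \le 2^{r-1}|J_1|^r + 2^{2(r-1)}|J_2|^r + 2^{2(r-1)}|J_3 + J_4 + J_5|^r
  \le \cdots \le 2^{4(r-1)}\big(|J_1|^r + |J_2|^r + |J_3|^r + |J_4|^r + |J_5|^r\big)

so one may take $K_r = 2^{4(r-1)}$.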

C. Proof of Theorem 2

Writing (8) in variational form and noting $e_n(0) = 0$, we obtain

e_n(t) = -\int_0^t \sigma^{-2}(s)[\hat\alpha_n(s) - \hat\alpha(s)]p_n(s)A_n(s)e^{Q(t-s)}\,ds
  - \int_0^t \sigma^{-2}(s)\hat\alpha(s)e_n(s)A_n(s)e^{Q(t-s)}\,ds
  - \int_0^t \sigma^{-2}(s)\hat\alpha(s)p(s)[A_n(s) - A(s)]e^{Q(t-s)}\,ds
  + \int_0^t \sigma^{-2}(s)e_n(s)A_n(s)e^{Q(t-s)}\,dy(s)
  + \int_0^t \sigma^{-2}(s)p(s)[A_n(s) - A(s)]e^{Q(t-s)}\,dy(s).        (14)
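To see how (14) follows from (8), one may write (8) as $de_n(t) = e_n(t)Q\,dt + dF(t)$, where $dF(t)$ collects the remaining drift and $dy$ terms of (8); this intermediate step is spelled out here for readability. By variation of constants, and since $e_n(0) = 0$,

e_n(t) = e_n(0)e^{Qt} + \int_0^t dF(s)\,e^{Q(t-s)} = \int_0^t dF(s)\,e^{Q(t-s)}

which, term by term, is (14).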

For a suitable function $g(\cdot)$ (e.g., $\mathcal{F}_t$-measurable with $\int_0^T E|g(t)|^2\,dt < \infty$), by the Cauchy–Schwarz inequality,

E\left|\int_0^t g(s)\,ds\right|^2 \le \left(\int_0^t ds\right)\left(\int_0^t E|g(s)|^2\,ds\right) \le K_T \int_0^t E|g(s)|^2\,ds.        (15)

Thus, we arrive at

E|e_n(t)|^2 \le K_T \Big( \int_0^t E|\sigma^{-2}(s)|^2|\hat\alpha_n(s) - \hat\alpha(s)|^2|p_n(s)|^2|A_n(s)|^2|e^{Q(t-s)}|^2\,ds
  + \int_0^t E|\sigma^{-2}(s)|^2|\hat\alpha(s)|^2|e_n(s)|^2|A_n(s)|^2|e^{Q(t-s)}|^2\,ds
  + \int_0^t E|\sigma^{-2}(s)|^2|\hat\alpha(s)|^2|p(s)|^2|A_n(s) - A(s)|^2|e^{Q(t-s)}|^2\,ds
  + \int_0^t E|\sigma^{-2}(s)|^2|e_n(s)|^2|A_n(s)|^2|e^{Q(t-s)}|^2\,ds
  + \int_0^t E|\sigma^{-2}(s)|^2|p(s)|^2|A_n(s) - A(s)|^2|e^{Q(t-s)}|^2\,ds \Big).        (16)

Since we are working with the finite horizon $[0, T]$, the uniform boundedness of $\sigma^{-2}(s)$, $p_n(s)$, $A_n(s)$, and $e^{Q(t-s)}$ on $0 \le s \le t \le T$ then implies

E|e_n(t)|^2 \le K_T \left( \int_0^t E|\hat\alpha_n(s) - \hat\alpha(s)|^2\,ds + \int_0^t E|A_n(s) - A(s)|^2\,ds + \int_0^t E|e_n(s)|^2\,ds \right).        (17)

Indeed, note that

\hat\alpha_n(s) - \hat\alpha(s) = \langle p_n(s) - p(s), \bar z_n\rangle + \langle p(s), \bar z_n - z\rangle.

Thus,

\int_0^t E|\hat\alpha_n(s) - \hat\alpha(s)|^2\,ds \le K_T\left(\int_0^t E|e_n(s)|^2\,ds + \int_0^t E|\bar z_n - z|^2\,ds\right).        (18)

Likewise,

\int_0^t E|A_n(s) - A(s)|^2\,ds \le K_T \int_0^t E|\bar z_n - z|^2\,ds.        (19)

Substituting (18) and (19) into (17), by Gronwall's inequality,

E|e_n(t)|^2 \le K_T\rho_n\exp(K_T t) \le K_T\rho_n\exp(K_T T)

where

\rho_n = \int_0^T E|\bar z_n - z|^2\,ds \to 0  as  n \to \infty.

Recall that $K_T$ is a generic positive constant depending on $T$. We have

\sup_{0 \le t \le T} E|e_n(t)|^2 \le K_T\rho_n \to 0  as  n \to \infty.

The desired result then follows.

D. Proof of Theorem 5

1) Part i): Using (14) and Lemma 12, we obtain that, for a positive integer $\ell$ with $r = 2\ell > 2$,

E|e_n^{\varepsilon}(t)|^{2\ell} \le K_{\ell}(E|J_1|^{2\ell} + E|J_2|^{2\ell} + E|J_3|^{2\ell} + E|J_4|^{2\ell} + E|J_5|^{2\ell})        (20)

where

J_1 = \int_0^t n^{\varepsilon}\sigma^{-2}(s)[\hat\alpha_n(s) - \hat\alpha(s)]p_n(s)A_n(s)e^{Q(t-s)}\,ds
J_2 = \int_0^t n^{\varepsilon}\sigma^{-2}(s)\hat\alpha(s)e_n(s)A_n(s)e^{Q(t-s)}\,ds
J_3 = \int_0^t n^{\varepsilon}\sigma^{-2}(s)\hat\alpha(s)p(s)[A_n(s) - A(s)]e^{Q(t-s)}\,ds
J_4 = \int_0^t n^{\varepsilon}\sigma^{-2}(s)e_n(s)A_n(s)e^{Q(t-s)}\,dy(s)
J_5 = \int_0^t n^{\varepsilon}\sigma^{-2}(s)p(s)[A_n(s) - A(s)]e^{Q(t-s)}\,dy(s).

Similar to the estimates for $E|e_n(t)|^2$, and in particular to (15), we have, from Hölder's inequality,

E\left|\int_0^t g(s)\,ds\right|^{2\ell} \le \left(\int_0^t ds\right)^{2\ell - 1}\int_0^t E|g(s)|^{2\ell}\,ds \le K_{T,\ell}\int_0^t E|g(s)|^{2\ell}\,ds        (21)

where $K_{T,\ell} > 0$ is a generic positive constant depending on $T$ and $\ell$. This inequality, together with the fact that $\sigma^{-2}(\cdot)$, $p_n(\cdot)$, and $A_n(\cdot)$ are uniformly bounded on the interval $[0, T]$ and that $e^{Q(t-s)}$ is bounded for $0 \le s \le t \le T$, yields

E|J_1|^{2\ell} \le K_{T,\ell}\int_0^t n^{2\ell\varepsilon}E|\sigma^{-2}(s)|^{2\ell}|\hat\alpha_n(s) - \hat\alpha(s)|^{2\ell}|p_n(s)|^{2\ell}|A_n(s)|^{2\ell}|e^{Q(t-s)}|^{2\ell}\,ds
  \le K_{T,\ell}\int_0^t n^{2\ell\varepsilon}E|\hat\alpha_n(s) - \hat\alpha(s)|^{2\ell}\,ds.        (22)

Likewise, we obtain

E|J_2|^{2\ell} \le K_{T,\ell}\int_0^t E|e_n^{\varepsilon}(s)|^{2\ell}\,ds
E|J_3|^{2\ell} \le K_{T,\ell}\int_0^t n^{2\ell\varepsilon}E|A_n(s) - A(s)|^{2\ell}\,ds.        (23)

Let $f(\cdot)$ be a function satisfying $\int_0^T E|f(t)|^{2\ell}\,dt < \infty$. Similar to the proof of Lemma 10, with the use of [7, Vol. I, p. 131, Lemma 4.12], we obtain

E\left|\int_0^t f(s)\,dy(s)\right|^{2\ell} \le K_{T,\ell}\int_0^t E|f(s)|^{2\ell}\,ds.

Thus,

E|J_4|^{2\ell} \le K_{T,\ell}\int_0^t E|e_n^{\varepsilon}(s)|^{2\ell}\,ds
E|J_5|^{2\ell} \le K_{T,\ell}\int_0^t n^{2\ell\varepsilon}E|A_n(s) - A(s)|^{2\ell}\,ds.        (24)

Combining the estimates obtained thus far and separating $e_n(s)$ and $|\bar z_n - z|$ as in (18) and (19), we arrive at

E|e_n^{\varepsilon}(t)|^{2\ell} \le K_{T,\ell}\left(\int_0^t n^{2\ell\varepsilon}E|\bar z_n - z|^{2\ell}\,ds + \int_0^t E|e_n^{\varepsilon}(s)|^{2\ell}\,ds\right).        (25)

Note that

E|\bar z_n - z|^{2\ell} \le \frac{1}{n}\sum_{l=1}^{n}E|\hat z_l - z|^{2\ell}.

An application of Gronwall's inequality yields

E|e_n^{\varepsilon}(t)|^{2\ell} \le K_{T,\ell}\,\nu_n\exp(K_{T,\ell}T)

and, in addition,

\sup_{0 \le t \le T} E|e_n^{\varepsilon}(t)|^{2\ell} \le K_{T,\ell}\,\nu_n\exp(K_{T,\ell}T)

where

\nu_n = \int_0^T n^{2\ell\varepsilon}E|\bar z_n - z|^{2\ell}\,ds.

Therefore,

\nu_n = \begin{cases} o(1), & 0 < \varepsilon < 1/2 \\ O(1), & \varepsilon = 1/2. \end{cases}

2) Part ii): In part i), we have shown that $E|\tilde e_n(t)|^{2\ell} = O(1)$ and that the bound is uniform in $t \in [0, T]$. Similar to [6, p. 142],

E|\tilde e_n(t)|^{2\ell - 1} \le \left(E|\tilde e_n(t)|^{2\ell}\right)^{(2\ell - 1)/(2\ell)} = O(1)

uniformly in $t \in [0, T]$. In view of (21), we can take $K_{T,\ell} \le K_T^{2\ell}$. Thus,

E\exp(|\tilde e_n(t)|) = \sum_{\ell=0}^{\infty}\frac{E|\tilde e_n(t)|^{\ell}}{\ell!} \le K\sum_{\ell=0}^{\infty}\frac{K_T^{\ell}}{\ell!} < \infty.        (26)

The proof is concluded.

Proof of Theorem 4: Similar to (17), we obtain

E|e_n^{\varepsilon}(t)|^2 \le K_T\left(\int_0^t E\big(n^{2\varepsilon}|\hat\alpha_n(s) - \hat\alpha(s)|^2\big)\,ds + \int_0^t E\big(n^{2\varepsilon}|A_n(s) - A(s)|^2\big)\,ds + \int_0^t E|e_n^{\varepsilon}(s)|^2\,ds\right).

Note that, by virtue of Remark 1,

E\big(n^{2\varepsilon}|\hat\alpha_n(t) - \hat\alpha(t)|^2\big) = \begin{cases} o(1), & 0 < \varepsilon < 1/2 \\ O(1), & \varepsilon = 1/2 \end{cases}
E\big(n^{2\varepsilon}|A_n(t) - A(t)|^2\big) = \begin{cases} o(1), & 0 < \varepsilon < 1/2 \\ O(1), & \varepsilon = 1/2 \end{cases}

and the above bounds hold uniformly in $t \in [0, T]$. The rest of the proof is similar to the proofs of the previous theorems.

IV. FURTHER REMARKS

An approximation algorithm has been developed in this work. For simplicity, it was set up as a scalar problem. Extensions to vector-valued problems are straightforward. For example, one may consider observations of the form

dy(t) = g(\alpha(t))\,dt + \sigma(t)\,dw(t),  y(0) = 0 \ \text{w.p.}\ 1

where $y(t) \in \mathbb{R}^r$, $g(\cdot): \mathcal{M} \to \mathbb{R}^r$, $\sigma(\cdot)$ is $\mathbb{R}^{r \times r}$-valued, $w(\cdot)$ is a standard $r$-dimensional Brownian motion, and $\alpha(t)$ is a continuous-time Markov chain taking values in $\mathcal{M} = \{1, \ldots, m\}$. Then one can proceed with the corresponding Wonham filter and approximation when the state values are observed with noise. The results carry over with few modifications. Another interesting problem is to construct approximations of the Wonham filter based on observations taken at discrete times and/or on noisy observation of the underlying system, instead of noisy observation of the unknown system parameters. For a recent paper in this direction, the reader is referred to [11].

ACKNOWLEDGMENT

The authors would like to thank A. Dembo for stimulating discussions and references. They would also like to thank the referees for carefully reading the manuscript and for many constructive suggestions.

REFERENCES

[1] P. E. Caines and J. F. Zhang, "On the adaptive control of jump parameter systems via nonlinear filtering," SIAM J. Control Optim., vol. 33, pp. 1758-1777, 1995.
[2] R. J. Elliott, Stochastic Calculus and Applications. New York: Springer-Verlag, 1982.
[3] J. K. Hale, Ordinary Differential Equations, 2nd ed. Malabar, FL: R. E. Krieger, 1980.
[4] S. Karlin and H. M. Taylor, A First Course in Stochastic Processes, 2nd ed. New York: Academic, 1975.
[5] H. J. Kushner, "On the differential equations satisfied by conditional probability densities of Markov processes, with applications," SIAM J. Control, vol. 2, pp. 106-119, 1964.
[6] H. J. Kushner and G. Yin, Stochastic Approximation Algorithms and Applications, 2nd ed. New York: Springer-Verlag, 2003.
[7] R. S. Liptser and A. N. Shiryayev, Statistics of Random Processes I & II. New York: Springer-Verlag, 2001.
[8] W. P. Malcolm, R. J. Elliott, and J. van der Hoek, "On the numerical stability of time-discretized state estimation via Clark transformations," in Proc. 42nd IEEE Conf. Decision and Control, 2003, pp. 1406-1412.
[9] W. M. Wonham, "Some applications of stochastic differential equations to optimal nonlinear filtering," SIAM J. Control, vol. 2, pp. 347-369, 1965.
[10] G. Yin and S. Dey, "Weak convergence of hybrid filtering problems involving nearly completely decomposable hidden Markov chains," SIAM J. Control Optim., vol. 41, pp. 1820-1842, 2003.
[11] G. Yin, Q. Zhang, and Y. J. Liu, "Discrete-time approximation of Wonham filters," J. Control Theory Appl., vol. 2, pp. 1-10, 2004.
[12] O. Zeitouni and A. Dembo, "Exact filters for the estimation of the number of transitions of finite-state continuous-time Markov processes," IEEE Trans. Inform. Theory, vol. 34, no. 4, pp. 890-893, Jul. 1988.