Sensor Applications of Digital Signal Processing


SENSORS www.aspbs.com/eos

Sensor Applications of Digital Signal Processing Roman Z. Morawski Warsaw University of Technology, Faculty of Electronics and Information Technology, Institute of Radioelectronics, Warsaw, Poland

CONTENTS

1. Introduction
2. Origins and Scope of Digital Signal Processing (DSP)
3. Mathematical Modeling for DSP Applications
4. Methodology for Measurement Applications of DSP
5. DSP Methods in Solving Sensor-Related SR Problems
6. DSP Methods in Solving Sensor-Related DR Problems
7. DSP Methods in Solving Sensor-Related QR Problems
8. Other Sensor Applications of DSP Methods
9. Conclusion
Glossary
References

ISBN: 1-58883-065-9. Copyright © 2006 by American Scientific Publishers. All rights of reproduction in any form reserved.

1. INTRODUCTION

Digital signal processing (DSP) is a branch of information science and technology focused on the methods and techniques for processing digital signals, i.e., signals that are doubly discrete, in time and in magnitude, and therefore fit for computer manipulation. Both general-purpose computers and more-or-less specialized processors may be used for the implementation of DSP. Depending on the required complexity and speed of calculations, the following options should be considered: microprocessors, microcontrollers, digital signal processors, digital signal controllers, or application-specific processors, the latter realized using standard integration technologies or so-called field-programmable gate arrays. DSP is a well-established, but still quickly developing, field of knowledge with fuzzy borders separating it from many other fields of knowledge such as signals and systems, statistical inference, system identification, and time series analysis. As stated in the Preface of the handbook [1], digital signal processing is concerned with the theoretical and practical aspects of representing information-bearing signals in digital form and with using computers or special-purpose digital hardware either to extract that information or to transform signals in useful ways. This general definition, to be properly interpreted, should be read in the context of the 17-page-long table of contents of the quoted handbook. The majority of items in that table are well-known chapters of the traditional literature on digital signal processing; there are also numerous sections that did not exist in the DSP literature of the 1980s, such as Inverse Problems and Signal Reconstruction, Time-Frequency and Multirate Signal Processing, and Nonlinear and Fractal Signal Processing. The first of these is of special importance for sensor applications of DSP, since it is directly related to the fundamental models of sensor functions (as explained in Section 4). The hybrid, or eclectic, structure of the DSP field of knowledge is to a certain extent the result of its historical development (as shown in Section 2). The role of DSP in sensorics is strictly related to the principal function of sensors, i.e., the conversion of the physical nature of signals, aimed at detection, classification, or measurement. Since the latter function is the most sophisticated (logically absorbing detection and classification), it will be addressed in the majority of the examples of sensor applications of DSP given in this paper. The measurement-related terminology will be used in accordance with the International Vocabulary of Basic and General Terms in Metrology [2]; selected definitions are provided in a glossary at the end of the paper. It should be noted, however, that this terminology is generalized, without special comment, to multidimensional quantities. This applies in particular to the term measurand, which is understood as the name for a generalized quantity to be measured, applied not only to a scalar quantity but also to a vector of scalar quantities or their functional relationship. By using this term, one may define the unique objective of measurement as the estimation of a measurand, resulting in its approximate value and the uncertainty of that estimation [3].
For the sake of linguistic convenience, the term measurand is applied also to mathematical models of physical quantities to be measured if this practice does not imply ambiguity. The paper is structured as follows. After presentation of the scope of issues treated in the domain of DSP and their

Encyclopedia of Sensors Edited by C. A. Grimes, E. C. Dickey, and M. V. Pishko Volume 9: Pages (135-163)

historical origin (Section 2), a systematic methodology for applying DSP in sensorics is outlined in Sections 3 and 4. It includes the principles of DSP-oriented mathematical modeling of sensors and a classification of the problems to be solved by means of DSP. Two fundamental problems are introduced and categorized, viz. the problem of measurand reconstruction and the problem of calibration of measurement channels. The results of their classification are considered in Section 5 (static problems), Section 6 (dynamic problems), and Section 7 (mixed problems). The applications of DSP in uncertainty evaluation and in the execution of auxiliary operations related to sensor functioning are reviewed in Section 8. Some recommendations for the future development of sensor applications of DSP are given in Section 9. A glossary, following that section, contains not only definitions of basic terms but also explanations of the basic acronyms used in the paper. The applications of DSP are illustrated in this article with numerous examples related to broadly understood sensors, including sensor arrays and sometimes also measuring devices, instruments, or systems that are natural candidates for miniaturization in the near future. By simple extrapolation of the development of sensorics to date, one may predict that what is today a measuring system will tomorrow be implemented as an intelligent or smart sensor, and will simply be called a sensor within a decade. To make this article informative and accessible for readers with various backgrounds, it has been composed of passages differing in the usage of expert language and mathematics. They are arranged in such a way that less prepared readers may get a quite complete (qualitative) image of the topic even if they skip over the more advanced passages, distinctive by the presence of mathematical formulas.
On the other hand, readers interested in a more profound and more practical insight into the methodology of DSP applications in sensorics are encouraged to follow the mathematical considerations as well.

2. ORIGINS AND SCOPE OF DIGITAL SIGNAL PROCESSING (DSP)

For the majority of engineers, DSP started about 25 years ago with the appearance of the first digital signal processors, i.e., specialized microprocessors with instruction sets optimized for the rapid execution of the most typical DSP operations. It was after two unsuccessful attempts by Intel and AMI to introduce rudimentary versions of such processors on the market (Intel 2920 in 1978 and AMI S2811 in 1979) that NEC initiated in 1980 the production of the μPD7710, the world's first complete digital signal processor. Three years later, however, the leadership was taken over by Texas Instruments with the release of the TMS32010. In fact, broadly understood DSP is as old as the first statistical procedures introduced by Blaise Pascal and Christiaan Huygens, the numerical recipes developed by Isaac Newton, and the infinite sine-cosine series invented by Jean-Baptiste Joseph Fourier. There are three main sources of scientific knowledge that have contributed to the development of DSP as a mature field: numerical analysis, statistics, and signals and systems theory. Their history will be briefly reviewed in this section.


2.1. Heritage of Numerical Analysis and Computing

An algorithm is a well-defined procedure, i.e., a finite set of instructions, designed for accomplishing some task, in particular a computational task. The word algorithm was derived from the Latin algorismus, which came from the name of the Persian mathematician Abu Ja'far Mohammed ibn Musa al-Khwarizmi (ca. 780-ca. 845), the author of the book Rules of Restoration and Reduction, which introduced algebra to the West. Although the ideas of certain numerical algorithms may be traced back to Antiquity (Euclid's algorithm, Heron's algorithm), the modern understanding of numerical algorithms appeared in the seventeenth century, in parallel with the development of the empirical sciences. Isaac Newton (1642-1727), building on the work of many earlier mathematicians, such as his teacher Isaac Barrow (1630-1677), developed numerical tools to push forward the study of nature. His work contained, in particular, a wealth of new numerical inventions that may be considered foundations of modern numerical analysis, to mention only the most conspicuous: Newton's method for solving nonlinear algebraic equations. The next major contributor to the development of numerical analysis was Leonhard Euler (1707-1783), who proposed, in his work Institutionum Calculi Integralis (1768), a simple numerical method for solving ordinary differential equations, called today Euler's method. Fundamental progress in approximation techniques should be attributed to Carl F. Gauss (1777-1855), who showed, in the second volume of his book Theoria motus corporum coelestium in sectionibus conicis solem ambientium (1809), how to estimate a planet's orbit by means of a least-squares method of approximation, the method underlying many contemporary DSP algorithms as well. In fact, there is evidence that he had used this method since 1801, hence his priority over Adrien-Marie Legendre (1752-1833), who published his version of it in 1805.
The repertoire of numerical methods for solving approximation problems was significantly enhanced by the idea of spline approximation, put forward as early as the end of the nineteenth century, but formalized and systematically studied by Isaac Schoenberg (1903-1990) only in 1946. The first computer-oriented algorithm was written in 1842 by Augusta Ada Byron (1815-1852) for the analytical engine designed by Charles Babbage (1791-1871); however, since that engine was never completed, her algorithm was never implemented on it. The early understanding of algorithms suffered from a lack of mathematical rigor. The problem of the strict interpretation of the definition of an algorithm, referring to a "well-defined procedure," was solved only in 1936, when Alan M. Turing (1912-1954) proposed an abstract model of a computer. Since then, the formal criterion for an algorithm has been that it is a procedure implementable on a completely specified Turing machine or in one of the equivalent formalisms.
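For the programming-minded reader, Newton's method mentioned above can be sketched in a few lines; the target equation, starting point, and tolerance below are illustrative assumptions, not taken from the source:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0: iterate x <- x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:  # converged: the last correction is negligible
            break
    return x

# Illustrative use: solve x^2 - 2 = 0, i.e., approximate sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Each iteration roughly doubles the number of correct digits near the root, which is why the method remains a workhorse of numerical analysis.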

2.2. Heritage of Statistical Analysis

The omnipresence of uncertainty in the world was realized by the thinkers of Antiquity, but the formal means for quantitatively dealing with it appeared only in the seventeenth century. The key concept of mathematical expectation, and the first algorithm for its estimation on the basis of a repeated experiment, is usually attributed to Christiaan Huygens (1629-1695). His discovery, published in 1657, fundamentally changed the scientific approach to uncertainty in experimentation, including measurement. The next step, equally important in this respect, was taken a hundred years later by Thomas Bayes (1702-1761), who introduced the idea of the a priori probability distribution and proposed a rule of reasoning known today as the Bayes theorem. It enables one to assess the probability of an event on the basis of uncertain data representative of that event and some a priori knowledge of that event. The latter may be not only uncertain but also subjective. Therefore, the fathers of contemporary statistics, Karl Pearson (1857-1936), Ronald A. Fisher (1890-1962), Jerzy Neyman (1894-1981), Egon S. Pearson (1895-1980), and Abraham Wald (1902-1950), avoided the use of a priori information when developing their methods of statistical inference. However, after years of neglect, the Bayesian approach has recently become a subject of increased interest in the domain of measurement data processing, due to the re-appreciation of a priori knowledge in solving ill-conditioned numerical problems, and due to the widespread availability of the high computing power necessary for making proper use of such information in the algorithms for solving those problems. When working on mathematical methods for studying the processes of heredity and evolution, Karl Pearson developed around 1900 the method of regression analysis and the chi-square test of statistical significance. His method of parameter estimation was later improved by Ronald A. Fisher and is known today as the method of maximum likelihood. On the other hand, Andrei A. Markov (1856-1922) studied time sequences of random variables, called today Markov chains, in which the state of a future variable is determined by the states of preceding variables; his work launched the theory of stochastic processes, further developed by Norbert Wiener (1894-1964) and Andrei N. Kolmogorov (1903-1987). At this point the statistical tradition of DSP met the signal tradition of DSP.
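The Bayes theorem described above reduces to a one-line computation; the following sketch uses invented numbers (a 1% prior and hypothetical likelihoods), purely to show how an a priori probability is updated by uncertain evidence:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes theorem: P(H|E) = P(E|H) P(H) / P(E),
    where P(E) = P(E|H) P(H) + P(E|~H) P(~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# An event has a priori probability 0.01; the observed evidence is
# 20 times more likely when the event occurs than when it does not.
p = posterior(prior=0.01, p_e_given_h=0.20, p_e_given_not_h=0.01)
```

Even strong evidence raises a 1% prior only to about 17% here, which illustrates why the a priori knowledge, subjective or not, matters so much in Bayesian inference.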

2.3. Heritage of Signal and Systems Theory

The mathematical basis for the analysis of signals in the frequency domain has its origins in the works of Isaac Newton, who discovered that sunlight passing through a glass prism is expanded into a band of many colors. He also introduced, in 1671, the word spectrum as a scientific term to describe this band of colors, and presented the first mathematical treatment of the periodicity of wave motion. The solution to the wave equation for the vibrating musical string was developed by Daniel Bernoulli (1700-1782) in 1738, and it had the form of an elementary trigonometric series. Leonhard Euler demonstrated in 1755 a method for computing the coefficients of that series. Jean-Baptiste J. Fourier (1768-1830), in his memoir On the Propagation of Heat in Solid Bodies (1807) and his thesis Analytical Theory of Heat (1822), extended this idea to an infinite summation of sine and cosine terms, i.e., to what is called today the Fourier series.

The next important step in the spectral analysis of signals was made by Arthur Schuster (1851-1934), who introduced the concept of the periodogram in 1898. Thirty years later, George U. Yule (1871-1951) proposed an alternative method for the characterization of random data, based on the use of linear regression analysis for modeling a time series in order to find one or two periodicities in the data. Thus, G. U. Yule devised the basis of what has become known as the parametric approach to spectral analysis: characterizing the measurement data as the output of some time-series model. In 1931, Gilbert T. Walker (1868-1958) used Yule's technique to investigate a damped sinusoidal time series. Since then, the normal equations arising out of the least-squares analysis of spectra have been called the Yule-Walker equations. In 1930, Norbert Wiener (1894-1964) published his classic paper Generalized Harmonic Analysis, providing statistical foundations for the treatment of random processes. In this paper, he gave precise statistical definitions of the autocorrelation function and the power spectral density for stationary random processes. These two functions of a random process were shown to be related via a continuous Fourier transform, which is the basis of what is known today as the Wiener-Khintchine theorem. The use of the Fourier transform, rather than the Fourier series of traditional harmonic analysis, enabled Wiener to define spectra in terms of a continuum of frequencies, rather than as discrete harmonic frequencies. In 1948, he introduced the new area of cybernetics by publishing the book Cybernetics, or Control and Communication in the Animal and the Machine, in which the ideas of signals and their processing by systems got their contemporary form. John W. Tukey (1915-2000), who introduced modern techniques for the estimation of the spectra of time series, is best known as the inventor of the Fast Fourier Transform (FFT) algorithm, which he published, together with James W. Cooley (b. 1926), in Mathematics of Computation in 1965. While there is evidence of contributions by many researchers to the development of the FFT, this concise paper focused the attention of the digital processing community on the practicality of efficient Fourier transform computation. The FFT, perhaps more than any other contribution, has led to an increased usage of digital spectral analysis as a signal processing tool. An alternative to the Fourier analysis of signals appeared only in the early 1980s, when Jean Morlet discovered a new way to represent geophysical signals using wavelets.
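The transform whose fast computation Cooley and Tukey addressed can be written directly from its definition; the following stdlib-only sketch is an O(N²) illustration (the FFT yields the same result in O(N log N)), with the test signal and its parameters chosen purely for demonstration:

```python
import cmath
import math

def dft(x):
    """Direct evaluation of X[k] = sum_n x[n] exp(-2*pi*i*k*n/N)."""
    n_samples = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                for n in range(n_samples))
            for k in range(n_samples)]

# A unit-amplitude sinusoid with exactly 3 cycles over N = 32 samples
# produces a spectral peak at bin k = 3 with magnitude N/2 = 16.
N = 32
x = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
spectrum = [abs(c) for c in dft(x)]
```

The isolated peak at bin 3 (and its mirror at bin N - 3) is exactly the kind of periodicity detection that Schuster's periodogram pursued by hand a century earlier.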

2.4. Basic Models of DSP and Their Generalizations

An elementary digital signal is a detectable physical quantity (such as a voltage or a light intensity) by which information can be transmitted. The requirement of detectability means that two states of the quantity may be distinguished, its absence and its presence, and two different messages may be attributed to those states. Such an elementary signal is called a binary signal. A set of m binary signals enables one to transmit M = 2^m different messages; a time sequence of N such sets makes possible the transmission of M^N different messages. This is the kind of digital signal that traditional DSP is focused on. It may be mathematically modeled using a vector or a sequence of scalar variables, each assuming M values: 0, 1, 2, ..., M - 1 or 0, q, 2q, ..., (M - 1)q, where q is a real-valued constant called the quantization step. For convenience of expression, such a model is usually identified with the physical signal itself, and this convention will be used throughout this article. Another widely accepted convention, also motivated by convenience, consists in the representation of digital signals by sequences of real-valued variables, with separate consideration of quantization effects if they are important. Discrete signals have been used for communication purposes since the dawn of human civilization, but their full technical potential was discovered and studied only in the twentieth century. It was Claude E. Shannon (1916-2001) who founded, in his paper A Mathematical Theory of Communication published in 1948, the subject of information theory and put forward the idea of transmitting pictures, words, sounds, etc. by sending a stream of 1s and 0s down a wire, i.e., by using binary signals. In engineering practice related to sensors, a digital signal is as a rule a quantized (thus approximate) representation of a non-quantized continuous-time signal. The accuracy of this representation depends mostly on the sampling frequency and on the number of bits used for quantization. The original signal is usually sampled at regular intervals, and all its values in each interval are represented by a discrete constant. The sampling frequency controls the temporal "behavior" of the discrete signal. Shannon's sampling theorem [4], a fundamental theorem of signal processing, states that a sampled signal cannot unambiguously represent signal components with frequencies above half the sampling frequency. Spectral components corresponding to frequencies above this limit can still be observed in the digital signal, but their magnitudes cannot be properly assessed.
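The sampling-theorem limitation just described can be demonstrated numerically. In this sketch (the frequencies are arbitrary illustrative choices), a 7 kHz sinusoid sampled at 10 kHz produces exactly the same sample sequence as a 3 kHz sinusoid, i.e., it is folded onto its alias at fs - f:

```python
import math

fs = 10_000                       # sampling frequency, Hz (illustrative)
f_high, f_alias = 7_000, 3_000    # f_high > fs/2; alias = fs - f_high

high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(100)]
low = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(100)]

# The two sampled sequences are numerically indistinguishable: once
# sampled, the 7 kHz component cannot be told apart from 3 kHz.
max_diff = max(abs(a - b) for a, b in zip(high, low))
```

This is why analog anti-aliasing filters are placed before the analog-to-digital converter in sensor channels: no amount of subsequent DSP can undo the ambiguity.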
The most common operation performed on digital signals is filtering, which consists in some transformation of a number of samples surrounding the current sample of the input and/or output signal. The following are the most important types of digital filters:
• A "linear" filter performs a linear transformation of the input samples. It satisfies the superposition condition: if an input signal is a linear combination of different input signals, then the output signal is the same linear combination of the corresponding individual output signals. Other filters are called "nonlinear."
• A "causal" filter uses only previous samples of the input or output signals. Filters that use future input samples are "non-causal." Adding a delay will transform many non-causal filters into causal ones.
• A "time-invariant" filter has properties that do not vary over time, i.e., the result of its operation does not depend on a time shift of the input signal. Other filters are called time-varying filters.
• "Finite impulse response" (FIR) filters use only the input signal; so-called "infinite impulse response" (IIR) filters use both the input signal and previous samples of the output signal.
Another common DSP operation performed in the time domain is the determination of the autocorrelation function, characterizing the interdependence of the behavior of a digital signal at two time instants. Formally, the autocorrelation function is defined for random processes; the result of DSP is therefore an estimate of the autocorrelation function rather than the function itself, the estimate being obtained on the basis of one realization of a hypothetical random process statistically modeling the digital signal under consideration. The same applies to the cross-correlation function, frequently determined by DSP procedures designed for analyzing the interdependence of two signals. The discrete Fourier transform (DFT) is the most often used tool for the spectral analysis of digital signals. By means of this transform, a sequence of real-valued samples is converted into a sequence of complex numbers, each corresponding to a point on the frequency axis. This sequence, called the spectrum, may be represented as a pair of real-valued sequences: the real and imaginary parts of the spectrum or (alternatively) its magnitude and phase. The most common purpose of using the DFT is to determine which frequency components are present in the input signal and which are missing. The DFT is also the key operation underlying the definition of another important transform of digital signals, viz. the cepstrum. In the generation of the cepstrum, a signal is converted to the frequency domain through the DFT, the logarithm of the spectrum is taken, and the result is converted back to the original domain through the inverse DFT. In the cepstrum, frequency components with smaller magnitudes are thus emphasized, while the ordering of the magnitudes of the frequency components is retained. The cepstrum reveals the structure of the signal, its composition of rapid and slow components, completely invisible in the original domain. Much more sophisticated tools for signal analysis are wavelet transforms.
As opposed to the sine and cosine functions used for Fourier transforms, a wavelet used for constructing such a transform has locality (small support) not only in the frequency domain but also in the time domain. The wavelet transform is thus capable of providing time and frequency information simultaneously, i.e., a time-frequency representation of the signal; it makes possible the identification of a spectral component occurring at any instant of particular interest. The transform may be viewed as an operation of iteratively passing the time-domain signal through various high- and low-pass filters, which filter out either the high-frequency or the low-frequency components of the signal. This procedure is repeated, each time removing from the signal a portion corresponding to certain frequencies. The original signal is thereby decomposed into a collection of signals, each corresponding to a different frequency band. They may be put together and plotted on a three-dimensional graph (amplitude vs. time and frequency) showing which frequency components exist at which time. There is an issue of resolution, related to the so-called "uncertainty principle," which states that we cannot know exactly what frequency exists at what time instant; we can only know what frequency bands exist in what time intervals. Wavelet transforms, in contrast to the Fourier transform, give a variable resolution: higher frequencies are better resolved in time, and lower frequencies are better resolved in frequency. This means that a certain high-frequency component can be located better in time (with a smaller relative error) than a low-frequency component; conversely, a low-frequency component can be located better in frequency than a high-frequency one.

Today's DSP is not confined to the above-characterized mainstream methods of signal analysis in the time and frequency domains. The main generalizations are as follows:
• the extension of the concept of a digital signal, initially understood as a scalar discrete function of time, to vector functions of other scalar variables (e.g., length or wavelength) and to functions of vectors of variables (e.g., time and two space dimensions);
• the extensive use of statistical methods, with particular emphasis on the methods of estimation and inference;
• the processing of multiple signals aimed at inference about the system they are related to;
• the incorporation of practically all chapters of numerical methods, including universal approximators such as neural networks;
• the acceptance of heuristic methods characteristic of artificial intelligence.
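The iterated high-/low-pass filtering view of the wavelet transform described above can be made concrete with its simplest instance, the Haar wavelet. The sketch below (an illustrative toy, not a full multilevel transform) performs one decomposition level, splitting a signal into a half-rate low-frequency approximation and a half-rate high-frequency detail:

```python
import math

def haar_level(x):
    """One Haar decomposition level: pairwise averages act as the
    downsampled low-pass output, pairwise differences as the high-pass."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

# A slowly varying ramp lands almost entirely in the approximation;
# the detail coefficients stay small and constant.
approx, detail = haar_level([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
```

Applying haar_level again to the approximation yields the next, coarser frequency band; the 1/sqrt(2) scaling makes each level energy-preserving, so the decomposition loses no information.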

2.5. Interdisciplinary Nature of DSP

The logical pattern underlying the historical development of DSP was very similar to that of the development of mathematics:
• some practical problems generated a demand for new solutions, which gave origin to a new application-specific chapter of DSP;
• this new chapter was step by step enhanced with additional methods of processing, and these were used for solving practical problems other than the original ones;
• at a certain level of generalization, uniformization, and internal integration, this application-specific chapter of DSP matured to the status of a general-purpose chapter of DSP.
This process of development continues. Consequently, we have today the following general-purpose chapters of DSP:
• signal representation and quantization, including multidimensional sampling, quantization of discrete-time signals, and finite-wordlength effects;
• fast algorithms and structures, including the FFT, fast convolution and filtering, the complexity theory of transforms, and fast matrix computations;
• digital filtering, including linear and nonlinear filter design methods;
• statistical signal processing, including signal detection and classification, spectrum estimation and modeling, noise modeling, and parameter estimation algorithms;
• adaptive filtering, including least-squares and other design methods for adaptive linear filters;
• inverse problems and signal reconstruction, including deconvolution and blind deconvolution methods;
• multirate signal processing and time-frequency methods, including wavelet analysis, filter bank design, time-varying filter banks, and lapped transforms;
• advanced nonlinear signal processing, including chaotic signal processing, nonlinear maps, fractal signals, morphological signal processing, and higher-order spectral analysis.
On the other hand, we have numerous application-specific chapters of DSP. Among them, the following are the most widely recognized: inverse problems of computational tomography, speech processing, telecommunication channel equalization, radar signal processing, digital audio communications, image and video processing, and sensor array processing. The interdisciplinary nature of DSP, reflected in the above-listed names of chapters of that field of knowledge, is also reflected in their location within the structure of the learned societies and international organizations active in this field. Let us illustrate this statement with the example of the Institute of Electrical and Electronics Engineers (IEEE). The general chapters of DSP are mainly covered by the following journals published by IEEE: IEEE Transactions on Signal Processing, IEEE Signal Processing Letters, IEEE Transactions on Neural Networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, and IEEE Transactions on Fuzzy Systems. Papers related to the application-specific chapters of DSP may be found in those journals as well, but many of them are published in other IEEE journals, such as IEEE Transactions on Speech and Audio Processing, IEEE Transactions on Automatic Control, IEEE Transactions on Biomedical Engineering, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Communications, IEEE Communications Letters, IEEE Transactions on Geoscience and Remote Sensing, IEEE Transactions on Image Processing, IEEE Transactions on Instrumentation and Measurement, IEEE Transactions on Medical Imaging, and IEEE Transactions on Robotics and Automation. Being an interdiscipline, DSP is constantly increasing its presence in various domains of engineering.
Within the structure of IEEE, DSP topics are covered not only by the IEEE Signal Processing Society and the IEEE Neural Networks Society, but also, to a lesser extent, by the IEEE Circuits and Systems Society, IEEE Communications Society, IEEE Computer Society, IEEE Control Systems Society, IEEE Engineering in Medicine and Biology Society, IEEE Geoscience and Remote Sensing Society, IEEE Instrumentation and Measurement Society, IEEE Robotics and Automation Society, and the IEEE Sensors Council. At the IEEE International Conferences on Acoustics, Speech, and Signal Processing (ICASSP), ca. 1000 papers are presented each year. Moreover, several other events, workshops, and symposia are organized or sponsored by IEEE, at which DSP issues are addressed in a more specialized context.

3. MATHEMATICAL MODELING FOR DSP APPLICATIONS

This section contains an introduction to the methodology of mathematical modeling of physical objects, which underlies the methodology of sensor applications of DSP; the latter always refers to a mathematical model of a sensor, to a mathematical model of the measurement object the sensor is designed for, or to both. The concepts of model parameters and model structure are discussed, as well as the problems of the accuracy and adequacy of modeling.

3.1. Basic Concepts

The mathematical model of a physical object is its description, by means of numbers, variables, sets, equations, functions, relations, images, etc., which enables one to approximately predict the object's behavior under various conditions. The properties of the object are, by means of abstraction, modeled with physical quantities, which are in fact idealized features of idealized objects. The idealization of an object whose property is modeled with a quantity consists, inter alia, in the isolation of this property from the context of its other properties. The modeled object is a fragment of reality separated from its surroundings by clear boundaries, most frequently discontinuities of mass density. The existence of those boundaries does not exclude interactions between the object and its surroundings. As a rule, an exchange of energy or mass is going on: e.g., the energy of thermal radiation or of electric current is delivered, raw materials are supplied, and/or industrial products are taken away. The boundary points or areas where the exchange takes place are usually called the inputs or outputs of the object. The flow of energy or mass is described by means of quantities such as flux of energy, flux of mass, flux of volume, density of energy, field of flow velocity, or electric field strength. The inputs and outputs are often characterized using a pair of associated quantities (such as force and velocity, or current and voltage) whose product is directly related to the power flowing in or out.
The quantities describing a modeled object are usually classified into four categories:
• input quantities, identified with the causes of physical phenomena in the object, also called stimuli;
• output quantities, identified with the effects of physical phenomena in the object, also called responses;
• influence quantities, which modify the behavior of the modeled object but may be controlled during the operations related to the creation or use of its model;
• disturbances, which also modify the behavior of the modeled object but cannot be controlled during the operations related to the creation or use of its model, and which therefore imply a discrepancy between the responses of the model and of the object to the same stimulus.

The disturbances are usually assigned to the outputs, although in reality they appear at various points of the modeled object. The classification of the quantities into four categories—input quantities, output quantities, influence quantities and disturbances—results from the philosophy of mathematical modeling, which may be characterized in the following, somewhat simplified, way:
• The model reflects only some phenomena in the object or some of its properties, namely those which are important for the potential (intended) applications of the model. It represents them using an equation relating the input quantities (modeling the causes of the phenomena in the object, or the factors accountable for its properties) and the output quantities (modeling the manifestations of those phenomena or properties).
• The phenomena in the object, or those of its properties selected for modeling, are influenced by other phenomena in the object and its environment. Some of them may be controlled (influence quantities), some may not (disturbances).
• The concept of disturbances is used not only for dealing with uncontrollable phenomena, but also to take into account the effects of the limited and uncertain cognizability of the object.

3.2. General Scheme of Model Identification

According to the general scheme of model identification shown in Fig. 1, two fundamental operations should be distinguished in the process of mathematical modeling: the selection of an adequate structure of the model (structural identification) and the estimation of its parameters (parametric identification). The first of them can hardly be algorithmized: the choice of the structure of the model is usually based on some intuitive premises, on prior experience, and on trial-and-error methodology. The second is a subject of sophisticated algorithmization. There are two basic approaches to structural identification: the black-box approach and the white-box approach. The first consists in identification of the input-output relationship exclusively on the basis of input-output data. The second assumes some a priori knowledge of the internal structure of the modeled object, viz.: the list of elements the modeled object is composed of or may be decomposed into; the mathematical models of all those elements, assumed to be verified or validated; the list of links among the elements; and the mathematical models of those links, assumed to be verified or validated.

[Figure 1 flowchart: structural identification of the model → parametric identification of the model → assessment of the model → "Is correction of the model necessary?" → "Is correction of parameters possible?"]
Figure 1. A general scheme of model identification.

Taking into account the nature of the input and output quantities, one may divide the mathematical models of physical objects into: static and dynamic models, time-invariant and time-varying models, and models with lumped and distributed parameters. In dynamic models, at least some quantities depend on time. Models with distributed parameters differ from models with lumped parameters in that at least some of the quantities defining them depend on the space coordinates.
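The four categories of quantities can be illustrated with a minimal static model in Python; the specific response function and all of its coefficients are illustrative assumptions, not taken from the text:

```python
def sensor_response(stimulus, influence=20.0, disturbance=0.0):
    """Toy static model of a sensor-like object.

    stimulus    -- input quantity (the cause), e.g. a concentration
    influence   -- influence quantity (controllable), e.g. temperature in deg C
    disturbance -- uncontrollable disturbance, assigned to the output

    The linear drift of the sensitivity with temperature is an
    illustrative assumption.
    """
    sensitivity = 2.0 * (1.0 + 0.01 * (influence - 20.0))
    return sensitivity * stimulus + disturbance

print(sensor_response(1.0))                                    # nominal conditions
print(sensor_response(1.0, influence=30.0, disturbance=0.05))  # perturbed conditions
```

Note that the disturbance term is added at the output, exactly as in the convention described above, even though in a real object disturbances act at various internal points.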

3.3. Inadequacy and Inaccuracy of the Model

Any mathematical model of a physical object provides only an approximate description of its behavior and properties, because the model structure is always, to a certain degree, inadequate, and the model parameters are always determined inaccurately. The inadequacy of the structure of the model is a natural consequence of the limited cognizability of the modeled object, and is implied by:
• neglecting some factors, important for the phenomena in the object or for its properties, during the choice of the quantities modeling the object (input, output and influence quantities);
• inappropriate specification of those quantities;
• inappropriate choice of the equation modeling the relationships among those quantities.
The inaccuracy of the estimates of the parameters is due to the errors of the method of parameter identification, the errors of the technical implementation of this method, and the errors in the data used for identification.

Let us illustrate the above analysis with a simple example of static modeling of an object with a scalar input x and a scalar output y. Instead of the "true" values x and y, only the error-corrupted values resulting from measurement are known, viz.:

    x̃ = x + Δx   and   ỹ = y + Δy    (1)

where Δx is the error of setting the value of the input quantity, and Δy is the error of measuring the value of the output quantity. Thus, instead of the modeled object, only an image of this object is accessible, which consists of the measurement data available for identification, e.g.:

    {(x̃_n, ỹ_n) | n = 1, 2, ..., N}    (2)

Let us try to model the relationship between x̃ and ỹ by means of a linear equation of the form:

    ỹ = p_0 + p_1 x̃   for x̃ ∈ X ⊂ (−∞, +∞)    (3)

Using, for example, the method of least squares, we may determine some estimates p̂_0 and p̂_1 of the parameters p_0 and p_1 on the basis of the data (2), and obtain the model:

    ŷ = p̂_0 + p̂_1 x̃    (4)

Due to the errors in the data (2), to the errors of computation, to an inadequate choice of the criterion of estimation (the sum of squares of residuals), as well as to the inadequacy of the assumed structure of the model, p̂_0 and p̂_1 are subject to some errors. The impact of those errors on the quality of the identified model may be assessed according to various criteria, depending on the class of potential applications of the model. The most typical options are

the following: the errors of mapping the values of the output quantity corresponding to selected reference values of the input quantity; the errors of mapping the values of the input quantity corresponding to selected reference values of the output quantity; and the errors of the estimates of the parameters. The assessment of the above-mentioned errors is, however, highly problematic in practice because of the lack of a priori information on the adequate structure of the model and on the exact values of its parameters. The error of the model cannot be determined, since instead of the true values only disturbed values of the involved quantities are known. This error may only be estimated using a so-called extended model of the object, M_ext, i.e., a model which is structurally richer and/or more exact than the model to be assessed, M. The extended model differs from M in that it may have more input, output and influence quantities, or more parameters, or in that those parameters have been determined using more accurate data. As a rule, the limit errors characterizing the extended model are assumed to be known. It should be stressed that if the extended model is used for the assessment of the model M, then the latter is referred not to "reality" but to another model. The same applies to the situation when the model M is assessed using the image M_img, i.e., an exact image of the object contained in an additional set of data of the form (2), not identical with the set used for parameter estimation. One has to realize that the assessment of the inadequacy and inaccuracy of the model M does not consist in determination of the true error but in estimation of the limit error. For this reason, the comparison of the model M with the extended model M_ext is more convenient than its comparison with the image M_img. The complete knowledge of the extended model is not required for the assessment of the limit error; on the whole, some of its elements contain sufficient information.
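The identification of the linear model (3) and (4) by least squares, and its assessment against an image of the object (an additional data set of the form (2)), can be sketched in Python. The data-generating object and the noise level are illustrative assumptions:

```python
import random

def fit_line(data):
    """Least-squares estimates (p0_hat, p1_hat) of the parameters of the
    linear model y = p0 + p1*x, computed from error-corrupted data (2)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data)
    sxx = sum((x - mx) ** 2 for x, _ in data)
    p1 = sxy / sxx
    return my - p1 * mx, p1

def limit_error(model, image):
    """Worst-case error of mapping the output values over an independent
    image M_img of the object (a second data set), used here as an
    estimate of the limit error of the identified model."""
    p0, p1 = model
    return max(abs(y - (p0 + p1 * x)) for x, y in image)

random.seed(1)
obj = lambda x: 1.0 + 2.0 * x                      # the (unknown) object
meas = lambda x: obj(x) + random.gauss(0.0, 0.05)  # output error Delta-y
ident = [(k / 10.0, meas(k / 10.0)) for k in range(11)]  # data for identification
image = [(k / 10.0, meas(k / 10.0)) for k in range(11)]  # image M_img

model = fit_line(ident)
print("p0_hat, p1_hat =", model)
print("estimated limit error:", limit_error(model, image))
```

As the text stresses, the reported limit error refers the model to another (data-based) representation of the object, never to reality itself.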

3.4. Epistemological Considerations

There are two principal methodological orientations that differ in the interpretation of the epistemological sense of the procedure of modeling shown in Fig. 1: methodological realism and methodological instrumentalism. For realists, the mathematical model of a physical object is a description which provides true knowledge about this object. Since it contains truth, it may be used as a valid basis for justified statements about reality. Therefore, realists are inclined to speak about identification of the object rather than about identification of its model. Consequently, mathematical modeling is for realists a sequence of operations aimed at determination of the structure of the object and at measurement of its parameters. Realists acknowledge that any model reflects only some phenomena in the object or some of its properties, but stress that this is due to the limitation of the cognitive means rather than to an arbitrary decision of the model designer. Realists acknowledge that the knowledge of the phenomena in the object is always limited and uncertain, but they point out that, by being constantly improved, it may asymptotically approach truth. For instrumentalists, the mathematical model of a physical object is a mathematical formalism which enables one to approximately predict the behavior of the object under various

conditions, in order to use it for various practical purposes. They clearly state that the model reflects only some phenomena in the object or some of its properties, namely those which are important for the potential (intended) applications of the model. They avoid any statements on the relationship between the model and reality. Realists and instrumentalists agree that structural identification can hardly be organized as an algorithmic procedure, but they draw different conclusions from this observation. Realists say that it should be based on the knowledge of the modeled object, of its structure and other features, while instrumentalists claim that, as a rule, the choice of the structure of the model is based rather on some intuitive premises, on prior experience or on trial-and-error methodology. Consequently, realists prefer white-box models, or even dismiss black-box models as non-scientific, while instrumentalists do not discriminate against any type of model, and evaluate models on the basis of the results of their application in practice. Regardless of whether the black-box approach or the white-box approach is used for structural identification, parametric identification must be based on the results of measurements carried out on the modeled object. Nevertheless, realists and instrumentalists understand this operation differently. Realists claim that the parameters of the model—once its structure has been chosen during structural identification—are preferably physical quantities that should be directly measured rather than computed on the basis of measurement results. Instrumentalists are not interested in the nature of the parameters but rather in their numerical influence on the model behavior, characterized, for example, by the sensitivity of the criteria used for model validation to their variations. This difference is consistent with the realists' preference for white-box models and the instrumentalists' preference for black-box models.
For realists, the mathematical model of a physical object is a form of knowledge about this object, containing elements of objective truth. Realists believe that by consecutive improvements the model may approach reality without limit. Thus, they implicitly assume the existence, or at least the possibility, of an ideal model. They accept, of course, the fact that the model of a physical object yields only an approximate prediction of its behavior and properties, but are inclined to explain this fact by the imperfection of our cognitive capabilities. Instrumentalists avoid any statements on the relationship of the model to reality, and put emphasis on its ability to meet requirements concerning its applicability for a predefined purpose. Both realists and instrumentalists accept the fact that the model structure is always, to a certain degree, inadequate, and that the model parameters are always determined inaccurately. Realists are inclined, however, to attribute the inadequacy of the structure of the model to the limited cognition of the modeled object, in particular to the neglect of some factors, important for the phenomena in the object or for its properties, during the choice of the quantities modeling the object, or to the inappropriate specification of those quantities. Instrumentalists focus their explanation on a choice of the structure of the model that is inappropriate from the mathematical point of view. Realists and instrumentalists agree that the estimates of the parameters of the model are uncertain due to the errors


of the method of parameter identification, the errors of the technical implementation of this method, and the errors in the data used for identification. Instrumentalists easily accept the fact that, in practice, the assessment of that uncertainty may be done only by comparison of the model under consideration with an extended model, not with reality. Consequently, they avoid the term model verification (derived from the Latin verus = true). Realists always look for an absolute reference.

3.5. Examples of Mathematical Modeling of Sensors and Sensor Data

The described methodology of mathematical modeling will be illustrated with some examples of modeling sensors and sensor data to be processed by means of DSP algorithms. First, an example of modeling spectrophotometric data will be presented in detail, and then a variety of other examples will follow without details. It should be noted that a model of sensor data is, as a rule, richer than a model of the corresponding sensor, since it characterizes not only the sensor but also its input and influence quantities.

Example of Modeling a Spectrophotometric Sensor. A spectrophotometric sensor (SPS) converts an optical signal into a digital signal: the data representative of the spectrum of the input optical signal. Numerical processing of those data by computing means comprises all the operations necessary for transforming "meaningless" digital codes into a "meaningful" representation of the spectrum, with an uncertainty not exceeding predefined limits. It may also provide some results of spectrum interpretation, e.g., the estimates of the magnitudes and positions of spectral peaks. The SPS to be considered in this example consists of a grating-type dispersive element, followed by an array of photodiodes (a photodetector) converting optical signals into electrical signals, and an analog-to-digital converter. This solution implies a discretization of the wavelength axis, which may be defined by a sequence of wavelength values {λ_n} such that:

    λ_min = λ_1 < λ_2 < ... < λ_{N−1} < λ_N = λ_max    (5)

where N is the number of data provided at the SPS output, i.e., the number of photodiodes in the photodetector. Thus, the average interval between the consecutive wavelength values is:

    Δλ = (λ_max − λ_min)/(N − 1)    (6)
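Under the common simplifying assumption of a uniform grid, the discretization (5) and the average interval (6) can be sketched as follows; the 400–700 nm range and the 1024-diode array size are illustrative assumptions:

```python
def wavelength_axis(lam_min, lam_max, n):
    """Uniform discretization of the wavelength axis satisfying (5);
    returns the grid {lambda_n} and the interval of Eq. (6).
    A real grating yields a slightly non-uniform grid, so this is a sketch."""
    step = (lam_max - lam_min) / (n - 1)
    return [lam_min + k * step for k in range(n)], step

# Hypothetical SPS: a 1024-photodiode array covering the visible range.
grid, avg_interval = wavelength_axis(400.0, 700.0, 1024)
print(len(grid), grid[0], grid[-1], round(avg_interval, 4))
```

For a non-uniform grating, Eq. (6) would still hold as the average of the individual intervals, which is why the uniform grid is a convenient first approximation.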

An adequate mathematical model of the SPS may be identified using a white-box approach, a black-box approach, or a mixed (grey-box) approach. The first of them consists in systematic analysis and modeling of the physical phenomena in the SPS; it may be applied if all the internal signals of the SPS necessary for modeling are accessible for measurement. The second approach requires the measurement accessibility of the input and output quantities only. The advantages of both approaches may be combined in a grey-box approach, which assumes the availability of the input and output quantities—like the black-box approach—and of only some internal parameters of the SPS [5]. Both the


black-box approach and the grey-box approach lead to the conclusion that the approximation power of the Wiener operator (a superposition of a linear integral operator with a nonlinear algebraic operator) is sufficient for adequate modeling of the relationship between the intensity spectrum x(λ) of the input optical signal and the raw data at the output of the SPS:

    ỹ = [ỹ_1 ... ỹ_N]^T    (7)

where ỹ_n is the output of the n-th photodiode corresponding to λ_n, for n = 1, ..., N. It is, therefore, assumed that the vector:

    y = [y_1 ... y_N]^T

output of the n-th photodiode, to a tunable monochromator producing an optical signal whose spectrum may be adequately modeled with x(λ) =
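The Wiener structure postulated above, a linear integral operator followed by a memoryless (algebraic) nonlinearity, can be sketched in discretized form, with the integral replaced by a matrix-vector product over the wavelength grid. The kernel rows and the saturation-type nonlinearity below are illustrative assumptions:

```python
def wiener_model(x, kernel, nonlin):
    """Raw photodiode data from an input spectrum x sampled on the
    wavelength grid: each row of `kernel` approximates the integral
    operator for one photodiode; `nonlin` is the algebraic (memoryless)
    nonlinearity applied to each linear output."""
    linear = [sum(k * xv for k, xv in zip(row, x)) for row in kernel]
    return [nonlin(v) for v in linear]

# Hypothetical 3-diode sensor: each diode averages two neighboring
# spectral samples, and the photodetector saturates softly.
kernel = [[0.5, 0.5, 0.0, 0.0],
          [0.0, 0.5, 0.5, 0.0],
          [0.0, 0.0, 0.5, 0.5]]
y = wiener_model([1.0, 2.0, 3.0, 4.0], kernel, lambda v: v / (1.0 + 0.1 * v))
print(y)
```

Identifying such a model amounts to estimating the kernel rows and the parameters of the nonlinearity, e.g., from responses to a tunable monochromator, as the text describes.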