Constructive Algebra and Systems Theory
Proceedings of the Colloquium, Amsterdam, November 2000
Koninklijke Nederlandse Akademie van Wetenschappen Verhandelingen, Afd. Natuurkunde, Eerste Reeks, deel 53
Edited by Bernard Hanzon and Michiel Hazewinkel
Amsterdam, 2006
© 2006 Royal Netherlands Academy of Arts and Sciences

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher.

P.O. Box 19121, 1000 GC Amsterdam, the Netherlands
T +31 20 551 07 00, F +31 20 620 49 41, E [email protected], www.knaw.nl

ISBN 906984477X

The paper in this publication meets the requirements of ∞ ISO-norm 9706 (1994) for permanence.
Table of Contents

Bernard Hanzon and Michiel Hazewinkel – An introduction to constructive algebra and systems theory — 1
Bruno Buchberger – Computer-assisted proving by the PCS method — 9
Hirokazu Anai and Shinji Hara – A bridge between robust control and computational algebra. A robust control synthesis by a special quantifier elimination — 23
Hans J. Stetter – An introduction to the numerical analysis of multivariate polynomial systems — 35
B. Mourrain – An introduction to algebraic methods for solving polynomial equations — 49
Lorenzo Robbiano – Zero-dimensional ideals or the inestimable value of estimable terms — 95
J.M. Maciejowski – Computational aspects of computer algebra in system-theoretic applications — 115
Franz Winkler – Computer algebra and geometry – some interactions — 127
Bernard Hanzon and Dorina Jibetean – A matrix method for finding the minimum or infimum of a polynomial — 139
Ralf Peeters and Paolo Rapisarda – Solution of polynomial Lyapunov and Sylvester equations — 151
H. Pillai, J. Wood and E. Rogers – Constructive multidimensional systems theory with applications — 167
Isabel Brás and Paula Rocha – A test for state/driving-variable representability of 2D behaviors — 185
Jan C. Willems and Harish K. Pillai – Storage functions for systems described by PDE's — 193
Ulrich Oberst – The constructive solution of linear systems of partial difference and differential equations with constant coefficients — 205
A.H.M. Levelt – Regular singularities and stable lattices — 235
Vladimir P. Gerdt – Involutive methods applied to algebraic and differential systems — 245
P.H.M. Kersten and I.S. Krasil'shchik – The Cartan covering and complete integrability of the KdV–mKdV system — 251
Michel Fliess – Variations sur la notion de contrôlabilité — 267
Paolo Vettori and Sandro Zampieri – Stability and stabilizability of delay–differential systems — 307
S.T. Glad – Using differential algebra to determine the structure of control systems — 323
Giovanni Pistone, Eva Riccomagno and Henry P. Wynn – A note on computational algebra for discrete statistical models — 341
Vladimir I. Elkin – Reduction and categories of nonlinear control systems — 349
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
An introduction to constructive algebra and systems theory ✩
Bernard Hanzon (a) and Michiel Hazewinkel (b)

(a) School of Mathematical Sciences, University College, Cork, Ireland
(b) Centrum voor Wiskunde en Informatica, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands

E-mails: [email protected] (B. Hanzon), [email protected] (M. Hazewinkel)
1. INTRODUCTION

In systems and control theory the concept of a state-space realization of a dynamical system that is given by its input–output behavior plays a central role. An important landmark in the development of realization theory was the construction of a realization algorithm in [13]. This in turn was based on Kronecker's theory of finite-rank Hankel matrices associated with rational functions. Realization theory of linear dynamical systems has proved a very powerful tool. The mere fact of having a matrix representation (using, for example, a companion matrix or one of its many generalizations) has proved extremely useful for formulating problems and finding solutions. One of the larger open problems in this area is to link the powerful tools of computational algebra to some emerging ideas in infinite-dimensional linear analysis, such as the ideas evolving around the concept of open systems (cf., e.g., [15]) and time-dependent operator theory (cf., e.g., [1]). Realization of operators, systems, transfer functions, spectral density functions, probability density functions, forward interest rate curves, etc., leads to more tools and a better environment to work in. A matrix or operator is an abstract entity and as such has relatively little structure. If it appears as the central part of a system, however, it acquires handles, and suddenly there are techniques available which one can explore. Examples are the notions of observability and controllability, which are not natural if you just consider a single operator but which are eminently natural if you think of an operator as something which transforms data in one format into data of another format.

Key words and phrases: Optimization, Polynomials, Constructive algebra, Gröbner basis, Linear algebra, Eigenvalue problems, Linear dynamical systems theory, Polynomial matrices, Multidimensional systems theory

✩ Research partly supported by the NWO project 613304057.
Conversely, system and control theory, which is heavily dependent upon computation and which abounds in numerical algorithms, has a habit of generating challenging problems for computer algebra. So on the one side, as these Proceedings attempt to show, computational algebra is a so-far little-exploited tool in control and systems theory. On the other hand, control and systems theory are a good source of challenging computational algebra problems. It is interesting to note that at the current level of computer algebra, which is still a young science, there are a number of problems in control theory^1 which seem at the moment beyond the reach of present-day computer algebra techniques. So it appears that on both sides of the theme of this conference there is still much work to be done. To give a quick impression of the link between computational algebra and state-space realization theory we now present two examples. The first one uses a minimum of technical notions; in the second one we give a bit more background, which should start to give hints at the kind of algebra that is behind this.

Example 1. Consider the following system of polynomial equations in the unknowns y and z:

g1(y, z) = 2z^3 − 11z^2 + 17z − 6 = 0,
g2(y, z) = 3y^2 + 5yz + 2z − 17 = 0.
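As a quick cross-check, the solutions of this system can also be computed directly with a computer algebra system. The following sketch (assuming the sympy library is available) previews the six roots that the matrix method developed below recovers:

```python
from sympy import Rational, solve, symbols

y, z = symbols('y z')
g1 = 2*z**3 - 11*z**2 + 17*z - 6
g2 = 3*y**2 + 5*y*z + 2*z - 17

# g1 factors as (z - 2)(z - 3)(2z - 1); for each of its three roots,
# g2 is a quadratic in y, so there are six solutions in all.
sols = solve([g1, g2], [y, z], dict=True)
assert len(sols) == 6
assert sorted({s[z] for s in sols}) == [Rational(1, 2), 2, 3]
```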
We would like to find the solutions of this system of two equations in two unknowns (namely y and z). One way to solve this is by considering a corresponding system of difference equations and constructing a commutative state-space realization for the system involved. This leads to a matrix solution of the original system of polynomial equations, from which the scalar solutions can be deduced by solving a common-eigenvector problem. In the present example this program runs as follows. The corresponding system of difference equations is obtained by interpreting the variables y and z as shift operators on two time axes. The corresponding time variables will be denoted by t and s, respectively. Let {v_{t,s}}, (t,s) ∈ N_0^2 (where N_0 := {0, 1, 2, ...}), denote a solution of the associated system of difference equations; then we have

2v_{t,s+3} − 11v_{t,s+2} + 17v_{t,s+1} − 6v_{t,s} = 0,
3v_{t+2,s} + 5v_{t+1,s+1} + 2v_{t,s+1} − 17v_{t,s} = 0,

which can alternatively be written as

v_{t,s+3} = (11/2)v_{t,s+2} − (17/2)v_{t,s+1} + 3v_{t,s},
^1 For instance, problems that have to do with computer vision, where one encounters problems that could be solved if algorithms to construct Gröbner bases were available in the non-commutative case; although such techniques are available in a number of cases, they are not yet available in full generality.
v_{t+2,s} = −(5/3)v_{t+1,s+1} − (2/3)v_{t,s+1} + (17/3)v_{t,s}.
A state-space realization of this system of difference equations is obtained as follows. Let the state vector be defined as

w_{t,s} = (v_{t,s}, v_{t+1,s}, v_{t,s+1}, v_{t+1,s+1}, v_{t,s+2}, v_{t+1,s+2})^T.
(The procedure which leads to this choice of state vector is not made explicit here; it is associated with the concept of a monomial basis, cf. [7, Section 2.2, p. 36].) Then the difference equations can be written in terms of w_{t,s} as

w_{t+1,s} = A_Y^T w_{t,s},    w_{t,s+1} = A_Z^T w_{t,s},
where

(1)
       [ 0     17/3   0     0      0     −2    ]
       [ 1     0      0     0      0     −5    ]
A_Y =  [ 0    −2/3    0     17/3   0     17/3  ]
       [ 0    −5/3    1     0      0     85/6  ]
       [ 0     0      0    −2/3    0     2     ]
       [ 0     0      0    −5/3    1    −55/6  ]

and

(2)
       [ 0   0   0   0     3      0    ]
       [ 0   0   0   0     0      3    ]
A_Z =  [ 1   0   0   0    −17/2   0    ]
       [ 0   1   0   0     0     −17/2 ]
       [ 0   0   1   0     11/2   0    ]
       [ 0   0   0   1     0      11/2 ]
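These matrices can be checked numerically. The following sketch (assuming the numpy library is available; it is our illustration, not part of the original text) verifies that the two shifts commute and that the multi-eigenvalues read off from common eigenvectors indeed solve g1 and g2:

```python
import numpy as np

# The transposed shift matrices A_Y^T and A_Z^T, acting on the state
# w = (v_{t,s}, v_{t+1,s}, v_{t,s+1}, v_{t+1,s+1}, v_{t,s+2}, v_{t+1,s+2}).
AYT = np.array([
    [0,     1,    0,     0,    0,    0],
    [17/3,  0,   -2/3,  -5/3,  0,    0],
    [0,     0,    0,     1,    0,    0],
    [0,     0,    17/3,  0,   -2/3, -5/3],
    [0,     0,    0,     0,    0,    1],
    [-2,   -5,    17/3,  85/6, 2,   -55/6],
])
AZT = np.array([
    [0, 0, 1, 0,     0,    0],
    [0, 0, 0, 1,     0,    0],
    [0, 0, 0, 0,     1,    0],
    [0, 0, 0, 0,     0,    1],
    [3, 0, -17/2, 0, 11/2, 0],
    [0, 3, 0, -17/2, 0,    11/2],
])

# The two shifts commute, as required for w_{t+1,s+1}.
assert np.allclose(AYT @ AZT, AZT @ AYT)

# Common eigenvectors of commuting matrices are, generically,
# eigenvectors of a random linear combination of the two.
rng = np.random.default_rng(1)
_, V = np.linalg.eig(AYT + rng.uniform(0.1, 1.0) * AZT)
for eta in V.T:
    xi1 = (AYT @ eta)[0] / eta[0]   # multi-eigenvalue component for y
    xi2 = (AZT @ eta)[0] / eta[0]   # multi-eigenvalue component for z
    assert abs(2*xi2**3 - 11*xi2**2 + 17*xi2 - 6) < 1e-6   # g1(y, z) = 0
    assert abs(3*xi1**2 + 5*xi1*xi2 + 2*xi2 - 17) < 1e-6   # g2(y, z) = 0
```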
This matrix–vector version can be obtained by repeatedly applying the difference equations. The first equation reads

v_{t+1,s} = v_{t+1,s},

which is trivial. The second equation reads

v_{t+2,s} = (17/3)v_{t,s} − (2/3)v_{t,s+1} − (5/3)v_{t+1,s+1},

which is obtained directly from the original equations. Similarly, the third and fifth equations are trivial and the fourth is obtained directly from the original equations. The sixth equation is the most interesting one. It can be obtained by applying the original equations three times, first the second one, then twice the first one:
v_{t+2,s+2} = −(5/3)v_{t+1,s+3} − (2/3)v_{t,s+3} + (17/3)v_{t,s+2}
            = −(5/3)[(11/2)v_{t+1,s+2} − (17/2)v_{t+1,s+1} + 3v_{t+1,s}]
              − (2/3)[(11/2)v_{t,s+2} − (17/2)v_{t,s+1} + 3v_{t,s}] + (17/3)v_{t,s+2}
            = −(55/6)v_{t+1,s+2} + 2v_{t,s+2} + (85/6)v_{t+1,s+1} + (17/3)v_{t,s+1}
              − 5v_{t+1,s} − 2v_{t,s}.
A similar calculation gives the matrix–vector equation w_{t,s+1} = A_Z^T w_{t,s}.
Note that

w_{t+1,s+1} = A_Y^T w_{t,s+1} = A_Y^T A_Z^T w_{t,s} = A_Z^T w_{t+1,s} = A_Z^T A_Y^T w_{t,s}.
It can actually be seen by simple inspection that A_Y^T A_Z^T = A_Z^T A_Y^T in the present example, so A_Y^T and A_Z^T commute. The solutions of the difference equations can now be written as

w_{t,s} = (A_Y^T)^t (A_Z^T)^s w_{0,0},    v_{t,s} = e_1^T (A_Y^T)^t (A_Z^T)^s w_{0,0}
for all (t, s) ∈ N_0^2. Substituting this in the original system of difference equations we get

e_1^T (2(A_Z^T)^3 − 11(A_Z^T)^2 + 17A_Z^T − 6I) = 0,
e_1^T (3(A_Y^T)^2 + 5A_Y^T A_Z^T + 2A_Z^T − 17I) = 0.
Now if η is a common eigenvector of A_Y^T and A_Z^T with ξ = (ξ_1, ξ_2)^T the corresponding multi-eigenvalue, i.e. A_Y^T η = ξ_1 η and A_Z^T η = ξ_2 η, and e_1^T η ≠ 0, then we obtain by postmultiplication of both equations with η:

e_1^T η (2ξ_2^3 − 11ξ_2^2 + 17ξ_2 − 6) = 0

and

e_1^T η (3ξ_1^2 + 5ξ_1 ξ_2 + 2ξ_2 − 17) = 0;

therefore (ξ_1, ξ_2) is a solution of the original system of polynomial equations! Also, if (ξ_1, ξ_2) is a solution of the original system of polynomial equations, then

v_{t,s} = ξ_1^t ξ_2^s,    (t, s) ∈ N_0^2,
is a solution of the system of difference equations, and the corresponding state vector
w_{t,s} = (ξ_1^t ξ_2^s, ξ_1^{t+1} ξ_2^s, ξ_1^t ξ_2^{s+1}, ξ_1^{t+1} ξ_2^{s+1}, ξ_1^t ξ_2^{s+2}, ξ_1^{t+1} ξ_2^{s+2})^T

is an eigenvector (provided w_{t,s} ≠ 0, which is true at least for (t, s) = (0, 0)) of A_Y^T with eigenvalue ξ_1 and at the same time an eigenvector of A_Z^T with eigenvalue ξ_2. Therefore all solutions of the system of polynomial equations are found as the multi-eigenvalues ξ = (ξ_1, ξ_2)^T corresponding to common eigenvectors η of A_Y^T and A_Z^T.

Example 2. Consider the following system of three polynomial equations. In fact it is a Gröbner basis with respect to the lexicographical ordering with x > y > z:

g1(x, y, z) = 2z^3 − 11z^2 + 17z − 6,
g2(x, y, z) = 3y^2 + 5yz + 2z − 17,
g3(x, y, z) = 6x^4 + 9x^3 yz + 5xyz + 2xz − 2x + 72.
The leading monomials of g1, g2 and g3 are z^3, y^2 and x^4, respectively. It follows from [6, Section 5.3, Theorem 6] that this system of equations has a finite number of solutions over the field of complex numbers. The corresponding monomial basis consists of 24 elements, namely x^α y^β z^γ with α = 0, 1, 2, 3, β = 0, 1, γ = 0, 1, 2. We can also calculate a Gröbner basis based on the degree-reverse-lexicographical (DRL) ordering, again with x > y > z:

d1(x, y, z) = 2z^3 − 11z^2 + 17z − 6,
d2(x, y, z) = 3y^2 + 5yz + 2z − 17,
d3(x, y, z) = 9x^3 yz + 6x^4 + 5xyz + 2xz − 2x + 72,
d4(x, y, z) = 6x^4 y + 10x^4 z − 6x^3 z^2 + 51x^3 z + 2xyz − 2xy + 25xz + 72y + 120z,
d5(x, y, z) = 12x^5 − 165x^4 z − 54x^3 z^2 + 255x^4 + 135x^3 y − 153x^3 z + 10x^2 yz − 6xyz^2 + 54x^3 + 4x^2 z + 6xyz − 85xz^2 − 4x^2 + 75xy + 55xz − 216yz + 89x − 1980z + 3060,
d6(x, y, z) = 6x^4 z^2 − 33x^4 z + 51x^4 + 27x^3 y − 2xz^2 + 15xy + 11xz + 72z^2 − 11x − 396z + 612.
Note that LM(d1) = z^3, LM(d2) = y^2 and LM(d5) = x^5. Therefore, again, we can conclude that the system of equations has a finite number of solutions. However, in this case the monomial basis is slightly more complicated. It consists again of 24 elements, namely

1, x, x^2, x^3, x^4, y, xy, x^2 y, x^3 y; z, xz, x^2 z, x^3 z, x^4 z, yz, xyz, x^2 yz; z^2, xz^2, x^2 z^2, x^3 z^2, yz^2, xyz^2.
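Both bases can be reproduced with a computer algebra system; the sketch below (our illustration, assuming the sympy library) computes Gröbner bases of {g1, g2, g3} in both orderings and confirms that the system is zero-dimensional:

```python
from sympy import groebner, symbols

x, y, z = symbols('x y z')
g1 = 2*z**3 - 11*z**2 + 17*z - 6
g2 = 3*y**2 + 5*y*z + 2*z - 17
g3 = 6*x**4 + 9*x**3*y*z + 5*x*y*z + 2*x*z - 2*x + 72

# Lexicographic and degree-reverse-lexicographic bases, x > y > z.
G_lex = groebner([g1, g2, g3], x, y, z, order='lex')
G_drl = groebner([g1, g2, g3], x, y, z, order='grevlex')

# Finitely many complex solutions in both cases.
assert G_lex.is_zero_dimensional and G_drl.is_zero_dimensional

# Every generator lies in the ideal: it reduces to 0 modulo each basis.
for g in (g1, g2, g3):
    assert G_lex.reduce(g)[1] == 0
    assert G_drl.reduce(g)[1] == 0
```

Comparing the number and leading monomials of the elements of G_lex and G_drl makes the contrast between the lex basis g1, g2, g3 and the DRL polynomials d1, ..., d6 above visible.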
We can again derive a finite-dimensional state-space realization for the associated system of difference equations. The realization will be somewhat different depending on which Gröbner basis is used. The dimension of the state space will be 24 and the three matrices corresponding to the shift along each of the three time axes will have size 24 × 24. The multi-eigenvalues corresponding to a common eigenvector of the three matrices form a (scalar) solution of the system of polynomial equations we started with. And all solutions can be found in this way. The method used here can be applied to all systems of polynomial equations that have a finite number of complex solutions. If the system of polynomial equations is not in Gröbner basis form, it can be put in that form by applying the well-known Buchberger algorithm or one of its many variants. The reader is advised to do some more Gröbner basis calculations with this system of polynomial equations, using a computer algebra system. It will be instructive to see the enormous difference in size between the various Gröbner bases for the same system of equations. From these examples it should be clear that there are strong connections between polynomial system solving and multidimensional systems theory, and especially between the Stetter–Möller (cf. [17]) matrix method for polynomial system solving and realization theory for multidimensional systems.
But there are many other connections between constructive algebra and systems theory as well, as should become clear from this volume: there are contributions involving theorem proving and quantifier elimination (Buchberger, Anai & Hara), solving polynomial equations and algebraic optimization problems by matrix methods, Gröbner basis methods and other constructive algebra methods (Stetter, Mourrain, Robbiano, Maciejowski, Hanzon & Jibetean, Peeters & Rapisarda), the interplay between computer algebra and multidimensional (or nD) systems (Pillai & Wood & Rogers, Brás & Rocha, Willems & Pillai, Oberst), applications of computer algebra to various kinds of modeling (Anai & Hara, Maciejowski, Winkler, Fliess, Vettori & Zampieri, Glad, Pistone & Riccomagno & Wynn), and applications of differential algebra and differential geometry to systems theory (Fliess, Glad, Levelt, Gerdt, Kersten, Elkin). We hope this volume will contribute to the exchange of information and interaction between the various theoretical and applied fields involved, which we think is vital for the developments on both sides.

REFERENCES

[1] Arov D.Z., Kaashoek M.A., Pik D.R. – Optimal time-variant systems and factorizations of operators I, II, Integral Equations Operator Theory 31 (4) (1998) 389–420; J. Operator Theory 43 (2) (2000) 263–294.
[2] Attasi S. – Modelling and recursive estimation for double indexed sequences, in: Mehra R.K., Lainiotis D.G. (Eds.), System Identification: Advances and Case Studies, Academic Press, New York, 1976, pp. 289–348.
[3] Basu S., Pollack R., Roy M.F. – A new algorithm to find a point in every cell defined by a family of polynomials, in: Caviness B.F., Johnson J.R. (Eds.), Quantifier Elimination and Cylindrical Algebraic Decomposition, 1998.
[4] Bochnak J., Coste M., Roy M.F. – Géométrie algébrique réelle, Springer, 1987.
[5] Buchberger B. – Gröbner bases: An algorithmic method in polynomial ideal theory, in: Bose N. (Ed.), Multidimensional Systems Theory, Reidel Publishing Company, Dordrecht, 1985, pp. 184–232.
[6] Cox D., Little J., O'Shea D. – Ideals, Varieties, and Algorithms, 2nd ed., Springer, New York, 1997.
[7] Cox D., Little J., O'Shea D. – Using Algebraic Geometry, Springer, New York, 1998.
[8] Forney G.D. Jr. – Minimal bases of rational vector spaces with applications to multivariable linear systems, SIAM J. Control 13 (1975) 493–520.
[9] Geddes K., Czapor S., Labahn G. – Algorithms for Computer Algebra, Kluwer Academic Publishers, 1992.
[10] Hanzon B., Jibetean D. – Global minimization of a multivariate polynomial using matrix methods, J. Global Optim. 27 (2003) 1–23.
[11] Hanzon B., Maciejowski J.M. – Constructive algebra methods for the L2 problem for stable linear systems, Automatica 32 (12) (1996) 1645–1657.
[12] Hanzon B., Maciejowski J.M., Chou C.T. – Model reduction in H2 using matrix solutions of polynomial equations, Technical Report CUED/F-INFENG/TR314, Cambridge University Engineering Department, 1998.
[13] Ho B.L., Kalman R.E. – Effective construction of linear state-variable models from input/output functions, Regelungstechnik 14 (1966) 545–548.
[14] Kreuzer M., Robbiano L. – Computational Commutative Algebra, Springer, Berlin, 2000.
[15] Livšic M.S., Kravitsky N., Markus A.S., Vinnikov V. – Theory of Commuting Nonselfadjoint Operators, Kluwer Academic Publishers, Dordrecht, 1995.
[16] Möller H. – Systems of algebraic equations solved by means of endomorphisms, in: Cohen, Mora, Moreno (Eds.), Lecture Notes in Comput. Sci., vol. 673, Springer-Verlag, New York, 1993, pp. 43–56.
[17] Möller H., Stetter H.J. – Multivariate polynomial equations with multiple zeros solved by matrix eigenproblems, Numer. Math. 70 (1995) 311–329.
[18] Stetter H.J. – Matrix eigenproblems are at the heart of polynomial systems solving, SIGSAM Bull. 30 (1996) 22–25.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Computer-assisted proving by the PCS method
Bruno Buchberger

Research Institute for Symbolic Computation, Johannes Kepler University, A-4040 Linz, Austria
ABSTRACT

In this chapter we describe Theorema, a system for supporting mathematical proving by computer. The emphasis of the chapter is on showing how, in many situations, proving can be reduced to solving in algebraic domains. We first illustrate this by known techniques, in particular the Gröbner bases technique. Then we go into the details of describing the PCS technique, a new technique that is particularly well suited for doing proofs in elementary analysis and similar areas in which the notions involved typically contain alternating quantifiers in their definitions. We conclude with a general view on the interplay between automated proving, solving, and simplifying.
1. ALGEBRA, ALGORITHMS, PROVING
1.1. An algebraic notion: Gröbner bases

We start with a typical algebraic definition, the definition of the concept of "Gröbner basis" which, in the notation of our Theorema system [2,3], looks like this:

Definition["Gröbner basis", any[F],
  isGröbnerbasis[F] :⇔ ∀_{f ∈ Ideal[F]} f →_F^* 0].

Here, →_F^* denotes the algorithmic reduction (division) relation on polynomials modulo sets F of multivariate polynomials. The notion of Gröbner basis turned out to be important for polynomial ideal theory because many ideal-theoretic problems can be solved "easily" if Gröbner bases generating the ideals involved are known
and, furthermore, many problems in other areas of mathematics can be reduced "easily" to ideal-theoretic problems. For the theory of Gröbner bases, originally introduced in [1], see the textbooks on Gröbner bases, e.g., [7], which contains a list of all other current textbooks on Gröbner bases. Tutorials on the applications of Gröbner bases to fields other than polynomial ideal theory can be found in [4]. For example, it turned out that Gröbner bases have important applications also to fundamental problems in systems theory. A recent overview by J. Wood [9] lists the following problems of systems theory that can be solved by the Gröbner bases method: elimination of variables; computation of minimal left and right annihilators; computation of the controllable part and controllability test; observability test; computation of the transfer matrix and minimal realization; solution of the Cauchy problem for discrete systems; testing for inclusion and addition of behaviors; tests for zero/weak zero/minor primeness; Bezout identity construction; finite-dimensionality test; computation of various sets of poles and zeros; polar decomposition; achievability by regular interconnection; computation of structure indices. Because of the many possible applications, the algorithmic construction of Gröbner bases is of considerable importance. In the sequel, we will take Gröbner bases as an example for illustrating various notions of "constructing" in mathematics. The chapter will then focus on bringing together two levels of looking at mathematics: the object level, in which we develop an algorithmic theory like Gröbner bases theory using various (automated) proof techniques, and the meta-level, in which we apply algorithmic results from the object level, e.g., Gröbner bases theory, for further automation of proving in other areas of mathematics.

1.2. Inconstructive "construction" of Gröbner bases

The problem of "constructing" Gröbner bases can be cast into the form of the following theorem, which in Theorema notation looks as follows:

Theorem["Existence of Gröbner bases",
  ∀_F ∃_G (Ideal[G] = Ideal[F] ∧ isGröbnerbasis[G])].
A first proof of this theorem could be given by just observing that G := Ideal[F ] has the required properties. Trivially, all polynomials f ∈ Ideal[G] = Ideal[Ideal[F ]] = Ideal[F ] can be reduced to zero modulo G by just subtracting f . However, this is like a bad joke! The “construction” of the ideal generated by F is an infinite
process. Thus, this proof of the theorem contains too little information to be algorithmically useful.

1.3. Inconstructive "construction" of finite Gröbner bases

We now move one step further and consider the following, stronger, version of the theorem:

Theorem["Existence of finite Gröbner bases",
  ∀_F ∃_G (isfinite[G] ∧ Ideal[G] = Ideal[F] ∧ isGröbnerbasis[G])].
The proof can be given in the following way: we consider the set

Contour[F] := {LP[f] | f ∈ Ideal[F] ∧ ¬∃_{g ∈ Ideal[F]} (LP[g] ≠ LP[f] ∧ divides[LP[g], LP[f]])}.

(Here, LP[f] denotes the leading power product of the polynomial f.) By Dickson's lemma, this set is finite. Now it is easy to see that

G := {Select[t] | t ∈ Contour[F]}
has the desired properties (where Select is a choice function that, for any power product t ∈ Contour[F], selects a polynomial in Ideal[F] whose leading power product is t). This proof is much more constructive than the previous one and gives the additional information that G is finite. However, although the final outcome of the construction is the finite set G, the construction itself is far from being an algorithm: for constructing Contour[F], one still has to consider the infinitely many polynomials f in the ideal generated by F and, for each such f, one has to check whether any of the infinitely many g in the same ideal satisfies a certain (algorithmic) condition. Also, Select is not an algorithmic function.

1.4. Algorithmic construction of finite Gröbner bases

The algorithmic version of the problem can be formulated as follows:

Theorem["Algorithmic construction of finite Gröbner bases",
  ∃_GB (isalgorithmic[GB] ∧ ∀_F (isfinite[GB[F]] ∧ Ideal[GB[F]] = Ideal[F] ∧ isGröbnerbasis[GB[F]]))].
It turns out that an algorithm GB can be established relatively easily as soon as the following theorem is proved:
Theorem["Algorithmic characterization of Gröbner bases", any[F],
  isGröbnerbasis[F] :⇔ ∀_{f,g ∈ F} SP[f, g] →_F^* 0].
Here, SP[f, g] is the "S-polynomial of f and g" defined by SP[f, g] = u·f − v·g, where the monomials u and v are chosen in such a way that u·LP[f] = v·LP[g] = LCM[LP[f], LP[g]]. Note that, if F is finite, the right-hand side of the theorem is an algorithm! Hence, by this theorem, the inconstructive condition in the original definition of the notion of Gröbner basis can be replaced by an algorithmic condition! The power of the Gröbner basis method is contained entirely in this theorem and its proof, which was originally given in [1]. The above theorem can now be turned into an algorithm GB for constructing Gröbner bases by considering all S-polynomials of polynomials in F and reducing them to normal forms. If all these reductions yield zero then, by Theorem["Algorithmic characterization of Gröbner bases"], the original F is already a Gröbner basis. Otherwise, one has to adjoin the results of the reductions to the basis and repeat the process until it terminates. (Termination of this process can be shown either by Hilbert's basis theorem or, alternatively, by Dickson's lemma.) For details on the algorithm GB, see the textbooks on Gröbner bases, e.g., [7].

1.5. Reduction of mathematical problems to the construction of Gröbner bases

As a matter of fact, many fundamental problems in various mathematical areas can be reduced to the construction of Gröbner bases and, hence, become algorithmically solvable. In this chapter, our focus is not on this well-known reduction of problems inside various mathematical areas but on the reduction of proving about various mathematical areas to the solution of algebraic problems such as, for example, Gröbner bases construction. Thereby, proving in certain areas of mathematics becomes algorithmic. Looking further into the future, proving about Gröbner bases theory itself may also become algorithmic some day. We are currently working on this.
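The loop just described can be sketched in a few lines. The following naive Python implementation (our illustration, using the sympy library for polynomial arithmetic; the published algorithm GB includes refinements not shown here) adjoins nonzero normal forms of S-polynomials until every S-polynomial reduces to zero:

```python
from sympy import LT, expand, lcm, reduced, symbols

def buchberger(F, gens, order='grevlex'):
    """Naive Buchberger loop: adjoin the nonzero normal forms of
    S-polynomials until all S-polynomials reduce to zero."""
    G = [expand(f) for f in F]
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        lt_i = LT(G[i], *gens, order=order)
        lt_j = LT(G[j], *gens, order=order)
        m = lcm(lt_i, lt_j)
        s = expand(m / lt_i * G[i] - m / lt_j * G[j])   # S-polynomial
        _, r = reduced(s, G, *gens, order=order)        # normal form mod G
        if r != 0:
            pairs += [(len(G), k) for k in range(len(G))]
            G.append(r)
    return G

x, y = symbols('x y')
G = buchberger([x**2 - y, y**2 - x], (x, y))
```

For this small input the single S-polynomial already reduces to zero, so the input is returned unchanged: it is itself a Gröbner basis.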
PROVING TO ALGEBRA: TWO WELLKNOWN METHODS
2.1. Proving Boolean combinations of equalities by the Gröbner bases method The truth of a whole class of formulae can be decided by reducing the decision to the computation of Gröbner bases: This class consists of all universally quantified Boolean combinations of equalities between arithmetical terms with variables ranging over the complex numbers (or any other algebraically closed field with algorithmic field operations). In fact, this class of formulae contains hundreds of interesting geometrical theorems in algebraic formulation using coordinates. Here is an example of such a formula:
Formula["Test", any[x, y],
  "B1"  (x^2 y − 3x = 0 ∨ xy + x + y = 0) ∨
        (x^2 y^2 + 3x^2 = 0 ∨ −2x^2 − 7xy + x^2 y + x^3 y^2 − 2y^2 − 2xy^2 + 2x^2 y = 0)
        ∧ (x^2 − xy + x^2 y − 2y^2 − 2xy^2 = 0)]
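The reduction behind such proofs rests on ideal and radical membership tests. For a much simpler claim, the idea can be sketched as follows (our illustration with the sympy library and the standard Rabinowitsch trick; Theorema's GroebnerBasesProver is of course not literally this code):

```python
from sympy import groebner, symbols

x, y, t = symbols('x y t')

# Toy claim (our example): on every common zero of the hypotheses,
# the conclusion vanishes.  Hypothesis: x - y = 0; conclusion: x**2 - y**2 = 0.
hyps, concl = [x - y], x**2 - y**2

# Sufficient test: the conclusion reduces to 0 modulo a Groebner basis
# of the hypothesis ideal (plain ideal membership).
G = groebner(hyps, x, y, order='lex')
assert G.reduce(concl)[1] == 0

# Complete test over C (radical membership, Rabinowitsch trick):
# concl vanishes on all common zeros of hyps iff 1 lies in the ideal
# generated by hyps together with 1 - t*concl.
G2 = groebner(hyps + [1 - t*concl], x, y, t, order='lex')
assert G2.exprs == [1]
```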
By the following call, Theorema generates the proof of this formula together with explanatory text that, in fact, explains the general reduction method to Gröbner bases computations for the particular input formula:

Prove[Formula["Test"], using → {}, by → GroebnerBasesProver].

The output of this call is the proof text in Appendix A.1, which may well be sufficient for the reader to understand also the general method behind this reduction of proving to the construction of certain Gröbner bases.

2.2. Proving first-order formulae over the reals by cylindrical algebraic decomposition

Another well-known, and rather sophisticated, method for reducing proving to the solution of a certain algebraic problem is Collins' method for proving first-order formulae over the reals by cylindrical algebraic decomposition of R^n. Collins' theorem, on which the method hinges, essentially states that, given a finite set of multivariate polynomials F, R^n decomposes into finitely many "cells" such that, on each of them, each f ∈ F stays sign-invariant. Given F, Collins' algorithm then finds at least one "sample" point in each cell. In an oversimplified presentation, the truth of any first-order predicate formula over the reals can then be decided in the following way:

• First construct sample points in each cell of a sign-invariant decomposition of R^n for the set F of polynomials occurring in the formula.
• Partial formulae of the form (∀x ∈ R) P[x] can then be decided by checking P[α_1] ∧ ··· ∧ P[α_k] for the finitely many sample points α_i.
• Similarly, (∃x ∈ R) P[x] can be decided.

For the details, which are in fact quite involved, see the original paper [6] and the recent collection of articles on Collins' method [5]. Collins' algorithm, like the algorithm for constructing Gröbner bases, is now available in (some of) the mathematical software systems, like Mathematica, and can, hence, be routinely applied for proving.

3. REDUCE PROVING TO ALGEBRA: PCS, A NEW METHOD

3.1. The purpose of the method

We focus now on presenting a new method (the "PCS method") for reducing proving in certain areas of mathematics to the solution of algebraic problems. This method is a heuristic method, which
• aims at automatically generating "natural" proofs that can easily be understood and checked by a human reader,
• is particularly suited for formulae involving concepts defined by formulae containing "alternating quantifiers" (i.e. alternations of the quantifiers ∀ and ∃), as is typically the case in proofs in analysis, and
• allows the reduction of higher-order formulae (i.e. formulae containing variables for functions and predicates) to solving first-order formulae, in particular quantifier-free first-order formulae (which can then be solved, for example, by Collins' method).
The method is a heuristic method in the sense that we are not (yet) able to give a completeness result for it. As a practical result we observed, however, that most of the typical propositions in elementary analysis textbooks (which are still considered to present quite a challenge for automated theorem proving) can be proved automatically by the method and that, actually, these proofs are found with very little search in the search space. Applications of the approach to areas of mathematics other than analysis, e.g., to areas involving set theory, have been investigated; see [8].

3.2. An example

The PCS method is best explained by an example. Consider, for example, the usual definition of "sequence f has limit a", which in Theorema notation looks as follows:

Definition["limit", any[f, a],
  limit[f, a] :⇔ ∀_{ε>0} ∃_N ∀_{n≥N} |f[n] − a| < ε].
Note that 'f' is a function variable, i.e. a higher-order variable, as can be seen from its occurrence in the term 'f[n]'. Thus, propositions involving this notion cannot be treated by first-order methods like Collins' method. A typical theorem on the notion 'limit' is, for example,

Proposition["limit of sum", any[f, a, g, b],
  limit[f, a] ∧ limit[g, b] ⇒ limit[f + g, a + b]].

Of course, this proposition cannot be proved without further knowledge on the notions occurring in the proposition and the definition of limit as, for example, the notions '…

… 0 by assumption (8)) and ε1* = ε0 − δ0*. Finally, the solving term

max[N0[δ0*], N1[ε0 − δ0*]]
for N2* is produced. Note that this term contains much more constructive information than is usually given in proofs presented in textbooks of elementary analysis: it tells us that, if we know a function N0 that gives the index bound for f in dependence on the distance between sequence values and limit, and similarly a function N1 for g, then we can explicitly compute the index bound for the sum sequence f + g: if we want to be sure that (f + g)[n] stays closer to a + b than a given ε0 > 0, take any δ0* between 0 and ε0 and take n > max[N0[δ0*], N1[ε0 − δ0*]]. In case N0 and N1 are algorithms, this procedure is an algorithm. In other words, the above proof does not only prove the theorem but automatically synthesizes an algorithm for the index bound of the sum of sequences.
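Since the solving term is executable whenever N0 and N1 are, it can be run directly. A small sketch with hypothetical sequences f[n] = 1/n and g[n] = 1/n^2 (our choice of example, with hand-derived bounds N0 and N1):

```python
import math

def sum_limit_bound(N0, N1):
    """Index bound for f + g synthesized by the proof: for eps0 > 0,
    pick any 0 < delta0 < eps0 and take max[N0[delta0], N1[eps0 - delta0]]."""
    def N2(eps0, delta0=None):
        delta0 = eps0 / 2 if delta0 is None else delta0
        return max(N0(delta0), N1(eps0 - delta0))
    return N2

# Hypothetical instances: f[n] = 1/n -> 0 with |1/n| < d for n > 1/d,
# and g[n] = 1/n**2 -> 0 with |1/n**2| < d for n > 1/sqrt(d).
N0 = lambda d: math.ceil(1 / d)
N1 = lambda d: math.ceil(1 / math.sqrt(d))
N2 = sum_limit_bound(N0, N1)

eps0 = 0.01
n = N2(eps0) + 1
assert abs((1 / n + 1 / n**2) - 0) < eps0
```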
3.4. A general view

The PCS method heavily interrelates proving, solving, and simplifying (computing). In fact, we believe that the interrelation of proving, solving, and simplifying is fundamental for the future of algorithmic mathematics. Proving, solving, and simplifying seem to be the three basic mathematical activities that, roughly, correspond to how the free variables in formulae are treated. An algorithm A is a prover for the theory T iff, for all formulae F and knowledge bases K, if

A[F, K] = “proved”
then
T ∪ K ⊨ (∀x) F
(where x are the free variables in F). (A complete prover would have to produce “proved” iff T ∪ K ⊨ (∀x) F.) An algorithm A is a solver for the theory T iff, for all F and K, if

T ∪ K ⊨ (∃x) F
then
T ∪ K ⊨ (∀y) A[F, K] ◦ F
(where x are the free variables in F and y are the free variables in the substitution A[F, K], which has the form x → S, where S is a term). An algorithm A is a simplifier for the theory T iff, for all F and K,

T ∪ K ⊨ (∀x) (A[F, K] ⇔ F)
(and A[F, K] “is simpler than” F )
(where x are the free variables in F). Accordingly, Theorema is a (growing) library of
• elementary provers, solvers, and simplifiers for various theories, and
• reduction methods that reduce proving, solving, and simplifying in various theories to proving, solving, and simplifying in other theories,
together with tools for producing explanatory text and nice syntax and for managing mathematical knowledge bases. Theorema is programmed in Mathematica and also heavily uses the Mathematica front end. Hence, Theorema runs on all platforms on which Mathematica is available. In the future, the interplay between proving, solving, and simplifying will be made the main structuring design feature of Theorema. As a consequence, we will give the user more explicit control over this interplay.

APPENDIX A

A.1. A proof by the Gröbner bases method

The theorem is proved by the Gröbner bases method. The formula in the scope of the universal quantifier is transformed into an equivalent formula that is a conjunction of disjunctions of equalities and negated equalities. The universal quantifier can then be distributed over the individual parts of the conjunction. By this we obtain:
Independent proof problems:

(Formula (Test): B1.1)    ∀_{x,y} ( (x² − xy + x²y − 2y² − 2xy² = 0) ∨ (−3x + x²y ≠ 0) ∨ (x + y + xy ≠ 0) ).
(Formula (Test): B1.2)    ∀_{x,y} ( (3x + x²y = 0) ∨ (−2x² − 7xy + x²y + x³y − 2y² − 2xy² + 2x²y² = 0) ∨ (−3x + x²y ≠ 0) ∨ (x + y + xy ≠ 0) ).
We now prove the above individual problems separately.

Proof of (Formula (Test): B1.1). . . . (Here comes the proof of the first partial problem. We do not show it here because it is similar to and, in fact, simpler than the proof of the second partial problem, which we show in all detail.) □

Proof of (Formula (Test): B1.2). This proof problem has the following structure:

(Formula (Test): B1.2.structure)    ∀_{x,y} ( Poly[1] ≠ 0 ∨ Poly[2] ≠ 0 ∨ Poly[3] = 0 ∨ Poly[4] = 0 ),
where Poly[1] = −3x + x²y, Poly[2] = x + y + xy, Poly[3] = 3x + x²y, Poly[4] = −2x² − 7xy + x²y + x³y − 2y² − 2xy² + 2x²y².
(Formula (Test): B1.2.structure) is equivalent to

(Formula (Test): B1.2.implication)    ∀_{x,y} ( Poly[1] = 0 ∧ Poly[2] = 0 ⇒ Poly[3] = 0 ∨ Poly[4] = 0 ).
(Formula (Test): B1.2.implication) is equivalent to

(Formula (Test): B1.2.not-exists)    ¬ ∃_{x,y} ( Poly[1] = 0 ∧ Poly[2] = 0 ∧ Poly[3] ≠ 0 ∧ Poly[4] ≠ 0 ).
By introducing the slack variable(s) {ξ1, ξ2}, (Formula (Test): B1.2.not-exists) is transformed into the equivalent formula

(Formula (Test): B1.2.not-exists-slack)    ¬ ∃_{x,y,ξ1,ξ2} ( Poly[1] = 0 ∧ Poly[2] = 0 ∧ (−1 + ξ1 Poly[3] = 0) ∧ (−1 + ξ2 Poly[4] = 0) ).
Hence, we see that the proof problem is transformed into the question whether or not a system of polynomial equations has a solution. This question can be answered by checking whether or not the (reduced) Gröbner basis of
Poly[1], Poly[2], −1 + ξ1 Poly[3], −1 + ξ2 Poly[4]
is exactly {1}. Hence, we compute the Gröbner basis for the following polynomial list:

{ −1 + 3xξ1 + x²yξ1,
  −1 − 2x²ξ2 − 7xyξ2 + x²yξ2 + x³yξ2 − 2y²ξ2 − 2xy²ξ2 + 2x²y²ξ2,
  −3x + x²y,
  x + y + xy }.
The Gröbner basis: {1}.
Hence, (Formula (Test): B1.2) is proved.
□
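The Gröbner-basis check can be reproduced with any computer algebra system; here is a sketch in Python using sympy (the chapter's computation was, of course, done with Theorema's own Gröbner bases prover):

```python
import sympy as sp

x, y, xi1, xi2 = sp.symbols('x y xi1 xi2')

# Polynomial system from (Formula (Test): B1.2.not-exists-slack):
# Poly[1] = Poly[2] = 0 together with slack equations forcing
# Poly[3] != 0 and Poly[4] != 0 (Rabinowitsch trick).
poly1 = -3*x + x**2*y
poly2 = x + y + x*y
poly3 = 3*x + x**2*y
poly4 = (-2*x**2 - 7*x*y + x**2*y + x**3*y
         - 2*y**2 - 2*x*y**2 + 2*x**2*y**2)

gb = sp.groebner([poly1, poly2, -1 + xi1*poly3, -1 + xi2*poly4],
                 x, y, xi1, xi2, order='lex')

# Basis {1} means the slack system has no solution (even over the
# complex numbers), which proves the disjunction (Formula (Test): B1.2).
assert gb.exprs == [1]
```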
Since all of the individual subtheorems are proved, the original formula is proved.

A.2. A proof by the PCS method

Prove:

(Proposition (limit of sum))    ∀_{f,a,g,b} ( limit[f, a] ∧ limit[g, b] ⇒ limit[f + g, a + b] ),
under the assumptions:

(Definition (limit:))    ∀_{f,a} ( limit[f, a] ⇔ ∀_{ε>0} ∃_N ∀_{n≥N} (|f[n] − a| < ε) ),

(Definition (+:))    ∀_{f,g,x} ( (f + g)[x] = f[x] + g[x] ),

(Lemma (+))    ∀_{x,y,a,b,δ,ε} ( |(x + y) − (a + b)| < δ + ε ⇐ (|x − a| < δ ∧ |y − b| < ε) ),

(Lemma (max))    ∀_{m,M1,M2} ( m ≥ max[M1, M2] ⇒ m ≥ M1 ∧ m ≥ M2 ).
We assume (1)
limit[f0 , a0 ] ∧ limit[g0 , b0 ],
and show (2)
limit[f0 + g0 , a0 + b0 ].
Formula (1.1), by (Definition (limit:)), implies:

(3)    ∀_{ε>0} ∃_N ∀_{n≥N} (|f0[n] − a0| < ε).
By (3), we can take an appropriate Skolem function N0 such that

(4)    ∀_{ε>0} ∀_{n≥N0[ε]} (|f0[n] − a0| < ε).
Formula (1.2), by (Definition (limit:)), implies:

(5)    ∀_{ε>0} ∃_N ∀_{n≥N} (|g0[n] − b0| < ε).
By (5), we can take an appropriate Skolem function N1 such that

(6)    ∀_{ε>0} ∀_{n≥N1[ε]} (|g0[n] − b0| < ε).
Formula (2), using (Definition (limit:)), is implied by:

(7)    ∀_{ε>0} ∃_N ∀_{n≥N} (|(f0 + g0)[n] − (a0 + b0)| < ε).
We assume

(8)    ε0 > 0,

and show

(9)    ∃_N ∀_{n≥N} (|(f0 + g0)[n] − (a0 + b0)| < ε0).
We have to find N2* such that

(10)    ∀_n ( n ≥ N2* ⇒ |(f0 + g0)[n] − (a0 + b0)| < ε0 ).
Formula (10), using (Definition (+:)), is implied by:

(11)    ∀_n ( n ≥ N2* ⇒ |(f0[n] + g0[n]) − (a0 + b0)| < ε0 ).
Formula (11), using (Lemma (+)), is implied by:

(12)    ∃_{δ,ε : δ+ε=ε0} ∀_n ( n ≥ N2* ⇒ |f0[n] − a0| < δ ∧ |g0[n] − b0| < ε ).
We have to find δ0*, ε1*, and N2* such that

(13)    (δ0* + ε1* = ε0) ∧ ∀_n ( n ≥ N2* ⇒ |f0[n] − a0| < δ0* ∧ |g0[n] − b0| < ε1* ).
Formula (13), using (6), is implied by:

(δ0* + ε1* = ε0) ∧ ∀_n ( n ≥ N2* ⇒ |f0[n] − a0| < δ0* ∧ ε1* > 0 ∧ n ≥ N1[ε1*] ),
which, using (4), is implied by:

(δ0* + ε1* = ε0) ∧ ∀_n ( n ≥ N2* ⇒ δ0* > 0 ∧ n ≥ N0[δ0*] ∧ ε1* > 0 ∧ n ≥ N1[ε1*] ),
which, using (Lemma (max)), is implied by:

(14)    (δ0* + ε1* = ε0) ∧ ∀_n ( n ≥ N2* ⇒ δ0* > 0 ∧ ε1* > 0 ∧ n ≥ max[N0[δ0*], N1[ε1*]] ).
Formula (14) is implied by

(15)    (δ0* + ε1* = ε0) ∧ δ0* > 0 ∧ ε1* > 0 ∧ ∀_n ( n ≥ N2* ⇒ n ≥ max[N0[δ0*], N1[ε1*]] ).
Partially solving it, formula (15) is implied by

(16)    (δ0* + ε1* = ε0) ∧ δ0* > 0 ∧ ε1* > 0 ∧ N2* = max[N0[δ0*], N1[ε1*]].
Now,

(δ0* + ε1* = ε0) ∧ δ0* > 0 ∧ ε1* > 0

can be solved for δ0* and ε1* by a call to Collins' cad-method, yielding the solution

0 < δ0* < ε0,    ε1* ← ε0 − δ0*.
Let us take

N2* ← max[N0[δ0*], N1[ε0 − δ0*]].
Formula (16) is solved. Hence, we are done.
REFERENCES

[1] Buchberger B. – An algorithmic criterion for the solvability of algebraic systems of equations, Aequationes Mathematicae 4/3 (1970) 374–383. (English translation in: [4], pp. 535–545.) This is the journal publication of the PhD thesis: B. Buchberger, On Finding a Vector Space Basis of the Residue Class Ring Modulo a Zero Dimensional Polynomial Ideal (German), University of Innsbruck, Austria, 1965.
[2] Buchberger B., Dupre C., Jebelean T., Kriftner F., Nakagawa K., Vasaru D., Windsteiger W. – The Theorema project: A progress report, in: Kerber M., Kohlhase M. (Eds.), 8th Symposium on the Integration of Symbolic Computation and Mechanized Reasoning (St. Andrews, Scotland, August 6–7), Available from Fachbereich Informatik, Universität des Saarlandes, Germany, 2000, pp. 100–115.
[3] Buchberger B., Jebelean T., Kriftner F., Marin M., Vasaru D. – An overview on the Theorema project, in: Kuechlin W. (Ed.), Proceedings of ISSAC'97, International Symposium on Symbolic and Algebraic Computation (Maui, Hawaii, July 21–23, 1997), ACM Press, 1997, pp. 384–391.
[4] Buchberger B., Winkler F. (Eds.) – Gröbner Bases and Applications, Proc. of the International Conference "33 Years of Gröbner Bases", in: London Math. Soc. Lecture Note Series, vol. 251, Cambridge University Press, 1998, 552 p.
[5] Caviness B.F., Johnson J. (Eds.) – Quantifier Elimination and Cylindrical Algebraic Decomposition, Springer, New York, 1998, 431 p.
[6] Collins G.E. – Quantifier elimination for the elementary theory of real closed fields by cylindrical algebraic decomposition, in: Brakhage H. (Ed.), Automata Theory and Formal Languages, in: Lecture Notes in Comput. Sci., vol. 33, pp. 134–183. Reprinted in [5], pp. 85–121.
[7] Kreuzer M., Robbiano L. – Computational Commutative Algebra, vol. 1, Springer, Berlin, 2000, 321 p.
[8] Windsteiger W. – A Set Theory Prover Within Theorema, PhD Thesis, RISC Institute, University of Linz, Austria, 2001.
[9] Wood J. – Modules and behaviours in nD systems theory, Multidimensional Systems and Signal Processing 11 (2000) 11–48.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
A bridge between robust control and computational algebra. A robust control synthesis by a special quantifier elimination
Hirokazu Anai a,1 and Shinji Hara b

a IT Core Laboratories, Fujitsu Laboratories Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
b Dept. of Information Physics and Computing, The University of Tokyo, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152-8552, Japan
E-mails: [email protected] (H. Anai), [email protected] (S. Hara)
ABSTRACT

Methods based on Quantifier Elimination (QE) have been proposed for multiobjective design and robust control synthesis problems. QE-based methods are well suited to such problems but, in general, suffer from high computational complexity. In this chapter we propose an efficient symbolic method for a parameter space approach based on the sign definite condition (SDC), using a special quantifier elimination by means of the Sturm–Habicht sequence. We then examine its feasibility, in particular for multiobjective control with low-degree fixed-structure controllers, by showing several experimental results.
1. INTRODUCTION

There is a close relation between the history of control theory and the computational methods available at each period. The Routh–Hurwitz stability criterion, for example, provides a way to check stability using only the four fundamental arithmetic operations, without computing the roots of the characteristic polynomial; it dates from a time when finding polynomial roots was difficult for lack of computers. The Nichols diagram provides the frequency characteristics of a closed-loop system from those of the open-loop system graphically, without computation; this, too, is a piece of wisdom from the pre-computer era. Moreover, the control system design method based on the Bode diagram enables us to obtain compensators with the desired characteristics by graphical addition and subtraction alone.

1 Corresponding author.
Methods for solving Riccati equations and various optimal control problems (including nonlinear cases) were the research fields of the modern control theory era. Once the method for solving Riccati equations via the eigenvalues of Hamiltonian matrices had been established, reducing control problems to the existence of a solution of a Riccati equation became one of the main goals of control theory; that is, reducing a control problem to a Riccati equation was considered to answer the problem. The solution of the H∞ control problem is a typical example. Boyd [9] then proposed that control theory should be constructed on the assumption that we utilize the power of computers to the full, and from the 1990s control system design based on numerical optimization methods attracted considerable attention. The typical example is solving robust control system design and analysis problems by numerical optimization: such problems are reduced to convex optimization problems described by LMIs (Linear Matrix Inequalities) and then solved numerically by SDP (SemiDefinite Programming), for which efficient interior-point methods were studied vigorously in the field of mathematical programming. This approach finds global solutions (though numerically) to problems that have no analytic solution and, in particular, broke new ground in robust control design and multiobjective design problems. Recently, interest in this direction has been shifting to the study of nonconvex optimization problems related to control, in order to solve more practical problems. This flow (Riccati equations, LMI convex optimization, nonconvex optimization) constitutes control system design based on numerical computation.
These design methods have become practically effective thanks to improvements in the power and accuracy of computers and the development of efficient algorithms. There is, however, another computational way, symbolic computation, which is in some sense opposite to numerical computation; its research field is called computer algebra. In general, symbolic computation takes much time, and the size of the problems it can solve is limited compared with numerical computation. However, computer algebra methods have the good property that their results are easy to understand in parameter design, because design parameters are handled symbolically. Viewed in this light, there were several attempts to apply computer algebra to control theory, peaking in the mid-1980s: for example, the initiative work by Saito et al. [26] and the control system design environment of [3]. Though the ideas of such works are interesting, they were suitable for educational use but far from practical control system design and analysis, because the power of computers was insufficient at the time. In the 1990s there was great progress in computer algebra; in particular, quantifier elimination (QE for short) advanced considerably, and several effective algorithms and good software packages for quantifier elimination were developed. This has been vital to research on control system analysis and design methods
based on quantifier elimination, and many results have been presented (see [12,1,13,18,24,4,23,5,21,6]). In this chapter we first survey the history of the development of quantifier elimination and its properties, and then explain the relation between quantifier elimination and control system design. We first point out the difficulty of applying a general quantifier elimination method directly to control design problems from the viewpoint of efficiency: a general quantifier elimination method can solve toy problems from textbooks but cannot solve practical problems. We then state the necessity of applying special quantifier elimination to a subclass of input formulas relevant to the control problem. Finally, we propose a special quantifier elimination method for particular inputs derived from robust control system design and analysis, and examine the practical efficiency of our method by showing experimental results on computers. A similar article in Japanese is found in [7].

2. QUANTIFIER ELIMINATION (QE)
The history of the development of quantifier elimination algorithms is shown in Fig. 1. The history of algorithms for QE begins with the Tarski–Seidenberg decision procedure of the 1950s [30,27], but this is very intricate and far from feasible. In 1975, Collins presented a more efficient general-purpose QE algorithm based on Cylindrical Algebraic Decomposition (CAD) [10]. The algorithm was improved by Collins and Hong [11] and was implemented by Hong as "QEPCAD".
Figure 1. The history of QE algorithms and its application to control theory.
Weispfenning presented another QE algorithm using comprehensive Gröbner bases and real root counting for multivariate polynomial systems [34]. Later, Weispfenning presented a more efficient QE algorithm based on test terms [32,22,33]. Though the test-term approach imposes a degree restriction on a quantified variable in the input formulas, it seems very practical. Implementations of the method were done on Reduce as "REDLOG" and on Risa/Asir 2 by Sturm and Dolzmann [28,29]. Moreover, L. González-Vega et al. presented a special QE algorithm for definiteness conditions of polynomials [16].

Many mathematical and industrial problems can be translated into formulas consisting of polynomial equations, inequalities, quantifiers (∀, ∃), and Boolean operators (∧, ∨, ¬, →, etc.). Such formulas constitute sentences in the so-called first-order theory of real closed fields and are called first-order formulas. Let fi(X, U) ∈ Q[X, U], i = 1, 2, . . . , t, where Q is the field of rational numbers, X = (x1, . . . , xn) ∈ R^n is a vector of quantified variables, and U = (u1, . . . , um) ∈ R^m is a vector of unquantified parameter variables. Let Fi be the atomic formula fi(X, U) ⋈i 0, where ⋈i ∈ {=, ≠, ≤, <, ≥, >}, for i = 1, . . . , t; let Qj ∈ {∀, ∃} and let Xj be a block of qj quantified variables for j = 1, . . . , s. In general, a quantified formula ϕ is given by

(1)    ϕ = (Q1 X1) · · · (Qs Xs) G(F1, . . . , Ft),
where G(F1, . . . , Ft) is a quantifier-free (qf) Boolean formula. A QE procedure is an algorithm that computes an equivalent qf formula for a given first-order formula. If all variables are quantified (i.e. m = 0), the QE procedure decides whether the given formula (1) is true or false; this is called the decision problem. When there are unquantified variables U, the QE procedure finds a qf formula ϕ(U) describing the range of U for which the given formula is true; if there is no such range, QE outputs false. This is called the general quantifier elimination problem. Quantifier elimination is one of the powerful tools for constraint-solving problems. It is applicable to problems described by multivariate polynomial inequalities (MPIs for short) and has the following advantages. QE enables us to
• obtain not only one feasible solution but also the feasible region of all solutions,
• deal with nonconvex optimization, and
• examine decision problems exactly.

These features (advantages) of QE are useful for resolving many engineering and industrial problems that remain unsolved if we use numerical methods only.
Risa/Asir is a computer algebra system [25] developed at Fujitsu Labs Ltd. FTP: endeavor.fujitsu.co.jp:/pub/isis/asir.
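As a toy illustration of what a QE procedure delivers (our example, not from the chapter): over the reals, the formula ∃x (x² + ux + 1 = 0) with parameter u is equivalent to the quantifier-free formula u ≤ −2 ∨ u ≥ 2. A Python/sympy sketch checking this equivalence on sample parameter values:

```python
import sympy as sp

x = sp.symbols('x')

def lhs(u):
    # Decide the quantified side directly: does some real x
    # satisfy x^2 + u*x + 1 = 0 for this value of u?
    return len(sp.real_roots(sp.Poly(x**2 + u*x + 1, x))) > 0

def rhs(u):
    # The quantifier-free equivalent a QE procedure would output.
    return u <= -2 or u >= 2

for u in [-3, -2, -1, 0, 1, 2, sp.Rational(5, 2), 7]:
    assert lhs(u) == rhs(u)
```

A genuine QE algorithm, of course, produces the formula u ≤ −2 ∨ u ≥ 2 symbolically for all u at once, rather than testing sample values.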
3. A ROBUST CONTROL SYSTEM DESIGN BY QUANTIFIER ELIMINATION
Multiobjective design and robust control synthesis are of great practical interest and are main concerns in control system design. In general, however, they are hard to solve and have no analytical solutions. A parameter space design method is known to be one of the useful tools for dealing with multiobjective design problems [2]. Quantifier elimination is an effective method, needed in a parameter space approach, for obtaining the feasible region of design parameters satisfying given specifications. In fact, methods based on QE for such problems have recently been proposed by several researchers (see [13,18,24,4]). For example, in [13] it is shown how certain robust multiobjective design problems can be reduced to QE problems and actually solved using QEPCAD. In [18] it is shown that, in feedback design of linear time-invariant systems, robustness and several performance specifications (H∞ norm constraints, gain and phase margins) on the closed-loop system can also be handled as QE problems using QEPCAD. In this chapter we consider this kind of problem and, in particular, focus on robust control system design methods based on QE. The QE-based approach is really effective for such problems. Unfortunately, however, the size of the problems that can be solved by the QE-based approach is limited, because the computational complexity of the general QE algorithm based on CAD is doubly exponential in the number of variables (including parameter variables). In applications of QE to control problems so far, the QE method has been applied to first-order formulas derived from the control problems by a direct translation. For efficient computation it is important to reduce the target problem to a first-order formula that is as simple as possible. Furthermore, it is preferable to use a special QE algorithm that is effective for the particular input. (See Fig. 2.)
Hence, we should try to translate the control system design problem into a first-order formula to which a special QE algorithm is applicable. This is quite similar to the situation for design methods based on matrix inequalities: many control system design problems reduce to BMI (Bilinear Matrix Inequality) problems, but BMI problems are in general very difficult to solve, so it is important to find an appropriate subclass of BMIs that covers many control system design problems and for which an efficient algorithm exists. One such formula class for robust control system design problems is the "Sign Definite Condition" (SDC). Fortunately, important design specifications such as H∞ norm constraints, stability margins, etc., which are frequently used as indices of robustness, can be reduced to the sign definite condition

(2)    ∀x > 0,    f(x) > 0
(see [17,19,20]). For example, the H∞ norm constraint of a strictly proper transfer function G(s) = n(s)/d(s), given by

‖G(s)‖∞ := sup_ω |G(jω)| < 1,

is equivalent to

d(jω) d(−jω) > n(jω) n(−jω)    for all ω.

Since we can find a function f(ω²) which satisfies

f(ω²) = d(jω) d(−jω) − n(jω) n(−jω) > 0,

letting x = ω² leads to the SDC (2). The sign definite condition is a very simple (first-order) formula and is suited to a QE procedure in view of computational efficiency. The combination of the SDC and a special quantifier elimination for it enables us to realize an efficient robust control system design method using quantifier elimination.
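For a concrete illustration (a hypothetical first-order example of ours, not from the chapter), take G(s) = n(s)/d(s) with n(s) = 1/2 and d(s) = s + 1, for which ‖G‖∞ = 1/2 < 1. The reduction to the SDC can be carried out with sympy:

```python
import sympy as sp

s, w, xv = sp.symbols('s omega x', real=True)

# Hypothetical strictly proper transfer function G(s) = (1/2)/(s + 1).
n = sp.Rational(1, 2)
d = s + 1

# f(omega^2) = d(j*omega)*d(-j*omega) - n(j*omega)*n(-j*omega)
f_w = sp.expand(d.subs(s, sp.I*w) * d.subs(s, -sp.I*w)
                - n.subs(s, sp.I*w) * n.subs(s, -sp.I*w))
f_x = f_w.subs(w**2, xv)   # substitute x = omega^2

# Here f(x) = x + 3/4: every coefficient is positive, so f(x) > 0 for
# all x > 0, the SDC (2) holds, and hence ||G||_inf < 1.
assert f_x == xv + sp.Rational(3, 4)
assert all(c > 0 for c in sp.Poly(f_x, xv).all_coeffs())
```

For higher-order plants the resulting f(x) is no longer trivially positive, and this is exactly where the special QE method of the next section takes over.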
Figure 2. Scheme for solving control problems by QE.
Figure 3. Relevance of our approach.
4. A DESIGN METHOD BY USING A SPECIAL QUANTIFIER ELIMINATION
The SDC given by (2) is known to be checkable by a Routh–Hurwitz-like criterion [31,17].

Lemma 1. Let f(x) = Σ_{i=0}^{n} a_i x^i ∈ R[x]. Then f(x) is sign definite on x ∈ [0, +∞) if and only if V[f(x)] = n holds, where V[f] is the number of sign changes in the leftmost column of the modified Routh array of f, defined by

(−1)^n a_n       (−1)^{n−1} a_{n−1}         · · ·   −a_1   a_0
(−1)^n n a_n     (−1)^{n−1} (n−1) a_{n−1}   · · ·   −a_1
  ⋮
a_0
The first two rows of the Routh array above are formed by the coefficients of the polynomials f(−x) and f′(−x), and the following rows are formed by the coefficients of the polynomial remainder sequence generated by Euclidean divisions. A parameter space approach based on the SDC using D. Šiljak's criterion is essentially equivalent to performing QE for the particular inputs ∀x > 0, f(x) > 0, where f(x) is a polynomial with real coefficients. This method can thus be regarded as a special QE algorithm for the particular input first-order formula (2), and it is more efficient than the general CAD-based QE algorithm. However, it has some issues:
• In computing the remainder sequence by Euclidean divisions, we suffer from exponential coefficient growth.
• In the case of parametric coefficients, there remains a problem concerning specialization of the parameters: since rational functions may appear in the sequence due to the Euclidean division procedure, "division by 0" may occur on specialization.
• Moreover, the computation of the modified Routh array breaks down in singular cases, which occur when (i) an element of the first column becomes zero, or (ii) all the elements in a row of the array vanish simultaneously.
Hence, in this chapter we propose a parameter space approach for robust control system design based on a special QE method for the SDC using the Sturm–Habicht sequence. A combinatorial algorithm solving the particular QE problem ∀x, f(x) > 0 (the definiteness condition) by means of the Sturm–Habicht sequence was first proposed by González-Vega et al. [16]. We utilize their algorithm, with some modification, for the sign definite condition (2). The method proposed here is more efficient than the method using the Routh–Hurwitz-like criterion of D. Šiljak and moreover has a good specialization property.
Let f(x) ∈ R[x] with degree n. The Sturm–Habicht sequence of a polynomial f(x) is defined as the subresultant sequence starting from f(x) and f′(x), modulo some specified sign changes (see [6,16] for details), and it has the following properties:
◦ Desirable worst-case computational complexity: subresultant sequence computation takes polynomial time (a remainder sequence computation by naive Euclidean division does not).
◦ Desirable specialization properties: since the Sturm–Habicht sequence is defined through determinants, rational functions do not appear in it.
Moreover, the Sturm–Habicht sequence frees us from the care of singular cases and can be used for real root counting, just like the Sturm sequence, as follows [16]. Let the Sturm–Habicht sequence of f(x) ∈ R[x] be {SH_j(f(x))} = {g0(x), . . . , gn(x)} and let α, β ∈ R ∪ {−∞, +∞} with α < β. Define W_SH(f; α) as the number of sign variations in the list {g0(α), . . . , gn(α)}. Then

W_SH(f; α, β) ≡ W_SH(f; α) − W_SH(f; β)
gives the number of real roots of f in [α, β]. The SDC is equivalent to f having no root in the interval (0, +∞) together with f being positive there; hence it can be decided combinatorially by determining which sequences of signs Si = ⋈i (where ⋈i ∈ {>, <, =}) of the Sturm–Habicht sequence are feasible.

With QEPCAD, the QE problems ∀x (x > 0 → Fn > 0) could be solved for generic polynomials Fn up to n = 3. However, we could not solve them with QEPCAD for n ≥ 4 due to lack of memory.3 On the other hand, in our method we can solve the generic case up to n = 8 within 136.5 sec, and the computation is very efficient for small n (e.g., 0.121 sec for n = 5). Once we have computed the Sturm–Habicht sequence of Fn(x), the result can be reused for any other polynomial of degree n by substituting the coefficients ci with those of the input polynomial. The results for the generic cases should be stored in a database to be called upon whenever needed. This greatly
3 These computations by QEPCAD were executed on a Sun Ultra Sparc I Model 140.
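The root-counting recipe can be sketched with sympy's ordinary Sturm sequence standing in for the Sturm–Habicht sequence (both count real roots by sign variations; Sturm–Habicht additionally behaves well under specialization of parameters, as noted above). A minimal illustration of checking the SDC ∀x > 0, f(x) > 0 for concrete polynomials:

```python
import sympy as sp

x = sp.symbols('x')

def roots_in_pos_axis(f):
    # Number of distinct real roots of f in (0, +oo), obtained from the
    # sign variations of the Sturm sequence at 0 and at +oo (where the
    # signs are those of the leading coefficients).
    seq = sp.sturm(sp.Poly(f, x))

    def variations(vals):
        signs = [v for v in vals if v != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

    return (variations([p.eval(0) for p in seq])
            - variations([p.LC() for p in seq]))

def sdc_holds(f):
    # f(x) > 0 for all x > 0  iff  f has no root in (0, +oo) and f is
    # positive somewhere there (x = 1 suffices once no roots remain).
    return roots_in_pos_axis(f) == 0 and sp.Poly(f, x).eval(1) > 0

assert sdc_holds(x**2 + 1)
assert not sdc_holds((x - 1)**2)       # touches zero at x = 1
assert not sdc_holds(x**2 - 3*x + 2)   # roots at x = 1 and x = 2
```

The chapter's combinatorial algorithm precomputes, once per degree, which sign sequences of the (parametric) Sturm–Habicht coefficients make this count zero, so that the numeric evaluation above is replaced by a table of feasible sign conditions on the design parameters.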
Figure 4. PI control system.
improves the total efficiency. In the case of polynomials with many parameters it seems better to compute the Sturm–Habicht sequence in this way.

PI-controller synthesis: We compute the Sturm–Habicht sequence of the polynomials ft(z) for which we check the SDC when analyzing the sensitivity of PI control systems with compensators C(s) = k + m/s. The PI control systems considered here have the structure shown in Fig. 4, and the compensator has two design parameters. As a target specification we consider the frequency-restricted norm constraint for the complementary sensitivity function: ‖T(s)‖_[20,+∞] < 10. This is equivalent to an SDC ft(z) > 0, ∀z > 0. The numerators of the plants P(s) are fixed to 1 and the denominators d(s) are generated randomly for each degree. Note that the computation of ft(z) is achieved immediately, e.g., 0.459 sec and 82.7 sec for degrees 5 and 10 respectively. Up to degree 15 of d(s) we can compute the Sturm–Habicht sequence within 795.4 sec.

As a practical example, we quote the flexible beam example in [14]. The plant transfer function is given by

P(s) = (−6.4750 s² + 4.0302 s + 175.7700) / (s (5 s³ + 3.5682 s² + 139.5021 s + 0.0929)).
We consider the PID control system for this plant with the same controller as above and the same frequency-restricted norm constraint for the complementary sensitivity function, ‖T(s)‖_[20,+∞] < 10. Then ft(z) is obtained in 0.55 sec, and the Sturm–Habicht sequence for ft(z) is computed in 115.50 sec.

There are several possibilities for improving the combinatorial part: we can prune impossible sign combinations before counting the numbers of sign changes. For example, in the case of generic polynomials of degree 4 there are in total 3^8 = 6561 sign combinations for which the number of sign changes must be verified. After pruning impossible sign combinations and checking the numbers of sign changes, 561 feasible sign combinations remain. Furthermore, this formula can be simplified by manually deleting trivially empty semialgebraic sets, which finally leaves 477 feasible sign combinations. For practical control problems the number of possible sign combinations can become rather small, as seen in [6], where we show the effectiveness of the proposed method for integrated design as well as fixed-structure robust controller synthesis.

Finally, we summarize the computational complexity of our approach based on the computational results. Our approach is practically applicable to systems up to order 15 for the case where the number of design parameters in the fixed-structure
controller is 2 (e.g., PI control systems). In the case that the controller has more than 3 parameters, our approach is practically applicable to systems up to order 7 by using stored general forms.

6. CONCLUSION

We have presented a new attempt at a robust control system design method based on quantifier elimination and have also shown its ability. Our approach outputs a disjoint union R of semialgebraic sets Ri describing the possible range of the design parameters θ: R = ∪_{i=1}^{n} Ri. The obtained results are then applicable to the following:
• Visualization of the possible region of design parameters by projection to 2- or 3-dimensional space.
• Preprocessing (reduction to subproblems) for numerical optimization, such that
min_{θ∈R} F(θ) = min_i ( min_{θ∈Ri} F(θ) ),
where F(θ) is an objective function in θ.
• Reduction of the VC-dimension for randomized algorithms.

REFERENCES

[1] Abdallah C., Dorato P., Yang W., Liska R., Steinberg S. – Application of quantifier elimination theory to control system design, in: Proceedings of 4th IEEE Mediterranean Symposium on Control and Automation (Maleme, Crete), 1996, pp. 340–345.
[2] Ackermann J. – Robust Control – Systems with Uncertain Physical Parameters, Springer-Verlag, 1993.
[3] Akahori I., Hara S. – Computer aided control system analysis and design based on the concept of object-orientation, Trans. of SICE 24 (5) (1988) 506–513 (in Japanese).
[4] Anai H. – On solving semidefinite programming by quantifier elimination, in: Proc. of American Control Conference (Philadelphia), 1998, pp. 2814–2818.
[5] Anai H. – Algebraic approach to analysis of discrete-time polynomial systems, in: Proc. of European Control Conference, 1999.
[6] Anai H., Hara S. – Fixed-structure robust controller synthesis based on sign definite condition by a special quantifier elimination, in: Proceedings of American Control Conference, 2000, pp. 1312–1316.
[7] Anai H., Hara S. – Robust control system design by quantifier elimination, System, Control and Inform. 44 (6) (2000) 307–311 (in Japanese).
[8] Anai H., Hara S. – A parameter space approach to fixed-order robust controller synthesis by quantifier elimination, to appear in International Journal of Control (2006).
[9] Boyd S., Baratt C. – Linear Controller Design: Limits of Performance, Prentice-Hall, 1991.
[10] Collins G.E. – Quantifier Elimination for Real Closed Fields by Cylindrical Algebraic Decomposition, Lecture Notes in Comput. Sci., vol. 32, Springer-Verlag, 1975.
[11] Collins G.E., Hong H. – Partial cylindrical algebraic decomposition for quantifier elimination, J. Symbolic Comput. 12 (3) (1991) 299–328.
[12] Dorato P., Yang W., Abdallah C. – Application of quantifier elimination theory to robust multi-objective feedback design, J. Symbolic Comput. 11 (1995) 1–6.
A bridge between robust control and computational algebra
33
[13] Dorato P., Yang W., Abdallah C. – Robust multiobjective feedback design by quantifier elimination, J. Symbolic Comput. 24 (1997) 153–159. [14] Doyle J., Francis B.A., Tannenbaum A.R. – Feedback Control Theory, Macmillan, 1992. [15] Gantmacher F.R. – The Theory of Matrices, vol. II, Chelsea, 1959. [16] GonzálezVega L. – A combinatorial algorithm solving some quantifier elimination problems, in: Caviness B., Johnson J. (Eds.), Quantifier Elimination and Cylindrical Algebraic Decomposition, in: Texts and Monographs in Symbolic Computation, SpringerVerlag, 1998, pp. 365–375. [17] Hara S., Kimura T., Kondo R. – H∞ control system design by a parameter space approach, in: Proceedings of MTNS91 (Kobe, Japan), 1991, pp. 287–292. [18] Jirstrand M. – Algebraic Methods for Modeling and Design in Control, Ph.D. thesis, Linköping University, 1996. [19] Kimura T., Hara S. – A robust control system design by a parameter space approach based on sign definition condition, in: Proceedings of KACC91 (Soul, Korea), 1991, pp. 1533–1538. [20] Kondo R., Hara S., Kaneko T. – Parameter space design for H∞ control, Trans. of SICE 27 (6) (1991) 714–716 (in Japanese). [21] Lafferriere G., Pappas G.J., Yovine S. – Decidable hybrid systems, Technical Report, VERIMAG, Univ. of Grenoble, 1998. [22] Loos R., Weispfenning V. – Applying linear quantifier elimination, Comput. J. 36 (5) (1993) 450– 462. [23] Ne˘si´c D. – Two algorithms arising in analysis of polynomial models, in: Proceedings of 1998 American Control Conference, 1998, pp. 1889–1893. [24] Neubacher A. – Parametric Robust Stability by Quantifier Elimination, Ph.D. thesis, RiscLinz, Oct. 1997. [25] Noro M., Takeshima T. – Risa/asir – a computer algebra system, in: ISSAC’92: Proceedings of the International Symposium on Symbolic and Algebraic Computation, ACM Press, 1992, pp. 387–396. [26] Saito O., et al. 
– Computer aided control engineering: Periphery of control engineering and computer algebraic manipulation, Systems and Control 29 (12) (1985) 785–794 (in Japanese). [27] Seidenberg A. – A new decision method for elementary algebra, Ann. of Math. 60 (1954) 365–374. [28] Sturm T. – Redlog, reduce library of algorithms for manipulation of firstorder formulas, Technical Report, Univ. of Passau, 1994. [29] Sturm T. – Real quadratic quantifier elimination in risa/asir, Technical Report ISISRR9613E, ISIS, Fujitsu Labs, 1996. [30] Tarski A. – Decision Methods for Elementary Algebra and Geometry, Univ. of California Press, Berkeley, 1951. ˘ [31] Siljak D.D. – New algebraic criterion for positive realness, J. Franklin Institute 291 (1971) 109–120. [32] Weispfenning V. – The complexity of linear problems in fields, J. Symbolic Comput. 5 (1–2) (1988) 3–27. [33] Weispfenning V. – Quantifier elimination for real algebra – the cubic case, in: ISSAC’94: Proceedings of the 1994 International Symposium on Symbolic and Algebraic Computation (July 20–22, 1994, Oxford, England, United Kingdom), ACM Press, New York, 1994, pp. 258–263. [34] Weispfenning V. – A new approach to quantifier elimination for real algebra, in: Caviness B.F., Johnson J.R. (Eds.), Quantifier Elimination and Cylindrical Algebraic Decomposition, in: Texts and Monographs in Symbolic Computation, SpringerVerlag, 1998, pp. 376–392.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
An introduction to the numerical analysis of multivariate polynomial systems
Hans J. Stetter Institut für Angewandte und Numerische Mathematik, Technical University Vienna, Wiedner Hauptstrasse 8–10/115, A-1040 Vienna, Austria Email:
[email protected] (H.J. Stetter)
1. INTRODUCTION

Systems of linear equations in several (a few to very many) variables are routinely solved by black-box software everywhere in scientific computing, and the same is true for individual polynomial equations in one variable. But systems of polynomial equations in several variables are a different matter; with the exception of simple low-degree systems in very few (two or three) variables, such systems can generally not be "solved" by standard software packages, at least not in the following sense: Given n polynomial equations in n variables, with numerical coefficients, we would like to obtain all solution n-tuples of the system, either in real or in complex n-space. On the other hand, polynomials are increasingly used in scientific computing as nonlinear models which can be easily handled and evaluated; therefore a need for black-box software for such problems definitely exists.

What distinguishes multivariate polynomial systems from multivariate linear systems; in which sense are they more than just generalizations of linear systems? A first superficial view reveals that – although linear systems are indeed very special polynomial systems – the variety of potential terms in a polynomial system grows rapidly with the permitted degree. For 5 variables, we have in a

  linear equation: 6 potential terms,
  quadratic equation: 21 potential terms,
  cubic equation: 56 potential terms,
  4th degree equation: 126 potential terms,
  nth degree equation: $\binom{n+5}{5}$ potential terms.

Generally, only a few of these terms will actually be present; this potential sparsity implies that polynomials with the same degree and the same number of variables can look and behave very differently. This also shows that the amount of data in a
single multivariate polynomial equation can be quite large, even for a small number of variables. Also the number of potential solution n-tuples is a major distinction: First of all, polynomial systems naturally share the property of univariate polynomial equations that their set of solutions is, generally, located in complex space, even when all coefficients are real, and that there may be multiple solutions (in a sense to be further defined). With linear systems, they share the property that, generally,¹ a system of n equations in n variables has a finite number of solution n-tuples; but while this number is always 1 for linear systems, it is anywhere between 1 and $\prod_{\nu=1}^{n} d_\nu$ for a polynomial system whose individual equations have total degrees² $d_\nu$. A realistic upper bound for the number of solutions (i.e. one which is generally assumed) requires a nontrivial analysis of the polynomial system, for which, however, software has recently become available. In any case, this number can be quite large, e.g., up to 3⁵ = 243 for a cubic system in 5 variables. On the other hand, we may only be interested in solutions in a very small section of $\mathbb{C}^n$, or only in real solution n-tuples. This shows that it is not even so clear what we should request from a piece of software when we have a multivariate polynomial system to solve.

There is one further distinction in the computational aspects of linear and of polynomial systems: It has been well known since ancient times that linear systems of any size may be "solved exactly" in the sense that the components of their only solution n-tuple can be represented as rational functions of the coefficients of the system. While this permits – in principle – that the exact values of the solution components of a system with numerical coefficients (i.e. rational numbers, which includes floating-point numbers) are determined as quotients of integers, this is virtually never done. Instead, sufficiently accurate numerical approximations are computed in floating-point computation even for (nontrivial) systems with rational coefficients: A quotient of two integers with hundreds of digits is numerically meaningless. Thus, computational linear algebra is really numerical linear algebra. For polynomial systems, on the other hand, the traditions of algebra as a part of pure mathematics have dominated the development of computational methods for a long time: The fact that – like linear systems – polynomial systems may be reduced to systems of a simpler structure by rational operations on their coefficients has not led to the associated floating-point software as in linear systems but to software which uses multiprecision integers and exact rational arithmetic for the same purpose; this is all the more incomprehensible as the resulting simpler systems cannot, generally, be further processed by exact computation but must be turned over to approximate computation in any case. This is another major reason why the development of black-box software for multivariate polynomial systems lags behind.

¹ For polynomial as for linear systems, it may happen that there are no solutions (inconsistent systems) or solution manifolds; but both cases are singular exceptions.
² The total degree $d_\nu$ of a multivariate polynomial is the maximal sum of exponents $e_1 + e_2 + \cdots + e_n$ over its terms $x_1^{e_1} x_2^{e_2} \cdots x_n^{e_n}$.
In this introductory lecture, we wish to show that the numerical solution of regular polynomial systems (with as many equations as variables and a finite number of isolated zeros) is not so distinct from numerical linear algebra and is thus open to a similar approach. We will exhibit how such systems are essentially equivalent to matrix eigenproblems – at first in an intuitive way and then more formally. (Further details on the relation between multivariate polynomial systems and matrix problems will also be explained in the contribution by B. Mourrain [4].) The second part of this presentation will deal with other numerical aspects of polynomial systems solving: If some or all coefficients of the system are only known to a limited accuracy, how may the solution set be characterized, and what is the meaningful accuracy with which the solutions may be specified in this case?

2. AN INTUITIVE APPROACH

We will use the following notations:

For $x = (x_1,\ldots,x_n)$ and $j = (j_1,\ldots,j_n)$: $x^j := x_1^{j_1}\cdots x_n^{j_n}$.
For $a_j \in \mathbb{C}$, $J \subset \mathbb{N}_0^n$: $p(x) = \sum_{j\in J} a_j x^j \in \mathcal{P}^n := \mathbb{C}[x_1,\ldots,x_n]$.
System $P$: $p_\nu(x) = \sum_{j\in J_\nu} a_{\nu j}\,x^j$, $\nu = 1(1)n$.
Zero set $Z(P) := \{\,z \in \mathbb{C}^n : p_\nu(z) = 0,\ \nu = 1(1)n\,\} \subset \mathbb{C}^n$.

We assume a regular situation, called a "complete intersection":
– the number of polynomials in P equals the number of variables,
– Z(P) consists of m isolated points $z_\mu \in \mathbb{C}^n$ which depend continuously on the $a_{\nu j}$;
the number m always counts multiplicities and is thus invariant (locally in the data space).

We are not interested in the local task "Given a reasonably good approximation for a particular zero, compute a better approximation to that zero", which may be solved by Newton's method (except in the case of dense clusters of zeros). We rather wish to determine the global information

Where in $\mathbb{C}^n$ are the zeros of the regular polynomial system P?

without significant a priori information except – perhaps – the number of zeros of P. The localization of zeros is a nontrivial task even for one polynomial in one variable, even when the degree is low. Consider, e.g.,

$$p(x) = -25249 + 157410\,x - 340571\,x^2 + 253740\,x^3 + 123\,x^4 + x^5.$$
Known localization results (cf., e.g., [1]) yield, with a small amount of computation, that there are two zeros of modulus ≈500 and three zeros of modulus ≈.5. Simultaneous refinement methods (cf. again [1] or other sources) can find good approximations of the zeros from that information in a few iterations; but such methods are restricted to univariate problems. The software package Matlab finds excellent approximations to all 5 zeros by the following well-known global approach: (i) Form the Frobenius matrix A of p (no computation), (ii) compute the eigenvalues of A (numerical linear algebra):

$$\text{Eigenvalues}(A) \approx \{\, -62.1708 \pm 500.041\,i,\ \ .494498,\ \ .423565 \pm .147274\,i \,\}.$$
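The two steps of this global approach can be reproduced with any numerical linear algebra package; here is a minimal sketch in Python/NumPy (the coefficients are those of the example polynomial p above):

```python
import numpy as np

# p(x) = -25249 + 157410 x - 340571 x^2 + 253740 x^3 + 123 x^4 + x^5 (monic)
a = np.array([-25249.0, 157410.0, -340571.0, 253740.0, 123.0])  # a_0 .. a_{n-1}
n = len(a)

# Step (i): form the Frobenius (companion) matrix of p -- no computation needed.
A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)   # shifted identity in the upper part
A[-1, :] = -a                # last row carries the negated coefficients

# Step (ii): compute the eigenvalues of A by numerical linear algebra.
zeros = np.linalg.eigvals(A)
```

Two of the computed eigenvalues have modulus ≈500 and three have modulus ≈.5, matching the localization estimates quoted above.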
Does an analogous approach exist for systems of n polynomials in n variables? We shall see that the answer is "yes" if we are willing to spend some numerical computation in step (i), too. Let us consider, at first, a very simple example, with 2 quadratic polynomials in two variables:

$$P = \begin{cases} p_1(x,y) = 3 + 2y - xy - y^2 + x^2 = 0,\\ p_2(x,y) = 5 + 4y + 3x + xy - 2y^2 + x^2 = 0. \end{cases}$$

When we regard the monomials present in the two equations (i.e. y, x, xy, y², x²) as individual variables, we have two linear equations for five variables. All we can do at this point with standard linear algebra is an elimination of either x² or y², to obtain the equations

$$p_1 - p_2 = p_3(x,y) = -2 - 2y - 3x - 2xy + y^2 = 0,$$
$$2p_1 - p_2 = p_4(x,y) = 1 - 3x - 3xy + x^2 = 0.$$

But since we are no longer tied to linear equations, we can multiply equations by x or y or monomials in x, y; this generates new relations, but it may also introduce new "variables":

$$x\,p_3 = p_5(x,y) = -2x - 2xy - 3x^2 + xy^2 - 2x^2y = 0,$$
$$y\,p_4 = p_6(x,y) = y - 3xy - 3xy^2 + x^2y = 0.$$
Let us write the system consisting of p₃ to p₆ in linear algebra style as

$$\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&-3&1&-2\\0&0&-3&1\end{pmatrix}\begin{pmatrix}y^2\\x^2\\xy^2\\x^2y\end{pmatrix} = \begin{pmatrix}2&2&3&2\\-1&0&3&3\\0&0&2&2\\0&-1&0&3\end{pmatrix}\begin{pmatrix}1\\y\\x\\xy\end{pmatrix},$$

which implies, by trivial linear algebra,

$$\begin{pmatrix}y^2\\x^2\\xy^2\\x^2y\end{pmatrix} = \begin{pmatrix}2&2&3&2\\-1&0&3&3\\.6&.4&-2.2&-3.4\\1.8&.2&-6.6&-7.2\end{pmatrix}\begin{pmatrix}1\\y\\x\\xy\end{pmatrix}.$$

More cleverly, we can combine the first and third and the second and fourth row separately with trivial rows to obtain

$$y\cdot\begin{pmatrix}1\\y\\x\\xy\end{pmatrix} = \begin{pmatrix}0&1&0&0\\2&2&3&2\\0&0&0&1\\.6&.4&-2.2&-3.4\end{pmatrix}\begin{pmatrix}1\\y\\x\\xy\end{pmatrix} =: A_y\begin{pmatrix}1\\y\\x\\xy\end{pmatrix}$$

and

$$x\cdot\begin{pmatrix}1\\y\\x\\xy\end{pmatrix} = \begin{pmatrix}0&0&1&0\\0&0&0&1\\-1&0&3&3\\1.8&.2&-6.6&-7.2\end{pmatrix}\begin{pmatrix}1\\y\\x\\xy\end{pmatrix} =: A_x\begin{pmatrix}1\\y\\x\\xy\end{pmatrix}.$$
Each pair (ξ, η) which satisfies P (ξ, η) = 0 must also satisfy these last two relations; thus – for each solution (ξ, η) of P – the vector (1, η, ξ, ξ η)T must be an eigenvector of each of the two matrices Ay and Ax above. Any linear algebra package provides us with the following set of eigenvectors and eigenvalues for the two matrices (with the first components normalized to 1 and rounded to the digits shown)
eigenvectors of A_y and A_x:

$$\begin{pmatrix}1\\2.9382\\.0853\\.2505\end{pmatrix},\quad \begin{pmatrix}1\\-2.5864\\-4.5389\\11.7396\end{pmatrix},\quad \begin{pmatrix}1\\-.8759\pm.2205\,i\\.1268\mp.7076\,i\\.0450\pm.6478\,i\end{pmatrix};$$

eigenvalues of A_y: 2.9382, −2.5864, −.8759 ± .2205 i;
eigenvalues of A_x: .0853, −4.5389, .1268 ∓ .7076 i.
Thus, the 4 zeros of P = 0 (there cannot be more than four) are, to the digits shown, (.0853, 2.9382), (−4.5389, −2.5864), (.1268 ∓ .7076 i, −.8759 ± .2205 i).
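The derivation above is easily checked numerically; a minimal NumPy sketch, with A_y and A_x entered exactly as obtained by the elimination (basis (1, y, x, xy)):

```python
import numpy as np

# Multiplication matrices for R[p1, p2] w.r.t. the basis (1, y, x, xy),
# as derived above by elimination in the system p3, ..., p6.
Ay = np.array([[0, 1, 0, 0],
               [2, 2, 3, 2],
               [0, 0, 0, 1],
               [.6, .4, -2.2, -3.4]])
Ax = np.array([[0, 0, 1, 0],
               [0, 0, 0, 1],
               [-1, 0, 3, 3],
               [1.8, .2, -6.6, -7.2]])

# Multiplication matrices commute (multiplication in the quotient ring does).
assert np.allclose(Ax @ Ay, Ay @ Ax)

# Joint eigenvectors: normalize the component of the basis element 1 to 1;
# the components for x and y then display the zeros directly.
w, V = np.linalg.eig(Ay)
V = V / V[0]                      # first basis element is 1
zeros = list(zip(V[2], V[1]))     # (xi, eta) from the x- and y-components

# Each pair must annihilate the original system P.
for x, y in zeros:
    assert abs(3 + 2*y - x*y - y**2 + x**2) < 1e-8
    assert abs(5 + 4*y + 3*x + x*y - 2*y**2 + x**2) < 1e-8
```

The eigenvalues of A_y reproduce the y-components quoted above, and the normalized eigenvectors display the four zeros of P.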
Note that, in the multivariate case, the eigenvalues tell only one component of the zeros while the eigenvectors display both components. Naturally, this only shows a line of thought along which one may hope to reduce regular polynomial systems to matrix eigenproblems. We will now expose the algebraic basis of what we have playfully done in the above example.

3. ALGEBRAIC INTERPRETATION
Which algebraic interpretation of the eigenproblem for the univariate Frobenius matrix permits the generalization to the multivariate case? Any fixed univariate polynomial p of degree n generates an associated vector space of dimension n in the set P = C[x] of all univariate polynomials over C: When we collect all polynomials q ∈ P which leave the same remainder r upon division by p into the same "residue class" [q]_p, these residue classes form a linear space because [γ₁q₁ + γ₂q₂]_p = γ₁[q₁]_p + γ₂[q₂]_p. This linear space R[p] is called the quotient ring of P mod p and often denoted by P/p. Each element (= class) in R[p] contains exactly one polynomial of degree ≤ n − 1, the common remainder upon division by p of all polynomials in the class, which we may use to represent the element: [q]_p ∈ R[p] ↔ r ∈ P_{n−1}. Because of x·(γ₁q₁ + γ₂q₂) = γ₁xq₁ + γ₂xq₂, multiplication by x (or by any fixed element in P) is a linear operation in P as well as in the quotient ring R[p] (which justifies the term "ring"). With the notation

$$r(x) = (\rho_0,\ldots,\rho_{n-1})\begin{pmatrix}1\\x\\\vdots\\x^{n-1}\end{pmatrix} =: r^T\mathbf{x},$$

we have $x\,r(x) = r^T(x\,\mathbf{x})$. As we work with residue classes mod p, $x^n \stackrel{p}{\equiv} x^n - p(x) = -\sum_{\nu=0}^{n-1} a_\nu x^\nu$ when we assume p to be monic. Thus,

$$x\cdot\begin{pmatrix}1\\x\\\vdots\\x^{n-1}\end{pmatrix} \stackrel{p}{\equiv} \begin{pmatrix}0&1&&\\&\ddots&\ddots&\\&&0&1\\-a_0&-a_1&\cdots&-a_{n-1}\end{pmatrix}\begin{pmatrix}1\\x\\\vdots\\x^{n-1}\end{pmatrix} =: A\,\mathbf{x} \qquad\text{and}\qquad x\,r^T\mathbf{x} \stackrel{p}{\equiv} r^T A\,\mathbf{x}.$$
So the Frobenius matrix A of p is the matrix which represents multiplication by x in R[p] with respect to the basis (1, x, ..., x^{n−1}). (More formally, we should use [x^ν]_p in place of the x^ν as basis elements of R[p], but this is more confusing than helpful in the present context.) When we now realize that

(1)  $q_1(x) \stackrel{p}{\equiv} q_2(x) \iff q_1(\xi) = q_2(\xi)$ at all zeros ξ of p,
we find that the zeros ξ of p must satisfy, with $\mathbf{x}(\xi) := (1, \xi, \ldots, \xi^{n-1})^T$, $\xi\,\mathbf{x}(\xi) = A\,\mathbf{x}(\xi)$,
which relates the polynomial equation p(x) = 0 with the eigenvalue problem for the Frobenius matrix A of p. (At a k-fold zero ξ of p, one must interpret the right-hand side in (1) as $q_1^{(\kappa)}(\xi) = q_2^{(\kappa)}(\xi)$, κ = 0(1)k − 1.) Now let us generalize: A regular multivariate polynomial system P(x) = {p₁(x), ..., pₙ(x)}, pν(x) ∈ 𝒫ⁿ, also defines a finite-dimensional linear space R[P] = 𝒫ⁿ/P in 𝒫ⁿ which consists of the residue classes [r]_P mod P; these are polynomial sets whose elements differ only by a "linear" (= polynomial) combination of the pν:

$$[q_1]_P = [q_2]_P \ \text{ or } \ q_1 \stackrel{P}{\equiv} q_2 \quad\text{iff}\quad q_1(x) - q_2(x) = \sum_{\nu=1}^{n} \gamma_\nu(x)\,p_\nu(x),$$

with arbitrary γν(x) ∈ 𝒫ⁿ, or iff q₁ − q₂ ∈ ⟨P⟩, the polynomial ideal generated by P. Again, the linear space R[P] is a ring because we can define multiplication by [q₁]_P · [q₂]_P := [q₁q₂]_P, and multiplication by a fixed element is a linear operation. Thus, with respect to a specified basis t(x) = (t₁(x), ..., t_m(x))ᵀ of R[P], there exist so-called multiplication matrices Aν ∈ C^{m×m}, ν = 1(1)n, such that

(2)  $x_\nu\,\mathbf{t}(x) \stackrel{P}{\equiv} A_\nu\,\mathbf{t}(x), \qquad \nu = 1(1)n.$
We also note that – like in the univariate case (cf. (1)) –

(3)  $q_1(x) \stackrel{P}{\equiv} q_2(x) \iff q_1(\xi) = q_2(\xi)$ at all zeros ξ of P,

with extensions to partial derivatives of the q in the case of multiple zeros of P; cf., e.g., [6]. This establishes again that the dimension m of the linear space R[P] equals the number of zeros of the polynomial system P, counting multiplicities. Now assume that we know a basis t(x) = (t₁(x), ..., t_m(x))ᵀ of R[P] which consists of monomials $t_\mu(x) = x^{j_\mu}$ (cf. the notation at the beginning of Section 2). Further assume that we have determined at least one of the multiplication matrices Aν of R[P] with respect to the basis t; cf. (2). Just as in the univariate case, (2) becomes an equality at the zeros $\zeta_\mu = (\xi_{1\mu}, \ldots, \xi_{n\mu}) \in \mathbb{C}^n$ of P and we have

(4)  $\xi_{\nu\mu}\,\mathbf{t}(\zeta_\mu) = A_\nu\,\mathbf{t}(\zeta_\mu)$ at the zeros $\zeta_\mu$ of P.

If the basis t(x) contains the terms xν, ν = 1(1)n, which is generally the case, the eigenvectors t(ζµ) ∈ C^m display directly the components ξνµ, ν = 1(1)n, of
the zeros ζµ, after normalization of the component which corresponds to the basis element x⁰ = 1 (generally the first element). The eigenvalue of Aν for the eigenvector t(ζµ) is ξνµ. Naturally, there may be fewer than m eigenvectors of Aν due to multiple eigenvalues: A k-fold eigenvalue of geometric multiplicity k (i.e. with an eigenspace of dimension k) indicates k zeros with the same xν-component which cannot be separated with the aid of Aν; they must be separated by the use of a different multiplication matrix. A k-fold eigenvalue with geometric multiplicity 1 (i.e. with one eigenvector but an invariant subspace of dimension k), on the other hand, indicates a k-fold zero of P, with its location displayed by the eigenvector; the structure of the invariant subspace displays the structure of the multiplicity. For more details, cf., e.g., [6]. A question comes to mind immediately: How is it possible that the n different multiplication matrices Aν have the same set of eigenvectors? From linear algebra we know that a family of commuting matrices has all its invariant subspaces in common (invariant subspaces of dimension 1 are eigenvectors). The matrices Aν must commute because multiplication is a commutative operation in 𝒫ⁿ (all ≡ are mod P):

$$A_{\nu_1}A_{\nu_2}\mathbf{t}(x) \equiv A_{\nu_1}x_{\nu_2}\mathbf{t}(x) \equiv x_{\nu_2}A_{\nu_1}\mathbf{t}(x) \equiv x_{\nu_2}x_{\nu_1}\mathbf{t}(x) \equiv x_{\nu_1}x_{\nu_2}\mathbf{t}(x) \equiv \cdots \equiv A_{\nu_2}A_{\nu_1}\mathbf{t}(x).$$
The preceding considerations support our procedure for the solution of the bivariate system in Section 2: Our transformation of the pν was aimed at the determination of a basis t(x) and associated multiplication matrices A_y and A_x for the quotient ring R[p₁, p₂]. In this simple case of two quadratic equations in two variables, the number of zeros and the dimension of the quotient ring has to be 4; thus there are 3 natural choices for a monomial basis of R[p₁, p₂]:

(1, y, x, xy)ᵀ,  (1, y, x, y²)ᵀ,  (1, y, x, x²)ᵀ.
It is also easily seen which further terms must be introduced for the determination of the multiplication matrices with respect to a particular basis: For our basis (1, y, x, xy)ᵀ, the nontrivial rows of A_x require polynomials which express x² and x²y in terms of the basis monomials. Had we aimed for the unsymmetric basis (1, y, x, x²)ᵀ, the nontrivial rows of A_x would have required relations for xy (in terms of x² but not of y²) and x³, and it is obvious how we should have transformed p₁, p₂. In this case, the matrix A_y has three nontrivial rows, which express y², xy, x²y in terms of the basis monomials. Obviously, in a more general case, the central task of the approach is the generation of a basis for R[P] and the determination of at least one of the associated multiplication matrices Aν. Indirectly, this task can be solved through the computation of a Gröbner basis for P; but this requires the a priori specification of a term order, which does not appear at all in the preceding approach. Furthermore, the appropriate computation of an "approximate Gröbner basis", in floating-point
arithmetic and for polynomials with floating-point coefficients, is still in a development stage. On the other hand, even if one knows the dimension m of R[P] beforehand as the result of a separate algorithm performing a so-called BKK count (cf., e.g., [3]), the selection of the basis is not trivial, since not every choice of m power products $x^{j_\mu}$ which include 1 and "leave no holes" yields a suitable normal set for a given system P; cf. [10]. The transformation to a matrix eigenproblem is now widely acknowledged to be the best approach to the solution of a polynomial system of equations with numerical coefficients; it also lends itself best to an implementation in floating-point arithmetic for floating-point data. But, in spite of endeavors by several research groups, there is still no black-box software available which implements this approach.

4. VALID RESULTS FOR DATA OF LIMITED ACCURACY
In most "real-life problems", there appear data which do not have a concise value but rather a range of potential values, indicated by a specified value and a tolerance. With such an "empirical" data quantity, we associate a family of neighborhoods, parametrized by a positive real δ: Let $\bar a = (\bar\alpha_1, \ldots, \bar\alpha_M) \in \mathbb{C}^M$ or $\mathbb{R}^M$ be the specified value and $e = (\varepsilon_1, \ldots, \varepsilon_M)$, $\varepsilon_j > 0$, the tolerance. The δ-neighborhood $N_\delta(\bar a, e)$ contains the values $\tilde a = (\tilde\alpha_1, \ldots, \tilde\alpha_M)$ for which

(5)  $$\|\tilde a - \bar a\|_e^* := \Big\|\Big(\ldots, \frac{|\tilde\alpha_j - \bar\alpha_j|}{\varepsilon_j}, \ldots\Big)\Big\|^* \le \delta.$$
For ā with real components, it must be clear whether the $\tilde a \in N_\delta(\bar a, e)$ are restricted to real values or not. For the norm $\|\cdot\|^*$ in (5), we will exclusively use the max-norm $\|v\|^* := \max_j |v_j|$; for $\tilde a \in N_\delta(\bar a, e)$, it requires $|\tilde\alpha_j - \bar\alpha_j| \le \varepsilon_j\,\delta$, j = 1(1)M. But other norms may also be considered. For an empirical polynomial in n ≥ 1 variables, with $\bar p(x) = \sum_{j\in J} \bar\alpha_j x^j$, $\bar\alpha_j \in \mathbb{C}$ or $\mathbb{R}$, the δ-neighborhood $N_\delta(\bar p, e)$ contains the polynomials $\tilde p$, with

(6)  $$\|\tilde p - \bar p\|_e^* := \Big\|\Big(\ldots, \frac{|\tilde\alpha_j - \bar\alpha_j|}{\varepsilon_j}, \ldots;\ j \in \tilde J\Big)\Big\|^* \le \delta, \qquad \tilde\alpha_j = \bar\alpha_j,\ j \in J\setminus\tilde J;$$

the coefficients with $j \in J\setminus\tilde J$ which are not subject to an indetermination (like 1) are called intrinsic. For more details, cf. [10]. If we had defined fixed neighborhoods $N_1(\bar a, e)$ of potential values, we would deal with interval data. Interval data do not account properly for the inherent vagueness of the tolerance in empirical data, like 10⁻⁵; they simply shift the discontinuity from the statements $\tilde a = \bar a$ and $\tilde a \ne \bar a$ to $\|\tilde a - \bar a\| \le \varepsilon$ and $\|\tilde a - \bar a\| > \varepsilon$. Such discontinuities are meaningless for data in C or R and also destroy the possibility of approximate (e.g., floating-point) computation. With a parametrized family of neighborhoods $N_\delta$, δ = 1 is not a sharp bound for the validity of $\tilde a$ but only a mark on a continuous
δ-scale: The validity of $\tilde a$ decreases with an increase of the value of δ necessary to achieve $\tilde a \in N_\delta(\bar a, e)$. Generally, we may have in mind an interpretation of δ like

δ ≈ 1: valid,  δ ≈ 3: probably valid,  δ ≈ 10: possibly valid,  δ ≈ 30: probably invalid,  δ ≈ 100: invalid.
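In code, the smallest δ with $\tilde a \in N_\delta(\bar a, e)$ is just the weighted max-norm (5); the data values and tolerances below are hypothetical, chosen only for illustration:

```python
import numpy as np

def delta_of(a_tilde, a_bar, eps):
    """Smallest delta with a_tilde in N_delta(a_bar, e): the weighted
    max-norm (5), max_j |alpha~_j - alpha-_j| / eps_j."""
    return np.max(np.abs(a_tilde - a_bar) / eps)

# Hypothetical empirical data: specified values a_bar, tolerances eps.
a_bar = np.array([1.0, -3.5, 2.25])
eps = np.array([1e-3, 1e-3, 1e-3])

a1 = a_bar + np.array([4e-4, -9e-4, 2e-4])   # delta = 0.9 -> valid
a2 = a_bar + np.array([0.05, 0.0, 0.0])      # delta = 50  -> probably invalid
assert delta_of(a1, a_bar, eps) < 1.0
assert np.isclose(delta_of(a2, a_bar, eps), 50.0)
```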
The presence of the validity parameter also permits the application scientist to adjust it to his judgment. It is our goal to solve algebraic problems with polynomial data of limited accuracy in a meaningful way. For an empirical algebraic problem, we consider the data → result mapping F from the space A of the empirical data to some result space Z; it assigns to a particular input value $\tilde a \in A$ the value $\tilde z \in Z$ of the exact result of the algebraic problem for $\tilde a$. We consider intrinsic data as a fixed part of the specification of the algebraic problem and also of the data → result mapping F introduced above. The sets $Z_\delta := \{z := F(a),\ a \in N_\delta(\bar a, e)\} \subset Z$ are called (δ-)pseudoresult sets of the empirical problem. In a situation described by the data → result mapping F, the values in a δ-pseudoresult set $Z_\delta$ with δ = O(1) are valid approximate³ results of the empirical algebraic problem. So far, we have tacitly assumed some basic regularity of the algebraic problem: We expect that the domain of the data → result mapping F is open and sufficiently large in the data space A, so that results exist for all $\tilde a$ in a neighborhood $N_\delta(\bar a, e)$, δ < some $\bar\delta$. We also expect that the family $\{Z_\delta\}$ of pseudoresult sets consists of compact connected sets in the natural topology of the result space Z and that a full neighborhood of an approximate result value $\tilde z \in Z_\delta$ also consists of approximate results. These are natural assumptions for a problem whose results are to be determined by approximate computations. An empirical algebraic problem whose data → result mapping F meets these assumptions is called well-posed; otherwise it is called ill-posed. There are various nontrivial ways in which an algebraic problem may be ill-posed: (1) A (proper) result of the algebraic problem is only defined for data on a manifold S of a dimension < M in A.
In this case, if the intersection of S with $N_\delta(\bar a, e)$ is empty for δ = O(1), the family of pseudoresult sets is empty and no valid representation of the algebraic problem has an exact solution. On the other hand, if the domain of F on the manifold S has a nonempty intersection with neighborhoods $N_\delta(\bar a, e)$ for δ ≥ δ₀, δ₀ = O(1), we may restrict F to this component and possibly arrive at a well-posed empirical problem. Note that the exact problem with the specified data ā has no solution if δ₀ > 0!

³ Here, the word "approximate" serves only to emphasize that any δ-pseudoresult – while being the exact result for some $\tilde a \in N_\delta(\bar a, e)$ in the classical sense – can only be understood as an approximate result for the problem at hand.
Example. Consider two real univariate empirical polynomials $(\bar p_i, e_i)$, i = 1, 2, whose $\bar p_i$ have disjoint simple zeros near some ζ₀ ≠ 0 while their remaining zeros are sufficiently distinct, so that gcd($\bar p_1$, $\bar p_2$) = 1. In the data space A of all the empirical coefficients of the two polynomials, let S be the manifold where gcd($\tilde p_1$, $\tilde p_2$) has positive degree, and assume that it intersects with $N_1((\bar p_1, \bar p_2), (e_1, e_2))$. Then the problem of determining a nontrivial pseudo-gcd of the two empirical polynomials is regular and has valid approximate solutions.

(2) The dimension of the image $F(\tilde A_{\bar\delta})$ in the result space Z is lower than dim Z. This implies that, in each arbitrarily small neighborhood of a valid approximate result $\tilde z$, there are infinitely many values z which cannot be interpreted as exact results of the algebraic problem for whatever values of the empirical data. Thus, the perturbations induced by numerical computation will generally prevent a computed result from being an element of a pseudoresult set, although it may be very close to a valid approximate result.

Example. Consider as numerical results the coefficients of a Gröbner basis with more elements than variables. The Gröbner basis elements can be consistent only if they satisfy certain syzygies which restrict the result coefficients to a submanifold of the result space Z. A numerically computed approximation will generally not satisfy the syzygies.

Ill-posed problems abound in polynomial algebra. Thus, their numerical treatment for the case of empirical data is of particular interest; but this is not the subject of this introductory lecture. The explicit determination of pseudoresult sets of empirical algebraic problems, even to a low degree of relative accuracy, requires a very high computational effort in all but trivial situations. If the results have several complex components, even a representation of $Z_\delta$ appears infeasible.
It is all the more important that we are able to check and verify whether some $\tilde z \in Z$, from whatever source, is a valid approximate solution. This is the case if there exist data $\tilde a$ in some neighborhood $N_\delta(\bar a, e)$ with δ = O(1) such that $\tilde z$ is the exact result of our problem with data $\tilde a$. In the verification of this condition, the set of all data $\tilde a \in A$ for which this condition holds plays an important role. Thus, for an empirical problem with data → result function F: A → Z, and for a given result value $\tilde z \in Z$, we define the equivalent-data set by

$$M(\tilde z) := \{\,\tilde a \in A : F(\tilde a) = \tilde z\,\}.$$

Generally, the equivalent-data set is an algebraic manifold in the space A of the empirical data. Moreover, $M(\tilde z)$ is often a linear manifold in A; this is the case if the empirical data are coefficients of polynomials which occur in the problem in a linear fashion: polynomials are linear in their coefficients! The verification task

Given $\tilde z \in Z$:  $\exists\, \tilde a \in N_\delta(\bar a, e)$: $F(\tilde a) = \tilde z$?

may be reduced to the following two steps:
(i) Determine the equivalent-data manifold $M(\tilde z)$;
(ii) check whether $M(\tilde z)$ has a nonempty intersection with $N_\delta(\bar a, e)$.

Since the neighborhoods $N_\delta(\bar a, e)$ are defined by the metric (5), step (ii) is equivalent to:

(ii′) Find the shortest distance $\delta(\tilde z)$ of $M(\tilde z)$ from ā in the metric (5).

In this situation, we define

(7)  $$\delta(\tilde z) := \min_{a \in M(\tilde z)} \|a - \bar a\|_e^*$$

as the backward error of the approximate result $\tilde z$; cf. (5) for the definition of $\|\cdot\|_e^*$. The shortest distance from $M(\tilde z)$ to ā is well-defined because $M(\tilde z) \cap N_{\bar\delta}(\bar a, e)$ is a compact set. For a linear manifold of codimension 1 (a "hyperplane"), there are explicit expressions for the shortest norm distance from ā as well as for the point where it is attained:

Proposition. Consider the linear manifold M(c) of codimension 1 in $\mathbb{C}^M$ specified by the linear equation $\sum_{j=1}^M \gamma_j(\alpha_j - \bar\alpha_j) = \gamma_0$. The shortest max-norm distance of M(c) from ā is

$$\min_{a \in M(c)} \max_j |\alpha_j - \bar\alpha_j| = \frac{|\gamma_0|}{\sum_{j=1}^M |\gamma_j|},$$

attained for

$$a_{\min} = \bar a + \frac{\gamma_0}{\sum_{j=1}^M |\gamma_j|}\Big(\ldots, \frac{\gamma_j^*}{|\gamma_j|}, \ldots\Big)^T,$$
where γ* denotes the complex-conjugate value.

Proof. From

$$|\gamma_0| = \Big|\sum_j \gamma_j(\alpha_j - \bar\alpha_j)\Big| \le \sum_j |\gamma_j|\,|\alpha_j - \bar\alpha_j| \le \Big(\sum_j |\gamma_j|\Big)\max_j |\alpha_j - \bar\alpha_j|,$$

we have $\max_j |\alpha_j - \bar\alpha_j| \ge |\gamma_0| / \sum_j |\gamma_j|$. It is easily confirmed that equality is attained for $a_{\min}$ from above. □
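The proposition can be smoke-tested numerically; the random data below are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 6
gamma = rng.normal(size=M) + 1j * rng.normal(size=M)   # gamma_1 .. gamma_M
gamma0 = rng.normal() + 1j * rng.normal()
a_bar = rng.normal(size=M) + 1j * rng.normal(size=M)

# Claimed minimal max-norm distance and the point where it is attained.
s = np.sum(np.abs(gamma))
dist = abs(gamma0) / s
a_min = a_bar + (gamma0 / s) * (np.conj(gamma) / np.abs(gamma))

# a_min lies on the manifold M(c): sum_j gamma_j (alpha_j - bar alpha_j) = gamma0 ...
assert np.isclose(np.sum(gamma * (a_min - a_bar)), gamma0)
# ... and attains the claimed distance in the max-norm.
assert np.isclose(np.max(np.abs(a_min - a_bar)), dist)

# Any other point on M(c) is at least as far away, e.g. the minimum
# 2-norm solution of the constraint:
delta2 = gamma0 * np.conj(gamma) / np.sum(np.abs(gamma) ** 2)
assert np.isclose(np.sum(gamma * delta2), gamma0)
assert np.max(np.abs(delta2)) >= dist - 1e-12
```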
Example. For an approximate zero $\tilde z \in \mathbb{C}^s$, s ≥ 1, of an empirical polynomial $(\bar p, e)$ with empirical support $\tilde J$, $|\tilde J| = M$, the equivalent-data manifold is given by

$$M(\tilde z) = \Big\{\, a \in A : \sum_{j \in \tilde J} (\alpha_j - \bar\alpha_j)\,\tilde z^{\,j} + \bar p(\tilde z) = 0 \,\Big\}.$$

Thus, by the proposition above, the backward error of $\tilde z$ with the weighted norm (5) is

$$\delta(\tilde z) := \min_{a \in M(\tilde z)} \|a - \bar a\|_e^* = \frac{|\bar p(\tilde z)|}{\sum_{j \in \tilde J} \varepsilon_j\,|\tilde z^{\,j}|}.$$
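As an illustration of this backward-error formula, a small sketch for a univariate empirical polynomial; the polynomial p̄(x) = x² − 2 and the tolerances are hypothetical example data, not taken from the text:

```python
import numpy as np

def backward_error(coeffs_bar, eps, z):
    """delta(z) = |pbar(z)| / sum_j eps_j |z^j| for a univariate empirical
    polynomial pbar with coefficients coeffs_bar (ascending powers) and
    tolerances eps on the empirical coefficients."""
    j = np.arange(len(coeffs_bar))
    residual = abs(np.polyval(coeffs_bar[::-1], z))   # |pbar(z)|
    return residual / np.sum(eps * np.abs(z) ** j)

# Hypothetical data: pbar(x) = x^2 - 2, all coefficients empirical with
# tolerance 1e-4. Then z = 1.41421 is a valid approximate zero (small delta),
# while z = 1.40 is clearly invalid (large delta).
coeffs = np.array([-2.0, 0.0, 1.0])
eps = np.array([1e-4, 1e-4, 1e-4])
assert backward_error(coeffs, eps, 1.41421) < 1.0
assert backward_error(coeffs, eps, 1.40) > 50.0
```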
This result – which may be used for polynomials in any number of variables – permits a fast verification of the validity of an approximate zero of an empirical polynomial. It is true that the evaluation of p̄(z̃) is an ill-conditioned task; but a residual of the correct order of magnitude is sufficient for our purpose. If the set M(z̃) is a linear manifold of a codimension >1 or a nonlinear algebraic manifold, the determination of the backward error is not so simple; but there exist well-known numerical algorithms for solving the associated minimization problems (7). Also, we may generally assume that the equivalent-data manifold of a reasonably computed approximate result passes close by ā, and restrict our search to a domain about that point. Moreover, we do not need the exact minimal value: if we have located some point on M(z̃) with a norm distance O(1) from ā, we are finished because z̃ is a valid approximate result.

This well-known backward error approach (cf., e.g., [2]) can also be extended to provide answers for existence questions in the case of ill-posed algebraic problems, like gcd's, multivariate factors, etc. For more details, see, e.g., [8,5,9,10]. This shows that numerical methods may be used in areas of computational algebra where this has been considered infeasible for a long time.

REFERENCES

[1] Bini D.A., Fiorentino G. – Design, analysis, and implementation of a multiprecision polynomial rootfinder, Numer. Algorithms 23 (2000) 127–173.
[2] Chaitin-Chatelin F., Frayssé V. – Lectures on Finite Precision Computations, Software – Environments – Tools, SIAM, Philadelphia, 1996.
[3] Cox D., Little J., O'Shea D. – Using Algebraic Geometry, Springer Grad. Texts in Math., vol. 185, 1998.
[4] Mourrain B. – An introduction to algebraic methods for solving polynomial equations, in: Hanzon B., Hazewinkel M. (Eds.), Constructive Algebra and Systems Theory, Royal Netherlands Academy of Arts and Sciences, 2006, this volume.
[5] Huang Y., Stetter H.J., Wu W., Zhi L. – Pseudofactors of multivariate polynomials, in: Traverso C. (Ed.), Proceed. ISSAC 2000, ACM Press, 2000, pp. 161–168. [6] Stetter H.J. – Eigenproblems are at the heart of polynomial systems solving, SIGSAM Bull. 30 (4) (1996) 22–25. [7] Stetter H.J. – Analysis of zero clusters in multivariate polynomial systems, in: Lakshman Y.N. (Ed.), Proceed. ISSAC’96, pp. 127–136. [8] Stetter H.J. – Polynomials with coefficients of limited accuracy, in: Ganzha V.G., Mayr E.W., Vorozhtsov E.V. (Eds.), Computer Algebra in Scientific Computing, Springer, 1999, pp. 409–430. [9] Stetter H.J. – Condition analysis of overdetermined polynomial systems, in: Ganzha V.G., Mayr E.W., Vorozhtsov E.V. (Eds.), Computer Algebra in Scientific Computing – CASC 2000, Springer, 2000, pp. 345–366. [10] Stetter H.J. – Numerical Polynomial Algebra, SIAM Publ., 2004, xvi + 472 pp.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
An introduction to algebraic methods for solving polynomial equations
B. Mourrain
INRIA, GALAAD, B.P. 93, 06902 Sophia Antipolis, France
ABSTRACT

This chapter gives an introductory presentation of algebraic methods for solving a polynomial system f1 = · · · = fm = 0. Such methods are based on the study of the quotient algebra A of the polynomial ring modulo the ideal I = (f1, . . . , fm). We show how to deduce the geometry of the solutions from the structure of A and, in particular, how solving polynomial equations reduces to eigencomputations of multiplication operators. We mention briefly two general methods for computing the normal form of elements in A, used to obtain a representation of the multiplication operators. A major operation in effective algebraic geometry is projection, which is closely related to the theory of resultants. We present different notions and constructions of resultants and different methods for solving systems of polynomial equations, based on these formulations. Finally, we describe iterative methods, which can be applied to select a root (among the other roots) which maximises or minimises some criterion, or to count or isolate the roots in a given domain. These methods exploit the algebraic properties of the quotient algebra A. These developments are illustrated by explicit computations in MAPLE.
1. INTRODUCTION

Polynomial system solving is ubiquitous in many applications such as robotics, computer vision, signal processing, . . . . Specific methods like minimisation, Newton iterations, . . . are often used, but not always with guarantees on the result. In this chapter, we give an introductory presentation of algebraic methods for solving a polynomial system f1 = · · · = fm = 0. By a reformulation of the problem in terms of matrix manipulations, we obtain a better control of the structure and the accuracy of our computations. The tools that we introduce are illustrated by explicit computations. A MAPLE package implements the algorithms described hereafter and is publicly available at http://www.inria.fr/galaad/logiciels/
multires.html. We encourage the reader to experiment with this package. Our approach is based on the study of the quotient algebra A of the polynomial ring by the ideal I = (f1, . . . , fm). We show in a first part how to deduce the geometry of the solutions from the structure of A. In particular, we recall how solving polynomial equations reduces to the computation of the eigenvalues or eigenvectors of the operators of multiplication in A. In the case of real coefficients, we also recall how to recover information on the real roots from this structure. In the next part, we briefly describe a general method (known as Gröbner basis computation) for computing the normal form of elements in A, which yields the algebraic structure of this quotient. We also mention a recent generalisation of this approach, which allows one to combine symbolic and numeric computations more safely. Another major operation in effective algebraic geometry is projection. It is related to the theory of resultants, which we briefly describe. We present different notions and constructions of resultants and different methods for solving a system of polynomial equations, based on these formulations. In practice, according to the class of systems that we want to solve, we have to choose the resultant construction adapted to the geometry of the problem. In the last section, we describe iterative methods, which can be applied to select a root which maximises or minimises some criterion, or to count or isolate the roots in a given domain. Here also, these methods rely heavily on the algebraic properties of the quotient A.
2. NOTATIONS

2.1. The equations and solutions

Let R = K[x1, . . . , xn] = K[x] be the algebra of polynomials in the variables x = (x1, . . . , xn) over the field K. Let f1, . . . , fm ∈ R be m polynomials. Our objective is to solve the system f1 = 0, . . . , fm = 0. Let I be the ideal generated by these polynomials in the ring R. Let Z_K̄(I) be the set of solutions (with coordinates in the algebraic closure K̄ of K) of the system of polynomial equations f1 = 0, . . . , fm = 0:

    Z_K̄(I) = {ζ ∈ K̄^n ; f1(ζ) = · · · = fm(ζ) = 0}.

We will also denote this variety by Z(I). We will assume hereafter that Z(I) is finite (the system of equations f = 0 has a finite number of solutions) or, equivalently [25], that the variety Z(I) is of dimension 0. Our algebraic approach for solving the polynomial system f1 = · · · = fm = 0 (also denoted f = 0) is based on the study of the quotient ring A that we are going to define now.

2.2. The quotient algebra

We denote by A = R/I the quotient algebra of R by I, that is, the set of classes of polynomials in R modulo the ideal I. The class of an element p ∈ R is denoted by p̄ ∈ A. Equality in A is denoted by ≡ and we have a ≡ a′ iff a − a′ ∈ I.
The hypothesis that Z(I) is finite implies that the K-vector space A is of finite dimension (say D) over K [25,31]. As we will see, we will transform the resolution of the nonlinear system f = 0 into linear algebra problems in the vector space A, exploiting its algebraic structure. Let us start with an example of computation in the quotient ring A.

Example 2.1. Let I be the ideal of R = K[x1, x2] generated by

> f1 := 13*x[1]^2+8*x[1]*x[2]+4*x[2]^2-8*x[1]-8*x[2]+2:
> f2 := x[1]^2+x[1]*x[2]-x[1]-1/6:
The quotient ring A = K[x1, x2]/I is a vector space of dimension 4. A basis of A is 1, x1, x2, x1x2. We check that we have

> expand(x[1]*x[1] - f2);

    x1² ≡ −x1x2 + x1 + 1/6.

> expand(x[1]*(x[1]*x[2]) + 1/9*x[1]*f1 - (5/9+13/9*x[1]+4/9*x[2])*f2);

    x1²x2 ≡ −x1x2 + (55/54)x1 + (2/27)x2 + 5/54.
More generally, any polynomial in K[x1, x2] can be reduced, modulo the polynomials f1, f2, to a linear combination of the monomials 1, x1, x2, x1x2, which, as we will see, form a basis of A. Hereafter, (x^α)_{α∈E} = x^E will denote a monomial basis of A. Any polynomial can be reduced, modulo the polynomials f1, . . . , fm, to a linear combination of the monomials of the basis x^E of A.

2.3. The dual

An important ingredient of our methods is the dual space R̂, that is, the space of linear forms Λ : R → K. The evaluation at a fixed point ζ is a well-known example of such a linear form: 1_ζ : R → K is such that ∀p ∈ R, 1_ζ(p) = p(ζ). Another class of linear forms is obtained by using differential operators. Namely, for any α = (α1, . . . , αn) ∈ N^n, consider the map
(1)    d^α : R → K,  p ↦ (1 / ∏_{i=1}^n αi!) (d_{x1})^{α1} · · · (d_{xn})^{αn}(p)(0),
where d_{xi} is the derivative with respect to the variable xi. For a moment, we assume that K is of characteristic 0. We denote this linear form d^α = (d1)^{α1} · · · (dn)^{αn} and for any (α1, . . . , αn) ∈ N^n, (β1, . . . , βn) ∈ N^n we observe that

    d^α( x1^{β1} · · · xn^{βn} ) = 1 if ∀i, αi = βi, and 0 otherwise.

It immediately follows that (d^α)_{α∈N^n} is the dual basis of the primal monomial basis (x^α)_{α∈N^n}. Notice that (d^α)_{α∈N^n} can be defined even in nonzero characteristic. Hereafter, we will assume again that K is a field of arbitrary characteristic. By applying Taylor's expansion formula at 0, we decompose any linear form Λ ∈ R̂ as Λ = Σ_{α∈N^n} Λ(x^α) d^α. The map Λ ↦ Σ_{α∈N^n} Λ(x^α) d^α defines a one-to-one correspondence between the set of linear forms Λ and the set K[[d1, . . . , dn]] = K[[d]] = {Σ_{α∈N^n} λ_α d1^{α1} · · · dn^{αn}} of formal power series (f.p.s.) in the variables d1, . . . , dn. Hereafter, we will identify R̂ with K[[d1, . . . , dn]]. The evaluation at 0 corresponds to the constant 1 under this identification. It will also be denoted 1_0 = d^0.

Example 2.2. The following computation gives the value of the linear form 1 + d1 + d1d2 + d3² on the polynomial 1 + x1 + x1x2:

> apply((1+d[1]+d[1]*d[2]+d[3]^2),(1+x[1]+x[1]*x[2]));

    3
Let us next examine the structure of the dual space. We can multiply a linear form by a polynomial (R̂ is an R-module) as follows. For any p ∈ R and Λ ∈ R̂, we define p · Λ as the map p · Λ : R → K such that ∀q ∈ R, p · Λ(q) = Λ(p q). For any pair of elements p ∈ R and αi ∈ N, αi ≥ 1, we check that we have d_i^{αi}(xi p)(0) = d_i^{αi−1}(p)(0). Consequently, for any pair of elements p ∈ R, α = (α1, . . . , αn) ∈ N^n, where αi ≠ 0 for a fixed i, we obtain that

    xi · d^α(p) = d^α(xi p) = d1^{α1} · · · d_{i−1}^{α_{i−1}} d_i^{αi−1} d_{i+1}^{α_{i+1}} · · · dn^{αn}(p),

that is, xi acts as the inverse of di in K[[d]]. This is the reason why in the literature such a representation is referred to as the inverse system (see, for instance, [60]). If αi = 0, then xi · d^α(p) = 0, which allows us to redefine the product p · Λ as follows:

Proposition 2.3. For any p ∈ R and any Λ(d) ∈ K[[d]], we have

    p · Λ = π₊( p(d⁻¹) Λ(d) ),

where π₊ is the projection on the vector space generated by the monomials with positive exponents. See also [66,37].
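Proposition 2.3 is easy to implement with sparse exponent dictionaries. The following sketch (in Python rather than the chapter's MAPLE) multiplies Λ = 1 + d1 + d1d2 + d3² by p = 1 + x1 + x1x2, reproducing the computation of Example 2.2 and of the example below:

```python
# p . Lambda = pi+( p(d^{-1}) Lambda(d) ): shift each series term of Lambda
# down by each monomial of p and keep only terms with nonnegative exponents.
def contract(p, lam):
    out = {}
    for pe, pc in p.items():
        for le, lc in lam.items():
            e = tuple(l - q for q, l in zip(pe, le))
            if min(e) >= 0:                      # pi+ : drop negative exponents
                out[e] = out.get(e, 0) + pc * lc
    return {e: c for e, c in out.items() if c != 0}

# p = 1 + x1 + x1*x2, Lambda = 1 + d1 + d1*d2 + d3^2 (exponent tuples, 3 vars)
p = {(0, 0, 0): 1, (1, 0, 0): 1, (1, 1, 0): 1}
lam = {(0, 0, 0): 1, (1, 0, 0): 1, (1, 1, 0): 1, (0, 0, 2): 1}

res = contract(p, lam)
print(res)   # series 3 + d1 + d2 + d1*d2 + d3^2; its constant term 3
             # is the value Lambda(p), as in Example 2.2
```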
Example 2.1 (continued).

> (1+x[1]+x[1]*x[2]) &. (1+d[1]+d[1]*d[2]+d[3]^2);

    3 + d1 + d1d2 + d3² + d2

We check that the constant term of this expansion is the value of the linear form 1 + d1 + d1d2 + d3² at the polynomial 1 + x1 + x1x2. Now, let us denote by Â the dual space of A. A linear form on A can be identified with a linear form on R which vanishes on I. Conversely, any linear form of R̂ which vanishes on I defines an element of Â. Thus we will identify Â and I⊥, the set of elements of R̂ that vanish on I. The evaluation 1_ζ ∈ Â iff ζ ∈ Z(I). Another interesting linear form in Â is the trace, denoted hereafter by Tr. We will see its definition in Section 3.1 and its use in Section 3.3. Given a linear form Λ ∈ Â, we associate to it a quadratic form as follows:

Definition 2.4. For any Λ ∈ Â, let

    Q_Λ : A × A → K,  (a, b) ↦ Λ(a b).
Example 2.1 (continued). Let Λ be the linear form

    Λ = 2 × 1_{(−1/3,5/6)} + 2 × 1_{(1/3,7/6)}.

We check that (−1/3, 5/6) and (1/3, 7/6) are in Z(f1, f2). The matrix of Q_Λ in the basis {1, x1, x2, x1x2} of A is

            | Λ(1)      Λ(x1)      Λ(x2)      Λ(x1x2)   |       | 4     0     4     2/9   |
    [Q_Λ] = | Λ(x1)     Λ(x1²)     Λ(x1x2)    Λ(x1²x2)  |   =   | 0     4/9   2/9   4/9   |
            | Λ(x2)     Λ(x1x2)    Λ(x2²)     Λ(x1x2²)  |       | 4     2/9   37/9  4/9   |
            | Λ(x1x2)   Λ(x1²x2)   Λ(x1x2²)   Λ(x1²x2²) |       | 2/9   4/9   4/9   37/81 |
Hereafter we will see how to use the signature of Q_Tr (and more generally of Q_{h·Tr} for any h ∈ R) in order to get information on the real roots in the case of the real field K = R.

3. THE GEOMETRY OF THE SOLUTIONS FROM THE STRUCTURE OF A

In this section, we see how to recover the solutions from the structure of A.
3.1. The multiplication operators

The first operator that comes naturally in the study of A is the operator of multiplication by an element a ∈ A. For any element a ∈ A, we define the map

    M_a : A → A,  b ↦ a b.

We will also consider the transposed operator

    M_a^t : Â → Â,  Λ ↦ M_a^t(Λ) = Λ ∘ M_a.

The matrix associated to this operator in the dual basis of a basis of A is the transpose of the matrix of M_a in this basis.

Example 3.1. Let us compute the matrix of multiplication by x1 in the basis (1, x1, x2, x1x2) of A = K[x1, x2]/(f1, f2), where f1, f2 are the polynomials of Example 2.1. We multiply these monomials by x1 and reduce them to a normal form. According to the computations of Example 2.1, we have:

    1 × x1 ≡ x1,   x1 × x1 ≡ −x1x2 + x1 + 1/6,   x2 × x1 ≡ x1x2,
    x1x2 × x1 ≡ −x1x2 + (55/54)x1 + (2/27)x2 + 5/54.

> M1 := matrixof([x[1], x[1]*x[1] - f2, x[1]*x[2],
>                 x[1]*(x[1]*x[2]) + 1/9*x[1]*f1
>                 - (5/9+13/9*x[1]+4/9*x[2])*f2],
>                [[1,x[1],x[2],x[1]*x[2]]]);

           | 0   1/6    0    5/54  |
    M1 =   | 1    1     0   55/54  |
           | 0    0     0    2/27  |
           | 0   −1     1    −1    |
The multiplication map M_a can be computed when a normal form algorithm is available. This can be performed, for instance, by Gröbner basis computations (see Section 4.1 and its generalisation in Section 4.2). In Section 5, we will describe another way to compute the multiplication maps implicitly, based on resultant matrix computations. Our matrix approach is based on the following fundamental theorem (see [5,63,81]):

Theorem 3.2. Assume that Z_{K̄^n}(I) = {ζ1, . . . , ζd}.
(1) The eigenvalues of the linear operator M_a (resp. M_a^t) are {a(ζ1), . . . , a(ζd)}.
(2) The common eigenvectors of (M_a^t)_{a∈A} are (up to a scalar) 1_{ζ1}, . . . , 1_{ζd}.

Notice that if (x^α)_{α∈E} is a monomial basis of A, then the coordinates of the evaluation 1_{ζi} in the dual basis of (x^α)_{α∈E} are (ζi^α)_{α∈E}, where ζ^α = 1_ζ(x^α). Thus, if the basis (x^α)_{α∈E} contains 1, x1, . . . , xn (which is often the case), the coordinates [v_α]_{α∈E} (in the dual basis) of the eigenvectors of M_a^t yield all the coordinates of the root: ζ = [v_{x1}/v_1, . . . , v_{xn}/v_1]. It leads to the following algorithm:

Algorithm 3.3 (Solving in the case of simple roots). Let a ∈ R and M_a be the matrix of multiplication in a basis x^E = (1, x1, . . . , xn, . . .) of A.
1. Compute the eigenvectors Λ = [Λ_1, Λ_{x1}, . . . , Λ_{xn}, . . .] of M_a^t.
2. For each eigenvector Λ with Λ_1 ≠ 0, compute and output ζ = (Λ_{x1}/Λ_1, . . . , Λ_{xn}/Λ_1).

The set of output points ζ contains the set of simple roots of Z(I), since for such roots the eigenspace is one-dimensional. But as we will see on the next example, it can also yield in some cases¹ the multiple roots:

Example 2.1 (continued). We compute the eigenvalues, their multiplicity, and the corresponding normalised eigenvector of the transpose of the matrix of multiplication by x1:

> neigenvects(transpose(M1),1);

    [−1/3, 2, {[1, −1/3, 5/6, −5/18]}],   [1/3, 2, {[1, 1/3, 7/6, 7/18]}].
As the basis chosen for the computation is (1, x1, x2, x1x2), the previous theorem tells us that the solutions of the system can be read off from the 2nd and the 3rd coordinates of the normalised eigenvectors: ζ1 = (−1/3, 5/6) and ζ2 = (1/3, 7/6). Moreover, the 4th coordinate of these vectors is the product of the 2nd by the 3rd coordinate. In order to compute exactly the set of roots, counted with their multiplicity, we exploit the following theorem. It is based on the fact that commuting matrices share common eigenspaces.

Theorem 3.4 ([63,66,24]). There exists a basis of A such that ∀a ∈ R, the matrix M_a is, in this basis, block diagonal,

    M_a = diag(N_1^a, . . . , N_d^a),

where each N_i^a is an upper triangular matrix with the single diagonal value a(ζi):

            | a(ζi)    *    |
    N_i^a = |      .  .     |
            | 0       a(ζi) |

¹ Depending on the type of multiplicity.
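Algorithm 3.3 can be reproduced numerically; the sketch below (Python/numpy standing in for the chapter's MAPLE session) uses the matrix M1 of Example 3.1:

```python
import numpy as np

# Multiplication matrix M_{x1} of Example 3.1 in the basis (1, x1, x2, x1*x2).
M1 = np.array([
    [0, 1/6, 0,  5/54],
    [1, 1,   0, 55/54],
    [0, 0,   0,  2/27],
    [0, -1,  1, -1],
])

# Theorem 3.2(1): the eigenvalues of M_{x1} are the x1-coordinates of the
# roots, here -1/3 and 1/3, each with multiplicity 2.
w = np.linalg.eigvals(M1)
print(np.sort(w.real))               # approximately [-1/3, -1/3, 1/3, 1/3]

# Theorem 3.2(2): the evaluation 1_zeta = (1, z1, z2, z1*z2) at a root is an
# eigenvector of the transposed operator.
v = np.array([1, -1/3, 5/6, -5/18])  # evaluation at zeta1 = (-1/3, 5/6)
print(np.allclose(M1.T @ v, -v/3))   # True
```

Because the two roots are double roots, the computed eigenvalues are only accurate to about the square root of machine precision, which is the typical behaviour for defective eigenvalues.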
Here again, it leads to an algorithm:

Algorithm 3.5 (Solving by simultaneous triangulation).
INPUT: The matrices of multiplication M_{xi} (i = 1, . . . , n) in a basis of A.
1. Compute a (Schur) decomposition P such that all matrices T_{xi} = P M_{xi} P⁻¹ (i = 1, . . . , n) are upper-triangular.
2. Compute and output the points ζi = (t_{i,i}^1, . . . , t_{i,i}^n) formed by the diagonal entries of the triangular matrices T_{xj} = (t_{i,k}^j).

The first step is performed by computing an ordered Schur decomposition of M_l (where l is a generic linear form), which yields a matrix P of change of basis. Next, we compute the matrices T_{xi} = P M_{xi} P⁻¹ (i = 1, . . . , n), which are triangular since they commute with M_l. The decomposition of the multiplication operators in Theorem 3.4 is in fact induced by a decomposition of the algebra

    A = A_1 ⊕ · · · ⊕ A_d,

where A_i is the local algebra associated with the root ζi. More precisely, there exist elements e_1, . . . , e_d ∈ A such that, for i, j = 1, . . . , d,

    e_i² ≡ e_i,   e_i e_j ≡ 0 (i ≠ j),   e_1 + · · · + e_d ≡ 1.
These polynomials, which generalise the univariate Lagrange polynomials, are called the fundamental idempotents of A. They are such that A_i = e_i A and e_i(ζj) = 1 if i = j and 0 otherwise. The dimension of the K-vector space A_i is the multiplicity µ_{ζi} of ζi. See [84,63,31].

3.2. The Chow form and the rational representation of the roots

In some problems, it is important to have an exact representation of the roots. As the coordinates of these roots are algebraic numbers, we will represent them in terms of the roots of a univariate polynomial. More precisely, they will be the image of such roots by a rational map. The aim of the following developments is to show how to construct such a representation explicitly.

Definition 3.6. The Chow form of A is the homogeneous polynomial in u = (u0, . . . , un) of degree D defined by:

    C_I(u) = det(u0 + u1 M_{x1} + · · · + un M_{xn}).
According to Theorem 3.4, we have
Theorem 3.7. The Chow form of A is

    C_I(u) = ∏_{ζ∈Z(I)} (u0 + u1 ζ1 + · · · + un ζn)^{µ_ζ}.
Example 5.1 (continued). We compute the Chow form of the ideal I = (f1, f2), using the matrices of multiplication by x1 and x2 computed previously.

> factor(det(u[0]+ u[1]*M1+ u[2]*M2));

    (u0 − (1/3)u1 + (5/6)u2)² (u0 + (1/3)u1 + (7/6)u2)².

We check that it is a product of linear forms, whose coefficients yield the roots ζ1 = (−1/3, 5/6) and ζ2 = (1/3, 7/6). The exponents yield the multiplicity of the roots (here 2). As here the roots are rational, we can easily factorise this polynomial as a product of linear forms. But usually, this factorisation is possible only on an algebraic extension of the coefficient field. From this Chow form, it is possible to deduce a rational representation of the points of Z(I):

Theorem 3.8. Let Δ(u) be a multiple of the Chow form C(u) and let d(u) = Δ(u)/gcd(Δ(u), ∂Δ/∂u0(u)) be its square-free part. Then for a generic vector t ∈ K^{n+1} we have

    d(t + u) = d0(u0) + u1 d1(u0) + · · · + un dn(u0) + R(u),

where di(u0) ∈ K[u0], R(u) ∈ (u1, . . . , un)², gcd(d0(u0), d0′(u0)) = 1 and for all ζ = (ζ1, . . . , ζn) ∈ Z0,

    ζi = di(ζ0) / d0′(ζ0),   i = 1, . . . , n,

for some root ζ0 of d0(u0). See [73,3,75,32,57]. This result describes the coordinates of the points of Z0 as the image by a rational map of some roots of d0(u0). It does not imply that any root of d0(u0) yields a point in Z0, so that this representation may be redundant. However, the redundant factors can be removed by substituting the rational representation back into the equations f1, . . . , fn. It leads to the following algorithm:

Algorithm 3.9 (Univariate Rational Representation).
INPUT: a multiple Δ(u) of the Chow form of I ⊂ R.
1. Compute the square-free part d(u) of Δ(u).
2. Choose a generic t ∈ K^{n+1} and compute the first terms of d(t + u) = d0(u0) + u1 d1(u0) + · · · + un dn(u0) + · · · .
3. Compute the redundant rational representation ζ1 = d1(u0)/d0′(u0), . . . , ζn = dn(u0)/d0′(u0), d0(u0) = 0.
4. Factorise d0(u0), keep the good prime factors, and output the corresponding simplified rational univariate representations of the roots Z(I).
In the last step, for each prime factor of d0(u0), we compute the corresponding remainders ri and r0, respectively of di and d0′, and check whether fi(r1/r0(u0), . . . , rn/r0(u0)) vanishes for i = 1, . . . , m.

Example 3.10. From the Chow form of the last example, we deduce:

    ξ1 = − 1/(6(1 + u0)),   ξ2 = (11 + 12u0)/(12(1 + u0)),   (u0 + 3/2)(u0 + 1/2) = 0,

which reduces to the constant representations

    u0 = −3/2:  x1 = 1/3,   x2 = 7/6;
    u0 = −1/2:  x1 = −1/3,  x2 = 5/6.
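The factorisation of the Chow form can be checked numerically: det(u0 + u1 M_{x1} + u2 M_{x2}) must agree with the product of Theorem 3.7 at every point u. The sketch below uses M1 from Example 3.1; the matrix M2 of multiplication by x2 is not printed in this section, so the one used here was derived by reducing x2·(1, x1, x2, x1x2) modulo f1, f2 in the same way (an assumption to keep the sketch self-contained):

```python
import numpy as np

M1 = np.array([[0, 1/6, 0, 5/54], [1, 1, 0, 55/54],
               [0, 0, 0, 2/27],   [0, -1, 1, -1]])
# Multiplication by x2, derived from x2^2 = (5/4)x1x2 - (5/4)x1 + 2x2 - 25/24
# and x1x2^2 = 2x1x2 - (55/54)x1 + (5/54)x2 - 5/54 (not printed in the text).
M2 = np.array([[0, 0, -25/24, -5/54], [0, 0, -5/4, -55/54],
               [1, 0, 2, 5/54],       [0, 1, 5/4, 2]])

def chow(u0, u1, u2):
    return np.linalg.det(u0 * np.eye(4) + u1 * M1 + u2 * M2)

def factored(u0, u1, u2):   # (u0 - u1/3 + 5u2/6)^2 (u0 + u1/3 + 7u2/6)^2
    return (u0 - u1/3 + 5*u2/6)**2 * (u0 + u1/3 + 7*u2/6)**2

for u in [(1, 0, 0), (1, 1, 1), (2, -1, 3), (0.5, 0.7, -0.2)]:
    print(abs(chow(*u) - factored(*u)) < 1e-9)   # True for every u
```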
3.3. Real roots and radical

Let us suppose now that the input polynomials have real coefficients: fi ∈ R[x], i = 1, . . . , m. A natural question, which arises in many practical problems, is: how many real solutions does this polynomial system have? In order to answer it, we will use the properties of the following linear form:

Definition 3.11. The linear form Tr is defined over K by

    Tr : R → K,  a ↦ trace(M_ā),

where trace(M_ā) is the usual trace of the linear operator M_ā. According to Theorem 3.4, we also have

    ∀a ∈ A,  Tr(a) = Σ_{ζ∈Z(I)} µ_ζ a(ζ),

where µ_ζ is the multiplicity of ζ.
Example 3.12.

                   | 0   1/6    0    5/54  |
    Tr(x1) = trace | 1    1     0   55/54  |  = 0.
                   | 0    0     0    2/27  |
                   | 0   −1     1    −1    |
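Since Tr(a b) = trace(M_a M_b), the values of Tr on all monomials can be read off from products of multiplication matrices; a numpy sketch (M2, the matrix of multiplication by x2, is derived as for M1 and is not printed in the text):

```python
import numpy as np

M1 = np.array([[0, 1/6, 0, 5/54], [1, 1, 0, 55/54],
               [0, 0, 0, 2/27],   [0, -1, 1, -1]])
M2 = np.array([[0, 0, -25/24, -5/54], [0, 0, -5/4, -55/54],  # derived, not
               [1, 0, 2, 5/54],       [0, 1, 5/4, 2]])       # printed above

print(np.trace(M1))        # Tr(x1)    = 0
print(np.trace(M2))        # Tr(x2)    = 4
print(np.trace(M1 @ M1))   # Tr(x1^2)  = 4/9
print(np.trace(M1 @ M2))   # Tr(x1*x2) = 2/9
```

These are exactly the values Tr(x^α) = Σ µ_ζ ζ^α = 2 ζ1^α + 2 ζ2^α at the two double roots.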
By Theorem 3.4, this linear form can also be defined by Tr = 2 × 1_{(−1/3,5/6)} + 2 × 1_{(1/3,7/6)}. To this linear form Tr ∈ Â and any element h in A, we associate the quadratic form

    Q_{h·Tr} : (a, b) ↦ Tr(h a b),

with which we analyse the number of real roots.

Theorem 3.13 (Hermite). Let h ∈ R[x]. Then we have
(1) The rank of the quadratic form Q_{h·Tr} is the number of distinct (complex) roots ζ such that h(ζ) ≠ 0.
(2) The signature of Q_{h·Tr} is #{ζ real with h(ζ) > 0} − #{ζ real with h(ζ) < 0}.

See [72,44]. In particular, if h = 1, the rank of Q_1 is the number of distinct roots and its signature is the number of real roots. This allows us to analyse more closely the geometry of the real roots, as is illustrated now.

Example 3.14. By a direct computation, we get Tr(1) = 4, Tr(x1) = 0, Tr(x2) = 4, Tr(x1x2) = 2/9, and we deduce the value of the linear form Tr on the other interesting monomials by using the transposed operators M_{xi}^t as follows:
> T0 := evalm([4,0,4,2/9]):
> T1 := evalm(transpose(M1)&*T0):   T2 := evalm(transpose(M2)&*T0):
> T11 := evalm(transpose(M1)&*T1):  T12 := evalm(transpose(M2)&*T1):
> T112 := evalm(transpose(M2)&*T11):
> Q1 := matrix(4,4,[T0,T1,T2,T12]);
> Qx1 := matrix(4,4,[T1,T11,T12,T112]);

         | 4     0     4     2/9   |           | 0     4/9    2/9    4/9   |
    Q1 = | 0     4/9   2/9   4/9   |,   Qx1 =  | 4/9   0      4/9    2/81  |
         | 4     2/9   37/9  4/9   |           | 2/9   4/9    4/9    37/81 |
         | 2/9   4/9   4/9   37/81 |           | 4/9   2/81   37/81  4/81  |

The rank and the signature of the quadratic forms Q1, Qx1 are
> rank(Q1), signature(Q1), rank(Qx1), signature(Qx1);

    2, [2, 0], 2, [1, 1],
These values tell us (without computing the roots) that there are 2 real roots, one with x1 < 0 and one with x1 > 0.

3.4. The case of a complete intersection

An important class of systems consists of those defined by n equations in n unknowns such that the quotient A is finite-dimensional. They are called zero-dimensional (affine) complete intersections. An important property of these quotients is that the dual Â is described by one element. More precisely we have:

Theorem 3.15. Assume that A = K[x1, . . . , xn]/(f1, . . . , fn) is a finite-dimensional vector space. Then there exists τ_f ∈ Â such that for any linear form Λ ∈ Â, there exists a unique element a ∈ A such that Λ = a · τ_f.

It means that Â is a free A-module of rank 1. This characterises the finite-dimensional Gorenstein algebras. An equivalent statement of the previous theorem is:
Theorem 3.16. The quadratic form Q_{τf} defined in 2.4 is nondegenerate.

See [79,2,53,10,29,32,21] for more details. Let us see how we can compute such a generator of Â explicitly. For that, we introduce the following object:

Definition 3.17. The Bezoutian Θ_{f0,...,fn} of f0, . . . , fn ∈ R is the element of R ⊗_K R defined by

                              | f0(x)   θ1(f0)(x, z)   · · ·   θn(f0)(x, z) |
    Θ_{f0,...,fn}(x, z) := det |  ...        ...         ...        ...      |
                              | fn(x)   θ1(fn)(x, z)   · · ·   θn(fn)(x, z) |

where z = (z1, . . . , zn) and

    θi(fj)(x, z) := ( fj(z1, . . . , z_{i−1}, xi, . . . , xn) − fj(z1, . . . , zi, x_{i+1}, . . . , xn) ) / (xi − zi).

Let Θ_{f0,...,fn}(x, z) = Σ θ_{αβ} x^α z^β, θ_{α,β} ∈ K. The Bezoutian matrix of f0, . . . , fn is the matrix B_{f0,...,fn} = (θ_{αβ})_{α,β}.
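For n = 1 and two polynomials f, g, the definition above reduces to the classical Bezoutian (f(x)g(z) − g(x)f(z))/(x − z), whose coefficient matrix is singular exactly when f and g have a common root; its determinant equals the resultant up to sign and normalisation. A Python sketch:

```python
import numpy as np

def bezout_matrix(f, g):
    """Bezout matrix of (f(x)g(z) - g(x)f(z)) / (x - z).

    f, g: coefficient lists, constant term first, e.g. x^2 - 3x + 2 -> [2, -3, 1].
    """
    n = max(len(f), len(g)) - 1
    f = list(f) + [0.0] * (n + 1 - len(f))
    g = list(g) + [0.0] * (n + 1 - len(g))
    # numerator: N[i][j] is the coefficient of x^i z^j
    N = [[f[i] * g[j] - g[i] * f[j] for j in range(n + 1)] for i in range(n + 1)]
    # exact synthetic division by (x - z): q_{i-1}(z) = N_i(z) + z * q_i(z)
    B = np.zeros((n, n))
    q = [0.0] * (n + 2)
    for i in range(n, 0, -1):
        q = [(N[i][j] if j <= n else 0.0) + (q[j - 1] if j > 0 else 0.0)
             for j in range(n + 2)]
        B[i - 1] = q[:n]
    return B

# (x-1)(x-2) and (x-1)(x-3) share the root 1: the Bezout matrix is singular.
print(np.linalg.det(bezout_matrix([2, -3, 1], [3, -4, 1])))    # 0 (up to rounding)
# (x-1)(x-2) and (x-3)(x-4) have no common root: determinant is -12 here.
print(np.linalg.det(bezout_matrix([2, -3, 1], [12, -7, 1])))   # -12
```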
We associate to the Bezoutian polynomial Θ_{f0,...,fn} the map Θ_{f0,...,fn} : R̂ → R such that for any Λ ∈ R̂,

    Θ_{f0,...,fn}(Λ) = Σ_{α,β} θ_{αβ} Λ(z^β) x^α.

The Bezoutian was initially used by E. Bézout to construct the resultant of two polynomials in one variable [11]. Notice that the matrix of this map is precisely B_{f0,...,fn} in a convenient basis. This object allows us to characterise a generator of Â as follows:

Definition 3.18 (see [79,53,29,6]). Assume that Z(f1, . . . , fn) is finite over the algebraic closure K̄ of K. Then the residue τ_f is the unique linear form Λ ∈ R̂ such that
(1) τ_f(I) = 0,
(2) Θ_{1,f1,...,fn}(τ_f) − 1 ∈ I.

The case of zero-dimensional projective varieties is detailed in [29]. Generalisations of this situation to projective toric varieties have also been studied in [20]. A new algorithm has been proposed in [32] to compute it in the general complete intersection case.

Example 3.19. In order to define the linear form τ_f ∈ Â according to the previous definition, we reduce the polynomial Θ_{1,f} modulo (f1(x), f2(x)) and (f1(z), f2(z)) and we get

> Dl := Theta([1,f1,f2]) + (f1-13*f2) - 5*subs(x[1]=z[1],x[2]=z[2],f2);

    (−3 + 13z1 + 4z2 − 9z1z2) + x2(−4z2 − 4z1 + 4) + x1(13 + 5z1 − 4z2) − 9x1x2
The associated matrix is

> Br := deltaof(Dl, [1,x[1],x[2],x[1]*x[2]], [1,z[1],z[2],z[1]*z[2]]);

        | −3   13    4   −9 |
    B = | 13    5   −4    0 |
        |  4   −4   −4    0 |
        | −9    0    0    0 |
Notice that this matrix is symmetric, of rank 4. In order to define the residue τ_f on A, we need to know the values of τ_f(1), τ_f(x1), τ_f(x2), τ_f(x1x2). By definition, the residue is the linear form such that, applied to the monomials in z
of B, we obtain the polynomial 1 ∈ K[x1, x2]/(f1, f2). In other words, if [τ_f] = [τ_f(1), τ_f(x1), τ_f(x2), τ_f(x1x2)], we have

    B · [τ_f(1), τ_f(x1), τ_f(x2), τ_f(x1x2)]^T = [1, 0, 0, 0]^T.
An easy computation

> linsolve(Br,[1,0,0,0]);

    [0, 0, 0, −1/9]
shows that τ_f(1) = τ_f(x1) = τ_f(x2) = 0 and τ_f(x1x2) = −1/9. This means that τ_f is −1/9 times the dual element of x1x2 in Â. If we assume that we know the first terms of this element as a series in the di (see Example 5.1), we have

> tf := -1/9*DualBs[4];

    −(1/9)d1d2 + (1/9)d1² − (5/36)d2² − (5/12)d2³ − (2/9)d1d2² + (1/9)d1²d2
    − (545/648)d2⁴ − (223/648)d1d2³ + (8/81)d1²d2² − (1/162)d1³d2 + (2/81)d1⁴ + O(d⁵).
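The linear solve of Example 3.19 is a plain 4×4 system; a numpy check:

```python
import numpy as np

# Bezoutian matrix B of Example 3.19 in the bases (1, x1, x2, x1*x2) and
# (1, z1, z2, z1*z2).
B = np.array([[-3, 13,  4, -9],
              [13,  5, -4,  0],
              [ 4, -4, -4,  0],
              [-9,  0,  0,  0]])

tau = np.linalg.solve(B, [1, 0, 0, 0])
print(tau)   # [0, 0, 0, -1/9]: the residue vanishes on 1, x1, x2 and
             # takes the value -1/9 on x1*x2
```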
Using algebraic relations between x1 and f1, . . . , fn (see Section 5.12), we propose in [32] an algorithm to compute the residue which does not involve normalisation in the quotient A. This yields in particular a new way to handle the structure of A. Consider now the dual basis (w_α)_{α∈E} in A of the monomial basis (x^α)_{α∈E}, for the nondegenerate symmetric bilinear form Q_{τf} defined by Q_{τf}(a, b) = τ_f(a b) for any a, b ∈ A:

    Q_{τf}(x^α, w_β) = 1 if α = β, and 0 otherwise.
Then we have the following representation formula, which gives us directly the normal form of an element in terms of the basis (x^α)_{α∈E} or (w_α)_{α∈E} and τ_f:

Proposition 3.20 (projection formula). For any p in R, we have

    p ≡ Σ_{α∈E} τ_f(w_α p) x^α ≡ Σ_{α∈E} τ_f(x^α p) w_α.
We immediately deduce that

    Θ_{1,f} ≡ Σ_{α∈E} x^α w_α(z) ≡ Σ_{α∈E} w_α(x) z^α,

and that the matrix of Θ_{f0,f} in the monomial basis (x^α)_{α∈E} × (z^α)_{α∈E} of A ⊗ A is

    B_{f0,f1,...,fn} := ( τ_f(f0 w_α w_β) )_{α,β∈E},
which explains why it is symmetric (see 3.19). An interesting relation connects the trace and the residue:

Theorem 3.21 ([79,53,29]). Assume that f = (f1, . . . , fn) defines a complete intersection and let J_f ∈ R be its Jacobian. Then Tr = J_f · τ_f.

Example 3.22. The Jacobian of the polynomials f1, f2 of Example 2.1 is

> J := det(jacobian([f1,f2],[x[1],x[2]]));

    10x1² − 16x1x2 + 16x1 − 8x2² + 16x2 − 8

> Tr := J &. tf;

    4 + 4d2 + (4/9)d1² + (2/9)d1d2 + (37/9)d2² + (4/9)d1²d2 + (4/9)d1d2² + (13/3)d2³
    + (4/81)d1⁴ + (2/81)d1³d2 + (37/81)d1²d2² + (109/162)d1d2³ + (1513/324)d2⁴ + O(d⁵).
This shows that the trace of 1 is 4 (that is, the dimension of A), that the trace of M_{x1} is the coefficient of d1 in this expansion (that is, 0), that the trace of M_{x2} is 4, . . . We can check these values from our previous computations. To compute Tr up to degree 4, we need to compute τ_f up to degree 6, using Proposition 2.3.

4. COMPUTING IN THE QUOTIENT ALGEBRA
Algebraic solvers exploit the properties of the quotient algebra A, which means that they require knowing how to compute effectively in this quotient. This is performed by a so-called normal form algorithm. We are going to describe two approaches to compute such a normal form.

4.1. Gröbner basis

Gröbner bases are a major tool of effective algebraic geometry, which yields algorithmic answers to many questions of this domain [25,7,1,28]. They are closely related to the use of a monomial ordering. Let us recall its definition.
Definition 4.1. A monomial ordering is a total order < on the set of monomials of K[x] such that
(i) ∀α ≠ 0, 1 < x^α,
(ii) ∀(α, β, γ) ∈ (N^n)³, x^α < x^β ⇒ x^{α+γ} < x^{β+γ}.

Given such an ordering >, we define the leading term of a polynomial p ∈ R as the term of p (the coefficient times its monomial) whose monomial is maximal for >. We denote it by L_>(p). Given an ideal I of R = K[x], we also denote by L_>(I) the set of leading terms of the elements p ∈ I. Because of property (ii), L_>(I) is a monomial ideal. By Dickson's lemma [25], or by Noetherianity, the ideal L_>(I) is generated by a finite set of monomials. This naturally leads to the definition of Gröbner bases:

Definition 4.2. A finite subset G = {g1, . . . , gt} of an ideal I ⊂ K[x] is a Gröbner basis of I for the monomial order > iff we have L_>(I) = (L_>(g1), . . . , L_>(gt)).

The interesting property which characterises a Gröbner basis is the following. For any p ∈ R, let N(p) be the remainder of p by division by G, according to the leading terms of G (see [25]). The polynomial N(p) is such that none of its monomials is divisible by the monomials L_>(gi), i = 1, . . . , t. Then, we have N(p) = 0 iff p ∈ I. In addition, the polynomial N(p) is the normal form of p modulo the ideal I. It implies that a basis B of A = R/I is the set of monomials which are not in L_>(I). This allows us to define the multiplication by an element a ∈ A as follows: we multiply first the elements as usual polynomials and then normalise by reduction by the Gröbner basis G.

Example 4.3. We compute the Gröbner basis of (f1, f2) for the degree ordering refined by the lexicographic ordering:

> with(Groebner); G := gbasis([f1,f2],tdeg(x[1],x[2]));
[30 x1 x2 − 30 x1 − 25 − 24 x2^2 + 48 x2,
 15 x1^2 + 12 x2^2 − 24 x2 + 10,
 216 x2^3 − 648 x2^2 + 5 x1 + 632 x2 − 200].
The leading monomials are x1 x2, x1^2, x2^3. The set of monomials outside L>(I), which forms a basis of A, is {1, x1, x2, x2^2}. Let us compute the matrix of multiplication by x1 in this basis, using our Gröbner basis G:
> L := map(u -> normalf(u, G, tdeg(x[1],x[2])),
>          [x[1], x[1]^2, x[1]*x[2], x[1]*x[2]^2]);
An introduction to algebraic methods for solving polynomial equations
65
[x1,
 −4/5 x2^2 + 8/5 x2 − 2/3,
 x1 + 4/5 x2^2 − 8/5 x2 + 5/6,
 53/54 x1 + 8/5 x2^2 − 839/270 x2 + 85/54]

> matrixof(L, [[1, x[1], x[2], x[2]^2]]);

    [ 0   −2/3    5/6     85/54   ]
    [ 1    0      1       53/54   ]
    [ 0    8/5   −8/5    −839/270 ]
    [ 0   −4/5    4/5     8/5     ]
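The tdeg order used in these computations is the total-degree order refined by reverse lexicographic comparison. A minimal Python sketch of such a comparison and of leading-monomial extraction (the helper names are ours, not part of any library):

```python
from functools import cmp_to_key

def grevlex_cmp(a, b):
    """Compare exponent vectors a, b under graded reverse lexicographic order.
    Returns negative if a < b, zero if equal, positive if a > b."""
    if sum(a) != sum(b):
        return sum(a) - sum(b)
    # equal total degree: a > b iff the rightmost nonzero entry of a - b is negative
    for ai, bi in zip(reversed(a), reversed(b)):
        if ai != bi:
            return bi - ai
    return 0

def leading_monomial(poly):
    """poly: dict mapping exponent tuples to coefficients; returns the
    exponent tuple of the leading term."""
    return max(poly, key=cmp_to_key(grevlex_cmp))

# First polynomial of the Gröbner basis in Example 4.3:
# 30*x1*x2 - 30*x1 - 25 - 24*x2^2 + 48*x2
p = {(1, 1): 30, (1, 0): -30, (0, 0): -25, (0, 2): -24, (0, 1): 48}
print(leading_monomial(p))  # (1, 1), i.e. the monomial x1*x2
```

This agrees with the leading monomial x1 x2 read off from the Gröbner basis above.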
Efficient algorithms have been developed over the last decades to compute Gröbner bases; we mention in particular [36,45,46,74].

4.2. General normal form

Unfortunately, the construction of Gröbner bases is not numerically stable, as shown in the following example:

Example 4.4. Consider first the system:
> f1 := x[1]^2 + x[2]^2 - x[1] + x[2] - 2;
> f2 := x[1]^2 - x[2]^2 + 2*x[2] - 3;
> gbasis([f1,f2], tdeg(x[1],x[2]));

[2 x2^2 − x1 − x2 + 1, 2 x1^2 − x1 + 3 x2 − 5].
The leading monomials are x1^2, x2^2 and the corresponding monomial basis of A is {1, x1, x2, x1 x2}. Consider now a small perturbation:
> gbasis([f1, f2 + 1./10000000*x[1]*x[2]], tdeg(x[1],x[2]));

[−2 x2^2 + x1 + x2 − 1 + 0.0000001 x1 x2,
 x1^2 + x2^2 − x1 + x2 − 2,
 x2^3 − 10000000.9999999999999950000000000000125 x2^2
 + 5000000.2500000124999993749999687500015625000781250 x1
 + 5000000.7500000374999931249999062500171875002343750 x2
 − 5000000.2500000624999993749998437500015625003906250].
The leading monomials are now x1 x2, x1^2, x2^3 and the corresponding basis of A is {1, x1, x2, x2^2}. As we see in this simple example, as the result of a small perturbation the basis may “jump” from one set of monomials to another, though the two sets of solutions are very close to each other from a geometric point of view. Moreover, some of the polynomials of the Gröbner basis have large coefficients.
Thus, Gröbner basis computations may introduce artificial discontinuities, due to the choice of a monomial order. A recent generalisation of these normal form computations has been proposed in [64,68]. This construction is based on a new criterion, which gives a necessary and sufficient condition for a projection onto a set of polynomials to be a normal form modulo the ideal I. It can be reformulated as follows:

Theorem 4.5. Let B be a vector space of R connected to 1.² Let B⁺ = B + x1 B + · · · + xn B and let N : B⁺ → B be a K-linear map such that N restricted to B is the identity on B. Let I = (ker(N)) be the ideal generated by the kernel of N. We define

Mi : B → B, b ↦ N(xi b).
Then the two following properties are equivalent:
(1) For all 1 ≤ i, j ≤ n, Mi ∘ Mj = Mj ∘ Mi.
(2) R = B ⊕ I.
If this holds, the reduction onto B along ker(N) is canonical.

This leads to a completion-like algorithm which starts with the vector space K0 = ⟨f1, . . . , fm⟩ generated by the polynomials that we want to solve and iterates the construction Ki+1 = Ki⁺ ∩ L, where L is a fixed vector space. We stop when Ki+1 = Ki. See [64,68] for more details. This approach allows us to fix first the set of monomials in which we want to do linear operations, and thus allows us to treat more safely polynomials with approximate coefficients. It can be adapted very naturally to Laurent polynomials, which is not the case for Gröbner basis computations. Moreover, it can be specialised very efficiently to systems of equations for which the basis of A is known a priori, such as in the case of a complete projective intersection [68].

Example 4.6. For the perturbed polynomials of the previous example, we get the following normal forms for the monomials on the border of B:

x1^2 = −0.00000005 x1 x2 + 1/2 x1 − 3/2 x2 + 5/2,
x2^2 = +0.00000005 x1 x2 + 1/2 x1 + 1/2 x2 − 1/2,
x1^2 x2 = 0.49999999 x1 x2 − 0.74999998 x1 + 1.75000003 x2 + 0.74999994,
x1 x2^2 = 0.49999999 x1 x2 − 0.25000004 x1 − 0.74999991 x2 + 1.25000004.
This set of relations directly yields the matrices of multiplication by the variables x1, x2 in A.
² B is connected to 1 if 1 ∈ B and any monomial m ≠ 1 in B is of the form m = xi m′ with m′ ∈ B.
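The commutation criterion (1) of Theorem 4.5 can be checked directly on the multiplication matrices. As an illustration — a sketch, with the matrices derived by hand from the exact Gröbner basis of Example 4.4, in the basis {1, x1, x2, x1 x2} — the following Python snippet verifies that M1 and M2 commute:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Exact product of two square matrices with Fraction entries."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Multiplication matrices in the basis {1, x1, x2, x1*x2} of A = R/(f1, f2),
# read off from the relations x1^2 = (x1 - 3*x2 + 5)/2, x2^2 = (x1 + x2 - 1)/2.
# Column j holds the coordinates of x_i times the j-th basis monomial.
M1 = [[F(0), F(5, 2),  F(0), F(3, 4)],
      [F(1), F(1, 2),  F(0), F(-3, 4)],
      [F(0), F(-3, 2), F(0), F(7, 4)],
      [F(0), F(0),     F(1), F(1, 2)]]
M2 = [[F(0), F(0), F(-1, 2), F(5, 4)],
      [F(0), F(0), F(1, 2),  F(-1, 4)],
      [F(1), F(0), F(1, 2),  F(-3, 4)],
      [F(0), F(1), F(0),     F(1, 2)]]

# Property (1) of Theorem 4.5: the multiplication operators commute,
# which certifies R = B + I as a direct sum.
assert matmul(M1, M2) == matmul(M2, M1)
```

Since these matrices come from a Gröbner basis, the commutation is guaranteed by the theorem; the check is a useful sanity test when the matrices are built by other (e.g. numerical) means.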
5. PROJECTION METHODS
Projection is one of the most used operations in effective algebraic geometry [28,25]. It allows us to reduce the dimension of the problem that we have to solve, and often to simplify it. The resultant is a tool to perform such projections and has many applications in this domain. It leads in particular to efficient methods for solving polynomial equations, based on matrix formulations [34]. We are going to present here several notions and constructions of these resultants.

5.1. Resultants

Before considering the multivariate case, let us first recall the construction of the well-known Sylvester matrix in the univariate case. Given two univariate polynomials, f0 = f0,0 + · · · + f0,d0 x^{d0} of degree d0 and f1 = f1,0 + · · · + f1,d1 x^{d1} of degree d1, let S be the matrix of

f0, x f0, . . . , x^{d1−1} f0, f1, x f1, . . . , x^{d0−1} f1

in the monomial basis {1, . . . , x^{d0+d1−1}}. This matrix is called the Sylvester matrix of f0 and f1. Let V0, V1, and V denote the vector spaces generated by the monomials {1, . . . , x^{d1−1}}, {1, . . . , x^{d0−1}}, and {1, . . . , x^{d0+d1−1}}, respectively. Then the Sylvester matrix is the matrix of the map S : V0 × V1 → V such that ∀(q0, q1) ∈ V0 × V1, S(q0, q1) = f0 q0 + f1 q1, in the corresponding monomial bases. The determinant of this (d0 + d1) × (d0 + d1) matrix is the resultant Res(f0, f1) of f0 and f1. It vanishes iff f0 and f1 have a common root (in the algebraic closure of K), assuming that f0,d0 ≠ 0 and f1,d1 ≠ 0. Thus, we have projected the problem of finding a common root of the two polynomials onto a problem in the space of coefficients: Res(f0, f1) = 0.

We can generalise this approach to the multivariate case as follows: let f0, f1, . . . , fn ∈ R be n + 1 polynomials in n variables, of degrees d0, . . . , dn. The matrices used to construct resultants, as in the work of F.S. Macaulay [59] for instance, are matrices associated to maps of the form:

(2)  S : V0 × · · · × Vn → V, (q0, . . . , qn) ↦ Σ_{i=0}^{n} fi qi,
where Vi = ⟨x^{Ei}⟩ is a vector space generated by a finite number of monomials. We denote by Ei the set of exponents of these monomials: Ei = {βi,1, βi,2, . . .}. The vector space V = ⟨x^F⟩ is also generated by monomials, whose exponents are in the set F. The matrix of this map, in the canonical monomial bases, is obtained as follows. The image of an element (0, . . . , 0, x^{βi,j}, 0, . . . , 0) is the polynomial x^{βi,j} fi. Its expansion in the monomial basis of V gives the corresponding column of the matrix of S. The matrix of S can be divided into blocks [S0, S1, . . . , Sn]:
(3)  The rows of S are indexed by the monomials x^{α1}, . . . , x^{αN} of V; the columns are grouped into the blocks V0, V1, . . . , Vn, the block Vi containing the expansions of the multiples x^{βi,1} fi, x^{βi,2} fi, . . . in this basis.
The columns of the block Si correspond to the multiples of fi expressed in the monomial basis x^F.
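For n = 1 (two univariate polynomials) this construction reduces to the Sylvester matrix, whose determinant is the resultant. A minimal exact sketch in Python (coefficient lists in increasing degree; the helper names are ours):

```python
from fractions import Fraction

def sylvester_matrix(f0, f1):
    """Sylvester matrix of f0 (degree d0) and f1 (degree d1), given as
    coefficient lists [c0, c1, ...] in increasing degree.
    Columns: f0, x*f0, ..., x^(d1-1)*f0, f1, x*f1, ..., x^(d0-1)*f1."""
    d0, d1 = len(f0) - 1, len(f1) - 1
    n = d0 + d1
    cols = []
    for shift in range(d1):                      # multiples of f0
        cols.append([0]*shift + list(f0) + [0]*(n - d0 - shift - 1))
    for shift in range(d0):                      # multiples of f1
        cols.append([0]*shift + list(f1) + [0]*(n - d1 - shift - 1))
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # transpose

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        p = next((i for i in range(k, n) if M[i][k] != 0), None)
        if p is None:
            return Fraction(0)
        if p != k:
            M[k], M[p] = M[p], M[k]
            d = -d
        d *= M[k][k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= r * M[k][j]
    return d

# Res(x^2 + 1, x + 1) = 2 (f1 vanishes at -1, where f0 takes the value 2):
print(det(sylvester_matrix([1, 0, 1], [1, 1])))   # 2
# A common root (x = 1) makes the resultant vanish:
print(det(sylvester_matrix([-1, 0, 1], [-1, 1]))) # 0
```

The multivariate Macaulay-type matrices below generalise exactly this layout: one block of shifted copies per input polynomial.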
5.2. Projective resultant

Let ν = d0 + · · · + dn − n and let Rk be the set of polynomials in R of degree at most k. In order to construct the resultant of these polynomials (in fact, of their homogenisations fi^h), F.S. Macaulay [59] took for Vi a vector space Vi = ⟨x^{Ei}⟩ ⊂ R_{ν−di} generated by some of the monomials of degree at most ν − di, and for V the vector space V = ⟨x^F⟩ = Rν of polynomials of degree at most ν. The construction is such that when f0 = 1 and fi = xi^{di} we get the identity matrix. We illustrate this construction for 3 polynomials in 2 variables.
Example 5.1. Let us compute the Macaulay matrix associated to the polynomials f1, f2 of Example 2.1 and a generic linear form f0 = u0 + u1 x1 + u2 x2:
> S := mresultant([u[0]+u[1]*x[1]+u[2]*x[2],f1,f2],
>                 [x[1],x[2]]);

The result is a 10 × 10 matrix: its first four columns contain the coefficients u0, u1, u2 of the multiples f0, x1 f0, x2 f0, x1 x2 f0, and the remaining six columns contain the coefficients of the multiples of f1 and f2.
We have E2 = {1, x1, x2}, E1 = {1, x1, x2}, E0 = {1, x1, x2, x1 x2},
F = {1, x1, x2, x1 x2, x1^2, x1^3, x1^2 x2, x2^2, x1 x2^2, x2^3}.
When n = 1, this construction yields the Sylvester matrix of the two polynomials f0, f1. F.S. Macaulay has shown [59] that the resultant of the homogenised polynomials f0^h, . . . , fn^h is the ratio of the determinant of S by another minor of S. The matrix S may be degenerate independently of f0. This is the case, for instance, when the number of isolated roots of Z(f1, . . . , fn), counted with multiplicities, is not the bound d1 · · · dn given by Bezout's theorem. If we are not in this degenerate situation, we say that f1, . . . , fn is a generic system for Macaulay's construction. A fundamental property of this construction is that for generic systems f1, . . . , fn, the set of monomials x^{E0} is a basis of A = R/(f1, . . . , fn) [59].

5.3. Toric resultant

A refined notion of resultants (on toric varieties) has been studied recently, which takes into account the actual monomials appearing in the polynomials fi. Its construction follows the same process as in the previous section, except that the notion of degree is changed. We consider n + 1 Laurent polynomials f0, . . . , fn ∈ L = K[t1^{±1}, . . . , tn^{±1}], and we replace the constraints on the degree by constraints on the support³ of the polynomials [40,82,33]: let us fix polytopes Ai ⊂ Z^n and assume that the support of fi is in Ai:

fi = Σ_{α∈Ai} ci,α t^α.
We denote by A the Minkowski sum of these polytopes (A = A0 ⊕ · · · ⊕ An), to which we associate a toric variety TA as follows. We consider the map

σ : (K*)^n → P^N, t ↦ (t^{α0} : · · · : t^{αN}),
where A = {α0, . . . , αN} ⊂ Z^n. The closure of the image of σ is the toric variety TA. The toric resultant is the necessary and sufficient condition on the coefficients of the polynomials fi, i = 0, . . . , n, such that they have a common root in TA.

Let us fix a vector δ ∈ Q^n. For any polytope C, we denote by C^δ the set of points of C ∩ Z^n obtained by removing all the facets for which the inner product of the normal with δ is negative. Let Vi be the vector space generated by a certain subset of the monomials x^β with β ∈ (Σ_{j≠i} Aj)^δ, and let V be the vector space generated by the x^α

³ The support of p = Σ_α cα x^α is the set of α ∈ Z^n such that cα ≠ 0.
with α ∈ F = (Σ_{i=0}^{n} Ai)^δ. This defines a map S and its matrix S̃, from which a square matrix is deduced [18]. Its determinant is a nonzero multiple of the toric resultant over TA.
Example 5.2. We consider the system

f0 = c0,0 t1 t2 + c0,1 t1 + c0,2 t2 + c0,3,
f1 = c1,0 t1 t2 + c1,1 t1 + c1,2 t2 + c1,3,
f2 = c2,0 t1^2 + c2,1 t2^2 + c2,1 t1 + c2,2 t2 + c2,3.
A resultant matrix which yields a multiple of the toric resultant, as computed by the algorithm described in [18], is
> S := spresultant([f0,f1,f2],[t[1],t[2]]);

The output is a 12 × 12 matrix whose columns contain the coefficients ci,j of the monomial multiples of f0, f1 and f2.
We observe that there are 4 columns corresponding to multiples of f0, which is also the generic number of roots of f1 = 0, f2 = 0.
The construction is not degenerate when the polynomials f1, . . . , fn intersect properly in the underlying projective toric variety. In this case, we say that the system f1, . . . , fn is generic for this construction, and the dimension of A is the mixed volume of the polytopes of f1, . . . , fn [40]. Here again, we have the property that for generic systems f1, . . . , fn with supports respectively in A1, . . . , An, the set of monomials x^{E0} is a basis of A = R/(f1, . . . , fn) [71,35].
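For n = 2 the mixed volume can be computed as area(A1 ⊕ A2) − area(A1) − area(A2). A self-contained Python sketch for the supports of f1, f2 of Example 5.2 (the helper names are ours):

```python
from fractions import Fraction

def hull(points):
    """Andrew's monotone-chain convex hull; returns vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and (
                (h[-1][0]-h[-2][0])*(p[1]-h[-2][1])
              - (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def area(points):
    """Area of the convex hull of `points`, by the shoelace formula."""
    v = hull(points)
    s = sum(v[i][0]*v[(i+1) % len(v)][1] - v[(i+1) % len(v)][0]*v[i][1]
            for i in range(len(v)))
    return Fraction(abs(s), 2)

def mixed_volume_2d(A1, A2):
    """2D mixed volume = area(A1 + A2) - area(A1) - area(A2)."""
    minkowski = [(p[0]+q[0], p[1]+q[1]) for p in A1 for q in A2]
    return area(minkowski) - area(A1) - area(A2)

# Supports of f1 and f2 in Example 5.2:
A1 = [(1, 1), (1, 0), (0, 1), (0, 0)]          # t1*t2, t1, t2, 1
A2 = [(2, 0), (0, 2), (1, 0), (0, 1), (0, 0)]  # t1^2, t2^2, t1, t2, 1
print(mixed_volume_2d(A1, A2))  # 4, the generic number of roots
```

The value 4 agrees with the count of f0-columns observed in the matrix of Example 5.2.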
5.4. Resultant over a unirational variety

A natural extension of the toric case consists in replacing the monomial parametrisation by “any” rational one. The input system, also defined on an open subset of K^n, is of the form

(4)  fc := { fi(t) = Σ_{j=0}^{ki} ci,j κi,j(t),  i = 0, . . . , n },

where t = (t1, . . . , tn) and the κi,j are nonzero rational functions, which we can assume to be polynomials by reduction to the same denominator. Let Ki = (κi,j)_{j=0,...,ki} be the vector of polynomials defining fi, and let U be the open subset of K^n such that Ki(t) ≠ 0 for i = 0, . . . , n. Assume that there exist polynomials σ0, . . . , σN ∈ R defining a map

σ : U → P^N, t ↦ (σ0(t) : · · · : σN(t)),

and homogeneous polynomials ψi,j(x0, . . . , xN), i = 0, . . . , n, j = 0, . . . , ki, such that

κi,j(t) = ψi,j(σ0(t), . . . , σN(t))  and  deg(ψi,j) = deg(ψi,0) ≥ 1.
Let X° be the image of σ and X its closure in P^N. We are looking for conditions on the coefficients c = (ci,j) such that the “homogenised” system has a root in X. Under the following hypotheses:

(D)  the Jacobian matrix of σ = (σi)_{i=0,...,N} is of rank n at one point of U, and, for generic c, f1 = · · · = fn = 0 has a finite number of solutions in U,

it is proved in [15] that the resultant ResX(fc) can be defined. In order to compute a nontrivial multiple of this resultant, we use the Bezoutian matrix defined in 3.17. In the multivariate case, we have the following property.

Theorem 5.3 ([15]). Assume that the conditions (D) are satisfied. Then any maximal nonzero minor of the Bezoutian matrix B_{f0,...,fn} is divisible by the resultant ResX(fc).

This leads to an algorithm for computing a nontrivial multiple of a generalised resultant, which we illustrate below:
Example 5.4. Here is an example where the classical and toric resultants are degenerate. Consider the three following polynomials:

f0 = c0,0 + c0,1 t1 + c0,2 t2 + c0,3 (t1^2 + t2^2),
f1 = c1,0 + c1,1 t1 + c1,2 t2 + c1,3 (t1^2 + t2^2) + c1,4 (t1^2 + t2^2)^2,
f2 = c2,0 + c2,1 t1 + c2,2 t2 + c2,3 (t1^2 + t2^2) + c2,4 (t1^2 + t2^2)^2.
We are looking for conditions on the coefficients ci,j such that these three polynomials have a common “root”. The resultant of these polynomials over P² is zero (whatever the values of the (ci,j) are), for the homogenised polynomials f0^h, f1^h, f2^h vanish at the points (0 : 1 : i) and (0 : 1 : −i). For the same reason, the toric resultant also vanishes (these polynomials have common roots in the associated toric variety). Now, applying the previous results, we consider the map

σ : K² → P³, (t1, t2) ↦ (1 : t1 : t2 : t1^2 + t2^2),
whose Jacobian is of rank 2. Let

ψ0 = (x0, x1, x2, x3),
ψ1 = (x0^2, x0 x1, x0 x2, x0 x3, x3^2),
ψ2 = (x0^2, x0 x1, x0 x2, x0 x3, x3^2),
where (x0 : x1 : x2 : x3) are the homogeneous coordinates of P³. We have fi = Σ_j ci,j ψi,j ∘ σ for i = 0, 1, 2. For generic values of the coefficients ci,j, the system f1 = f2 = 0 has a finite number of solutions in K², so that by Theorem 5.3, any nonzero maximal minor of B_{f0,f1,f2} is divisible by ResX(f0, f1, f2). Computing a maximal minor of this Bezoutian matrix, of size 12 × 12 and rank 10, yields a huge polynomial in the (ci,j), containing 207805 monomials. It can be factorised as q1 q2 (q3)² ρ, with

q1 = −c0,2 c1,3 c2,4 + c0,2 c1,4 c2,3 + c1,2 c0,3 c2,4 − c2,2 c0,3 c1,4,
q2 = c0,1 c1,3 c2,4 − c0,1 c1,4 c2,3 − c1,1 c0,3 c2,4 + c2,1 c0,3 c1,4,
q3 = c0,3^2 c1,1^2 c2,4^2 − 2 c0,3^2 c1,1 c2,1 c2,4 c1,4 + c0,3^2 c2,4^2 c1,2^2 + · · · ,
ρ = c2,0^4 c1,4^4 c0,2^4 + c2,0^4 c1,4^4 c0,1^4 + c1,0^4 c2,4^4 c0,2^4 + c1,0^4 c2,4^4 c0,1^4 + · · · .
The polynomials q3 and ρ contain respectively 20 and 2495 monomials. As, for generic equations f0, f1, f2, the number of points in the varieties Z(f0, f1), Z(f0, f2), Z(f1, f2) is 4 (see for instance [62]), ResX(f0, f1, f2) is homogeneous of degree 4 in the coefficients of each fi. Thus, ResX(f0, f1, f2) corresponds to the last factor ρ.
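The Bezoutian matrix used here is the multivariate analogue of the classical univariate Bezout matrix, with entries the coefficients of (f(x)g(y) − f(y)g(x))/(x − y). A minimal univariate sketch (our own helper names); for f = x² + 1, g = x + 1 the Bezoutian is x y + x + y − 1, and the determinant of its matrix, −2 here, equals the Sylvester resultant up to sign:

```python
def bezout_matrix(f, g):
    """Univariate Bezout matrix of f and g (coefficient lists, increasing
    degree). Entry (i, j) is the coefficient of x^i y^j in
    (f(x) g(y) - f(y) g(x)) / (x - y)."""
    n = max(len(f), len(g)) - 1
    f = f + [0] * (n + 1 - len(f))
    g = g + [0] * (n + 1 - len(g))
    # D[i][j] = coefficient of x^i y^j in f(x) g(y) - f(y) g(x)
    D = [[f[i] * g[j] - f[j] * g[i] for j in range(n + 1)]
         for i in range(n + 1)]
    # Divide by (x - y): writing D = sum_i d_i(y) x^i and Q = sum_i q_i(y) x^i,
    # comparing x-coefficients in D = (x - y) Q gives q_{i-1} = d_i + y*q_i.
    Q = [[0] * n for _ in range(n)]
    prev = [0] * (n + 1)                 # q_n = 0
    for i in range(n, 0, -1):
        cur = [D[i][j] + (prev[j - 1] if j > 0 else 0) for j in range(n + 1)]
        Q[i - 1] = cur[:n]               # degree in y is at most n - 1
        prev = cur
    return Q

# f = x^2 + 1, g = x + 1: Bezoutian is x*y + x + y - 1
print(bezout_matrix([1, 0, 1], [1, 1]))  # [[-1, 1], [1, 1]]
```

The multivariate Bezoutian of Theorem 5.3 is built from the same difference quotient, one divided difference per variable.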
5.5. Residual resultant

In many situations coming from practical problems, the equations have common zeroes which are independent of the parameters of the problem, and which are not interesting. We are going to present here a resultant construction which allows us to remove these degenerate solutions, when they form a complete intersection [16] (see also [14,22]). Let g1, . . . , gr be r homogeneous polynomials of degrees k1 ≥ · · · ≥ kr in R = K[x0, . . . , xn] and let d0 ≥ · · · ≥ dn be n + 1 integers such that dn ≥ k1 and r ≤ n + 1. We suppose that (g1, . . . , gr) is a complete intersection and that dn ≥ kr + 1. We consider the following system:
(5)  fc := { fj(x) = Σ_{i=1}^{r} hi,j(x) gi(x),  j = 0, . . . , n },

where hi,j(x) = Σ_{|α| = dj − ki} c_α^{i,j} x^α is a generic homogeneous polynomial of degree dj − ki. The polynomial fj is generic of degree dj in the ideal G = (g1, . . . , gr). We are looking for a condition on the coefficients c = (c_α^{i,j}) such that fc has a solution “outside” the variety Z(G) defined by G. Such a condition is given by the residual resultant defined in [16]. This resultant is constructed as a general resultant over the blow-up of P^n along the ideal G.
Theorem 5.5 ([16]). There exists an irreducible and homogeneous polynomial of K[c], denoted ResG,d0,...,dn, which satisfies

ResG,d0,...,dn(f0, . . . , fn) ≠ 0 ⇔ F^sat = G^sat ⇔ (F^sat : G^sat) = R ⇔ Z(F : G) = ∅,

where F^sat and G^sat are respectively the saturations of the ideals F = (f0, . . . , fn) and G.
The degree of ResG,d0,...,dn in the coefficients (c_α^{i,j}) of each fj is

(6)  Nj = P_{rj}/P_1 (k1, . . . , kr),

where rj(T) = σn(d) + Σ_{l=r}^{n} σ_{n−l}(d) T^l, with the notations d = (d0, . . . , dj−1, dj+1, . . . , dn), σ0(d) = (−1)^n, σ1(d) = (−1)^{n−1} Σ_{l≠j} dl, σ2(d) = (−1)^{n−2} Σ_{j1≠j, j2≠j, j1<j2} dj1 dj2, and so on.

Let f1, . . . , fm ∈ R be m polynomials, with m > n, defining a finite number of roots. We still consider a map of the form

S : V1 × · · · × Vm → V, (q1, . . . , qm) ↦ Σ_{i=1}^{m} fi qi,

which yields a rectangular matrix S̃. A case of special interest is the case where this matrix is of rank N − 1, where N is the number of rows of S̃. In this case, it can be proved [31] that Z(f1, . . . , fm) is reduced to one point ζ ∈ K^n and, if (x^α)_{α∈F} is the set of monomials indexing the rows of S̃, that

[ζ^α]_{α∈F}^t S̃ = 0.

Using Cramer's rule, we see that ζ^α/ζ^β (α, β ∈ F, ζ^β ≠ 0) can be expressed as the ratio of two maximal minors of S̃. If 1, x1, . . . , xn ∈ x^F (which is the case most of the time), we obtain ζ as a rational function of maximal minors of S̃, and thus of the coefficients of the input polynomials fi.

Algorithm 5.10 (Solving an overconstrained system defining a single root).
INPUT: A system f1, . . . , fm ∈ K[x1, . . . , xn] (m > n) defining a single solution.
• Compute the resultant matrix S̃ for one of the proposed resultant formulations.
• Compute the kernel of S̃^t and check that it is generated by a single vector w = [w1, wx1, . . . , wxn, . . .].
OUTPUT: ζ = [wx1/w1, . . . , wxn/w1].
Let us illustrate this algorithm with a projective resultant construction.

Example 5.11. We consider the case of 3 quadrics:
> f1 := x1^2 - x1*x2 + x2^2 - 3;
> f2 := x1^2 - 2*x1*x2 + x2^2 + x1 - x2;
> f3 := x1*x2 + x2^2 - x1 + 2*x2 - 9;
> S := mresultant([f1,f2,f3],[x1,x2]);
The result is a 15 × 15 matrix with integer entries, whose columns contain the coefficients of the monomial multiples of f1, f2 and f3.
The rows of S are indexed by

{1, x2, x1, x1 x2, x1^2 x2, x1 x2^2, x1^3 x2, x1^2 x2^2, x1 x2^3, x1^2, x2^2, x1^3, x2^3, x1^4, x2^4}.
We compute the kernel of the transposed matrix, in order to check its rank and to deduce the common root ζ of the system:
> kernel(transpose(S));

[1, 2, 1, 2, 2, 4, 2, 4, 8, 1, 4, 1, 8, 1, 16]
Considering the list of monomials which index the rows of S, we deduce that ζ = (1, 2).
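The same kernel computation can be reproduced in miniature for two univariate polynomials with a single common root — a toy stand-in for the multivariate resultant matrices, with our own helper names. For f1 = x² − x − 2 and f2 = x² − 4 (common root ζ = 2), the transposed Sylvester matrix plays the role of S̃^t, and the kernel vector is (1, ζ, ζ², ζ³) up to scale:

```python
from fractions import Fraction

def nullspace_vector(M):
    """One kernel vector of a square matrix M of rank n-1, by exact
    Gaussian elimination; normalised so that its first entry is 1."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    piv_cols, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, n) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(n):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        piv_cols.append(c)
        r += 1
    free = next(c for c in range(n) if c not in piv_cols)
    v = [Fraction(0)] * n
    v[free] = Fraction(1)
    for row, c in zip(A, piv_cols):
        v[c] = -row[free]
    return [x / v[0] for x in v]

# Transposed Sylvester matrix of f1 = x^2 - x - 2 and f2 = x^2 - 4;
# the rows of the Sylvester matrix are indexed by 1, x, x^2, x^3.
St = [[-2, -1, 1, 0],   # f1
      [0, -2, -1, 1],   # x*f1
      [-4, 0, 1, 0],    # f2
      [0, -4, 0, 1]]    # x*f2
w = nullspace_vector(St)
print([int(x) for x in w])  # [1, 2, 4, 8]: the vector (1, z, z^2, z^3)
print(int(w[1]))            # the common root z = 2
```

Reading off the entry indexed by the monomial x, exactly as ζ = (1, 2) was read off from the kernel vector above.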
In the case where the overdetermined system has more than one root, we can follow the same approach. We choose a subset E of F (if possible containing the monomials 1, x1, . . . , xn) such that the submatrix indexed by the monomials x^{F−E} has the rank r = N − D of S̃. The set x^E will be the basis of A. Assuming that the monomials xi x^E, i = 1, . . . , n, are also in x^F, we complete the matrix S̃ with the block of the coefficients of f0 x^E, where f0 = u0 + u1 x1 + · · · + un xn. By a Schur complement computation, we deduce the matrix of multiplication by f0 in the basis x^E of A. Now, by applying the algorithms of Section 3.1, we deduce the roots of the overdetermined system f1, . . . , fm. See, e.g., [34] for more details on this approach.

5.9. Solving by hiding a variable

Another approach for solving a system of polynomial equations consists in hiding a variable (that is, in considering one of the variables as a parameter), and in searching for the values of this hidden variable (or parameter) for which the system has a solution. Typically, when we have n equations f1 = 0, . . . , fn = 0 in n variables, we “hide” a variable, say xn, and apply one of the resultant constructions described before to the overdetermined system f1 = 0, . . . , fn = 0 in the n − 1 variables x1, . . . , xn−1 and a parameter xn. This leads to a resultant matrix S(xn), whose entries are polynomials in xn. It can be decomposed as

S(xn) = Sd xn^d + Sd−1 xn^{d−1} + · · · + S0,

where each Si has coefficients in K and the same size as S(xn). We are looking for the values ζn of xn for which the system has a solution (ζ1, . . . , ζn−1) in the corresponding variety X (of dimension n − 1) associated with the resultant formulation. This implies that

(7)  v(ζ)^t S(ζn) = 0,
where v(ζ) is the vector of monomials indexing the rows of S, evaluated at ζ. Conversely, for generic systems of the corresponding resultant formulation, there is only one point ζ above the value ζn. Thus the vectors v satisfying S(ζn)^t v = 0 are scalar multiples of v(ζ). From the entries of this vector we can usually deduce the other coordinates of the point ζ; this will be assumed hereafter.⁴ The relation (7) implies that v(ζ) is a generalised eigenvector of S^t(xn). Computing such vectors can be transformed into the following linear generalised eigenproblem:

(8)  (A − ζn B) w = 0,  with

A = [ 0      I      · · ·   0       ]      B = [ I    0    · · ·   0      ]
    [ :             .       :       ]          [ 0    I    · · ·   0      ]
    [ 0      0      · · ·   I       ]          [ :         .       :      ]
    [ S0^t   S1^t   · · ·   Sd−1^t  ]          [ 0    0    · · ·  −Sd^t   ]
⁴ Notice however that this genericity condition can be relaxed by using duality, in order to compute the points ζ above ζn (when they form a zero-dimensional fiber) from the eigenspace of S(ζn).
The set of eigenvalues of (8) contains the values ζn for which (7) has a solution. The corresponding eigenvectors w decompose as w = (w0, . . . , wd−1), so that the corresponding solution vector v(ζ) of (7) is v(ζ) = w0 + ζn w1 + · · · + ζn^{d−1} wd−1. This yields the following algorithm:

Algorithm 5.12 (Solving by hiding a variable).
INPUT: f1, . . . , fn ∈ R.
1. Construct the resultant matrix S(xn) of f1, . . . , fn (as polynomials in x1, . . . , xn−1 with coefficients in K[xn]), adapted to the geometry of the problem.
2. Solve the generalised eigenproblem S(xn)^t v = 0.
3. Deduce the coordinates of the roots ζ = (ζ1, . . . , ζn) of f1 = · · · = fn = 0.
OUTPUT:
The roots of f1 = · · · = fn = 0.
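A toy illustration of step 1 in Python, with our own minimal polynomial arithmetic: for f1 = x1 + x2 − 3 and f2 = x1 x2 − 2, hiding x2 and using the Sylvester matrix in x1, the determinant of S(x2) is a polynomial in x2 whose roots are exactly the x2-coordinates of the solutions:

```python
def poly_mul(a, b):
    """Multiply two polynomials in x2 (coefficient lists, increasing degree)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_sub(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

# Hide x2.  As polynomials in x1 (degree 1), with coefficients in K[x2]:
#   f1 = 1*x1 + (x2 - 3),      f2 = x2*x1 + (-2).
# The 2x2 Sylvester matrix in x1 of fi = ai*x1 + bi is [[b1, b2], [a1, a2]],
# and its determinant Res_{x1}(f1, f2) = b1*a2 - b2*a1, a polynomial in x2.
a1, b1 = [1], [-3, 1]     # f1:  x1 + (x2 - 3)
a2, b2 = [0, 1], [-2]     # f2:  x2*x1 - 2
res = poly_sub(poly_mul(b1, a2), poly_mul(b2, a1))
print(res)   # [2, -3, 1]: the polynomial x2^2 - 3*x2 + 2 = (x2 - 1)(x2 - 2)

# The values x2 = 1 and x2 = 2 are the hidden-variable values; back
# substitution in f1 gives the two roots (2, 1) and (1, 2).
```

In general the rank drop of S(ζn), rather than a scalar determinant, is detected through the generalised eigenproblem (8).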
Here again, we reduce the resolution of f1 = 0, . . . , fn = 0 to an eigenvector problem.

Example 5.13. We illustrate this algorithm on the system

f1 = x1 x2 + x3 − 2,
f2 = x1^2 x3 + 2 x2 x3 − 3,
f3 = x1 x2 + x2^2 + x2 x3 − x1 x3 − 2.

We hide x3 and use the projective resultant formulation of Section 5.2. We obtain a 15 × 15 matrix S(x3), and compute its determinant:
> S := mresultant([f1,f2,f3],[t1,t2]): det(S);

x3^4 (x3 − 1)(2 x3^5 − 11 x3^4 + 20 x3^3 − 10 x3^2 + 10 x3 − 27).
The root x3 = 0 does not yield an affine root of the system f1 = f2 = f3 = 0 (the corresponding point is at infinity). Substituting x3 = 1 in S(x3), we get a matrix of rank 14. The kernel of S(1)^t is generated by

[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],

which implies that the corresponding root is (1, 1, 1). For the other eigenvalues (which are the roots of the last factor), we proceed similarly in order to obtain the 5 other (simple) roots of f1 = f2 = f3 = 0.

5.10. The isolated points from resultant matrices

In this section, we consider n equations in n unknowns, but we do not necessarily assume that the variety Z(f1, . . . , fn) is zero-dimensional. We are interested
in computing a rational representation of the isolated points. We denote by I0 the intersection of the primary components of I corresponding to the isolated points of Z = Z(I), and we set Z0 = Z(I0). The variety Z is zero-dimensional iff Z = Z0. We denote by C0(u) the Chow form associated with the ideal I0 (see Section 3.2).

Let us however consider first the case where I = I0 defines a zero-dimensional variety. Let f0 = u0 + u1 x1 + · · · + un xn be a generic affine form (the ui are considered as variables). Let us choose one of the previous resultant constructions for f0, . . . , fn, which yields a matrix

S = [ A  B ]
    [ C  D ]

such that D is invertible (if it exists). The blocks A, C depend only on the coefficients of f0. From Section 5.6 and according to the relation

[ A  B ]   [ A − B D^{-1} C   B ] [    I       0 ]
[ C  D ] = [       0          D ] [ D^{-1} C   I ]

we deduce that
In other words, det(S) is a scalar multiple of the Chow form det(Mf0) = C(u). Such a construction applies to any system which is generic for one of the resultant formulations that we have presented. Applying Algorithm 3.9, we obtain a rational representation of the roots.

In the case where the variety Z(f1, . . . , fn) is not zero-dimensional, we can still deduce a rational representation of the isolated points of the variety from the previous resultant constructions, in (at least) two ways. When the system is not generic for a given construction, a perturbation technique can be used. Introducing a new parameter ε and considering a perturbed regular system fε (for instance fε = f + ε f0), we obtain a resultant matrix Sε(u), whose determinant is of the form

Δ(u, ε) = ε^k Δk(u) + ε^{k+1} Δk+1(u) + · · · .
It can be shown that the trailing coefficient (in ε) Δk(u) ≠ 0 of the determinant of the resultant matrix is a multiple of the Chow form of the isolated points Z(I0). Applying Algorithm 3.9 to this multiple of the Chow form yields a rational representation of the isolated points. See [47,23,17,43,54] for more information and examples.

The use of a new parameter has a cost, which we want to avoid. This can be done by exploiting the properties of the Bezoutian matrix defined in 3.17:

Proposition 5.14 ([32,15]). Any nonzero maximal minor Δ(u) of the Bezoutian matrix of the polynomials f0 = u0 + u1 x1 + · · · + un xn, f1, . . . , fn is divisible by the Chow form C0(u) of the isolated points.
The interesting point here is that we obtain directly the Chow form of the isolated points of Z even if this variety is not zero-dimensional. In other words, we do not need to consider perturbed systems to compute a multiple of C0(u). Another advantage of this approach is that it yields an “explicit” formulation for Δ(u), and its structure can be handled more carefully (for instance, by working directly on the matrix form instead of dealing with the expansion of the minors). It leads to the following algorithm:

Algorithm 5.15 (Univariate rational representation of the isolated points).
INPUT: f1, . . . , fn ∈ K[x1, . . . , xn].
1. Compute a nonzero multiple Δ(u) of the Chow form of f1, . . . , fn, from an adapted resultant formulation of f0 = u0 + u1 x1 + · · · + un xn, f1, . . . , fn (for instance using the Bezoutian matrix).
2. Apply Algorithm 3.9, in order to get a rational representation of the isolated (and maybe some embedded) roots.

In practice, instead of expanding completely the polynomial d(t + u) in Algorithm 3.9, it is advantageous to consider u1, . . . , un as infinitesimal numbers (i.e. ui^2 = ui uj = 0, for i, j = 1, . . . , n) in order to get only the first terms d0(u0) + u1 d1(u0) + · · · + un dn(u0) of the expansion. Moreover, we can describe these terms as sums of determinants of matrices deduced from the resultant matrices. This allows us to use fast interpolation methods to compute the polynomials di(u0) efficiently.

5.11. Geometric decomposition of a variety

In this section, we are still interested in polynomial systems f1, . . . , fm such that the variety Z(f1, . . . , fm) is not necessarily zero-dimensional. This variety may have isolated components of dimension 0 but also components of higher dimension. We want to compute the different (irreducible) components of such a decomposition. Different methods for computing the decomposition of a variety already exist [88,56,51,4,38,41,7,84,42,47,23].
We present yet another one, based on direct matrix computations. To be in the case of a square system, in order to apply one of the resultant formulations, we construct n linear combinations of f1, . . . , fm with constant coefficients, without adding other isolated points. Applying Algorithm 5.15 to this new system, we can compute a rational representation of the isolated points of Z(f1, . . . , fm).

The next goal is to compute the isolated components of higher dimension. For this purpose, we proceed inductively from the lowest-dimensional components to the highest-dimensional components. We first reduce the description of isolated components of dimension 1 to a zero-dimensional problem, by considering one variable (say x1) as a parameter. We assume, for the moment, that the projection from the isolated curves onto the line x2 = · · · = xn = 0 is dominant, or that these
curves are in Noether position with respect to the variable x1. Then these curves correspond to “isolated points” in K(x1)[x2, . . . , xn]/(f1, . . . , fm). In order to get a square system, we replace the input polynomials f1, . . . , fm by n − 1 generic combinations of them. Again applying Algorithm 5.15, we compute the isolated points of this variety, which gives us a rational representation of the isolated curves of the initial variety. Hiding a new variable and iterating this procedure gives us the isolated components of dimension 2, 3 and so on. This yields the following algorithm:

Algorithm 5.16 (Geometric decomposition of an algebraic variety).
INPUT: f1, . . . , fm, m equations in n variables with coefficients in K.
1. If m > n, choose random combinations g1, . . . , gn of the input polynomials.
2. Compute the Bezoutian matrix of u0 + u1 x1 + · · · + un xn, g1, . . . , gn, and a maximal nonzero minor Δ(u).
3. According to Algorithm 5.15, compute a rational representation of the isolated roots of the system from Δ(u).
4. Choose one variable (say x1, or a random combination of x1, . . . , xn) as a parameter and proceed to step 1, with n replaced by n − 1 and K by K(x1).
OUTPUT:
the rational representations of the components of the variety Z(f1, . . . , fm).

This decomposition is not minimal, since some of the output components may be included in components of higher dimension. By an elimination method, one can deduce from its rational representation the implicit equations (in the variables xi) of an irreducible component. This allows us to test easily the inclusion of irreducible components and thus to obtain a minimal representation. By taking x1 as a parameter, we may miss irreducible components which lie in hyperplanes of the form x1 − a = 0, a ∈ K. In order to avoid this problem, we can redo the computation with x2 as parameter, and then with x3, . . . , xn. The only irreducible varieties that we could still miss would lie in an intersection of hyperplanes xi − ai = 0, i = 1, . . . , n; thus these components are points, which are already computed.

Let d ≥ 2 be a bound on the degrees of f1, . . . , fm. Then the degree of the polynomials involved in the rational representations of Algorithm 5.16 is bounded by d^{O(n)}. The number of arithmetic operations for computing these polynomials is bounded by d^{O(n^2)}. If we know that the variety is zero-dimensional, the arithmetic cost is bounded by d^{O(n)}. Moreover, if a straight-line program representation is used, the cost for the equidimensional decomposition is bounded by d^{O(n)} and the memory space needed for the rational representations is bounded by d^{O(n)} L, where L bounds the size of the representation of the initial polynomials fi [43,57]. Notice that using an adapted version of Algorithm 5.16, we can also determine the different components of the residual of two varieties (see [16]).
Example 5.17. We illustrate Algorithm 5.16 on the following example. It gives the irreducible components over Q.

f1 = 3x1 x3 − 3x1^2 x3^2 x2 − 2x1 x2 − 3x2^2 + 2x1^2 x2^2 x3 + 2x1^3 − 2x1^4 x2 x3 + 3x1 x2^3 x3,
f2 = x3^3 − x1 x2 x3^4 − x3 + x1 x2 x3^2 − x1^3 x3^2 + x1^4 x2 x3^3 + x1^4 − x1^5 x2 x3 + x1^3 − x1^4 x2 x3 − x2^2 + x1 x2^3 x3 + x2 − x1 x2^2 x3 − x1^2 + x1^3 x2 x3,
f3 = x3^3 − x1 x2 x3^4 − x1 x3 + x1^2 x2 x3^2 − x3 + x1 x2 x3^2 − x1^3 x3^2 + x1^4 x2 x3^3 + x1^4 − x1^5 x2 x3 + x1^3 − x1^4 x2 x3 − x2 + x1 x2^2 x3 + x1^2 − x1^3 x2 x3.

> decomp(S, [x[1], x[2], x[3]]);
In a first step, we compute a maximal minor of the Bezoutian matrix, of degree 29, and its squarefree part d of degree 6. Then

d0 = (221/140) (u0 + 1) (u0 + 14/25) (u0 + 9/5) (u0 + 1/5),
d1 = −(113/2500) (5u0 + 9) (5u0 + 1) (u0 + 1),
d2 = (2/625) (5u0 + 1) (10u0 + 17) (5u0 + 9) (u0 + 1),
d3 = (1/1250) (875u0^3 + 2825u0^2 + 1345u0 − 1381) (u0 + 1).
We take each factor of d0, reduce d1, d2, d3 by this factor, and check that the corresponding rational representation defines points in the variety. This yields the three following representations:

u0 + 1 = 0,      x1 = 0,  x2 = 0,  x3 = 0,
u0 + 9/5 = 0,    x1 = 0,  x2 = 0,  x3 = 1,
u0 + 1/5 = 0,    x1 = 0,  x2 = 0,  x3 = −1,
that is, 3 points (0, 0, 0), (0, 0, 1), (0, 0, −1), the first one being embedded, as we will see. As their coordinates are independent of u0, the equations in u0 are not useful. In fact, these points can be read off directly from the linear factors of

d = u0 (u0 − u3) (u0 + u3) (u3 u2 u0 − u2^2 u1 + u3 u1^2).

For the components of dimension 1, taking x1 as parameter, we obtain a minor ∆(x1, u) of degree 29, and a polynomial d0 in u0 whose coefficients are polynomials in x1, which again factors.
Only the first factor of d0 is relevant. It yields the following representation of a cubic curve:

u0 − 1/10 + (3/5) x1^2 + (1/10) x1^3 = 0,    x2 = x1^2,    x3 = x1^3.
As x2 and x3 are independent of u0, the first equation is not needed. We proceed similarly for the components of dimension 2, taking x1 and x2 as parameters. The minor ∆(x1, x2, u) is of degree 15 and d0 factors into two parts, but only one is relevant. This yields the following rational representation:

u0 x1 x2 + (2/5) x1 x2 − 7/10 = 0,    x3 = 1/(x1 x2).
5.12. Algebraic dependency relations, implicitisation

Another interesting application of these projection operators is the computation of algebraic relations between n + 1 polynomials in n variables.

Proposition 5.18 ([30,32]). Let u = (u0, . . . , un) be new parameters and assume that A = R/(f1, . . . , fn) is a vector space of finite dimension D. Then, every not identically zero maximal minor ∆(u0, . . . , un) of the Bezoutian matrix of the polynomials f0 − u0, . . . , fn − un in K[u][x] satisfies the identity ∆(f0, . . . , fn) = 0.

Example 5.19. We illustrate the previous method by this example.

> g0:= x; g1:= x^2+y^2+z^2; g2:= x^3+y^3+z^3;
> g3:= x^4+y^4+z^4;
> R := melim([g0-u[0],g1-u[1],g2-u[2],g3-u[3]],[x,y,z]);
> factor(");

12u0^12 − 24u0^10 u1 − 16u0^9 u2 + (24u1^2 − 12u3) u0^8 + 48u0^7 u2 u1 + (−8u2^2 − 24u1^3) u0^6 + (−24u1^2 u2 + 24u3 u2) u0^5 + (−24u2^2 u1 + 6u3 u1^2 + 3u3^2 + 15u1^4) u0^4 + (8u1^3 u2 − 24u1 u3 u2 + 16u2^3) u0^3 + (−6u1^5 − 12u3 u2^2 + 6u3^2 u1 + 12u1^2 u2^2) u0^2 + u1^6 − 3u1^2 u3^2 + 12u1 u3 u2^2 − 2u3^3 − 4u2^4 − 4u1^3 u2^2.
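The displayed dependency relation can be verified directly (a sketch in Python/SymPy, independent of melim):

```python
import sympy as sp

x, y, z, u0, u1, u2, u3 = sp.symbols('x y z u0 u1 u2 u3')

# The relation displayed above, entered verbatim:
R = (12*u0**12 - 24*u0**10*u1 - 16*u0**9*u2 + (24*u1**2 - 12*u3)*u0**8
     + 48*u0**7*u2*u1 + (-8*u2**2 - 24*u1**3)*u0**6
     + (-24*u1**2*u2 + 24*u3*u2)*u0**5
     + (-24*u2**2*u1 + 6*u3*u1**2 + 3*u3**2 + 15*u1**4)*u0**4
     + (8*u1**3*u2 - 24*u1*u3*u2 + 16*u2**3)*u0**3
     + (-6*u1**5 - 12*u3*u2**2 + 6*u3**2*u1 + 12*u1**2*u2**2)*u0**2
     + u1**6 - 3*u1**2*u3**2 + 12*u1*u3*u2**2 - 2*u3**3 - 4*u2**4 - 4*u1**3*u2**2)

# Substituting u_i -> g_i must give the zero polynomial:
subs = {u0: x, u1: x**2 + y**2 + z**2,
        u2: x**3 + y**3 + z**3, u3: x**4 + y**4 + z**4}
assert sp.expand(R.subs(subs)) == 0
```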
The function melim computes a maximal minor of the Bezoutian matrix of f0 , . . . , fn . In this case, the Bezoutian matrix is of size 50 × 50 and of rank 24. Its
nonzero maximal minor is of degree 24 in (u0, u1, u2, u3). We check that substituting u0 by g0, u1 by g1, . . . , u3 by g3 yields 0. We can also use this to eliminate variables, for instance when we want to compute the implicit equation of a parametrised curve (or surface):

Example 5.20. We want to compute the implicit equation of the parametrised surface:

x = (r^2 − t^2 − 1)/(r^2 − 1),    y = (3rt + r^3 + t^3 + 2t − r)/(r (r^2 − 1)),    z = 1/r.
We take the polynomials defining x, y, z in terms of r and t and eliminate the parameters r and t between the 3 equations:

> d := r^3-r;
> p1 := d*x+r*(t^2-r^2+1);
> p2 := d*y-(t^3+2*t+3*r*t+r^3-r);
> p3 := d*z+(1-r^2);
> factor(melim([p1,p2,p3],[r,t])):

−5z^2 (−y^2 z^2 + y^2 + 2z^2 y − 2y − z^4 x + z^4 x^3 + z^4 x^2 − z^4 + 6z^3 x^2 − 6z^3 + 2z^2 x^2 − 2z^2 x^3 − 12z^2 + 11z^2 x + 12zx − 6zx^2 − 6z − 3x^2 + 3x + x^3).
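As a check on the displayed output, the degree-7 factor vanishes identically under the parametrisation (a sketch in Python/SymPy):

```python
import sympy as sp

x, y, z, r, t = sp.symbols('x y z r t')

# The degree-7 factor produced by the elimination:
P = (-y**2*z**2 + y**2 + 2*z**2*y - 2*y - z**4*x + z**4*x**3 + z**4*x**2
     - z**4 + 6*z**3*x**2 - 6*z**3 + 2*z**2*x**2 - 2*z**2*x**3 - 12*z**2
     + 11*z**2*x + 12*z*x - 6*z*x**2 - 6*z - 3*x**2 + 3*x + x**3)

# Substitute the parametrisation and check that P reduces to zero:
param = {x: (r**2 - t**2 - 1)/(r**2 - 1),
         y: (3*r*t + r**3 + t**3 + 2*t - r)/(r*(r**2 - 1)),
         z: 1/r}
assert sp.cancel(P.subs(param)) == 0
```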
Its last factor, of degree 7, is the implicit equation of the surface. This can also be used to compute equations of offset curves (resp. surfaces) and equidistant curves (resp. surfaces) [49]. See also [80,26].

6. CONTROLLED ITERATIVE METHODS
The structure of A can also be exploited to devise numerical iterative methods which converge to algebraic objects associated with the roots. Such methods are particularly interesting when we are not looking for all the roots but only for a specific one.

6.1. Modified Newton methods

A very common approach for solving (polynomial) equations is based on the classical Newton iteration: x := x − Jf(x)^{−1} f(x), where f = (f1, . . . , fn), fi ∈ K[x], and Jf is the Jacobian matrix of the polynomial map x → f(x). Unfortunately, this method suffers from a lack of certification, since we can ensure neither global convergence nor convergence to a specific root. We are going to describe two different approaches which aim at resolving these problems.
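A minimal sketch of this iteration (Python/NumPy; the system used is that of Example 6.1 below, and the starting point is an arbitrary choice):

```python
import numpy as np

def newton(f, jf, x0, tol=1e-12, maxit=50):
    """Plain Newton iteration x := x - Jf(x)^{-1} f(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        step = np.linalg.solve(jf(x), f(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# The system of Example 6.1 below:
f = lambda x: np.array([x[0]**2 + 2*x[0]*x[1] - x[0] - 1,
                        x[0]**2 + x[1]**2 - 8*x[0]])
jf = lambda x: np.array([[2*x[0] + 2*x[1] - 1, 2*x[0]],
                         [2*x[0],              2*x[1]]])

root = newton(f, jf, [7.0, -3.0])   # which root is found depends on the start
```

The iteration converges quadratically here, but — this is the text's point — nothing certifies in advance which root (if any) a given starting point reaches.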
The first class of methods exploits homotopy techniques. They start from a system f0 = 0 whose solutions are known and deform it into the system f = 0 that we want to solve. This deformation F(t, x) depends on a real parameter t, such that F(0, x) = f0(x) and F(1, x) = f(x). For instance, a linear homotopy is of the form F(t, x) = (1 − t) f0(x) + t f(x). In a good situation, the homotopy paths start from the roots of the system f0(x) = 0 and end at the roots of the system f(x) = 0. These paths are followed numerically from t = 0 to t = 1, iterating prediction–correction steps based on Newton's method. A first problem consists in certifying that the number of roots is the same for the two systems f0(x) = 0 and f(x) = 0; otherwise some paths may go to infinity. Several techniques have been developed recently to adapt the number of paths to the actual number of roots of the final system f(x) = 0 (see [61,85,58,50]). They exploit in particular the toric resultant construction, in order to get a more accurate count of the roots than the Bezout bound. This applies to systems which are generic for the toric resultant formulation. Another problem that such methods have to face is that some paths can diverge in the middle of the interval [0, 1]. Generically, this does not occur, since the critical locus is a complex algebraic variety of codimension at least 1 and thus a real variety of codimension at least 2. As the paths are real curves of dimension 1, they generically do not intersect the critical locus. Numerical techniques for detecting and avoiding these divergence problems have been developed in order to obtain better control on the output of such homotopy methods [85,39]. Another class of methods exploits the relation between coefficients and roots. For a univariate polynomial p(x) = x^n + a_{n−1} x^{n−1} + · · · + a0, it consists in applying Newton's method to the vector

f(ζ) = ( a_i − (−1)^{n−i} σ_{n−i}(ζ) )_{i=0,...,n−1},
where σ_i is the i-th elementary symmetric function. The unknowns are the n roots ζ = [ζ1, . . . , ζn] of p = 0. Notice that the set of solutions of f(ζ) = 0 consists of the n! permutations of the vector ζ. Thus, if the Newton iteration converges to any of these solutions, it yields directly all the roots of p(x) = 0. Applying the Newton iteration to f is also known as Weierstrass's method [86], which corresponds to an explicit and simple form of the iteration step:

ζ_i := ζ_i − p(ζ_i) / ∏_{j≠i} (ζ_i − ζ_j),    for i = 1, . . . , n.
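The iteration step above is easy to implement; a sketch (the example polynomial and the starting points are arbitrary choices, not from the text):

```python
import numpy as np

def weierstrass(a, iters=200):
    """Simultaneous Weierstrass (Durand-Kerner) iteration for a monic p.
    a = [a0, a1, ..., a_{n-1}] are the low-order coefficients of p."""
    n = len(a)
    p = np.polynomial.Polynomial(list(a) + [1.0])
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)      # pairwise distinct starts
    for _ in range(iters):
        for i in range(n):
            denom = np.prod([z[i] - z[j] for j in range(n) if j != i])
            z[i] -= p(z[i]) / denom              # the Weierstrass correction
    return z

# p(x) = x^4 - 10x^3 + 35x^2 - 50x + 24 = (x-1)(x-2)(x-3)(x-4)
roots = weierstrass([24.0, -50.0, 35.0, -10.0])
```

A single converged fixed point yields all n roots at once, which is exactly the feature exploited by the multivariate generalisations cited next.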
Accelerations of this method, known as the Durand–Kerner and Aberth methods, have been studied and used successfully for solving large and difficult univariate polynomials [12]. Recently, generalisations of this approach to the multivariate case have also been proposed [8,78]. The basic idea is to express the relations between the normal forms on a basis B of the quotient A and the coordinates of the roots of the system. The idempotents (see the end of Section 3.1), which extend in some sense the univariate
Lagrange polynomials, are used to get a simple form of the Newton iteration in this context:

z_i := z_i − [ (1/V(z)) ∂R_{f_k}/∂x_j (z, z_i) ]_{k,j=1,...,n}^{−1} (f_1(z_i), . . . , f_n(z_i))^T,

where

R_Q(z, x) = det | x^{α_1}    · · ·  x^{α_D}    Q(x)    |
                | z_1^{α_1}  · · ·  z_1^{α_D}  Q(z_1)  |
                |   ...       ...     ...       ...    |
                | z_D^{α_1}  · · ·  z_D^{α_D}  Q(z_D)  |
and {x^{α_1}, . . . , x^{α_D}} is a basis of A. Here also, any fixed point of this iteration corresponds to the set of all the (simple) roots of the system. Combining a deformation approach with this Newton step yields a one-path following method which ends at a point representing all the roots of the system. We have assumed here that the roots of f(x) = 0 are simple, but the case of multiple roots (with a fixed type of multiplicity) and of overdetermined systems can also be treated with such an approach. See [78] for more details.

6.2. Iteration in A

The structure of the quotient algebra A can be exploited more directly by explicit iteration methods in A. Identifying an element a ∈ A with its multiplication operator Ma, we can, for instance, extend the well-known power method of linear algebra to this context. This gives the following induction process: u0 = h;
un+1 ≡ λn un a,
for n > 0,
where λn is a normalisation scalar (for instance, the L2 Hermitian norm of un a). This method, also known as Bernoulli's method, converges linearly to a multiple of the idempotent eζ ∈ A associated with the root ζ such that the norm |a(ζ)| is maximal. If several such roots exist, the sequence un will oscillate, but fixed points can easily be extracted by classical techniques. Once the idempotent is known, computing the associated root is straightforward. This method can be applied directly if we know a normal form algorithm in A. It can also be applied by using implicitly a normalisation process based on resultant computation, as illustrated in [13]. The product a un ∈ A is computed by solving a linear system deduced from the resultant matrix S (see Section 5.6). If a is invertible in A, the product a^{−1} un can also be computed similarly. Exploiting the sparsity of the involved matrices, without computing explicitly the matrix of multiplication, we obtain fast and controlled methods which converge (linearly) to
a specified root. Accelerating the convergence is also possible, just as with the shifted inverse power method [87]. In order to devise superfast methods for selecting a specific root, we replace the linear convergence of Bernoulli's iteration by the following process: u0 = a;
un+1 ≡ λn un^2,
for n > 0,
where λn is also a normalisation scalar. This method, known as Sebastião e Silva's method [19], converges quadratically from its starting point to a multiple of the idempotent eζ ∈ A such that the norm |a(ζ)| is maximal (if it is unique).

Example 6.1. Let us consider a system with two real and two complex roots:

> f1:= x[1]^2+2*x[1]*x[2]-x[1]-1;
> f2:= x[1]^2+x[2]^2-8*x[1];
Approximations of the roots are:

         ζ1            ζ2                            ζ3                            ζ4
x1    6.8200982     −0.19395427 + 0.20520688 i    −0.19395427 − 0.20520688 i    0.36781361
x2   −2.8367388     −0.61937124 − 1.3895199 i     −0.61937124 + 1.3895199 i     1.6754769
We illustrate the Sebastião e Silva method by computing first the root for which |x1| is maximal. We start with u0 = x1. After 4 iterations, we obtain u4 = 7.6055995 + 7.7975926 x1 − 0.46159096 x2 − 15.740471 x1 x2.
By multiplying it by x1 and x2 in A, we obtain ζ1 = (6.820095, −2.836734). If we start with

u0 ≡ (x1 − 1/2)^{−1} ≡ −78/35 − (228/35) x1 − (32/35) x2 − (16/7) x1 x2,

the algorithm should converge to the root for which x1 is the closest to 1/2. Indeed, after 4 iterations, we obtain u4 = 0.15292071 + 0.89409187 x1 + 0.16270766 x2 + 0.29923055 x1 x2,
which yields the root ζ4 = (0.3678148, 1.675476). Another iteration, known as Joukovski's iteration [48], can also be used here:

u0 = h;    un+1 ≡ (1/2)(un ± un^{−1}),    for n > 0.
It also converges quadratically, to a sum of idempotents depending on the signs of the real or imaginary parts of the complex numbers h(ζ) for ζ ∈ Z(I). See [19,66] for more details.
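A univariate sketch of the squaring iteration (Python/NumPy): the quotient algebra A = C[x]/(p) is handled here by plain reduction modulo p rather than the fast dual-basis arithmetic of [19], and the example polynomial is an arbitrary choice:

```python
import numpy as np

# Sebastiao e Silva-type iteration u_{n+1} = lambda_n * u_n^2 in A = C[x]/(p),
# computed by polynomial multiplication followed by reduction modulo p
# (NumPy poly routines order coefficients from highest to lowest degree).
p = np.array([1.0, -6.0, 11.0, -6.0])        # p(x) = x^3 - 6x^2 + 11x - 6, roots 1, 2, 3

u = np.array([1.0, 0.0])                     # u_0 = a = x
for _ in range(8):
    u = np.polydiv(np.polymul(u, u), p)[1]   # u^2 reduced modulo p
    u = u / np.max(np.abs(u))                # normalisation scalar lambda_n

# u is now (a multiple of) the idempotent associated with the root zeta = 3,
# where |a(zeta)| is maximal; since x * e_zeta = zeta * e_zeta in A,
# multiplying by x recovers the root:
xu = np.polydiv(np.polymul([1.0, 0.0], u), p)[1]
ratio = xu / u                               # each entry close to 3
```

After 8 squarings the contributions of the roots 1 and 2 are suppressed by factors like (2/3)^{2^8}, which is why so few iterations suffice; this is the quadratic convergence mentioned above.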
Efficient algorithms for solving univariate polynomials based on these iteration steps have been proposed in [19]. They rely on a fast multiplication algorithm in A which exploits the properties of the dual basis of the monomial basis. Using the Fast Fourier Transform, the complexity of such arithmetic operations in A is bounded by O(d log(d)) (where d is the degree of the univariate polynomial defining A). Experiments with polynomials of degree up to 10^6 show the power and practical impact of such methods [67]. It is a challenging and open problem to generalise these techniques to the multivariate case. A step in this direction has been taken in [66], for the case of affine complete intersection systems. We use the residue (or any other generator of the dual of A) in order to get a fast multiplication algorithm in A. Exploiting fast polynomial multiplication, we obtain a quasi-quadratic complexity bound for the arithmetic operations in A, in terms of the dimension D of A. Applications of these tools to select a specific root, to count and isolate the almost-real roots, the roots in a given box, etc., are given in [66].

6.3. Isolating the solutions

In many problems, we are not necessarily interested in all the roots. Moreover, often only the real roots correspond to physical solutions. In the previous section, we described methods which allow us to select specific roots among the set of all complex roots. In this section, we present methods which only consider the real roots. The goal here is to output a set of domains which are small enough and each of which contains one and only one real root. A standard way to perform such a computation is to apply a dichotomy process, which splits a bounded domain into several subdomains if it may contain real roots, or removes it if it does not contain a real root. This process is repeated iteratively on each subdomain, until the refinement is small enough.
A key ingredient of such an approach is an exclusion function, which asserts that a domain does not contain a real root. This exclusion function can take different forms. In the univariate case, one can for instance use Sturm's method [9,77], which counts the number of roots in an interval [a, b]. Descartes' rule, which yields the number of positive real roots up to an even nonnegative number, can also be used. This leads to the so-called Uspensky method [83,76], which can be adapted to any interval using Bernstein polynomials and the de Casteljau bisection algorithm [69]. Another type of exclusion method is used in [27]. It is based on a Taylor expansion of the polynomial at a given point. Combined with a Newton process, it yields a global method for computing the real roots of a univariate polynomial and certifying the output. Exclusion functions based on interval analysis [52] can also be used to certify that an interval does not contain a real root. The extension of these methods to higher dimension exists but may suffer from prohibitive bisection costs. We only mention Weyl's method, extended in [70], which uses Turan's test to localise the roots of a polynomial in the complex plane, and the application of topological degree theory to localise real roots [69].
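A minimal isolator in this style (a sketch: it uses Descartes' rule on an interval via the classical Moebius transformation, assumes the polynomial squarefree, and does not report roots at the outer endpoints):

```python
from fractions import Fraction

def pmul(u, v):                       # product of coefficient lists (low -> high)
    w = [Fraction(0)] * (len(u) + len(v) - 1)
    for i, ci in enumerate(u):
        for j, cj in enumerate(v):
            w[i + j] += ci * cj
    return w

def variations(cs):                   # Descartes sign variations
    signs = [c for c in cs if c != 0]
    return sum(1 for s, s2 in zip(signs, signs[1:]) if s * s2 < 0)

def descartes_bound(p, a, b):
    """Sign variations of sum_i p_i (a + b*x)^i (1 + x)^(n-i): a bound on the
    number of roots of p in the open interval (a, b), exact when 0 or 1."""
    n = len(p) - 1
    q = [Fraction(0)] * (n + 1)
    for i, c in enumerate(p):
        term = [Fraction(c)]
        for _ in range(i):
            term = pmul(term, [a, b])
        for _ in range(n - i):
            term = pmul(term, [Fraction(1), Fraction(1)])
        for k, ck in enumerate(term):
            q[k] += ck
    return variations(q)

def isolate(p, a, b):
    """Disjoint intervals, each containing exactly one root of squarefree p."""
    v = descartes_bound(p, a, b)
    if v == 0:
        return []                     # exclusion: no root in (a, b)
    if v == 1:
        return [(a, b)]               # exactly one root in (a, b)
    m = (a + b) / 2
    hit = [(m, m)] if sum(c * m**i for i, c in enumerate(p)) == 0 else []
    return isolate(p, a, m) + hit + isolate(p, m, b)

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3), low -> high coefficients
p = [Fraction(-6), Fraction(11), Fraction(-6), Fraction(1)]
boxes = isolate(p, Fraction(0), Fraction(4))   # -> [(0, 2), (2, 2), (2, 4)]
```

Termination of the bisection is guaranteed for squarefree input by Vincent's theorem; the exact rational arithmetic keeps the exclusion test certified.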
R EFERENCES [1] Adams W., Loustaunau P. – An Introduction to Gröbner Bases, Amer. Math. Soc., Providence, RI, 1994. [2] Aizenberg L., Kytmanov A.M. – Multidimensional analogues of Newtons formulas for systems of nonlinear algebraic equations and some of their applications, Translated from Sibirsk. Mat. Zh. 22 (1981) 19–39. [3] Alonso M., Becker E., Roy M., Wörmann T. – Zeros, multiplicities and idempotents for zero dimensional systems, in: GonzálezVega L., Recio T. (Eds.), Algorithms in Algebraic Geometry and Applications, in: Progr. Math., vol. 143, Birkhäuser, Basel, 1996, pp. 1–15. [4] Aubry P. – Ensembles triangulaires de polynômes et résolution de systèmes algébriques. Implantation en axiom, Ph.D. thesis, Univ. Paris VI, 1999. [5] Auzinger W., Stetter H.J. – An elimination algorithm for the computation of all zeros of a system of multivariate polynomial equations, in: Proc. Intern. Conf. on Numerical Math., in: Internat. Ser. Numer. Math., vol. 86, Birkhäuser, 1988, pp. 12–30. [6] Becker E., Cardinal J., Roy M., Szafraniec Z. – Multivariate Bezoutians, Kronecker symbol and Eisenbud–Levin formula, in: GonzálezVega L., Recio T. (Eds.), Algorithms in Algebraic Geometry and Applications, in: Progr. Math., vol. 143, Birkhäuser, Basel, 1996, pp. 79–104. [7] Becker T., Weispfenning V., Kredel H. – Gröbner Bases. A Computational Approach to Commutative Algebra, Graduate Texts in Math., vol. 141, SpringerVerlag, Berlin, 1993. [8] Bellido A.M. – Construction of iteration functions for the simultaneous computation of the solutions of equations and algebraic systems, Numer. Algorithms 6 (1994) 313–351. [9] Benedetti R., Risler J. – Real Algebraic and SemiAlgebraic Sets, Hermann, 1990. [10] Berenstein C., Gay R., Vidras A., Yger A. – Residue Currents and Bezout Identities, Progr. Math., vol. 114, Birkhäuser, 1993. [11] Bézout E. 
– Recherches sur les degrés des équations résultantes de l’évanouissement des inconnues et sur les moyens qu’il convient d’employer pour trouver ces équations, Hist de l’Aca. Roy. des Sciences (1764) 288–338. [12] Bini D. – Numerical computation of polynomial zeros by means of Aberth’s method, Numer. Algorithms 13 (1996). [13] Bondyfalat D., Mourrain B., Pan V.Y. – Solution of a polynomial system of equations via the eigenvector computation, Linear Algebra Appl. 319 (2000) 193–209. [14] Bruns W., Kustin A.R., Miller M. – The resolution of the generic residual intersection of a complete intersection, J. Algebra 128 (1990) 214–239. [15] Busé L., Elkadi M., Mourrain B. – Generalized resultant over unirational algebraic varieties, J. Symbolic Comput. 29 (2000) 515–526. [16] Busé L., Elkadi M., Mourrain B. – Residual resultant of complete intersection, J. Pure Appl. Math. 164 (2001) 35–57. [17] Canny J. – Generalised characteristic polynomials, J. Symbolic Comput. 9 (1990) 241–250. [18] Canny J., Emiris I. – A subdivisionbased algorithm for the sparse resultant, J. ACM 47 (2000) 417–451. [19] Cardinal J.P. – On two iterative methods for approximating the roots of a polynomial, in: Renegar J., Shub M., Smale S. (Eds.), Proc. AMSSIAM Summer Seminar on Math. of Numerical Analysis (Park City, Utah, 1995), in: Lectures in Appl. Math., vol. 32, Amer. Math. Soc., Providence, 1996, pp. 165–188. [20] Cattani E., Dickenstein A. – A global view of residues in the torus, J. Pure Appl. Algebra 117 & 118 (1996) 119–144. [21] Cattani E., Dickenstein A., Sturmfels B. – Computing multidimensional residues, in: GonzálezVega L., Recio T. (Eds.), Algorithms in Algebraic Geometry and Applications, in: Progr. Math., vol. 143, Birkhäuser, Basel, 1996. [22] Chardin M., Ulrich B. – Liaison and Castelnuovo–Mumford regularity, Amer. J. Math. 124 (6) (2002) 1103–1124. [23] Chistov A. 
– Algorithm of polynomial complexity for factoring polynomials and finding the components of varieties in subexponential time, J. Soviet Math. 34 (1986) 1838–1882.
[24] Corless R., Gianni P., Trager B. – A reordered Schur factorization method for zerodimensional polynomial systems with multiple roots, in: Küchlin W. (Ed.), Proc. ISSAC, 1997, pp. 133– 140. [25] Cox D., Little J., O’Shea D. – Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra, Undergrad. Texts Math., SpringerVerlag, New York, 1992. [26] Cox D.A. – Equations of parametric curves and surfaces via syzygies, Contemp. Math. 286 (2001) 1–20. [27] Dedieu J.P., Yakoubsohn J.C. – Computing the real roots of a polynomial by the exclusion algorithm, Numer. Algorithms 4 (1993) 1–24. [28] Eisenbud D. – Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts in Math., vol. 150, SpringerVerlag, Berlin, 1994. [29] Elkadi M., Mourrain B. – Approche effective des résidus algébriques, Rapport de Recherche 2884, INRIA, Sophia Antipolis, 1996. [30] Elkadi M., Mourrain B. – Some applications of Bezoutians in effective algebraic geometry, Rapport de Recherche 3572, INRIA, Sophia Antipolis, 1998. [31] Elkadi M., Mourrain B. – Introduction à la résolution des systèmes polynomiaux, Mathématiques et Applications (SMAI) (2006). [32] Elkadi M., Mourrain B. – Algorithms for residues and Lojasiewicz exponents, J. Pure Appl. Algebra 153 (2000) 27–44. [33] Emiris I., Canny J. – Efficient incremental algorithms for the sparse resultant and the mixed volume, J. Symbolic Comput. 20 (1995) 117–149. [34] Emiris I., Mourrain B. – Matrices in elimination theory, J. Symbolic Comput. 28 (1999) 3–44. [35] Emiris I., Rege A. – Monomial bases and polynomial system solving, in: Proc. ACM Intern. Symp. on Symbolic and Algebraic Computation (Oxford, July 1994), pp. 114–122. [36] Faugère J. – A new efficient algorithm for computing Gröbner basis (F4), J. Pure Appl. Algebra 139 (1999) 61–88. [37] Fuhrmann P. – A Polynomial Approach to Linear Algebra, SpringerVerlag, 1996. [38] Gallo G., Mishra B. 
– Efficient algorithms and bounds for Wu–Ritt characteristic sets, in: Effective Methods in Algebraic Geometry (MEGA’90) (Castiglioncello, Italy), in: Progr. Math., vol. 94, Birkhäuser, 1991, pp. 119–142. [39] Gao T., Li T., Verschelde J., Wu M. – Balancing the lifting values to improve the numerical stability of polyhedral homotopy continuation methods, Appl. Math. Comput. 114 (2000) 233–247. [40] Gelfand I., Kapranov M., Zelevinsky A. – Discriminants, Resultants and Multidimensional Determinants, Birkhäuser, Boston, 1994. [41] Gianni P., Trager B., Zacharias G. – Gröbner bases and primary decomposition of polynomial ideals, J. Symbolic Comput. 6 (1998) 149–167. [42] Giusti M., Heintz J. – Algorithmes – disons rapides – pour la décomposition d’une variété algébrique en composantes irréductibles et équidimensionnelles, in: Effective Methods in Algebraic Geometry (MEGA’90) (Castiglioncello Italy), Progr. in Math., vol. 94, Birkhäuser, 1991, pp. 169–193. [43] Giusti M., Heintz J. – La détermination des points isolés et de la dimension d’une variété algébrique peut se faire en temps polynomial, in: Proc. Int. Meeting on Commutative Algebra (Cortona, 1991), in: Sympos. Math., vol. XXXIV, pp. 216–255. [44] GonzalezVega L., Rouillier F., Roy M. – Symbolic recipes for polynomial system solving, in: Cohen A., et al. (Eds.), Some Tapas of Computer Algebra, Springer, 1999. [45] Grayson D.R., Stillman M.E. – Macaulay 2, a software system for research in algebraic geometry, Available at http://www.math.uiuc.edu/Macaulay2. [46] Greuel G.M., Pfister G., Schoenemann H. – Singular, a computer algebra system for polynomial computations, Available at http://www.singular.unikl.de/team.html. [47] Grigoryev D. – Factorization of polynomials over finite field and the solution of systems of algebraic equations, J. Soviet Math. 34 (1986) 1762–1803. [48] Henrici P. – Applied and Computational Complex Analysis, vol. I, Wiley, 1988. [49] Hoffmann C. 
– Geometric and Solid Modeling, Morgan Kaufmann, 1989.
[50] Huber B., Sturmfels B. – Bernstein’s theorem in affine space, Discrete Comput. Geom. 17 (1997) 137–142. [51] Kalkbrener M. – A generalized Euclidean algorithm for computing triangular representations of algebraic varieties, J. Symbolic Comput. 15 (1993) 143–167. [52] Kearfott R.B. – Interval arithmetic techniques in the computational solution of nonlinear systems of equations: Introduction, examples and comparisons, in: Lectures in Appl. Math., Amer. Math. Soc., 1990, pp. 337–357. [53] Kunz E. – Kähler Differentials, Adv. Lectures Math., Friedr. Vieweg and Sohn, 1986. [54] Lakshman Y.N., Lazard D. – On the complexity of zerodimensional algebraic systems, in: Effective Methods in Algebraic Geometry (MEGA’90) (Castiglioncello, Italy), in: Progr. Math., vol. 94, Birkhäuser, 1991, pp. 217–225. [55] Lazard D. – Algèbre linéaire sur k[x1 , . . . , xn ] et élimination, Bull. Soc. Math. France 105 (1977) 165–190. [56] Lazard D. – A new method for solving algebraic equations of positive dimension, Discrete Appl. Math. 33 (1991) 147–160. [57] Lecerf G. – Computing an equidimensional decomposition of an algebraic variety by means of geometric resolutions, in: Proc. ISSAC, 2000, pp. 209–216. [58] Li T.Y. – Numerical solution of multivariate polynomial systems by homotopy continuation methods, Acta Numerica 6 (1997) 399–436. [59] Macaulay F. – Some formulae in elimination, Proc. London Math. Soc. 1 (1902) 3–27. [60] Macaulay F. – The Algebraic Theory of Modular Systems, Cambridge Univ. Press, 1916. [61] Morgan A., Sommese A. – A homotopy for solving general polynomial systems that respects mhomogeneous structures, Appl. Math. Comput. 24 (1987) 101–113. [62] Mourrain B. – Enumeration problems in geometry, robotics and vision, in: González L., Recio T. (Eds.), Algorithms in Algebraic Geometry and Applications, in: Progr. Math., vol. 143, Birkhäuser, Basel, 1996, pp. 285–306. [63] Mourrain B. – Computing isolated polynomial roots by matrix methods, J. 
Symbolic Comput., Special Issue on Symbolic–Numeric Algebra for Polynomials 26 (1998) 715–738. [64] Mourrain B. – A new criterion for normal form algorithms, in: Fossorier M., Imai H., Lin S., Poli A. (Eds.), Proc. AAECC, in: Lecture Notes in Comput. Sci., vol. 1719, Springer, Berlin, 1999, pp. 430–443. [65] Mourrain B., Pan V.Y. – Asymptotic acceleration of solving multivariate polynomial systems of equations, in: Proc. STOC, ACM Press, 1998, pp. 488–496. [66] Mourrain B., Pan V.Y. – Multivariate polynomials, duality and structured matrices, J. Complexity 16 (2000) 110–180. [67] Mourrain B., Prieto H. – A framework for symbolic and numeric computations, Rapport de Recherche 4013, INRIA, 2000. [68] Mourrain B., Trébuchet P. – Solving projective complete intersection faster, Proc. ISSAC, 2000, pp. 231–238. [69] Mourrain B., Vrahatis M., Yakoukshon J. – On the complexity of isolating real roots and computing with certainty the topological degree, J. Complexity 18 (2002) 612–640. [70] Pan V. – Optimal and nearly optimal algorithms for approximating polynomial zeros, Comput. Math. (with Appl.) 31 (1996) 97–138. [71] Pedersen P., Sturmfels B. – Mixed monomial bases, in: GonzálezVega L., Recio T. (Eds.), Effective Methods in Algebraic Geometry (Proc. MEGA ’94, Santander, Spain), in: Progr. Math., vol. 143, Birkhäuser, Boston, 1996, pp. 307–316. [72] Pedersen P.S., Roy M.F., Szpirglas A. – Counting real zeros in the multivariate case, in: Galligo A., Eyssette F. (Eds.), Effective Methods in Algebraic Geometry (MEGA’92) (Nice, France), in: Progr. Math., Birkhäuser, 1993, pp. 203–223. [73] Renegar J. – On the computational complexity and geometry of the first order theory of reals (I, II, III), J. Symbolic Comput. 13 (1992) 255–352. [74] Robbianno L. – Cocoa, computational commutative algebra, Available at http://www.singular. unikl.de/team.html.
[75] Rouillier F. – Solving zerodimensional polynomial systems through rational univariate representation, Appl. Algebra Engrg. Comm. Comput. 9 (1999) 433–461. [76] Rouillier F., Zimmermann P. – Efficient isolation of a polynomial real roots, Preprint. [77] Roy M. – Basic algorithms in real algebraic geometry: From Sturm theorem to the existential theory of reals, in: Lectures on Real Geometry in memoriam of Mario Raimondo. [78] Ruatta O. – A multivariate Weierstrass iterative rootfinder, in: Mourrain B. (Ed.), Proc. Intern. Symp. on Symbolic and Algebraic Computation (London, Ontario), ACM Press, 2001, pp. 276–283. [79] Scheja G., Storch U. – Über Spurfunktionen bei vollständigen Durchschnitten, J. Reine Angew. Math. 278 (1975) 174–190. [80] Sederberg T., Chen F. – Implicitization using moving curves and surfaces, in: Proceedings of SIGGRAPH, 1995, pp. 120–160. [81] Stetter H.J. – Eigenproblems are at the heart of polynomial system solving, SIGSAM Bull. 30 (1996) 22–25. [82] Sturmfels B. – Sparse elimination theory, in: Eisenbud D., Robianno L. (Eds.), Computational Algebraic Geometry and Commutative Algebra, Cambridge Univ. Press, Cambridge, 1993, pp. 264–298. [83] Uspensky J. – Theory of Equations, MacGrawHill, 1948. [84] Vasconcelos W. – Computational Methods in Commutative Algebra and Algebraic Geometry, Algorithms Comput. Math., vol. 2, SpringerVerlag, 1998. [85] Verschelde J., Verlinden P., Cools R. – Homotopies exploiting Newton polytopes for solving sparse polynomial systems, SIAM J. Numer. Anal. 31 (1994) 915–930. [86] Weierstrass K. – Neuer Beweis des Satzes, dass jede ganze rationale Function einer veränderlichen dargestellt werden kannals ein Product aus linearen Functionen derselben veränderlichen, Mathematische Werke, Tome 3, 1903. [87] Wilkinson J. – The Algebraic Eigenvalue Problem, Oxford Univ. Press, London, 1965. [88] Wu W.T. – Basic principles of mechanical theorem proving in elementary geometries, J. Automated Reasoning 2 (1986) 221–252.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Zerodimensional ideals or The inestimable value of estimable terms ✩
Lorenzo Robbiano Dipartimento di Matematica, Via Dodecaneso 35, 16146 Genova, Italy Email:
[email protected] (L. Robbiano)
It’s like switching your point of view. Things sometimes look complicated from one angle, but simple from another. (Achilles to the Tortoise, in Gödel, Escher, Bach)
(Douglas R. Hofstadter)

ABSTRACT

This survey describes several aspects of the theory of zerodimensional ideals. It is organized according to the following themes:
1. Introduction
2. Zerodimensional ideals and commuting endomorphisms
3. Zerodimensional ideals and rewrite rules
1. INTRODUCTION

There are many ways of looking at zerodimensional ideals. These algebraic objects correspond to systems of polynomial equations whose set of solutions is finite. Of course, the above sentence is vague, since we do not mention where we are looking for solutions or, to put it in a more challenging way, we do not specify what we mean by solutions. In any event, it is possible to lay a solid foundation for a theoretical approach to algebraic systems of polynomial equations, and I refer the interested reader to Section 3.7 of [8] for a thorough discussion of the topic. Consider the polynomial ring P = K[x, y] and the ideal I = (y). The quotient ring P/I is isomorphic to K[x], an infinite-dimensional K-vector space. Instead, if we pick the ideal I = (x^2, y), then the quotient ring P/I has the residue classes of {1, x} as a basis. The latter is an example of a zerodimensional ideal. Let I be a zerodimensional ideal in P = K[x1, . . . , xn]. We may look at it from the inside by describing a set
On the author's request, the editors would like to stress that this contribution dates from 2001 and hence reflects the state of the art in 2001.
of generators, or from the outside by describing a set of polynomials whose residue classes in P/I form a basis of P/I as a K-vector space. Let us see how these points of view behave in the case of a univariate polynomial ring P = K[x]. In this case every ideal I is principal, hence the description “from the inside” can be given by specifying the unique monic generator of I. Suppose that I is the ideal generated by f = a_0 + a_1x + a_2x² + ⋯ + a_{d−1}x^{d−1} + x^d. It is clear that {f} is also a Gröbner basis of I and that O = (1, x, …, x^{d−1}) maps to a basis of P/I. However, O does not capture all the information about I, and so we need something more. We consider the endomorphism of P/I given by “multiplication by x” and represent it via its matrix with respect to O. This is the well-known companion matrix of f, from which the polynomial f can be reconstructed (see Remarks 2.19 and 2.20).
The main reason why we are interested in stepping aside from the theory of Gröbner bases is that there are bases of P/I which cannot be obtained by a Gröbner basis computation (see Example 3.5). Besides, many bases of this type are very interesting both from a theoretical and a computational point of view (see, for instance, [10]).
In Section 2 we build up the theoretical background for a description of zero-dimensional ideals “from the outside”, i.e. via multiplication endomorphisms, which have the important property of being pairwise commuting. In general, given a finite-dimensional vector space V and a finite set of pairwise commuting endomorphisms ϑ_1, …, ϑ_n of V, one can ask: do they come from multiplication endomorphisms of a vector space of the type P/I, where I is a zero-dimensional ideal? To answer this question, we use a P-module structure on V given by f(x_1, …, x_n)·v = f(ϑ_1, …, ϑ_n)(v), and show that the answer is affirmative if and only if V is cyclic (see Proposition 2.10).
Then we describe an algorithm which checks whether V is cyclic (see Proposition 2.17). Several examples at the end of the section illustrate this piece of theory.
In Section 3 we switch our point of view a little and introduce rewrite rules. It is well-known that the confluence of the rewrite rules associated to polynomials is guaranteed if and only if the marking of the polynomials is coherent, i.e. induced by a term ordering (see Theorem 3.8, taken from [13]). So we must focus our attention on a different way of rewriting. We introduce a normal remainder function from the non-commutative polynomial ring to V (see Definition 3.14), and we study its properties. Then we introduce border bases (see Definition 3.18) and matrices associated to them (see Definition 3.20). The final Theorem 3.23 and its Corollary 3.24 make the desired connection between commuting matrices, border bases, and the representation of a zero-dimensional ideal “from the outside”.
This chapter was largely inspired by [10], which started the algorithmic, as well as the theoretical, investigation of these special bases related to zero-dimensional ideals. Our contribution should be considered as an introduction to a piece of mathematics which will certainly play an important role in the near future. The ideas developed here have already been used in [6] to solve an important problem in statistics.
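The toy computation mentioned above (P/(x², y) with basis {1, x̄}) can be mimicked in a few lines of code. The sketch below (hypothetical helper name, pure Python) stores a polynomial of K[x, y] as a dictionary mapping exponent pairs to coefficients; because this particular I is generated by terms, reduction modulo I simply discards every monomial divisible by x² or y, leaving exactly the span of {1, x}.

```python
# A polynomial in K[x, y] is a dict {(i, j): coefficient} for the monomial x^i y^j.
# Modulo I = (x^2, y), a monomial x^i y^j survives only if i < 2 and j == 0,
# so the residue class is determined by the coefficients of 1 and x.

def normal_form_mod_x2_y(poly):
    """Reduce poly modulo the (monomial) zero-dimensional ideal (x^2, y)."""
    return {(i, j): c for (i, j), c in poly.items() if i < 2 and j == 0 and c != 0}

# Example: f = 3 + 2x + 5x^2 + 7xy reduces to 3 + 2x.
f = {(0, 0): 3, (1, 0): 2, (2, 0): 5, (1, 1): 7}
print(normal_form_mod_x2_y(f))  # {(0, 0): 3, (1, 0): 2}
```

So dim_K(P/I) = 2 here, with basis classes 1̄ and x̄; for a non-monomial ideal the reduction is of course more involved, which is what the rest of the paper is about.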
Conventions
Throughout the entire paper we adopt the conventions and terminology used in the recent book [8]. In particular, P denotes the multivariate polynomial ring K[x_1, …, x_n] over a field K, and T^n the monoid of terms (power products) in the indeterminates x_1, …, x_n. An important convention that we use consistently is the following. Tuples of elements are denoted by calligraphic letters, while their underlying sets are denoted by the corresponding roman character. For instance, when G = (g_1, …, g_r), then automatically G denotes {g_1, …, g_r}. What about matrices? Let V be a finitely generated K-vector space, F a tuple of vectors in V, and E a basis of V. We denote by M^E_F the matrix whose columns are the E-coordinates of the vectors in F. For ϑ ∈ End_K(V), the matrix M^E_{ϑ(E)} will be simply denoted by M^E_ϑ.

2. ZERO-DIMENSIONAL IDEALS AND COMMUTING ENDOMORPHISMS
We remind the reader that the theory of term orderings can be found in Section 1.4 of [8], and start by recalling the following fundamental result (see [8, Proposition 3.7.1]) which characterizes zero-dimensional ideals.

Proposition 2.1. Let I be an ideal in P, let K̄ be the algebraic closure of K, and let P̄ = K̄[x_1, …, x_n]. The following conditions are equivalent.
(a) The ideal I·P̄ is contained in only finitely many maximal ideals of P̄.
(b) For i = 1, …, n, we have I ∩ K[x_i] ≠ (0).
(c) The K-vector space P/I is finite-dimensional.
Let σ be a term ordering on T^n. Then the above conditions are equivalent to the following ones.
(d) The set T^n \ LT_σ{I} is finite.
(e) For every i ∈ {1, …, n}, there exists a number α_i ≥ 0 such that we have x_i^{α_i} ∈ LT_σ(I).

Definition 2.2. An ideal I = (f_1, …, f_s) in P is called zero-dimensional if it satisfies the equivalent conditions of Proposition 2.1. The number dim_K(P/I), finite by the above proposition, is usually called the multiplicity of P/I, and is denoted by mult(P/I).

This is our starting point, and we shall find condition (c) especially useful. Our next goal is to relate zero-dimensional ideals to finite sets of pairwise commuting endomorphisms.

Definition 2.3. Let V be a finite-dimensional K-vector space, and let End_K(V) be the (non-commutative) K-algebra whose elements are the K-linear endomorphisms of V. The multiplication in End_K(V) is given by composition. Suppose that
ϑ_1, …, ϑ_n ∈ End_K(V) are such that ϑ_i ∘ ϑ_j = ϑ_j ∘ ϑ_i for all i, j ∈ {1, …, n}. We say that ϑ_1, …, ϑ_n are pairwise commuting endomorphisms.
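Conditions (d) and (e) of Proposition 2.1 lend themselves to a direct finiteness check: given the exponent vectors of the leading terms of a Gröbner basis, the ideal is zero-dimensional iff each variable has a pure power among the leading terms, and the multiplicity is the number of terms outside the leading-term ideal. A small illustrative sketch (hypothetical function names, pure Python):

```python
from itertools import product

def is_zero_dimensional(lt_exponents, n):
    """Condition (e): every variable must have a pure power x_i^a among the leading terms."""
    return all(
        any(e[i] > 0 and all(e[k] == 0 for k in range(n) if k != i) for e in lt_exponents)
        for i in range(n)
    )

def standard_terms(lt_exponents, n):
    """Condition (d): the terms outside LT are those divisible by no leading-term exponent."""
    bound = [max(e[i] for e in lt_exponents) for i in range(n)]
    divisible = lambda t, e: all(t[i] >= e[i] for i in range(n))
    return [t for t in product(*(range(b) for b in bound))
            if not any(divisible(t, e) for e in lt_exponents)]

# Leading terms x^2, xy, y^2 (cf. Example 2.18): zero-dimensional of multiplicity 3.
lts = [(2, 0), (1, 1), (0, 2)]
print(is_zero_dimensional(lts, 2))      # True
print(sorted(standard_terms(lts, 2)))   # [(0, 0), (0, 1), (1, 0)], i.e. 1, y, x
```

The enumeration bound is harmless: every standard term has i-th exponent below the exponent of the pure power x_i^{α_i}, hence below the componentwise maximum used above.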
Given pairwise commuting endomorphisms, we may define several algebraic structures attached to them.

Proposition 2.4. Let V be a finite-dimensional K-vector space, and let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms.
(a) There exists a unique K-algebra homomorphism Θ : P → End_K(V) which is defined by Θ(1) = Id, the identity endomorphism of V, and Θ(x_i) = ϑ_i for i = 1, …, n. We have Θ(f(x_1, …, x_n)) = f(ϑ_1, …, ϑ_n).
(b) There exists a P-module structure on V defined by f(x_1, …, x_n)·v = f(ϑ_1, …, ϑ_n)(v). We say that V is a P-module via ϑ_1, …, ϑ_n.
(c) For every w ∈ V, the K-linear map α_w : End_K(V) → V which is defined by the rule α_w(ϑ) = ϑ(w) induces the map Φ_w = α_w ∘ Θ : P → V, which is therefore defined by Φ_w(f) = f(ϑ_1, …, ϑ_n)(w) = f(x_1, …, x_n)·w, and is a P-module homomorphism.
(d) Given a tuple W = (w_1, …, w_r), a P-module homomorphism Φ_W : P^r → V is defined by Φ_W(f_1, …, f_r) = Σ_{i=1}^r Φ_{w_i}(f_i).

Proof. All the proofs are easy and left to the reader.
□
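The module action of part (b) is literally “evaluate the polynomial at the endomorphisms, then apply the result to the vector”. A minimal sketch with 2×2 integer matrices (chosen only for illustration; any commuting pair works, e.g. a matrix and its square):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def act(poly, mats, v):
    """f(x_1, ..., x_n) * v = f(theta_1, ..., theta_n)(v).
    poly is a dict {exponent tuple: coefficient}."""
    result = [0] * len(v)
    for exps, c in poly.items():
        w = v[:]                       # start from Id(v)
        for M, e in zip(mats, exps):
            for _ in range(e):
                w = mat_vec(M, w)      # apply theta_i a total of e times
        result = [r + c * wi for r, wi in zip(result, w)]
    return result

M = [[1, 1], [0, 1]]
M2 = mat_mul(M, M)                     # M and M^2 always commute
f = {(1, 1): 1, (0, 0): 2}             # f(x, y) = xy + 2
print(act(f, [M, M2], [1, 0]))         # [3, 0]
```

Since the matrices commute, the order in which the factors ϑ_i are applied is irrelevant, which is exactly why the action is well defined.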
Proposition 2.5. Let V be a finite-dimensional K-vector space, let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms, and let w ∈ V. The map Φ_w is characterized by the following three properties:
(i) Φ_w(1) = w,
(ii) Φ_w is K-linear,
(iii) Φ_w(x_i f) = ϑ_i(Φ_w(f)) for all f ∈ P.

Proof. It follows from the definition that Φ_w satisfies the above properties. Now, let ϕ : P → V be a map with the three properties above. We want to show that ϕ = Φ_w. By (ii) it suffices to show that ϕ and Φ_w coincide on terms, and by (i) we can use induction on the degree. So let t be a term of degree d and assume that ϕ and Φ_w coincide on terms of degree at most d − 1. We write t = x_i t′ with deg(t′) = d − 1. Property (iii) implies that ϕ(t) = ϑ_i(ϕ(t′)) = ϑ_i(Φ_w(t′)) = Φ_w(t), and we are done. □

Lemma 2.6. Let V be a finite-dimensional K-vector space and let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms.
(a) The map Θ induces a K-algebra isomorphism P/Ker(Θ) ≅ K[ϑ_1, …, ϑ_n].
(b) Ker(Θ) is a zero-dimensional ideal.
(c) Ker(Θ) = Ann_P(V).
(d) For w ∈ V we have Ker(Φ_w) ⊇ Ker(Θ); hence Ker(Φ_w) is a zero-dimensional ideal, and Φ_w defines a K-linear map Φ̄_w : P/Ker(Θ) → V.

Proof. The proof of (a) follows immediately from the definitions. To prove (b) it suffices to observe that for every i = 1, …, n the characteristic polynomial of ϑ_i vanishes at ϑ_i, hence the claim follows from Proposition 2.1(b). To prove (c) we observe that f(x_1, …, x_n) annihilates V if and only if f(ϑ_1, …, ϑ_n) is the zero endomorphism, and this happens if and only if f(x_1, …, x_n) ∈ Ker(Θ). Claim (d) is clear. □

Let us consider a fundamental case of commuting endomorphisms.

Proposition 2.7. Let I be a zero-dimensional ideal in P, and let ϑ_{x_1}, …, ϑ_{x_n} ∈ End_K(P) be the endomorphisms defined by the multiplications by x_1, …, x_n respectively.
(a) The maps ϑ_{x_1}, …, ϑ_{x_n} induce maps in End_K(P/I), still denoted by ϑ_{x_1}, …, ϑ_{x_n}.
(b) The K-algebra homomorphism Θ : P → End_K(P/I) defined by Θ(x_i) = ϑ_{x_i} for i = 1, …, n is such that Ker(Θ) = I.
(c) The map Θ induces a K-algebra isomorphism P/I ≅ K[ϑ_{x_1}, …, ϑ_{x_n}].

Proof. First of all, to prove that the maps ϑ_{x_i} pass to the quotient we need to show that ϑ_{x_i}(I) ⊆ I. This is clear since ϑ_{x_i} is multiplication by x_i. To show the inclusion I ⊆ Ker(Θ), we first observe that Θ(f(x_1, …, x_n)) = f(ϑ_{x_1}, …, ϑ_{x_n}). Now it suffices to show that f ∈ I implies that f(ϑ_{x_1}, …, ϑ_{x_n}) is the zero map on P/I. In other words it suffices to show that f(ϑ_{x_1}, …, ϑ_{x_n})(h) ∈ I for every h ∈ P, and this follows from the fact that f(ϑ_{x_1}, …, ϑ_{x_n})(h) = f·h. To prove that Ker(Θ) ⊆ I, let f ∈ Ker(Θ). Then f(ϑ_{x_1}, …, ϑ_{x_n}) is the zero map, which implies that f(ϑ_{x_1}, …, ϑ_{x_n})(1) = f ∈ I. Finally we observe that the proof of (c) follows from (b) and Lemma 2.6(a). □

Remark 2.8. Following Propositions 2.4 and 2.7, we can say that the K-vector space P/I is a cyclic P-module via ϑ_{x_1}, …, ϑ_{x_n}, generated by 1̄, and that Φ_{1̄} coincides with the canonical homomorphism P → P/I.

Definition 2.9. The endomorphisms ϑ_{x_1}, …, ϑ_{x_n} ∈ End_K(P/I) described above are called the multiplication endomorphisms of P/I. Let Ō = (f̄_1, …, f̄_µ) be a basis of P/I as a K-vector space. Then the matrices associated to ϑ_{x_1}, …, ϑ_{x_n} with respect to Ō are called the multiplication matrices associated to O, and are simply denoted by M^O_{x_1}, …, M^O_{x_n}.

Let us go back to the general situation. We are going to study the conditions under which the map Φ_w is surjective.
Proposition 2.10. Let V be a finite-dimensional K-vector space, let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms, let J = Ker(Θ) = Ann_P(V), and let w ∈ V. The following conditions are equivalent.
(a) There exists a tuple (h_1, …, h_µ) of polynomials whose classes form a basis of P/J as a K-vector space, such that the corresponding tuple (Φ_w(h_1), …, Φ_w(h_µ)) is a basis of V as a K-vector space.
(b) For every tuple (h_1, …, h_µ) of polynomials whose classes form a basis of P/J as a K-vector space, the corresponding tuple (Φ_w(h_1), …, Φ_w(h_µ)) is a basis of V as a K-vector space.
(c) We have V = ⟨w⟩, i.e. V is a cyclic P-module via ϑ_1, …, ϑ_n generated by w.
(d) The map Φ_w : P → V is surjective.
(e) The map Φ̄_w : P/J → V is an isomorphism of K-vector spaces.
Moreover, these conditions imply that J = Ker(Φ_w).

Proof. The implications (a) ⇒ (e), (e) ⇒ (d), and (b) ⇒ (a) are clear, so let us prove that (d) ⇒ (c). For every v ∈ V we have v = Φ_w(f) for a suitable f ∈ P, so that v = Φ_w(f) = f(x_1, …, x_n)·w. Now we prove that (c) ⇒ (b). For every v ∈ V there exists f ∈ P such that v = f(x_1, …, x_n)·w. The class of f modulo J is uniquely represented as Σ_{i=1}^µ a_i h̄_i, hence v = f(x_1, …, x_n)·w = (Σ_{i=1}^µ a_i h_i(ϑ_1, …, ϑ_n))(w) = Σ_{i=1}^µ a_i h_i(ϑ_1, …, ϑ_n)(w). This implies that (h_1(ϑ_1, …, ϑ_n)(w), …, h_µ(ϑ_1, …, ϑ_n)(w)) = (Φ_w(h_1), …, Φ_w(h_µ)) is a tuple of generators. Suppose now that Σ_{i=1}^µ a_i Φ_w(h_i) = Σ_{i=1}^µ a_i h_i(ϑ_1, …, ϑ_n)(w) = 0. Then Σ_{i=1}^µ a_i h_i(ϑ_1, …, ϑ_n) vanishes on V, hence Σ_{i=1}^µ a_i h_i(x_1, …, x_n) ∈ J, hence a_i = 0 for i = 1, …, µ, and the proof of the equivalence is complete. The last claim follows from (e). □

The next result shows that every K-linear surjective map from P to V whose kernel is an ideal can be interpreted as a particular Φ_w.

Proposition 2.11.
Let V be a finite-dimensional K-vector space whose dimension is denoted by µ, let ϕ : P → V be a K-linear surjective map whose kernel I is an ideal, and let w = ϕ(1). Then the formula ϑ_i(ϕ(f)) = ϕ(x_i f) defines pairwise commuting endomorphisms ϑ_i ∈ End_K(V) for i = 1, …, n, such that ϕ = Φ_w with respect to ϑ_1, …, ϑ_n, i.e. such that ϕ(f) = f(ϑ_1, …, ϑ_n)(w).

Proof. Clearly ϕ induces an isomorphism ϕ̄ : P/I → V, and the conclusion follows easily from Remark 2.8. □

Proposition 2.12. Let V be a finite-dimensional K-vector space, let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms, and let w ∈ V be such that
V = ⟨w⟩ as a P-module via ϑ_1, …, ϑ_n, and let J = Ker(Θ). Then the equality

ϑ_i = Φ̄_w ∘ ϑ_{x_i} ∘ (Φ̄_w)^{−1}

holds for every i = 1, …, n, where ϑ_{x_i} denotes the multiplication endomorphism of P/J.

Proof. We observe that Φ̄_w is an isomorphism by Proposition 2.10(e). Now, for every v ∈ V there exists f(x_1, …, x_n) ∈ P such that v = f(x_1, …, x_n)·w, i.e. (Φ̄_w)^{−1}(v) = f̄. Then (Φ̄_w ∘ ϑ_{x_i} ∘ (Φ̄_w)^{−1})(v) = Φ̄_w(x_i·f̄) = Φ_w(x_i f(x_1, …, x_n)) = ϑ_i(f(ϑ_1, …, ϑ_n)(w)) = ϑ_i(f(x_1, …, x_n)·w) = ϑ_i(v). □

We are ready to make the transition from endomorphisms to matrices. The following lemma collects some elementary facts, whose proofs are left to the reader.

Lemma 2.13. Let V be a finite-dimensional K-vector space, and let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms. Let E, F be bases of V, and let M^E_1, …, M^E_n, M^F_1, …, M^F_n be the corresponding matrices.
(a) M^E_i = M^E_F · M^F_i · (M^E_F)^{−1} for every i = 1, …, n, and hence M^E_1, …, M^E_n are simultaneously similar to M^F_1, …, M^F_n.
(b) The K-algebras K[ϑ_1, …, ϑ_n] and K[M^F_1, …, M^F_n] are naturally isomorphic.
(c) The conditions f(M^E_1, …, M^E_n) = 0 and f(M^F_1, …, M^F_n) = 0 are equivalent for every polynomial f(x_1, …, x_n) ∈ P.
Theorem 2.14. Let V be a finite-dimensional K-vector space, let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms, assume that there exists w ∈ V such that V = ⟨w⟩ as a P-module via ϑ_1, …, ϑ_n, and let J = Ker(Θ) = Ann_P(V). Let E be a basis of V, let Ō be a basis of P/J, let M^E_1, …, M^E_n be the matrices of ϑ_1, …, ϑ_n associated to E, and let M^O_{x_1}, …, M^O_{x_n} be the multiplication matrices associated to O. Then Φ_w(O) is a basis of V and we have
(a) M^{Φ_w(O)}_i = M^O_{x_i} for every i = 1, …, n;
(b) M^E_i = M^E_{Φ_w(O)} · M^O_{x_i} · (M^E_{Φ_w(O)})^{−1} for every i = 1, …, n, and hence M^E_1, …, M^E_n are simultaneously similar to M^O_{x_1}, …, M^O_{x_n}.

Proof. We put B = Φ_w(O) and start the proof with the observation that B is a basis of V as a K-vector space by Proposition 2.10, so that (a) follows from Proposition 2.12. To prove (b), we combine (a) with the relation M^E_i = M^E_B · M^B_i · (M^E_B)^{−1} (see Lemma 2.13(a)) and deduce M^E_i = M^E_B · M^O_{x_i} · (M^E_B)^{−1}, which concludes the proof. □
At this point we illustrate the theory with some applications, remarks and examples. In general, if we want to compute Ann_P(V), we can use the criteria explained in the next remark.

Remark 2.15. Let V be a finite-dimensional K-vector space, let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms, and let J = Ker(Θ) = Ann_P(V). Then a Gröbner basis of J can be computed in the following way. We choose a basis E of V and consider the homomorphism Θ′ : P → K[M^E_1, …, M^E_n] defined by Θ′(1) = I_µ, the identity matrix, and Θ′(x_i) = M^E_i for i = 1, …, n. From the above lemma we know that J = Ker(Θ′). Let t = x_1^{a_1} x_2^{a_2} ⋯ x_n^{a_n} be a power product. Then Θ′(t) = (M^E_1)^{a_1} (M^E_2)^{a_2} ⋯ (M^E_n)^{a_n}. Of course a matrix can be represented as a tuple, and linear dependence among matrices can be checked. Therefore a variant of the Buchberger–Möller Algorithm (see [1] and [2]) computes J.

Remark 2.16. Let E = (e_1, …, e_µ) be a basis of V as a K-vector space. Then E generates V as a P-module. Consequently the map Φ_E : P^µ → V is a surjective map of P-modules. If N denotes its kernel, we get an isomorphism of P-modules P^µ/N ≅ V. How do we compute N? One can see that a variant of the Buchberger–Möller Algorithm (see the above remark) applied to modules does the job.

Suppose we want to check whether V is cyclic. We can proceed using the algorithm described below.

Proposition 2.17. Let V be a finite-dimensional K-vector space, let ϑ_1, …, ϑ_n ∈ End_K(V) be pairwise commuting endomorphisms, and let J = Ker(Θ) = Ann_P(V). Consider the following sequence of instructions.
(1) Compute a Gröbner basis of J, and hence a tuple of terms O = (t_1, …, t_µ) whose residue classes are a basis of P/J as a K-vector space. This step can be performed using the above Remark 2.15.
(2) If dim(V) ≠ µ, return “V is not a cyclic P-module via ϑ_1, …, ϑ_n”.
(3) Choose a basis E of V and compute M^E_1, …, M^E_n.
(4) Let u_1, …, u_µ be independent indeterminates and define a generic vector u by putting M^E_u = (u_1, …, u_µ)^tr. Then compute det(A_u), where
A_u = ( t_1(M^E_1, …, M^E_n)·M^E_u, …, t_µ(M^E_1, …, M^E_n)·M^E_u ).
(5) If det(A_u) ≠ 0 has no solutions in K, in particular if det(A_u) = 0 identically, return “V is not a cyclic P-module via ϑ_1, …, ϑ_n”.
(6) Choose a tuple (a_1, …, a_µ) on which det(A_u) does not vanish, and let w be defined by M^E_w = (a_1, …, a_µ)^tr. Return V = ⟨w⟩.
This is an algorithm which verifies whether V is a cyclic P-module via ϑ_1, …, ϑ_n, and in the affirmative case computes a generator.
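In the spirit of steps (4)–(6), one can at least test candidate generators numerically: w generates V iff the vectors t_1(ϑ)(w), …, t_µ(ϑ)(w) are linearly independent, i.e. iff det(A_w) ≠ 0. The sketch below (pure Python, 3×3 case, matrices chosen only for illustration) evaluates det(A_w) at concrete vectors w instead of a symbolic u.

```python
from fractions import Fraction

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def det3(cols):
    """Determinant of a 3x3 matrix given by its three columns."""
    (a, d, g), (b, e, h), (c, f, i) = cols
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def is_generator(mats_for_terms, w):
    """mats_for_terms = [t_1(M), ..., t_mu(M)]; w generates iff det(A_w) != 0."""
    return det3([mat_vec(T, w) for T in mats_for_terms]) != 0

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A non-cyclic commuting pair with O = (1, x, y): every w fails.
M1 = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]
M2 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
for w in [(1, 1, 0), (1, 2, 1), (3, 1, 4)]:
    print(is_generator([I3, M1, M2], [Fraction(c) for c in w]))  # always False

# A cyclic commuting pair: theta_2 moved to position (3,1); w = e_1 gives (e_1, e_2, e_3).
N2 = [[0, 0, 0], [0, 0, 0], [1, 0, 0]]
print(is_generator([I3, M1, N2], [Fraction(1), Fraction(0), Fraction(0)]))  # True
```

Sampling can only certify the affirmative case; proving that det(A_u) vanishes identically, as in step (5), requires the symbolic determinant.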
Proof. There is no problem of termination, and the correctness follows from Proposition 2.10(c). □

We illustrate the preceding facts with a specific example.

Example 2.18. Let ϑ_1, ϑ_2 be the endomorphisms of Q³ defined by the following two matrices in Mat_3(Q):

M_1 = M^E_1 = [ 0 1 0 ]           M_2 = M^E_2 = [ 0 1 1 ]
              [ 0 1 1 ]    and                  [ 0 2 1 ]
              [ 0 1 0 ]                         [ 0 1 1 ]

with respect to the canonical basis E = (e_1, e_2, e_3). We want to find the representation P/J ≅ Q[ϑ_1, ϑ_2], where P = Q[x, y], as explained in Lemma 2.6. In other words, we want to compute generators of J. We pick a degree compatible term ordering, and we follow the scheme of computation suggested in Remark 2.15. It turns out that it suffices to compute M_1², M_1M_2, M_2² to get the relations M_1² − M_2 = 0, M_1M_2 − M_1 − M_2 = 0, M_2² − M_1 − 2M_2 = 0, and to check that the set {x² − y, xy − x − y, y² − x − 2y} is a Gröbner basis, hence a set of generators of J.
The next goal is to check whether Q³ is a cyclic P-module via ϑ_1, ϑ_2. We follow the suggestion of Proposition 2.17. The triple O = (1, x, y) is such that the residue classes of its components form a basis of P/J as a Q-vector space. Then we pick a generic vector u and compute (1(M_1, M_2)·u, x(M_1, M_2)·u, y(M_1, M_2)·u) = (Id·u, M_1u, M_2u). We let u = (u_1, u_2, u_3), and construct the matrix which represents these three vectors with respect to E. We get

M_u = [ u_1   u_2         u_2 + u_3  ]
      [ u_2   u_2 + u_3   2u_2 + u_3 ]
      [ u_3   u_2         u_2 + u_3  ]

We observe that det(M_u) = (u_2² − u_2u_3 − u_3²)(u_3 − u_1), and hence every rational vector (a_1, a_2, a_3) such that a_1 ≠ a_3 and (a_2, a_3) ≠ (0, 0) is a generator of Q³ as a P-module. For instance, we may pick w = 1e_1 + 1e_2 + 0e_3 = (1, 1, 0). The three vectors w, ϑ_1(w), ϑ_2(w) form a basis of Q³, and hence Q³ is a cyclic P-module via ϑ_1, ϑ_2. Therefore M^E_{Φ_w(O)} is given by

M^E_{Φ_w(O)} = [ 1 1 1 ]
               [ 1 1 2 ]
               [ 0 1 1 ]

whose inverse is

(M^E_{Φ_w(O)})^{−1} = [  1  0 −1 ]
                      [  1 −1  1 ]
                      [ −1  1  0 ]

Finally we use Theorem 2.14 to conclude that

M^{Φ_w(O)}_1 = M^O_x = [ 0 0 0 ]          M^{Φ_w(O)}_2 = M^O_y = [ 0 0 0 ]
                       [ 1 0 1 ]   and                          [ 0 1 1 ]
                       [ 0 1 1 ]                                [ 1 1 2 ]

and that M^E_1, M^E_2 are simultaneously similar to M^O_x, M^O_y via the matrix M^E_{Φ_w(O)}. In other words:

[ 0 1 0 ]   [ 1 1 1 ] [ 0 0 0 ] [  1  0 −1 ]
[ 0 1 1 ] = [ 1 1 2 ] [ 1 0 1 ] [  1 −1  1 ]
[ 0 1 0 ]   [ 0 1 1 ] [ 0 1 1 ] [ −1  1  0 ]

and

[ 0 1 1 ]   [ 1 1 1 ] [ 0 0 0 ] [  1  0 −1 ]
[ 0 2 1 ] = [ 1 1 2 ] [ 0 1 1 ] [  1 −1  1 ]
[ 0 1 1 ]   [ 0 1 1 ] [ 1 1 2 ] [ −1  1  0 ]
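All the identities claimed in Example 2.18 can be checked mechanically. The sketch below assumes the matrices as reconstructed above from the relations M_1² = M_2, M_1M_2 = M_1 + M_2, M_2² = M_1 + 2M_2, and verifies the conjugations without inverting, in the equivalent form M · M^O_x = M_1 · M.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

M1 = [[0, 1, 0], [0, 1, 1], [0, 1, 0]]
M2 = [[0, 1, 1], [0, 2, 1], [0, 1, 1]]
MOx = [[0, 0, 0], [1, 0, 1], [0, 1, 1]]
MOy = [[0, 0, 0], [0, 1, 1], [1, 1, 2]]
M = [[1, 1, 1], [1, 1, 2], [0, 1, 1]]   # columns w, theta_1(w), theta_2(w) for w = (1, 1, 0)

assert mul(M1, M1) == M2                      # relation x^2 - y
assert mul(M1, M2) == add(M1, M2)             # relation xy - x - y
assert mul(M2, M2) == add(M1, add(M2, M2))    # relation y^2 - x - 2y
assert mul(M, MOx) == mul(M1, M)              # M1 = M * MOx * M^{-1}
assert mul(M, MOy) == mul(M2, M)              # M2 = M * MOy * M^{-1}
print("all relations verified")
```

Checking the similarity via M·M^O_x = M_1·M is a standard trick: it avoids computing the inverse, and since det(M) ≠ 0 the two statements are equivalent.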
What happens if we choose as w a vector which annihilates det(M_u)? Let us choose w = (1, 2, 1). In this case we may compute Ker(Φ_w), again by using a B–M scheme of computation. We get Ker(Φ_w) = (x − y + 1, y² − 3y + 1), hence Ker(Φ_w) ⊃ Ker(Θ) (see Lemma 2.6(d)), and Φ̄_w is not an isomorphism.

Remark 2.19. Let ϑ ∈ End_K(V). Then we have K[x]/(m(ϑ)) ≅ K[ϑ], where m(ϑ) is the minimal polynomial of ϑ. Let m(ϑ) = a_0 + a_1x + a_2x² + ⋯ + a_{d−1}x^{d−1} + x^d. We may choose O = (1, x, …, x^{d−1}), so that Ō = (1̄, x̄, …, x̄^{d−1}) is a basis of K[x]/(m(ϑ)), and it is clear that

M^O_x = [ 0 0 0 … 0 −a_0     ]
        [ 1 0 0 … 0 −a_1     ]
        [ 0 1 0 … 0 −a_2     ]
        [ …  …  …  …  …      ]
        [ 0 0 0 … 0 −a_{d−2} ]
        [ 0 0 0 … 1 −a_{d−1} ]

This is the companion matrix or Frobenius companion matrix of m(ϑ).

Remark 2.20. In the case of a single endomorphism ϑ, the conditions of Proposition 2.10 are equivalent to the condition dim_K(V) = d, where d = deg(m(ϑ)). Namely, if condition (a) is satisfied then dim_K(V) = dim_K(K[x]/(m(ϑ))) = d. Conversely, if dim_K(V) = d then m(ϑ) coincides with the characteristic polynomial of ϑ, so that there is only one invariant polynomial and V is cyclic.
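Remark 2.19 can be illustrated by building the companion matrix from the coefficient list and checking Cayley–Hamilton, m(M^O_x) = 0, directly (a sketch, with a sample cubic and hypothetical helper names):

```python
def companion(coeffs):
    """Companion matrix of the monic polynomial a_0 + a_1 x + ... + a_{d-1} x^{d-1} + x^d,
    given coeffs = [a_0, ..., a_{d-1}]: subdiagonal of 1s, last column -a_i."""
    d = len(coeffs)
    return [[(1 if i == j + 1 else 0) for j in range(d - 1)] + [-coeffs[i]]
            for i in range(d)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def poly_at_matrix(coeffs, M):
    """Evaluate a_0 I + a_1 M + ... + a_{d-1} M^{d-1} + M^d."""
    n = len(M)
    total = [[0] * n for _ in range(n)]
    power = [[int(i == j) for j in range(n)] for i in range(n)]   # M^0 = I
    for a in coeffs:
        total = [[total[i][j] + a * power[i][j] for j in range(n)] for i in range(n)]
        power = mat_mul(power, M)
    return [[total[i][j] + power[i][j] for j in range(n)] for i in range(n)]

C = companion([2, -3, 5])                 # m(x) = 2 - 3x + 5x^2 + x^3
print(C)                                  # [[0, 0, -2], [1, 0, 3], [0, 1, -5]]
print(poly_at_matrix([2, -3, 5], C))      # the zero matrix
```

Conversely, reading off the last column of the companion matrix recovers the coefficients of m(ϑ), which is the reconstruction alluded to in the Introduction.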
Example 2.21. Consider the following matrix

M = [ 0 0 0 0 ]
    [ 0 0 0 0 ]
    [ 1 0 0 0 ]
    [ 0 1 0 0 ]

and the corresponding endomorphism ϑ of K⁴ defined by M^E_ϑ = M. The minimal polynomial of ϑ is x², while the characteristic polynomial is x⁴. Then 4 = dim(K⁴) > deg(m(ϑ)), and K⁴ is not a cyclic K[x]-module via ϑ.

Example 2.22. Let ϑ_1, ϑ_2 be the endomorphisms of Q³ defined by the following two matrices in Mat_3(Q):

M_1 = M^E_1 = [ 0 0 0 ]           M_2 = M^E_2 = [ 0 0 0 ]
              [ 1 0 0 ]    and                  [ 0 0 1 ]
              [ 0 0 0 ]                         [ 0 0 0 ]

with respect to the canonical basis E = (e_1, e_2, e_3). It is easy to check that they commute, that M_1² = M_1M_2 = M_2² = 0, and that there is no linear dependence relating Id, M_1, M_2. Therefore we obtain P/J ≅ Q[ϑ_1, ϑ_2], where J = (x², xy, y²). In this case dim(P/J) = dim(V), but, unlike the univariate case (see Remark 2.20), V is not a cyclic P-module via ϑ_1, ϑ_2. To verify this claim we apply Proposition 2.17 and compute the matrix A_u, which turns out to be

A_u = [ u_1  0    0   ]
      [ u_2  u_1  u_3 ]
      [ u_3  0    0   ]

hence det(A_u) = 0.

3. ZERO-DIMENSIONAL IDEALS AND REWRITE RULES
Suppose that we are given a zero-dimensional proper ideal I in P. We consider the canonical surjective map P → P/I and let µ = mult(P/I), which is finite by Proposition 2.1. Let V be a finitely generated K-vector subspace of P, and let ϕ : P → V be a surjective K-linear map such that ϕ|_V = id_V and such that Ker(ϕ) = I. We observe that P is the direct sum of the two vector subspaces I and V, and that V is isomorphic to P/I. Therefore the map ϕ is uniquely determined by the given data, V is a cyclic P-module generated by ϕ(1), and the theory developed in the previous section applies. In particular we use Proposition 2.11 to obtain endomorphisms of V induced by the multiplication endomorphisms on P and P/I (see Definition 2.9), which we still denote by ϑ_{x_1}, …, ϑ_{x_n}. For a basis O of V, we denote by M^O_{x_1}, …, M^O_{x_n} the corresponding matrices. We are going to use this setting in the following case.
Definition 3.1. We let O = (h_1, …, h_µ) be a tuple of polynomials such that Ō = (h̄_1, …, h̄_µ) is a basis of P/I as a K-vector space. We denote by V(O) the K-vector subspace of P generated by the elements of O. The corresponding linear map ϕ : P → V(O) is called NF_{O,I}. It turns out to be defined by NF_{O,I}(f) = Σ_{i=1}^µ a_i h_i, where the elements a_1, …, a_µ are given by writing the residue class of f as Σ_{i=1}^µ a_i h̄_i.
(a) The polynomial NF_{O,I}(f) is called the normal form of f with respect to (O, I), and the linear map NF_{O,I} is called the normal form map with respect to (O, I).
(b) The matrices M^O_{x_i} are called the multiplication matrices of NF_{O,I}.
(c) If there exists an algorithm which computes NF_{O,I}(f) for every f ∈ P, we say that NF_{O,I} is explicit.

Remark 3.2. From the above definitions we get the following properties.
(a) f − NF_{O,I}(f) ∈ I for every f ∈ P.
(b) NF_{O,I}(NF_{O,I}(f)) = NF_{O,I}(f) for every f ∈ P.
(c) NF_{O,I}(fg) = NF_{O,I}(NF_{O,I}(f)·NF_{O,I}(g)) for every f, g ∈ P.
(d) V(O) is a cyclic P-module generated by NF_{O,I}(1).

For instance we may consider the zero-dimensional ideal I = (x², y) in P = K[x, y], and choose O = (1, x). For f ∈ P we may uniquely write f ≡ a_1 + a_2x mod I. Then NF_{O,I}(f) = a_1 + a_2x.

Corollary 3.3. Let I be a zero-dimensional proper ideal in P, let O = (t_1, …, t_µ) be a tuple of polynomials such that Ō = (t̄_1, …, t̄_µ) is a basis of P/I as a K-vector space, let w = NF_{O,I}(1), and let M^O_{x_1}, …, M^O_{x_n} be the multiplication matrices of NF_{O,I}. For f ∈ P we have

NF_{O,I}(f) = O · f(M^O_{x_1}, …, M^O_{x_n}) · M^O_w,

where O is regarded as a row vector; hence NF_{O,I} is an explicit normal form map with respect to (O, I).

Proof. The claim is a direct consequence of Proposition 2.11 applied to NF_{O,I}. □
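For the toy ideal I = (x², y) with O = (1, x), the formula of Corollary 3.3 becomes completely explicit: M^O_x is the 2×2 nilpotent Jordan block, M^O_y = 0, and w = NF(1) = 1. A sketch:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_pow(A, e):
    R = [[1, 0], [0, 1]]
    for _ in range(e):
        R = mat_mul(R, A)
    return R

MOx = [[0, 0], [1, 0]]   # x*1 = x, x*x = x^2 = 0 mod I
MOy = [[0, 0], [0, 0]]   # y*1 = y = 0 mod I
w = [1, 0]               # coordinates of NF(1) = 1 in the basis O = (1, x)

def nf(monomials):
    """NF of a sum of monomials c*x^i*y^j, computed as O . f(MOx, MOy) . w."""
    coords = [0, 0]
    for c, i, j in monomials:
        F = mat_mul(mat_pow(MOx, i), mat_pow(MOy, j))
        v = [sum(F[r][k] * w[k] for k in range(2)) for r in range(2)]
        coords = [a + c * b for a, b in zip(coords, v)]
    return coords           # [coefficient of 1, coefficient of x]

# f = x^3 + x + 2  ->  NF = 2 + x
print(nf([(1, 3, 0), (1, 1, 0), (2, 0, 0)]))   # [2, 1]
```

The point of the corollary is precisely this: once the multiplication matrices are known, evaluating the normal form of any polynomial reduces to linear algebra.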
Gröbner basis theory tells us that a typical example of O is given by O = T^n \ LT_σ{I}, where I is a zero-dimensional proper ideal in P. In that case O has the important property defined below.

Definition 3.4. Let O be a non-empty set of power products such that whenever t′ divides t for some t ∈ O, then t′ ∈ O. Then O is called an order ideal of monomials or a standard set of power products or a complete set of estimable terms. In particular, 1 ∈ O.

However, there are complete sets of estimable terms which form a basis modulo an ideal I, but which are not the complement of LT_σ{I}, no matter what term ordering σ is chosen. To illustrate this remark we recall the following example from [14].
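The defining closure property of Definition 3.4 — every divisor of a member is a member — is easy to test on a finite set of exponent vectors; it suffices to check closure under decreasing a single exponent by one (a sketch; exponent tuples stand for power products):

```python
def is_order_ideal(terms):
    """terms: a set of exponent tuples. Divisor-closedness is equivalent to
    closure under decreasing any single exponent by one."""
    terms = set(terms)
    for t in terms:
        for i, e in enumerate(t):
            if e > 0:
                smaller = t[:i] + (e - 1,) + t[i + 1:]
                if smaller not in terms:
                    return False
    return True

# {1, x, y, x^2, y^2} is an order ideal ...
print(is_order_ideal({(0, 0), (1, 0), (0, 1), (2, 0), (0, 2)}))  # True
# ... while {1, xy} is not: x divides xy but is missing.
print(is_order_ideal({(0, 0), (1, 1)}))                          # False
```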
Example 3.5. Consider the following set {(0, 0), (0, −1), (1, 0), (1, 1), (−1, 1)} of five points in A²_Q, denote by I ⊂ P = Q[x, y] the vanishing ideal of this set, and let O = (1, x, y, x², y²). The evaluation matrix of the elements of O at the points has determinant −4, hence (1̄, x̄, ȳ, x̄², ȳ²) is a Q-basis of P/I. Consequently, there is a K-linear surjective map P → V(O) which sends every polynomial f to the uniquely determined element a_1 + a_2x + a_3y + a_4x² + a_5y² of V(O) such that the residue class of f in P/I is a_1 + a_2x̄ + a_3ȳ + a_4x̄² + a_5ȳ². However, this map is not induced by a map of the type NF_{σ,I}, since {1, x, y, x², y²} is not of the form T² \ LT_σ{I} for any term ordering σ. To prove this fact, consider f = x² + xy − x − ½y² − ½y. It is in I, since it vanishes at the five points. For any term ordering σ, we have x² >_σ x and y² >_σ y. If x >_σ y, then x² >_σ xy >_σ y². If y >_σ x, then y² >_σ xy >_σ x². This means that there are only two possibilities for the leading term of f: either it is x² or y². In both cases we get that LT_σ(f) is an element of O, but by definition LT_σ(f) ∈ LT_σ{I}, a contradiction.
We compute M^O_x and M^O_y. It turns out that

M^O_x = [ 0  0   0   0   0  ]          M^O_y = [ 0   0   0   0   0 ]
        [ 1  0   1   1   1  ]                  [ 0   1   0   0   0 ]
        [ 0  0  1/2  0  1/2 ]   and            [ 1  1/2  0  1/2  1 ]
        [ 0  1  −1   0  −1  ]                  [ 0  −1   0   0   0 ]
        [ 0  0  1/2  0  1/2 ]                  [ 0  1/2  1  1/2  0 ]

while M^O_w = (1, 0, 0, 0, 0)^tr. One can check that M^O_x M^O_y = M^O_y M^O_x. Let us compute NF_{O,I}(x⁶y²). We may use the formula of Corollary 3.3, and get

NF_{O,I}(x⁶y²) = O · (M^O_x)⁶ (M^O_y)² · M^O_w = ½y + ½y².
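The computation NF_{O,I}(x⁶y²) = ½y + ½y² can be replayed with exact rational arithmetic. The sketch below assumes the multiplication matrices as displayed above (entries reconstructed from the five interpolation conditions) and also checks that they commute.

```python
from fractions import Fraction as F

MOx = [[0, 0, 0, 0, 0],
       [1, 0, 1, 1, 1],
       [0, 0, F(1, 2), 0, F(1, 2)],
       [0, 1, -1, 0, -1],
       [0, 0, F(1, 2), 0, F(1, 2)]]
MOy = [[0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [1, F(1, 2), 0, F(1, 2), 1],
       [0, -1, 0, 0, 0],
       [0, F(1, 2), 1, F(1, 2), 0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(5)) for j in range(5)] for i in range(5)]

def mat_pow(A, e):
    R = [[int(i == j) for j in range(5)] for i in range(5)]
    for _ in range(e):
        R = mul(R, A)
    return R

assert mul(MOx, MOy) == mul(MOy, MOx)        # the multiplication matrices commute

w = [1, 0, 0, 0, 0]                          # coordinates of NF(1) = 1
Fmat = mul(mat_pow(MOx, 6), mat_pow(MOy, 2))
coords = [sum(Fmat[i][k] * w[k] for k in range(5)) for i in range(5)]
print(coords)   # equals (0, 0, 1/2, 0, 1/2) w.r.t. (1, x, y, x^2, y^2), i.e. (1/2)y + (1/2)y^2
```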
At this point we are going to switch our point of view. We begin with the following definition, borrowed from [13].

Definition 3.6. A pair (g, t) is said to be a marked polynomial if g is a non-zero polynomial and t ∈ Supp(g) with the coefficient of t in g being 1. We also say that g is marked by t. Let G = (g_1, …, g_ν) be a tuple of non-zero polynomials and T = (t_1, …, t_ν) a tuple of power products. If (g_1, t_1), …, (g_ν, t_ν) are marked polynomials, we say that G is marked by T. The tuple G is said to be marked coherently by T if there exists a term ordering σ with the property that t_i = LT_σ(g_i) for every i = 1, …, ν.

Definition 3.7. Let G = (g_1, …, g_ν) be a tuple of non-zero polynomials marked by T = (t_1, …, t_ν). Let g, g′ ∈ P, and suppose there exist a constant c ∈ K, a term t ∈ T^n, and an index i ∈ {1, …, ν} such that g′ = g − c·t·g_i and t·t_i ∉ Supp(g′). Then we say that g reduces to g′ in one step using the rewrite rule defined by the marked polynomial (g_i, t_i). The passage from g to g′ is also called a reduction step, and we write g →_{g_i} g′.
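A single reduction step of Definition 3.7 is mechanical: find a monomial of g divisible by the marked term t_i, and subtract the corresponding multiple of g_i so that this monomial cancels. The sketch below (dict-based polynomials over exponent pairs, hypothetical function name) reproduces the reduction x²y → x³ + xy² of Example 3.9.

```python
def reduce_step(g, rule):
    """One reduction step: rule = (marked_term, g_i), where the marked term's
    coefficient in g_i is 1. Returns the reduced polynomial, or None if no
    monomial of g is divisible by the marked term."""
    t_marked, g_i = rule
    for mono, c in g.items():
        if all(a >= b for a, b in zip(mono, t_marked)):       # t_marked divides mono
            t = tuple(a - b for a, b in zip(mono, t_marked))  # cofactor term
            out = dict(g)
            for m2, c2 in g_i.items():                        # subtract c * t * g_i
                shifted = tuple(a + b for a, b in zip(m2, t))
                out[shifted] = out.get(shifted, 0) - c * c2
                if out[shifted] == 0:
                    del out[shifted]
            return out
    return None

# g = xy - x^2 - y^2, marked by xy: reduce x^2 y.
g3 = {(1, 1): 1, (2, 0): -1, (0, 2): -1}
print(reduce_step({(2, 1): 1}, ((1, 1), g3)))   # {(3, 0): 1, (1, 2): 1}, i.e. x^3 + x*y^2
```

The marked monomial always cancels exactly, because its coefficient in g_i is normalized to 1; this is the condition t·t_i ∉ Supp(g′) in the definition.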
Gröbner basis theory shows that when a tuple of polynomials G is marked coherently, the associated rewriting system has the property that every chain f_1 →_{g_{i_1}} f_2 →_{g_{i_2}} ⋯ becomes eventually stationary (see [8, Proposition 2.2.2(c)]). The fundamental result of [13] shows that this is the only case.

Theorem 3.8. A tuple G of marked polynomials is marked coherently if and only if every sequence of reduction steps associated to G terminates.

Let us illustrate this fact with simple examples.

Example 3.9. Let P = K[x, y] and G = (g), where g = xy − x² − y² is marked by xy. Then the sequence x²y →_g x³ + xy² →_g x³ + x²y + y³ →_g ⋯ can be prolonged indefinitely.

Example 3.10. Let P = K[x, y] and G = (g_1, g_2, g_3, g_4, g_5), where g_1 = x³, g_2 = y³, g_3 = xy − x² − y² marked by xy, g_4 = x²y, g_5 = xy². The ideal generated by G is zero-dimensional; nevertheless the sequence

x²y →_{g_3} x³ + xy² →_{g_1} xy² →_{g_3} x²y + y³ →_{g_2} x²y →_{g_3} ⋯

can be prolonged indefinitely, while of course x²y →_{g_4} 0 terminates in one step, and the sequence x²y →_{g_3} x³ + xy² →_{g_1} xy² →_{g_5} 0 terminates in a finite number of steps.

The examples show that in order to get termination we need something more.

Definition 3.11. Let O = {t_1, …, t_µ} be a complete set of estimable terms. In this case we define O⁺ = {t ∈ T^n \ O : t = x_i t′ for some i ∈ {1, …, n} and some t′ ∈ O}, and we call it the border of O (see also [9]).

Remark 3.12. We make a couple of easy remarks about O⁺.
(a) If O is a complete set of estimable terms, every term t is either in O or a multiple of an element of O⁺.
(b) Suppose that t = x_i t′ with t ∈ O ∪ O⁺. Then also t′ ∈ O ∪ O⁺.

Definition 3.13. We let P̃ = K⟨x_1, …, x_n⟩ be the ring of non-commutative polynomials in the indeterminates x_1, …, x_n, and let π : P̃ → P be the canonical surjective homomorphism of K-algebras. We let x_1, …, x_n be ordered so that x_1 > x_2 > ⋯ > x_n. Then every commutative term t is canonically represented as t = x_1^{a_1} x_2^{a_2} ⋯ x_n^{a_n}, and with this convention we define a K-linear map λ : P → P̃ which associates to t the non-commutative term x_1^{a_1} · x_2^{a_2} ⋯ x_n^{a_n}. Clearly π ∘ λ = id_P. In the following, we shall omit λ and identify f ∈ P with λ(f) ∈ P̃.

Definition 3.14. Let O be a tuple of terms in P such that O is a complete set of estimable terms, let O⁺ = (b_1, …, b_ν) be such that its underlying set is O⁺, and let
G = (g_1, …, g_ν) be a tuple of non-zero polynomials marked by O⁺ and such that Supp(b_j − g_j) ⊆ O for j = 1, …, ν. Then we define a K-linear function P̃ → V(O), called NR_{O,G} or simply NR, by the following rules:
(a) NR(τ) = π(τ) if π(τ) ∈ O,
(b) NR(x_i τ) = b_j − g_j if π(x_i τ) = b_j ∈ O⁺ and π(τ) ∈ O,
(c) NR(x_i τ) = NR(x_i · NR(τ)) if neither (a) nor (b) applies.
It is easy to check that NR_{O,G} is well defined. It is called the normal remainder function with respect to (O, G). We say that f is irreducible with respect to (O, G) if Supp(π(f)) ⊆ O, in which case the equality NR_{O,G}(f) = π(f) holds true.

Proposition 3.15. Let O be a tuple of terms in P such that O is a complete set of estimable terms, let O⁺ = (b_1, …, b_ν) be such that its underlying set is O⁺, and let G = (g_1, …, g_ν) be a tuple of non-zero polynomials marked by O⁺ and such that Supp(b_j − g_j) ⊆ O for j = 1, …, ν. Let I be the ideal generated by G and let f ∈ P̃. Then
(a) π(f) − NR_{O,G}(f) ∈ I for every f ∈ P̃;
(b) NR_{O,G}(x_i f) − NR_{O,G}(f x_i) ∈ I for every f ∈ P̃ and every i = 1, …, n.

Proof. Claim (b) follows from claim (a), since π(x_i f) = π(f x_i), hence NR(x_i f) − NR(f x_i) = (NR(x_i f) − π(x_i f)) + (π(f x_i) − NR(f x_i)). So let us prove (a). We may assume that f is a non-commutative term. If it is a base case, then the conclusion follows from the definition. Otherwise we may write f = x_{i_1} · x_{i_2} ⋯ x_{i_s} · τ, where π(τ) ∈ O and π(x_{i_s}τ) ∈ O⁺. We call σ(f) this number s. We have π(f) = x_{i_1} x_{i_2} ⋯ x_{i_s} π(τ), hence π(f) − NR_{O,G}(f) = f_1 + f_2, where

f_1 = x_{i_1} x_{i_2} ⋯ x_{i_{s−1}} · (x_{i_s} π(τ) − NR(x_{i_s}τ)),
f_2 = x_{i_1} x_{i_2} ⋯ x_{i_{s−1}} · NR(x_{i_s}τ) − NR(x_{i_1} · x_{i_2} ⋯ x_{i_{s−1}} · NR(x_{i_s}τ)) .

We observe that σ(x_{i_s}π(τ)) = 1 and σ(x_{i_1} · x_{i_2} ⋯ x_{i_{s−1}} · NR(x_{i_s}τ)) = s − 1. An easy inductive argument on σ concludes the proof. □

In the next example we make clear that the function NR_{O,G} : P̃ → V(O) may take different values on non-commutative power products which have the same projection to P.
Example 3.16. Consider O = (1, x, y, x^2, y^2), which is a complete set of estimable terms such that O^+ = {xy, x^3, y^3, x^2y, xy^2}. Consider the tuple of polynomials G = (xy − x^2 − y^2, x^3 − x^2, y^3 − y^2, x^2y − x^2, xy^2 − y^2) marked by O+. Then NR(yx^2) = x^2, while NR(x^2y) = NR(x NR(xy)) = NR(x(x^2 + y^2)) = x^2 + y^2. Consequently, we obtain that y^2 = (x^2 + y^2) − x^2 ∈ I.

The next important investigation is about the possibility of having particular situations where NR_{O,G} does not depend on the representation of the power products. We need the following result.

Proposition 3.17. Let I be a zerodimensional proper ideal in P, let O = {t_1, ..., t_µ} be a complete set of estimable terms such that {t̄_1, ..., t̄_µ} is a basis of P/I as a K-vector space, and let O^+ = {b_1, ..., b_ν}.
(a) For each j = 1, ..., ν there exists a unique linear combination Σ_{k=1}^µ a_{kj} t_k with a_{kj} ∈ K for k ∈ {1, ..., µ}, j ∈ {1, ..., ν}, and g_j = b_j − Σ_{k=1}^µ a_{kj} t_k ∈ I.
(b) The ideal I is generated by {g_1, ..., g_ν}.
Proof. Clearly P = V(O) ⊕ I, hence (a) follows immediately. Let J be the ideal generated by {g_1, ..., g_ν}. To complete the proof we need to show that J = I. Since J ⊆ I, it suffices to show that P = V(O) + J, hence it suffices to show that t ∈ V(O) + J for every term t. We know that 1 ∈ V(O), since O is a complete set of estimable terms, so we make induction on deg(t), and assume that the claim is proved up to degree d − 1. Let t be a term of degree d > 0. There exists i such that t = x_i t' for a suitable term t' of degree d − 1. Then t' = Σ_{i=1}^µ a_i t_i + g with a_i ∈ K for i = 1, ..., µ and g ∈ J. Therefore t = Σ_{i=1}^µ a_i x_i t_i + x_i g. Now, either x_i t_i ∈ O or x_i t_i ∈ O^+. In the first case there is nothing to prove. If x_i t_i ∈ O^+, then x_i t_i belongs to V(O) + J by assumption, and the proof is complete. □

Definition 3.18. Following Proposition 3.17, we let O+ = (b_1, ..., b_ν) be such that its underlying set is O^+, call G the tuple (g_1, ..., g_ν), and let G be marked by O+. Then the pair (G, O+) is called the border basis of I with respect to O. We observe that in the case where O = T^n \ LT_σ(I) border bases were discussed in [9].

Example 3.19. We continue the study of the running Example 3.5. In that case we have O = {1, x, y, x^2, y^2}. We get O^+ = {xy, x^3, y^3, x^2y, xy^2}, and we know already that f_{xy} = xy − x − (1/2)y + x^2 − (1/2)y^2, f_{x^3} = x^3 − x, f_{y^3} = y^3 − y. It is easy to check that f_{x^2y} = x^2y − (1/2)y − (1/2)y^2 and f_{xy^2} = xy^2 − x − (1/2)y + x^2 − (1/2)y^2. In conclusion, the set F = {f_{xy}, f_{x^3}, f_{y^3}, f_{x^2y}, f_{xy^2}}, with f_{xy} marked by xy, f_{x^3} by x^3, f_{y^3} by y^3, f_{x^2y} by x^2y, and f_{xy^2} by xy^2, is the border basis of I with respect to O.

Definition 3.20. Let O = (t_1, ..., t_µ) be such that O is a complete set of estimable terms and let O+ = (b_1, ..., b_ν) be such that its underlying set is O^+. Let G = (g_1, ..., g_ν) be a tuple of polynomials marked by O+ and such that Supp(g_k − b_k) ⊆
O for k = 1, ..., ν. We construct matrices M_1, ..., M_n ∈ Mat_µ(K) in the following way. If x_k t_j ∈ O, then the j-th column of M_k is (0, ..., 0, 1, 0, ..., 0)^tr, with 1 in the position corresponding to the power product x_k t_j. If x_k t_j ∈ O^+, there exists i ∈ {1, ..., ν} such that x_k t_j = b_i, and in this case the j-th column of M_k contains the coefficients of (t_1, ..., t_µ) in the representation of the polynomial b_i − g_i as a linear combination of the elements in O. The matrices M_1, ..., M_n are called the matrices associated to (O+, G). We now make an interesting connection between NR_{O,G} and the matrices associated to (O+, G).
Proposition 3.21. Let O = (t_1, t_2, ..., t_µ) be a tuple such that O is a complete set of estimable terms, let t_1 = 1, let O+ = (b_1, ..., b_ν) be such that its underlying set is O^+, and let G = (g_1, ..., g_ν) be a tuple of nonzero polynomials marked by O+ and such that Supp(b_j − g_j) ⊆ O for j = 1, ..., ν. Finally, let M_1, ..., M_n ∈ Mat_µ(K) be the matrices associated to (O+, G). Then

  NR_{O,G}(x_{i_1} x_{i_2} ··· x_{i_s}) = O · M_{i_1} M_{i_2} ··· M_{i_s} · M^O_{t_1}

for every noncommutative power product x_{i_1} x_{i_2} ··· x_{i_s}.

Proof. The assumption t_1 = 1 implies NR(1) = O · M^O_{t_1} = 1, hence the proposition is proved for the term 1, and we can make induction on the degree. Let τ = x_{i_1} x_{i_2} ··· x_{i_s} and use the inductive assumption to say that NR(x_{i_2} ··· x_{i_s}) = O · M_{i_2} ··· M_{i_s} · M^O_{t_1}. We call v the column matrix M_{i_2} ··· M_{i_s} · M^O_{t_1}, so that NR(τ) = NR(x_{i_1} NR(x_{i_2} ··· x_{i_s})) = NR(x_{i_1} O · v). Therefore it suffices to prove that NR(x_{i_1} O · v) = O · M_{i_1} · v for every column matrix v. We use the linearity of NR and reduce the problem to proving that NR(x_{i_1} O · M^O_{t_j}) = O · M_{i_1} · M^O_{t_j} for j = 1, ..., µ. By definition NR(x_{i_1} O · M^O_{t_j}) = NR(x_{i_1} t_j), and again by definition NR(x_{i_1} t_j) is the linear combination of the components of O with coefficients the j-th column of M_{i_1}. In other words, NR(x_{i_1} t_j) = O · M_{i_1} · M^O_{t_j}, and the proof is complete. □

Example 3.22. Consider Example 3.16. We construct the matrices associated to (O+, G) and get
        | 0  0  0  0  0 |            | 0  0  0  0  0 |
        | 1  0  0  0  0 |            | 0  0  0  0  0 |
  M_x = | 0  0  0  0  0 | ,    M_y = | 1  0  0  0  0 | .
        | 0  1  1  1  0 |            | 0  1  0  1  0 |
        | 0  0  1  0  1 |            | 0  1  1  0  1 |
We compute M_x^2 M_y − M_y M_x^2 and get

                                | 0  0  0  0  0 |
                                | 0  0  0  0  0 |
  M = M_x^2 M_y − M_y M_x^2 =   | 0  0  0  0  0 | .
                                | 0  0 −1  0  0 |
                                | 1  1  0  0  0 |
Using Proposition 3.21 we see that NR_{O,G}(x^2y − yx^2) = (1, x, y, x^2, y^2) · M · M^O_{t_1} = y^2. Again we check that y^2 ∈ I, as we did in Example 3.16.

The following result makes the connection between quotient bases and associated matrices, and is the main result in this section.

Theorem 3.23. Let O = (t_1, ..., t_µ) be such that O is a complete set of estimable terms, and assume that O+ = (b_1, ..., b_ν) is such that O^+ = {b_1, ..., b_ν}. Let G = (g_1, ..., g_ν) be a tuple of nonzero polynomials marked by O+ and such that Supp(b_j − g_j) ⊆ O for j = 1, ..., ν, and let I be the ideal generated by G. The following conditions are equivalent.
(a) The set (t̄_1, ..., t̄_µ) is a basis of P/I as a K-vector space.
(b) The pair (G, O+) is the border basis of I with respect to O.
(c) The matrices M_1, ..., M_n associated to (O+, G) are pairwise commuting.

Proof. We start the proof by showing that (a) ⇒ (b). The assumption implies that the ideal I is zerodimensional, hence we may use Proposition 3.17(a) to say that for each j = 1, ..., ν there exists a unique linear combination Σ_{k=1}^µ a_{kj} t_k with a_{kj} ∈ K for k ∈ {1, ..., µ}, j ∈ {1, ..., ν}, and g'_j = b_j − Σ_{k=1}^µ a_{kj} t_k ∈ I. The uniqueness implies that g'_j = g_j for j = 1, ..., ν, so that (G, O+) is the border basis of I with respect to O.

Now we prove that (b) ⇒ (c). If the pair (G, O+) is the border basis of I with respect to O, then the residue classes of the elements of O form a basis of P/I, hence we may define NF_{O,I} : P → V(O). We use the uniqueness of NF_{O,I}, Corollary 3.3, and Proposition 3.21 to deduce that M_k = M^O_{x_k} for every k = 1, ..., n. Since the matrices M^O_{x_k} are pairwise commuting, we are done.

Finally we prove that (c) ⇒ (a). We use Proposition 3.21, which allows us to get a K-linear map NR_{O,G} : P̄ → V(O). The assumption implies that this map factorizes through P, hence we get a map ϕ : P → V(O), and now we prove that ϕ is surjective. It suffices to show that ϕ(t_i) = t_i for i = 1, ..., µ.
This is true for i = 1 by construction, and we make induction on the degree. So let t_i ≠ 1. It is possible to write t_i = x_k t_j for a suitable k and a term t_j ∈ O. Then ϕ(t_i) = O · M_k · M^O_{t_j}. But M_k · M^O_{t_j} is the j-th column of M_k, which is (0, ..., 0, 1, 0, ..., 0)^tr with 1 in the i-th position, by the very definition of M_k. Therefore we have proved that ϕ is surjective, and hence O is a basis of
P / Ker(ϕ). It is easy to see that Ker(ϕ) is an ideal, and we use Proposition 3.17 applied to Ker(ϕ) to conclude that Ker(ϕ) = (g_1, ..., g_ν) = I. Now the proof is complete. □
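Theorem 3.23's commutation criterion is easy to test by machine. The following pure-Python sketch (ours, not from the paper; all names are our own) builds the matrices of Definition 3.20 for Example 3.16 and for the border basis of Example 3.19, checks pairwise commutation, and evaluates a normal form via Proposition 3.21:

```python
from fractions import Fraction as Fr

# Terms of O = (1, x, y, x^2, y^2) are indexed 0..4.  A candidate basis is
# described by the expansions NR(b) = b - g for each border term
# b in O+ = {xy, x^3, y^3, x^2y, xy^2}, as coordinate vectors w.r.t. O.

def e(i):
    return [Fr(1) if k == i else Fr(0) for k in range(5)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(5)) for j in range(5)]
            for i in range(5)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(5)) for i in range(5)]

def mult_matrices(nr):
    """Definition 3.20: column j of M_k holds the coordinates of
    NR(x_k * t_j) with respect to O."""
    x_img = [e(1), e(3), nr['xy'], nr['x3'], nr['xy2']]  # x * (1,x,y,x^2,y^2)
    y_img = [e(2), nr['xy'], e(4), nr['x2y'], nr['y3']]  # y * (1,x,y,x^2,y^2)
    Mx = [[x_img[j][i] for j in range(5)] for i in range(5)]
    My = [[y_img[j][i] for j in range(5)] for i in range(5)]
    return Mx, My

# Example 3.16: G = (xy-x^2-y^2, x^3-x^2, y^3-y^2, x^2y-x^2, xy^2-y^2)
nr16 = {'xy': [0, 0, 0, 1, 1], 'x3': [0, 0, 0, 1, 0], 'y3': [0, 0, 0, 0, 1],
        'x2y': [0, 0, 0, 1, 0], 'xy2': [0, 0, 0, 0, 1]}
nr16 = {k: [Fr(c) for c in v] for k, v in nr16.items()}
Mx16, My16 = mult_matrices(nr16)
Mx2 = mat_mul(Mx16, Mx16)
M = [[a - b for a, b in zip(r1, r2)]
     for r1, r2 in zip(mat_mul(Mx2, My16), mat_mul(My16, Mx2))]
print(any(any(c != 0 for c in row) for row in M))  # True: not a border basis
print(mat_vec(M, e(0)))  # the coordinate vector of y^2, so y^2 lies in I

# Example 3.19: the border basis of Example 3.5; here the matrices commute.
h = Fr(1, 2)
nr19 = {'xy': [0, 1, h, -1, h], 'x3': [0, 1, 0, 0, 0], 'y3': [0, 0, 1, 0, 0],
        'x2y': [0, 0, h, 0, h], 'xy2': [0, 1, h, -1, h]}
nr19 = {k: [Fr(c) for c in v] for k, v in nr19.items()}
Mx, My = mult_matrices(nr19)
print(mat_mul(Mx, My) == mat_mul(My, Mx))  # True, as Theorem 3.23 predicts

# Proposition 3.21: NF(x^6 y^2) = O . Mx^6 My^2 . M^O_{t_1}
v = e(0)
for _ in range(2):
    v = mat_vec(My, v)
for _ in range(6):
    v = mat_vec(Mx, v)
print(v == [0, 0, h, 0, h])  # True: NF(x^6 y^2) = (1/2)y + (1/2)y^2
```

Exact rational arithmetic (`Fraction`) is used so that the commutation check is a genuine identity test rather than a floating-point comparison.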
Corollary 3.24. If the equivalent conditions of the theorem are satisfied, we have
(a) The ideal I is zerodimensional, and dim_K(P/I) = µ.
(b) The matrices M_1, ..., M_n associated to (O+, G) are the multiplication matrices of NF_{O,I}.
(c) The normal remainder function NR_{O,G} induces a map P → V(O), which is NF_{O,I}.

Proof. The proof of (a) follows immediately from claim (a) of the theorem. The proof of (b) is already included in the proof of (b) ⇒ (c) of the theorem. The proof of (c) is included in the proof of (c) ⇒ (a) of the theorem. □

Example 3.25. We go back to Example 3.5 and compute NF_{O,I}(x^6y^2) again, but now we use the border basis F computed in Example 3.19. We proceed as follows. We write x^6y^2 = x^3y^2 · x^3, we use the marked polynomial f_{x^3} to make a reduction step and get x^4y^2, which we write as x^4y^2 = xy^2 · x^3. We use f_{x^3} again and get x^2y^2 = xy · xy. We use f_{xy} and get
  xy ( x + (1/2)y − x^2 + (1/2)y^2 ) = x^2y + (1/2)xy^2 − x^3y + (1/2)xy^3.

We use f_{x^3} and then f_{y^3} and get

  x^2y + (1/2)xy^2 − xy + (1/2)xy = −(1/2)xy + x^2y + (1/2)xy^2.
Finally we use f_{xy}, f_{x^2y}, f_{xy^2} and get

  −(1/2)( x + (1/2)y − x^2 + (1/2)y^2 ) + ( (1/2)y + (1/2)y^2 ) + (1/2)( x + (1/2)y − x^2 + (1/2)y^2 ) = (1/2)y + (1/2)y^2.
Therefore NF_{O,I}(x^6y^2) = (1/2)y + (1/2)y^2, which agrees with the value computed in Example 3.5. We can follow different paths, but Corollary 3.24(c) implies that we always arrive at the same conclusion.

REFERENCES

[1] Abbott J., Bigatti A., Kreuzer M., Robbiano L. – Computing ideals of points, J. Symbolic Comput. 30 (1999) 341–356.
[2] Abbott J., Kreuzer M., Robbiano L. – Computing zero-dimensional schemes, Preprint, 2001.
[3] Auzinger W., Stetter H.J. – An elimination algorithm for the computation of all zeros of a system of multivariate polynomial equations, in: Numerical Mathematics, Proc. Int. Conf. Singapore 1988, in: Int. Ser. Numer. Math., vol. 86, 1988, pp. 11–30.
[4] Buchberger B. – On Finding a Vector Space Basis of the Residue Class Ring Modulo a Zero Dimensional Polynomial Ideal (in German), Ph.D. thesis, Universität Innsbruck, Innsbruck, 1965.
[5] Caboara M., Robbiano L. – Families of ideals in statistics, in: Küchlin W. (Ed.), Proceedings of the ISSAC '97 Conference (Maui, Hawaii, July 1997), 1997, pp. 404–409.
[6] Caboara M., Robbiano L. – Families of estimable terms, Preprint.
[7] Capani A., Niesi G., Robbiano L. – CoCoA, a system for doing computations in commutative algebra, available via anonymous ftp from cocoa.dima.unige.it, Version 4.0, 1998.
[8] Kreuzer M., Robbiano L. – Computational Commutative Algebra, vol. 1, Springer-Verlag, 2000.
[9] Marinari M.G., Möller H.M., Mora T. – Gröbner bases of ideals defined by functionals with an application to ideals of projective points, Appl. Algebra Engrg. Comm. Comput. 4 (2) (1993) 103–145.
[10] Mourrain B. – Criterion for normal form algorithms, in: Proceedings of the 13th International Symposium, Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, in: Lecture Notes in Comput. Sci., vol. 1719, 1999, pp. 430–443.
[11] Pistone G., Riccomagno E., Wynn H.P. – Computational Commutative Algebra in Statistics, Monographs on Statistics and Applied Probability, vol. 89, Chapman & Hall/CRC, 2000.
[12] Pistone G., Wynn H.P. – Generalised confounding and Gröbner bases, Biometrika 83 (3) (1996) 653–666.
[13] Reeves A., Sturmfels B. – A note on polynomial reduction, J. Symbolic Comput. 16 (1993) 273–277.
[14] Robbiano L. – Gröbner bases and statistics, in: Buchberger B., Winkler F. (Eds.), Gröbner Bases and Applications (Proc. of the Conf. 33 Years of Gröbner Bases), in: London Math. Soc. Lecture Notes Series, vol. 251, Cambridge University Press, 1998, pp. 179–204.
[15] Tenberg R. – Duale Basen nulldimensionaler Ideale und Anwendungen, Dissertation, Universität Dortmund, September 1999, Shaker Verlag, 2000.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Computational aspects of computer algebra in systemtheoretic applications
J.M. Maciejowski Engineering Department, Cambridge University, Cambridge CB2 1PZ, England Email:
[email protected] (J.M. Maciejowski)
ABSTRACT

We summarise some experiences in using computer algebra to investigate some system-theoretic problems. Our main message is that numerical methods are much more straightforward to use than symbolic methods, and so should be used in preference if possible. However, computer algebra can tackle some interesting system-theoretic problems to which numerical methods cannot be applied.
1. INTRODUCTION

The objective of this chapter is to present some experiences of applying computer algebra to a range of system-theoretic problems. The overall conclusion is that potentially many interesting things are in prospect, but that what can currently be done is still disappointingly limited.

Ten years ago, our typical experience was that beyond the trivial problems that could be solved by hand (say state dimension n = 2 for concreteness) there was a very small range (n = 3, with luck n = 4) that could be solved using computer algebra, before hitting memory limits (n = 5). Furthermore, progressing from n = 3 to n = 4 was not just a matter of changing input parameters; it often required modifying the algorithm, because in each case the computer algebra system needed a different trick to help it out. Now the range between trivial and impossible has widened but, as is to be expected with problems whose roots are to do with exponential complexity, it has not widened much. Breaking away from this 'bottleneck' will probably require better exploitation of the algebraic structure of each problem tackled.

There are excellent numerical tools available for system-theoretic computation. Numerical tools should invariably be preferred to computer algebra tools for those problems to which they can be applied.
For most system-theoretic computations it is desirable and efficient to structure software into 'procedures' or 'functions' which can be called by other procedures or functions. If one wishes to use procedures which already exist (such as those in the linear algebra 'package' linalg, or the Gröbner basis package grobner in Maple), then one needs to know not only the data types of the arguments which must be passed to the procedures, but something about their internal structure. Such information is typically not recorded in a systematic manner in the documentation of current computer algebra environments, so the programmer frequently has to discover the internal structure by trial and error. This can even apply to the data types of arguments; for example, in Maple a vector could be defined to be of type vector, matrix, or array. Procedures which operate on vectors (in the mathematical sense) will only work with more than one of these possible data types if the programmer of the procedure has taken care to cater for each case. In most cases this has not been done. (A further feature which is not inherent to the difference between numerical and symbolic software systems, but is rather a marketing issue, is the availability of source code. Matlab allows the user to read the source code of most functions, which makes it relatively easy to see what has been assumed about input and output arguments, even if details have been omitted from the documentation. The commercial computer algebra systems do not allow this.)

Computational complexity. The computational complexity of symbolic algorithms is usually much greater than that of corresponding numerical algorithms. The issue is not only the number of steps that an algorithm has to take, but the growing complexity of the expressions it has to deal with at each step.
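To make the expression-growth point concrete: cofactor expansion of the determinant of a fully symbolic n × n matrix produces n! monomials. The following pure-Python sketch (ours, not from the paper) counts them directly, with generic entries represented as strings:

```python
import math

def symbolic_det(M):
    """Cofactor expansion of a matrix of symbolic entries (strings).
    Returns the determinant as a list of signed monomials."""
    n = len(M)
    if n == 1:
        return [(1, [M[0][0]])]
    terms = []
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        sign = (-1) ** j
        for s, mono in symbolic_det(minor):
            terms.append((sign * s, [M[0][j]] + mono))
    return terms

for n in (2, 3, 4, 5):
    A = [[f"a{i}{j}" for j in range(n)] for i in range(n)]
    t = symbolic_det(A)
    print(n, len(t), math.factorial(n))  # the term count equals n!
```

For fully symbolic entries no cancellation occurs, so the final expression itself has factorial size; the numeric analogue of the same computation returns a single floating-point number.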
Even with modern computing systems it is quite easy to generate expressions which are too big to fit into main memory, and which need to be transferred to and from disk during computation – with disastrous results for running time. Certain operations, such as differentiation of symbolic expressions, require a number of operations comparable with the number of 'terms' in the original expression, and the complexity of intermediate and final results usually grows quite modestly. Such operations can be performed very quickly, and give few problems. But operations such as computing the determinant or the inverse of a matrix, which cause no problems in a numeric environment, can be bottlenecks for symbolic computation. The computational complexity of these operations for matrices of dimension n is O(n!), rather than the familiar O(n^3) of numerical algorithms. Furthermore, that is just the count of the 'top-level' multiplications and additions; it does not take into account that the complexity of the terms may grow as the computation progresses. It also ignores the fact that the elements of the given matrix may be expressions of considerable complexity. This last fact implies that the measurement and prediction of the complexity of symbolic
computations is much less straightforward than in the numerical case, in which the entities being manipulated are all floating-point numbers of fixed characteristics (as in IEEE Standard arithmetic, for example). Many algorithms of computer algebra have exponential complexity. The complexity of one of the central algorithms, namely Buchberger's algorithm for computing Gröbner bases, is doubly exponential [9].

In numerical analysis there is a 'folklore theorem' that methods which are simplest for hand calculation are usually the least reliable for finite-precision computations [10]. With computer algebra the opposite seems to be true. A nontrivial example of this 'inverse folklore theorem' is the use of Fadeev sequences to compute matrix inverses [6].

Complexity of result. The result of a symbolic computation is usually an extremely complicated expression, or set of such expressions. Nontrivial and problem-specific postprocessing tools are needed in order to extract useful conclusions from such a result. For example, in one of the examples given below, one is interested in seeing where the curvature tensor becomes zero. This is absolutely not possible by 'eyeballing' the result, except in the simplest cases.

All of this is to counteract the sales pitch of some of the commercial computer algebra systems, which tries to give the impression that computer algebra is a straightforward alternative to numerical algorithms. It is not – numerical algorithms should be used whenever possible, and computer algebra should be reserved for those cases where there is no numerical alternative.

3. CURVATURE CALCULATION
We first present an example of a problem in the first category mentioned in the Introduction, namely problems which inherently require symbolic rather than numerical computation: the computation of the Riemann curvature tensor on the manifold of systems of a given McMillan degree. The motivation for looking at this comes from system identification and approximation – for example, the statistical theory of estimation is affected by the curvature of the search space [3, and its published discussion] – but that is not of importance here. The problem inherently requires symbolic computation because it requires partial differentiation of the metric tensor. This is basically straightforward, and all the differentiations required are performed very quickly, even though there are usually very many of them. But this problem already shows features which are not usually encountered when using numerical methods.

3.1. Manifold of linear systems

Linear time-invariant dynamic systems can be represented in one of the following two 'state-space' forms, depending on whether a continuous-time (differential equation) or a discrete-time (difference equation) setting is assumed:
(1)  dx(t)/dt = Ax(t) + Bu(t),     y(t) = Cx(t);
(2)  x(k + 1) = Ax(k) + Bu(k),     y(k) = Cx(k).
In these representations u ∈ R^ℓ is a vector of input signals, y ∈ R^m is a vector of output signals, and x ∈ R^n is a vector of state variables. The input–output behaviour of the system can be represented by the transfer function matrix:

(3)  G(z) = C(zI − A)^{-1} B.
The system is minimal if the McMillan degree of G(z) is n, and it is stable if the spectrum of A is contained in the open left half-plane (continuous-time) or the open unit disk (discrete-time) [8]. The set of minimal stable systems forms a Riemannian manifold of dimension (ℓ + m)n [1]. The investigation of this manifold is of interest for system approximation and identification.

3.2. The curvature tensor

A Riemannian metric on a manifold is defined as follows. Let a point of the manifold be associated with a parameter (vector) value θ. Then

(4)  ds^2 = dθ^T G dθ,

where G = G^T > 0 is the metric tensor. We denote the (i, j) element of G by g_ij. The Christoffel symbols of the second kind are defined by

(5)  Γ^i_{jk} = (1/2) Σ_s (G^{-1})_{is} ( ∂g_sj/∂θ_k + ∂g_sk/∂θ_j − ∂g_jk/∂θ_s ).

In order to perform these differentiations the elements g_ij must be stored in symbolic form. Note the matrix inverse which appears in this definition; its computation is one of the bottleneck problems here. The curvature tensor R is then defined by [7]

(6)  R_ijkl = (1/2) ( ∂²g_il/∂θ_j∂θ_k + ∂²g_jk/∂θ_i∂θ_l − ∂²g_ik/∂θ_j∂θ_l − ∂²g_jl/∂θ_i∂θ_k )
             − Σ_{r,s} g_rs ( Γ^r_{jl} Γ^s_{ik} − Γ^r_{jk} Γ^s_{il} ).
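Formulas (5) and (6) can be sanity-checked numerically. The pure-Python sketch below (ours; finite differences on a flat test metric in polar coordinates, rather than on a system manifold) evaluates the Christoffel symbols and verifies that the curvature vanishes:

```python
H = 1e-4  # finite-difference step

def metric(theta):
    r, phi = theta
    return [[1.0, 0.0], [0.0, r * r]]  # flat Euclidean metric, polar coordinates

def d1(theta, k):
    """Matrix of first partials dg_ij/dtheta_k (central difference)."""
    tp, tm = list(theta), list(theta)
    tp[k] += H; tm[k] -= H
    gp, gm = metric(tp), metric(tm)
    return [[(gp[i][j] - gm[i][j]) / (2 * H) for j in range(2)] for i in range(2)]

def d2(theta, k, l):
    """Matrix of second partials d^2 g_ij / dtheta_k dtheta_l."""
    tp, tm = list(theta), list(theta)
    tp[k] += H; tm[k] -= H
    ap, am = d1(tp, l), d1(tm, l)
    return [[(ap[i][j] - am[i][j]) / (2 * H) for j in range(2)] for i in range(2)]

def inv2(g):
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [[g[1][1] / det, -g[0][1] / det], [-g[1][0] / det, g[0][0] / det]]

def christoffel(theta):
    """Eq. (5)."""
    gi = inv2(metric(theta))
    dg = [d1(theta, k) for k in range(2)]
    return [[[0.5 * sum(gi[i][s] * (dg[k][s][j] + dg[j][s][k] - dg[s][j][k])
                        for s in range(2))
              for k in range(2)] for j in range(2)] for i in range(2)]

def curvature(theta):
    """Eq. (6): the curvature tensor with all indices lowered."""
    g, Gam = metric(theta), christoffel(theta)
    dd = [[d2(theta, k, l) for l in range(2)] for k in range(2)]
    return [[[[0.5 * (dd[j][k][i][l] + dd[i][l][j][k]
                      - dd[j][l][i][k] - dd[i][k][j][l])
               + sum(g[r][s] * (Gam[r][j][k] * Gam[s][i][l]
                                - Gam[r][j][l] * Gam[s][i][k])
                     for r in range(2) for s in range(2))
               for l in range(2)] for k in range(2)]
             for j in range(2)] for i in range(2)]

th = (2.0, 0.3)
Gam = christoffel(th)
print(abs(Gam[0][1][1] + 2.0) < 1e-5)  # Gamma^r_{phi phi} = -r
print(abs(Gam[1][0][1] - 0.5) < 1e-5)  # Gamma^phi_{r phi} = 1/r
R = curvature(th)
flat = max(abs(R[i][j][k][l])
           for i in range(2) for j in range(2)
           for k in range(2) for l in range(2))
print(flat < 1e-4)  # flat metric: the curvature vanishes identically
```

The point of the sketch is structural, not numerical: for the system manifolds in the text, `metric` would return symbolic entries (the solutions of the Lyapunov equations below), and it is exactly the symbolic inverse in `christoffel` that becomes the bottleneck.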
3.3. L2 metric tensor for linear systems

If S̃, S are two minimal, stable, linear systems, an inner product can be defined as

(7)  ⟨S̃, S⟩ = tr ∫_0^∞ C̃ e^{Ãt} B̃ ( C e^{At} B )^T dt
for the continuous-time case. (Note that C e^{At} B is the impulse response of the system S, and is related to the transfer function G(z) by the Laplace transform.) The corresponding Riemannian metric is

(8)  ds^2 = tr( Ċ M_11 Ċ^T + 2 C M_21 Ċ^T + C M_22 C^T ),

where the matrices M_ij are defined as the solutions to the following Lyapunov equations:

(9)   A M_11 + M_11 A^T = −B B^T,
(10)  A M_21 + M_21 A^T = −Ḃ B^T − Ȧ M_11,
(11)  A M_22 + M_22 A^T = −Ḃ Ḃ^T − M_21 Ȧ^T − Ȧ M_21^T,
and the 'dot' notation is used to denote differentials: Ċ = dC etc.

Different parametrisations of A(θ), B(θ), C(θ) are available. Two particular ones which we have investigated are the observable and the balanced parametrisations. The full definitions of these are quite involved [8,12]; here we illustrate them for the case ℓ = m = 1, n = 2:

  A = | 0  a_12 | ,   B = [b_1, b_2]^T,   C = [0, 1]              ('Observable form')
      | 1  a_22 |

  A = | −b_1^2/(2σ_1)         −b_1 b_2/(σ_1 + σ_2) | ,   B = [b_1, b_2]^T,   C = [b_1, b_2]   ('Balanced form')
      | −b_1 b_2/(σ_1 + σ_2)  −b_2^2/(2σ_2)        |
For the observable parametrisation the parameter vector is (in this case)

(12)  θ = [a_12, a_22, b_1, b_2]

and for the balanced parametrisation it is

(13)  θ = [σ_1, σ_2, b_1, b_2].
(The balanced parametrisation includes some integer-valued 'structure' parameters. In this illustration we have assumed particular values of these.)

3.4. Software implementation

The computer algebra system Maple was used to compute the curvature tensor. The computation was organised in a number of procedures. Procedures were written to define each parametrisation, and others to define the metric tensor corresponding to each of these. 'Reusable' procedures were metric_tensor, which puts the metric tensor into the form of the matrix G, christoffel2, which computes the Christoffel symbols, and curvature_tensor, which computes R. The Christoffel symbols are stored as a three-dimensional array (in symbolic form,
though this is not necessary). The curvature tensor is stored as a four-dimensional array. A calling sequence for the case ℓ = m = 1, n = 2, using the observable parametrisation, is:

  ns := 2;            # number of states
  observable_par;     # define parametrisation
  obs_rmt2;           # define metric tensor
  mt := metric_tensor(ds2,[dp1,dp2,dp3,dp4]);   # convert to matrix
  cs := christoffel2(mt,inverse(mt),[p1,p2,p3,p4]);
  R := curvature_tensor(mt,cs,[p1,p2,p3,p4]);
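For purely numeric experiments with this pipeline, the Lyapunov solves behind the metric tensor can be done by brute-force vectorisation. The following pure-Python sketch (ours; this is naive vectorisation, not the Hanzon–Peeters algorithm cited later, and the numbers are illustrative) solves Eq. (9):

```python
def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lyapunov(A, Q):
    """Solve A M + M A^T = -Q for M by vectorising into an n^2 x n^2 system."""
    n = len(A)
    L = [[0.0] * (n * n) for _ in range(n * n)]
    rhs = [0.0] * (n * n)
    for i in range(n):
        for j in range(n):
            rhs[i * n + j] = -Q[i][j]
            for k in range(n):
                L[i * n + j][k * n + j] += A[i][k]  # (A M)_{ij} term
                L[i * n + j][i * n + k] += A[j][k]  # (M A^T)_{ij} term
    m = solve_linear(L, rhs)
    return [m[i * n:(i + 1) * n] for i in range(n)]

# Observable-form example with illustrative numbers: A stable, B = [1, 1]^T
A = [[0.0, -2.0], [1.0, -3.0]]
B = [1.0, 1.0]
Q = [[B[i] * B[j] for j in range(2)] for i in range(2)]
M11 = lyapunov(A, Q)
# residual of A M11 + M11 A^T + B B^T
res = max(abs(sum(A[i][k] * M11[k][j] for k in range(2))
              + sum(M11[i][k] * A[j][k] for k in range(2))
              + Q[i][j])
          for i in range(2) for j in range(2))
print(res < 1e-9)  # True
```

Done symbolically, the same vectorisation requires inverting an n² × n² symbolic matrix, which is precisely the kind of step that motivates the specialised algorithms discussed in the text.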
Although symbolic computation is generally harder than numeric computation, one very useful feature of the Maple environment is the possibility of defining so-called indexing functions. This allows indexing rules to be built into the definition of certain data types. In this case an indexing function was defined for curvature tensors to enforce the symmetries

(14)  R_iijk = R_ijkk = 0,   R_ijkl = −R_ijlk,   R_ijkl = R_klij,
which saved considerable programming effort.

The bottlenecks for the curvature computation are computing the inverse of the metric tensor, which is needed in Eq. (5), and the characteristic polynomial, which is needed in the Hanzon–Peeters algorithm for solving Lyapunov equations [6]. Here we have an example of the 'Inverse Folklore Theorem': the computations are performed much more quickly for the observable canonical form of the system equations than for the balanced form, essentially because there is more sparsity in the matrices A and C [13]. For numerical work it is known that the observable form is often unreliable, because it gives rise to badly conditioned problems [10].

4. L2 SYSTEM APPROXIMATION – A STATE-SPACE APPROACH
Next we consider an example of a problem which is in the second category defined in the Introduction, namely a problem which, if solved entirely symbolically, would provide a closed-form solution and would thus be of interest even if it required enormous amounts of computation. We have so far been able to solve it only as a 'category 3' problem – namely, we have been able to provide only solutions for specific parameter values, but we can provide these to arbitrary accuracy, and we can guarantee that they are indeed the correct solutions.

The problem is optimal L2 approximation of stable linear systems by systems of lower McMillan degree. The first-order necessary conditions for optimality yield a set of polynomial equations, which can in principle be solved by Gröbner basis methods.

A stable minimal (continuous-time) system of state dimension n is defined as in Eq. (1) by the matrix triple (A, B, C). The problem is to find a stable minimal system (Â, B̂, Ĉ), of lower state dimension n̂ = n − 1 (for the simplest case ℓ =
m = 1), which optimally approximates the given system in the sense that it minimises the error in the impulse response:

(15)  ∫_0^∞ ‖ C e^{At} B − Ĉ e^{Ât} B̂ ‖^2 dt.
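Criterion (15) can be evaluated in closed form when the impulse responses are sums of decaying exponentials, since ∫_0^∞ e^{(λ+µ)t} dt = −1/(λ+µ) for Re(λ+µ) < 0. A small pure-Python illustration (ours; the target system and the crude grid search are illustrative, not taken from the paper):

```python
def l2_error(modes1, modes2):
    """Squared L2 distance between impulse responses given as sums of
    modes c * exp(lam * t) with lam < 0, using
    int_0^inf exp((lam + mu) t) dt = -1/(lam + mu)."""
    diff = [(c, l) for (c, l) in modes1] + [(-c, l) for (c, l) in modes2]
    return sum(-c1 * c2 / (l1 + l2) for (c1, l1) in diff for (c2, l2) in diff)

# illustrative target (n = 2): g(t) = exp(-t) + exp(-3t)
g = [(1.0, -1.0), (1.0, -3.0)]

# crude grid search over the reduced model c * exp(a t)  (n_hat = 1)
err, c, a = min(
    (l2_error(g, [(c, a)]), c, a)
    for c in (i * 0.01 for i in range(50, 250))
    for a in (-i * 0.01 for i in range(50, 250))
)
print(round(err, 4), round(c, 2), round(a, 2))  # small residual error
```

Even this toy problem shows the difficulty: the error surface has to be searched over all parameters at once, and a local search gives no guarantee of global optimality, which is what motivates the polynomial-equation approach below.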
This is a nonconvex optimisation problem of some importance, for which a considerable literature exists. All existing algorithms involve numerical optimisation, and hence incur the risk that only a local optimum may be found.

We first discuss the approach described in [4]. In this case it is advantageous to use the 'Schwarz' canonical form for minimal stable linear systems:
(16)      | −c_1^2/2    −α_1       0       ···      0       |
          |   α_1         0      −α_2      ···      0       |
      A = |    0         α_2       0       ···      ·       |
          |    ·          ·        ·       ···   −α_{n−1}   |
          |    0          0       ···    α_{n−1}    0       |

(17)  B = [b_1, b_2, ..., b_n]^T,
(18)  C = [c_1, 0, ..., 0],
in which the parameters are α_i > 0 (i = 1, ..., n − 1), b_1, ..., b_n, and c_1 > 0. It can be shown that the optimality criterion (15) can be written as

(19)  F = u(ĉ_1, α̂_1, ..., α̂_{n̂−1}) / [ w(ĉ_1, α̂_1, ..., α̂_{n̂−1}) ]^2,

where u and w are polynomials. By introducing a new variable v such that vw = 1, the criterion can be written as

(20)  F = u v^2.

The first-order necessary conditions for optimality are then the following set of polynomial equations:

(21)  w ∂u/∂ĉ_1 = 2u ∂w/∂ĉ_1,
(22)  w ∂u/∂α̂_i = 2u ∂w/∂α̂_i   (i = 1, 2, ..., n̂ − 1).
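Conditions (21)–(22) are just the stationarity of F = u/w² cleared of denominators: dF/dx = (w u' − 2u w')/w³. A quick numeric sanity check on a univariate toy example (ours, not from the paper):

```python
# Toy check: for F = u / w^2 with u = x^2 + 1 and w = x + 2, the condition
# w u' = 2 u w' reads (x + 2)(2x) = 2(x^2 + 1), whose solution is x = 1/2.
u = lambda x: x * x + 1.0
w = lambda x: x + 2.0
F = lambda x: u(x) / w(x) ** 2

x0 = 0.5                      # root of w u' - 2 u w'
h = 1e-6
dF = (F(x0 + h) - F(x0 - h)) / (2 * h)
print(abs(dF) < 1e-8)         # True: a stationary point of F, as predicted
```

The same clearing of denominators is what turns the smooth optimisation problem into a finite system of polynomial equations amenable to Gröbner basis methods.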
If one appends to these the (polynomial) constraint vw − 1 = 0 and the (polynomial) definition f = uv^2 – or, more economically, f w^2 − u = 0, in which case v need not be introduced at all – then the solution set of this system of polynomial equations contains the solution to the optimal approximation problem, and the value of the variable f which corresponds to this solution is the optimal value of the criterion F. For a given
n and n̂ these equations can in principle be solved using Gröbner basis methods. It can be shown that there are finitely many solutions of the set of equations, so the global optimum can be found by finding all the solutions, then selecting the one with the smallest value of f.

When this approach is applied to a system of McMillan degree n = 3 (and hence n̂ = 2), the first-order conditions give 2 polynomials, each in 2 variables. Even for specific systems, defined by numerical parameters, attempts to find a reduced Gröbner basis by application of Buchberger's algorithm were unsuccessful, using the Maple procedure gbasis. However, the heuristic factorisation algorithm of Czapor [2] (gsolve) was successful in producing a set of 8 bases for the original polynomials. Gröbner bases with the grevlex monomial ordering could then be obtained. This in turn allowed Gröbner bases with the lex ordering to be obtained. The term order was defined to be ĉ_1 > α̂_1 > f, which ensured that one of the polynomials (of degree 7) in this basis was univariate, with f, the value of the L2 approximation criterion, as the indeterminate. This allowed the global optimum to be identified, using either a numerical (finite-precision) root finder, or Sturm-chain root isolation (to arbitrary precision), followed by 'back-substitution' to find the global approximant.

The use of numerical methods here is potentially troublesome. The polynomials frequently have (integer) coefficients whose order of magnitude is 10^100, so finding roots of such polynomials, and even evaluating them for particular values of the indeterminates, could be prone to error. In the examples we have investigated, however, we were able to use numerical methods at this last stage without difficulty.

The main observation here is that even for very simple cases the approach was feasible only with heuristic tricks; furthermore, a specific canonical form, namely the Schwarz form, was used to keep things as simple as possible.
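Sturm-chain root isolation, mentioned above, can be sketched in a few lines of exact rational arithmetic. The polynomial below is an illustrative squarefree example (ours), not one of the degree-7 polynomials from the computation:

```python
from fractions import Fraction as Fr

def poly_rem(a, b):
    """Remainder of polynomial division; coefficients highest degree first."""
    a = a[:]
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a = a[1:]            # the leading coefficient cancels exactly
    while a and a[0] == 0:
        a = a[1:]
    return a

def sturm_roots(p, lo, hi):
    """Number of distinct real roots of a squarefree p in (lo, hi],
    with p nonzero at both endpoints, via a Sturm chain."""
    dp = [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]  # derivative
    chain = [p, dp]
    while True:
        r = poly_rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])   # p_{k+1} = -rem(p_{k-1}, p_k)

    def variations(x):
        vals = [sum(c * x ** (len(q) - 1 - i) for i, c in enumerate(q))
                for q in chain]
        signs = [v for v in vals if v != 0]
        return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

    return variations(Fr(lo)) - variations(Fr(hi))

p = [Fr(1), Fr(0), Fr(-2), Fr(0)]   # x^3 - 2x, roots -sqrt(2), 0, sqrt(2)
print(sturm_roots(p, -3, 3))  # 3
print(sturm_roots(p, 1, 3))   # 1
print(sturm_roots(p, 3, 5))   # 0
```

Because the arithmetic is exact, intervals can be bisected until each contains a single root, which is what 'to arbitrary precision' means here; with 100-digit integer coefficients this exactness is precisely what a floating-point root finder cannot offer.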
The computation time required was distributed as follows. The first factorisation using gsolve required several days to complete. The second step, of finding Gröbner bases with the grevlex term ordering, required up to one hour – each basis required a different solution time. The third step, of finding Gröbner bases with the lex term ordering, required only seconds. Note that these times were obtained in 1995, and would be considerably shorter using current hardware – both CPU speed and main memory size affect the speed – but the relative times are still relevant.
5. L2 SYSTEM APPROXIMATION – A TRANSFER FUNCTION APPROACH USING COMMUTATIVE MATRIX SOLUTIONS
A later approach to the same problem [5] showed that, using known facts about the optimal approximant, the first-order conditions could be written in a special form. The remarkable thing is that in this form the polynomial equations are already a Gröbner basis, which means that the main computational bottleneck of the previous approach has been avoided.

In this second approach both the given system and the approximant are described by transfer functions rather than by state-space models – see Eq. (3) for the relation between them. This is another instance of the 'inverse folk theorem', since
the most reliable numerical system-theoretic algorithms are based on state-space descriptions.

The system to be approximated is assumed to be:

(23)  G(s) = e(s)/d(s)

with d and e both polynomials, deg(d) = n, d(δ_i) = 0 for i = 1, ..., n, and we make the simplifying assumption that δ_i ≠ δ_j for i ≠ j. Note that Re(δ_i) < 0 since the system is assumed to be stable. The approximating system is assumed to be
= b(s) G(s) a(s)
with a and b polynomials, and deg(a) = n̂ = n − 1. The first-order necessary conditions for optimality are now

(25) e(δi) ã(δi) = ã(−δi)^2,

where ã = a·q0 for some constant q0 ≠ 0. Assembling these for each i leads to

(26) [ã(−δ1)^2, ..., ã(−δn)^2]^T = diag(e(δi)) V(δi) V(−δi)^{−1} [ã(−δ1), ..., ã(−δn)]^T,
where V(δi) is a Vandermonde matrix – which is nonsingular since δi ≠ δj. This can be treated as a set of polynomials in the variables ã(−δ1), ..., ã(−δn). A remarkable fact is that if the total degree term ordering is adopted then this set of polynomial equations is already a Gröbner basis. Thus the most time-consuming step in the previous approach is avoided here. These equations can be solved by constructing a set of commuting matrices and finding eigenvalues corresponding to their common eigenvectors, as described in [14]. The idea is the following. From a Gröbner basis

(27) fi(x1, ..., xq) = 0   (i = 1, ..., q)

one can construct q matrices A_Xi ∈ R^(2^q × 2^q) such that

(28) fi(A_X1, ..., A_Xq) = 0   (i = 1, ..., q)

and A_Xi A_Xj = A_Xj A_Xi for each i, j. Suppose that v is a common eigenvector of all the A_Xi's, with corresponding eigenvalues λ1, ..., λq. Then

(29) fi(λ1, ..., λq) v = 0   (i = 1, ..., q)
Computational aspects of computer algebra in systemtheoretic applications
and hence (x1 = λ1, ..., xq = λq) is a solution.

A practical realisation of this idea for the L2 optimal approximation problem was the following. A Gröbner basis was obtained by writing the first-order conditions in the form described above. Then the matrices {A_Xi} were constructed symbolically for n ≤ 7. (With n = 7 the memory requirement was already approximately 5 Mbytes.) Note that this need be done only once for each n. {A_Xi} were found numerically for specific examples for n = 8, 9. (With n = 9 the memory requirement was approximately 19 Mbytes; 9 matrices, each of dimensions 512 × 512, need to be stored.) For specific examples, the numerical values of the matrices {A_Xi} were computed. For each such matrix, the 2^n eigenvalues and eigenvectors were found, using numerical methods. For each common eigenvector, the corresponding eigenvalue was found for each matrix. This step is potentially tricky, particularly in the case of repeated (or very close) eigenvalues and derogatory matrices. A sophisticated approach to it is given in [11], but we did not exploit this. It then had to be checked whether the set of eigenvalues found by this procedure corresponded to a real stable system, since it is quite possible to obtain a system with complex coefficients in the transfer function (24), which is not an admissible candidate solution. Using this approach, the optimal approximants of systems up to McMillan degree 9 could be found, which was a considerable advance on what was achieved using the earlier approach of [4].

6. CONCLUSIONS

It appears that the exploitation of problem-specific structure is essential for effective use of computer algebra methods. Of course structure should be exploited whatever methods are adopted, whether symbolic or numerical. But with numerical methods one can frequently use naive methods on 'small' problems, and even on 'large' problems one can often obtain useful results with naive algorithms.
Our experience indicates that with symbolic methods, naive approaches cannot be expected to yield useful results for interesting system-theoretic problems. Even when problem structure is exploited, the range of problems that can be successfully tackled is disappointingly small. Our advice is that computer algebra should not be used as an alternative to numerical methods. It should be reserved for those problems for which numerical methods cannot be used, or at least for which the use of computer algebra brings some advantage over the use of numerical methods.

ACKNOWLEDGEMENT

The work reported in this chapter was done in close collaboration with B. Hanzon over several years. Some of this work was supported by the European Commission through the European Research Network on System Identification (ERNSI) under HCM contract ERB CHBG CT 920002 and TMR contract ERB FMRX CT 980206.
REFERENCES

[1] Clark J.M.C. – The consistent selection of parametrizations in system identification, in: Proc. Joint Automatic Control Conf. (Purdue, West Lafayette, Indiana), 1976.
[2] Czapor S.R. – Gröbner Basis Methods for Solving Algebraic Equations, Ph.D. thesis, University of Waterloo, Dept. of Applied Mathematics, 1988.
[3] Efron B. – Defining the curvature of a statistical problem (with applications to second-order efficiency), Ann. Statist. 3 (6) (1975) 1189–1242.
[4] Hanzon B., Maciejowski J.M. – Constructive algebra methods for the L2-problem for stable linear systems, Automatica 32 (12) (1996) 1645–1657.
[5] Hanzon B., Maciejowski J.M., Chou C.T. – Model reduction in H2 using matrix solutions of polynomial equations, Technical Report CUED/F-INFENG/TR314, Cambridge University Engineering Dept., March 1998.
[6] Hanzon B., Peeters R.L.M. – A Faddeev sequence method for solving Lyapunov and Sylvester equations, Linear Algebra Appl. 241–243 (1996) 431–453.
[7] Hicks N.J. – Notes on Differential Geometry, Van Nostrand, Princeton, 1965.
[8] Kailath T. – Linear Systems, Prentice-Hall, 1980.
[9] Kreuzer M., Robbiano L. – Computational Commutative Algebra, vol. 1, Springer, Berlin, 2000.
[10] Laub A.J. – Numerical linear algebra aspects of control design calculations, IEEE Trans. Automat. Control AC-30 (2) (1985) 97–108.
[11] Möller M., Tenberg R. – Multivariable polynomial system solving using intersections of eigenspaces, J. Symbolic Comput. 32 (2001) 513–531.
[12] Ober R.J. – Balanced parametrizations of classes of linear systems, SIAM J. Control Optim. 29 (6) (1991) 1251–1287.
[13] Sluis W. – System identification: A differential geometric approach, Technical Report CUED/F-INFENG/TR206, Cambridge University Engineering Dept., January 1995.
[14] Stetter H.J. – An introduction to the numerical analysis of multivariate polynomial systems, in: Hanzon B., Hazewinkel M. (Eds.), Constructive Algebra and Systems Theory, Royal Netherlands Academy of Arts and Sciences, 2006, this volume.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Computer algebra and geometry – some interactions
Franz Winkler 1
RISC-Linz, Johannes Kepler Universität Linz
Email: [email protected] (F. Winkler)
ABSTRACT

Algebraic curves and surfaces have been studied intensively in algebraic geometry for centuries. Thus, there exists a huge amount of theoretical knowledge about these geometric objects. Recently, algebraic curves and surfaces have come to play an important and ever increasing rôle in computer aided geometric design, computer vision, computer aided manufacturing, coding theory, and cryptography, just to name a few application areas. Consequently, theoretical results need to be adapted to practical needs. We need efficient algorithms for generating, representing, manipulating, analyzing, and rendering algebraic curves and surfaces. Exact computer algebraic methods can be employed effectively for dealing with these geometric problems.
1. INTRODUCTION

Algebraic curves and surfaces have been studied intensively in algebraic geometry for centuries. Thus, there exists a huge amount of theoretical knowledge about these geometric objects. Recently, algebraic curves and surfaces have come to play an important and ever increasing rôle in computer aided geometric design, computer vision, computer aided manufacturing, coding theory, and cryptography, just to name a few application areas. Consequently, theoretical results need to be adapted to practical needs. We need efficient algorithms for generating, representing, manipulating, analyzing, and rendering algebraic curves and surfaces. Exact computer algebraic methods can be employed effectively for dealing with these geometric problems. So, for instance, we need to be able to factor over algebraically closed fields for
The author wants to acknowledge support from the Austrian Fonds zur Förderung der wissenschaftlichen Forschung under the project SFB F013/1304.
F. Winkler
determining whether a curve or surface is irreducible, we need to solve systems of algebraic equations for analyzing the singular locus of such objects, and we need to control algebraic extensions in computing rational parametrizations. We describe some of the algorithms for computations with algebraic curves and surfaces which have been developed in the last few years. One interesting subproblem is the rational parametrization of curves and surfaces. Determining whether a curve or surface has a rational parametrization, and if so, computing such a parametrization, is a nontrivial task. But in the last few years there has been quite some progress in this area. Implicit representations (by defining polynomial) and parametric representations (by rational parametrization) both have their particular advantages and disadvantages. Given an implicit representation of a curve and a point in the plane, it is easy to check whether the point is on the curve. But it is hard to generate “good” points on the curve, i.e. for instance points with rational coordinates if the defining field is Q. This is easy for a curve given parametrically. So it is highly desirable to have efficient algorithms for changing from implicit to parametric representation, and vice versa. We have described such parametrization algorithms for curves in [19, 20]. So, for instance, the cardioid curve and also its offset curve are both rational curves; compare Fig. 1(a). Recently, in [12], we have developed a completely algebraic algorithm for parametrizing pipe and canal surfaces, such as the pipe around Viviani’s temple, see Fig. 1(b). Sometimes algebraic curves and surfaces need to be visualized. Numerical approximation algorithms tend to have problems finding all the isolated components of these objects and also tracing them through singularities. On the other hand, symbolic algebraic algorithms might spend a lot of computation time on noncritical parts of these objects. 
We describe a hybrid symbolic-numerical algorithm for visualizing algebraic curves.
Figure 1. (a) Cardioid curve and offset; (b) Viviani pipe surface.
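Viviani's temple of Fig. 1(b) is the intersection of the sphere x^2 + y^2 + z^2 = 4a^2 with the cylinder (x − a)^2 + y^2 = a^2 (see Section 3). That the intersection curve itself is rational can be checked by completely elementary means: the classical half-angle (Weierstrass) substitution yields a rational parametrization. The sketch below is our own illustration with the radius parameter fixed to a = 1 (it is not the pipe-surface algorithm of [12]); it verifies the parametrization against both defining equations using exact rational arithmetic.

```python
from fractions import Fraction

def viviani_point(u):
    """Point on Viviani's curve (a = 1) from the half-angle parameter u."""
    u = Fraction(u)
    d = 1 + u * u
    x = 2 * (1 - u * u) ** 2 / d ** 2
    y = 4 * u * (1 - u * u) / d ** 2
    z = 4 * u / d
    return x, y, z

# Check both defining equations exactly at 41 rational parameter values.
for u in (Fraction(n, 7) for n in range(-20, 21)):
    x, y, z = viviani_point(u)
    assert x * x + y * y + z * z == 4       # sphere of radius 2a, a = 1
    assert (x - 1) ** 2 + y * y == 1        # cylinder of radius a, a = 1
print("Viviani parametrization verified")
```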
In the last section we mention some open problems in computational algebraic geometry. These open problems concern the "best" integer coefficients of a parametrization, optimal parametrization of surfaces, determining rational points on elliptic curves, decomposition of rational functions over the reals, and symbolic-numerical plotting of surfaces. For a general background on computer algebra and on symbolic algebraic algorithms for algebraic curves and surfaces we refer to [25] and [11].

2. PARAMETRIZATION OF ALGEBRAIC CURVES
One interesting problem in computational algebraic geometry is the rational parametrization of curves and surfaces. Consider an affine plane algebraic curve C in A^2(K̄) defined by the bivariate polynomial f(x, y) ∈ K[x, y] (here we denote by K̄ the algebraic closure of the ground field K). I.e.

C = {(a, b) | (a, b) ∈ A^2(K̄) and f(a, b) = 0}.
Of course, we could also view this curve in the projective plane P^2(K̄), defined by F(x, y, z), the homogenization of f(x, y). A pair of rational functions (x(t), y(t)) ∈ K̄(t)^2 is a rational parametrization of the curve C, if and only if f(x(t), y(t)) = 0 and x(t), y(t) are not both constant. Only irreducible curves, i.e. curves whose defining polynomial is absolutely irreducible, can have a rational parametrization. Almost any rational transformation of a rational parametrization is again a rational parametrization, so such parametrizations are not unique. An algebraic curve having a rational parametrization is called a rational curve. A rational parametrization is called proper iff the corresponding rational map from K̄ to C is invertible, i.e. iff the affine line and the curve C are birationally equivalent. By Lüroth's theorem, every rational curve has a proper parametrization.

Implicit representations (by defining polynomial) and parametric representations (by rational parametrization) both have their particular advantages and disadvantages. Given an implicit representation of a curve and a point in the plane, it is easy to check whether the point is on the curve. But it is hard to generate "good" points on the curve, i.e. for instance points with rational coordinates if the defining field is Q. On the other hand, generating good points is easy for a curve given parametrically, but deciding whether a point is on the curve requires the solution of a system of algebraic equations. So it is highly desirable to have efficient algorithms for changing from implicit to parametric representation, and vice versa.

Example 2.1. The curve defined in the affine or projective plane over C by the defining equation f(x, y) = y^2 − x^3 − x^2 = 0 is rationally parametrizable, and actually a parametrization is x(t) = t^2 − 1, y(t) = t(t^2 − 1). On the other hand, the elliptic curve defined by f(x, y) = y^2 − x^3 + x = 0 does not have a rational parametrization.
The tacnode curve defined by f(x, y) = 2x^4 − 3x^2·y + y^4 − 2y^3 + y^2 = 0 has the parametrization

x(t) = (t^3 − 6t^2 + 9t − 2) / (2t^4 − 16t^3 + 40t^2 − 32t + 9),
y(t) = (t^2 − 4t + 4) / (2t^4 − 16t^3 + 40t^2 − 32t + 9).
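Such parametrizations can be confirmed without a computer algebra system, using only exact rational arithmetic: after clearing denominators, f(x(t), y(t))·d(t)^4 is a polynomial in t of degree at most 16 here, so vanishing at 17 distinct rational values of t already proves the identity f(x(t), y(t)) = 0. A small sketch of this check (our own illustration):

```python
from fractions import Fraction

def f(x, y):
    """Tacnode: f(x, y) = 2x^4 - 3x^2 y + y^4 - 2y^3 + y^2."""
    return 2 * x ** 4 - 3 * x ** 2 * y + y ** 4 - 2 * y ** 3 + y ** 2

def x_t(t):
    return (t**3 - 6*t**2 + 9*t - 2) / (2*t**4 - 16*t**3 + 40*t**2 - 32*t + 9)

def y_t(t):
    return (t**2 - 4*t + 4) / (2*t**4 - 16*t**3 + 40*t**2 - 32*t + 9)

# f(x(t), y(t)) * d(t)^4 has degree <= 16 in t, so 17 distinct zeros
# prove that f(x(t), y(t)) = 0 identically.
for k in range(17):
    t = Fraction(k)
    assert f(x_t(t), y_t(t)) == 0
print("tacnode parametrization verified")

# The cuspidal cubic of Example 2.1, y^2 - x^3 - x^2, with x = t^2 - 1, y = t(t^2 - 1):
for k in range(8):  # here 7 sample points would already suffice (degree <= 6)
    t = Fraction(k, 3)
    x, y = t**2 - 1, t * (t**2 - 1)
    assert y**2 - x**3 - x**2 == 0
print("cubic parametrization verified")
```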
The criterion for parametrizability of a curve is its genus. Only curves of genus 0, i.e. curves having as many singularities as their degree permits, have a rational parametrization. A symbolic algebraic algorithm for rational parametrization of curves of genus 0 has been developed in [18–20]. Let us demonstrate the algorithm on a simple example.

Example 2.2. Let C be the curve in the complex plane defined by

f(x, y) = (x^2 + 4y + y^2)^2 − 16(x^2 + y^2) = 0.

The curve C has the following rational parametrization:

x(t) = −32 · (−1024i + 128t − 144it^2 − 22t^3 + it^4) / (2304 − 3072it − 736t^2 − 192it^3 + 9t^4),
y(t) = −40 · (1024 − 256it − 80t^2 + 16it^3 + t^4) / (2304 − 3072it − 736t^2 − 192it^3 + 9t^4).

C has infinitely many real points. But generating any one of these real points from the above parametrization is not obvious. Does this real curve C also have a parametrization over R? Indeed it does; let us see how we can get one. In the projective plane over C, C has 3 double points, namely (0 : 0 : 1) and (1 : ±i : 0). Let H̃ be the linear system of conics passing through all these double points. H̃ is called the system of adjoint curves of degree 2. The system H̃ has dimension 2 and is defined by

h(x, y, z, s, t) = x^2 + sxz + y^2 + tyz = 0.

I.e. for any particular values of s and t we get a conic in H̃. 3 elements of this linear system define a birational transformation

T = (h(x, y, z, 0, 1) : h(x, y, z, 1, 0) : h(x, y, z, 1, 1)) = (x^2 + y^2 + yz : x^2 + xz + y^2 : x^2 + xz + y^2 + yz),

which transforms C to the conic D defined by

15x^2 + 7y^2 + 6xy − 38x − 14y + 23 = 0.

For a conic defined over Q we can decide whether it has a point over Q or R. In particular, we determine the point (1, 8/7) on D, which, by T^(−1), corresponds to
the regular point P = (0, −8) on C. Now, by restricting H̃ to conics through P and intersecting H̃ with C (for details see [19]), we get the parametrization

x(t) = −1024t^3 / (256t^4 + 32t^2 + 1),
y(t) = (−2048t^4 + 128t^2) / (256t^4 + 32t^2 + 1)
over the reals. An alternative parametrization approach can be found in [23].

In any case, computing such a parametrization essentially requires the solution of two major problems: (1) a full analysis of singularities and determination of genus and adjoint curves (either by successive blow-ups, or by Puiseux expansion) and (2) the determination of a regular point on the curve. The fastest known method for (1) has been presented in [22]. If f(x, y) ∈ Q[x, y] is the defining polynomial for the curve under consideration, then the problem can be solved in time O(d^5), where d is the degree of f.

Let us discuss the treatment of problem (2). We can control the quality of the resulting parametrization by controlling the field over which we choose this regular point. Thus, finding a regular curve point over a minimal field extension on a curve of genus 0 is one of the central problems in rational parametrization. The treatment of this problem goes back to [7]. Its importance for the parametrization problem has been described in [8]. A rationally parametrizable curve always has infinitely many regular points over the algebraic closure K̄ of the ground field K. Every one of these regular points is contained in an algebraic extension field of K of certain finite degree. The coordinates of the regular point determine directly the algebraic extension degree over K which is required for determining a parametrization based on this regular point. So the central issue is to find a regular point on the curve C of as low an algebraic extension degree as possible.

Example 2.3. Let us once more consider the tacnode curve C of Example 2.1, defined by f(x, y) = 2x^4 − 3x^2·y + y^4 − 2y^3 + y^2 = 0.
According to the approach described in Example 2.2 we need a regular point on the curve. We might determine such a point by intersecting C with a line. Such intersecting lines are shown in Fig. 2. Suppose we select the line L1 defined by l1(x, y) = y + 1. The 4 intersection points have the form P = (α, −1), where α is a root of the irreducible polynomial 2α^4 + 3α^2 + 4 = 0. Using such a point leads to the parametrization (x(t), y(t)) = (n1(t)/d(t), n2(t)/d(t)), where
Figure 2. Determining points on the tacnode curve.
n1(t) = −(α/4)·(36t^4 + (60α + 72α^3)t^3 − (18 + 108α^2)t^2 + (103α + 42α^3)t − 20 − 24α^2),
n2(t) = −9t^4 − (3α + 18α^3)t^3 + (2α^2 − 33)t^2 − (2α + 12α^3)t − 4,
d(t) = 9t^4 + 24αt^3 − (16α^2 + 60)t^2 − (20α + 24α^3)t + 6 + 12α^2.
This parametrization of C has complex coefficients of algebraic degree 4 over Q. Now suppose we select the line L2 defined by l2(x, y) = y − 1. L2 intersects C in the double point (0, 1) and in the 2 intersection points having the form P = (β, 1), where β is a root of the irreducible polynomial 2β^2 − 3 = 0. Using such a point leads to the parametrization (x(t), y(t)) = (n1(t)/d(t), n2(t)/d(t)), where

n1(t) = 2βt^4 + 9t^3 − 27t − 18β,
n2(t) = 2t^4 + 12βt^3 + 39t^2 + 36βt + 18,
d(t) = 11t^4 + 24βt^3 + 12t^2 + 18.
This parametrization of C has real coefficients of algebraic degree 2 over Q. Next we select the line L3 defined by l3(x, y) = y − (1792025/687968)·x. L3 intersects C in the double point (0, 0) and in 2 other rational points, one of which has the coordinates

P = (1232888111650/1772841609267, 3211353600625/1772841609267).

Using this point P leads to the parametrization (x(t), y(t)) = (n1(t)/d(t), n2(t)/d(t)), where

n1(t) = −11095993004850t^4 − 12890994573912t^3 + 4296998191304t + 1232888111650,
n2(t) = 28902182405625t^4 + 67155391392600t^3 + 58277689547446t^2 + 22385130464200t + 3211353600625,
d(t) = 15955574483403t^4 + 44963405382900t^3 + 54017766921682t^2 + 29782459134100t + 6069839800571.
This parametrization of C has rational coefficients, but they are huge. Finally we select the line L4 defined by l4(x, y) = x − 1. L4 intersects C in 2 complex points and 2 real points, one of which has the coordinates P = (1, 2). Using this point P leads to the parametrization (x(t), y(t)) = (n1(t)/d(t), n2(t)/d(t)), where

n1(t) = 2t^4 + 7t^3 − 21t − 18,
n2(t) = 4t^4 + 28t^3 + 73t^2 + 84t + 36,
d(t) = 9t^4 + 40t^3 + 64t^2 + 48t + 18.
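The rational point P = (1, 2) used for the line L4 can be found mechanically: restricting f to x = 1 gives a univariate quartic whose rational roots are pinned down by the elementary rational-root test. A sketch (our own illustration):

```python
from fractions import Fraction

def f(x, y):
    """Tacnode: f(x, y) = 2x^4 - 3x^2 y + y^4 - 2y^3 + y^2."""
    return 2 * x ** 4 - 3 * x ** 2 * y + y ** 4 - 2 * y ** 3 + y ** 2

# Restrict the tacnode to the line x = 1: g(y) = f(1, y) = y^4 - 2y^3 + y^2 - 3y + 2.
# By the rational root theorem, any rational root p/q (in lowest terms) satisfies
# p | 2 and q | 1, so the only candidates are +-1 and +-2.
candidates = [Fraction(s * p) for p in (1, 2) for s in (1, -1)]
roots = [y for y in candidates if f(1, y) == 0]
print(roots)  # [Fraction(2, 1)] -> the rational point P = (1, 2)
```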
This parametrization of C has small rational coefficients.

In [19] we present an algorithm for determining the lowest algebraic extension degree [Q(α) : Q] of a field Q(α) which admits a rational parametrization of a curve defined over Q. In fact, this algorithm also determines a parametrization over this optimal extension field. In [20] we describe a decision procedure for determining whether an algebraic curve with defining polynomial in Q[x, y] has a parametrization over the real numbers R. Once we are able to parametrize algebraic curves over the optimal extension field, we can also determine Diophantine solutions of the corresponding polynomial equations. We do not go into details here, but refer the reader to [15].

3. PARAMETRIZATION OF ALGEBRAIC SURFACES
The problem of rational parametrization can also be solved for algebraic surfaces. Also in this case, the analysis of singularities plays an essential rôle. Many different authors have been involved in the solution of this problem; we just mention [9] and [24]. Based on the algorithmic solution of the singularity problem, Schicho has developed a general algorithm for determining the rational parametrizability of an algebraic surface, and, in the positive case, for actually computing such a parametrization. See [16]. But whereas for the case of curves we know exactly the degrees of the rational functions and also the degree of the algebraic extension which might appear in the parametrization, these bounds are not known for surfaces, in general. General parametrization algorithms for surfaces require considerable computation time. So it is natural to try to develop algorithms specifically tailored for classes of surfaces of practical importance. Such a class is, for instance, the one of pipe and canal surfaces. A canal surface S, generated by a parametrized space curve C = (m1(t), m2(t), m3(t)) in R^3, is the envelope of the set of spheres with rational radius function r(t) centered at C. The curve C is called the spine curve
of S. In a pipe surface r(t) is constant. This concept generalizes the classical offsets (for constant r(t)) of plane curves. Pipe surfaces have numerous applications, such as shape reconstruction or robotic path planning. Canal surfaces with variable radius arise in the context of computer aided geometric design mainly as transition surfaces between pipes. Whereas for curves it is crucial to determine a regular point with real coordinates, in the situation of pipe and canal surfaces we determine a rational curve with real coefficients on the surface, in the same parameter as the spine curve. Once we have determined such a rational curve on the canal surface S, we can rotate this curve around the spine curve and in such a way compute a parametrization of S. So, for instance, Viviani's temple is defined as the intersection of a sphere of radius 2a and a circular cylinder of radius a:

x^2 + y^2 + z^2 = 4a^2, (x − a)^2 + y^2 = a^2,
see Fig. 1(b). The pipe around Viviani's temple can be rationally parametrized. In [14] it is shown that canal surfaces with rational spine curve and rational radius function are in general rational. To be precise, they admit rational parametrizations of their real components. Recently we have developed a completely symbolic algebraic algorithm for computing rational parametrizations of pipe and canal surfaces over Q, see [12].

4. IMPLICITIZATION OF CURVES AND SURFACES
The inverse problem to the problem of parametrization consists in starting from a (rational) parametrization and determining the implicit algebraic equation of the curve or surface. This is basically an elimination problem. Let us demonstrate the procedure for curves. We write the parametric representation of the curve C,

x(t) = p(t)/r(t), y(t) = q(t)/r(t),

as

h1(t, x) = x · r(t) − p(t) = 0, h2(t, y) = y · r(t) − q(t) = 0.

The implicit equation of the curve must be the generator of the ideal I = ⟨h1(t, x), h2(t, y)⟩ ∩ K[x, y]. We can use any method in elimination theory, such as resultants or Gröbner bases, for determining this generator. For instance, resultant_t(h1(t, x), h2(t, y)) will yield the polynomial defining the curve C. Compare [25] and [21] for details. In [21] we introduce the notion of the tracing index of a rational parametrization,
i.e. the number of times a (possibly nonproper) parametrization "winds around, or traces, an algebraic curve". When we compute the resultant of h1 and h2 as above for a nonproper parametrization, then this tracing index will show up in the exponent of the generating polynomial.

Example 4.1. Let us do this for the cardioid curve of Fig. 1(a). We start from the parametrization

x(t) = (256t^4 − 16t^2) / (256t^4 + 32t^2 + 1), y(t) = −128t^3 / (256t^4 + 32t^2 + 1).
So we have to eliminate the variable t from the equations

h1(t, x) = x · (256t^4 + 32t^2 + 1) − 256t^4 + 16t^2, h2(t, y) = y · (256t^4 + 32t^2 + 1) + 128t^3.
As the polynomial defining the cardioid curve we get

resultant_t(h1(t, x), h2(t, y)) = 17179869184 · (4y^4 − y^2 + 8x^2y^2 − 4xy^2 + 4x^4 − 4x^3).
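The resultant can also be used pointwise as a membership test: for specific rational values (x0, y0), resultant_t(h1(t, x0), h2(t, y0)) is the determinant of the 8 × 8 Sylvester matrix of two quartics in t, and it vanishes exactly when (x0, y0) lies on the cardioid. A sketch with exact rational arithmetic (our own illustration; the on-curve test point is the one obtained from the parametrization at t = 1/2):

```python
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of p, q (coefficient lists, highest degree first)."""
    m, n = len(p) - 1, len(q) - 1
    rows = []
    for i in range(n):  # n shifted copies of p
        rows.append([Fraction(0)] * i + list(p) + [Fraction(0)] * (n - 1 - i))
    for i in range(m):  # m shifted copies of q
        rows.append([Fraction(0)] * i + list(q) + [Fraction(0)] * (m - 1 - i))
    return rows

def det(a):
    """Determinant by exact Gaussian elimination over the rationals."""
    a = [row[:] for row in a]
    n = len(a)
    d = Fraction(1)
    for j in range(n):
        piv = next((i for i in range(j, n) if a[i][j] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != j:
            a[j], a[piv] = a[piv], a[j]
            d = -d
        d *= a[j][j]
        for i in range(j + 1, n):
            factor = a[i][j] / a[j][j]
            for k in range(j, n):
                a[i][k] -= factor * a[j][k]
    return d

def res_at(x0, y0):
    # h1(t) = x0*(256 t^4 + 32 t^2 + 1) - (256 t^4 - 16 t^2)
    h1 = [256 * x0 - 256, Fraction(0), 32 * x0 + 16, Fraction(0), x0]
    # h2(t) = y0*(256 t^4 + 32 t^2 + 1) + 128 t^3
    h2 = [256 * y0, Fraction(128), 32 * y0, Fraction(0), y0]
    return det(sylvester(h1, h2))

on_curve = res_at(Fraction(12, 25), Fraction(-16, 25))   # point at t = 1/2
off_curve = res_at(Fraction(12, 25), Fraction(-17, 25))  # slightly perturbed
print(on_curve == 0, off_curve == 0)  # True False
```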
Similarly we could determine this defining polynomial by a Gröbner basis computation.

5. FURTHER TOPICS IN COMPUTATIONAL ALGEBRAIC GEOMETRY
We have only described a few subproblems in computational algebraic geometry. For the algorithmic treatment of problems in computer aided geometric design, such as blending and offsetting, we refer the reader to [10]. A thorough analysis of the offset curves, in particular their genus, is given in [2]. If we need to decide problems on algebraic geometric objects involving not only equations but also inequalities, then the appropriate method is Collins' algorithm for cylindrical algebraic decomposition, see [3]. Further areas of investigation are desingularization of surfaces, determining rational points on elliptic curves, and fast algorithms for visualization of curves and surfaces. In general, it will be more and more important to bridge the gap between symbolic and numerical algorithms, combining the best features of both worlds.

6. OPEN PROBLEMS
Integer coefficients in curve parametrization

As we have seen in Section 2, the quality of a rational parametrization of an algebraic curve crucially depends on the quality of a regular point which we can determine on this curve. For instance, starting from a defining polynomial f(x, y) of C over the rational numbers Q, we know that we will need an algebraic extension of degree 2, at most, for expressing such a point. But if we can actually find a
regular point with coordinates in Q, and therefore a parametrization with rational coefficients, the question is still how to find a parametrization with "smallest" rational coefficients. To our knowledge, this problem is unsolved.

Optimality of surface parametrization

For rational algebraic surfaces no algorithm is known, in general, for finding a parametrization with lowest possible degree of rational functions. The best we can currently do is to compute a parametrization having at most twice the optimal degree, see [17]. Also the problem of determining the smallest algebraic field extension for expressing a rational parametrization of a rational algebraic surface is wide open. In general, we cannot decide whether there is a parametrization over the given field of definition. We also do not know whether a bound for the degree of the necessary extension exists.

Decomposition of rational functions over R

In the algorithm for rationally parametrizing pipe and canal surfaces, [12], the problem is finally reduced to finding a representation of a rational function as a sum of two squares. This is a special case of Hilbert's 17th problem. Over the real algebraic numbers there exists a simple algorithm for solving this problem. Over R the problem is still open.

Determining rational points on elliptic curves

For curves of genus 0 we can decide the existence of rational points. If a curve of genus 0 over a field of characteristic 0 has one rational point then it must have infinitely many. In fact, we can determine these rational points. If the genus is greater than or equal to 2, then there are only finitely many rational points on the curve C. This was conjectured by Mordell and proved by Faltings [5]. For curves of genus 1, i.e. so-called elliptic curves, all possibilities can arise: no, finitely many, and infinitely many rational points. Elliptic curves play an important rôle in many areas of mathematics, and recently also in cryptography.
Determining all, or at least one, rational point on an elliptic curve is an open problem. For a short introduction see [4].

Symbolic–numerical plotting of surfaces

When we work with curves and surfaces, we do not only construct, transform, and analyze them, but sometimes we also want to visualize them on the screen. These geometrical objects might be quite complicated, having several real components, perhaps isolated singularities, and complicated branch points. There are basically two approaches to the problem of plotting such curves or surfaces: numerical plotting and algebraic plotting. Whereas numerical plotting routines work well for simple objects and require relatively little computation time, they quickly become unreliable for more complicated objects: missing small components, getting the picture wrong around singularities. On the other hand, algebraic plotting routines can overcome these problems, but are notoriously slow.
Recently we have developed a hybrid symbolic–numerical routine in the program system CASA, [6], for reliable but relatively fast visualization of plane algebraic curves [13]. These methods need to be understood better and extended to surfaces.

REFERENCES

[1] Abhyankar S.S., Bajaj C.L. – Automatic parametrization of rational curves and surfaces III: Algebraic plane curves, Comput. Aided Geom. Design 5 (1988) 309–321.
[2] Arrondo E., Sendra J., Sendra J.R. – Genus formula for generalized offset curves, J. Pure Appl. Algebra 136 (3) (1999) 199–209.
[3] Caviness B.F., Johnson J.R. – Quantifier Elimination and Cylindrical Algebraic Decomposition, Springer, 1998.
[4] Drmota M. – Sieben Millenniums-Probleme. I, Internat. Math. Nachrichten 184 (2000) 29–36.
[5] Faltings G. – Endlichkeitssätze für abelsche Varietäten über Zahlkörpern, Invent. Math. 73 (1983) 549–576.
[6] Hemmecke R., Hillgarter E., Winkler F. – CASA, in: Grabmeier J., Kaltofen E., Weispfenning V. (Eds.), Handbook of Computer Algebra: Foundations, Applications, Systems, Springer-Verlag, 2003, pp. 356–359.
[7] Hilbert D., Hurwitz A. – Über die Diophantischen Gleichungen vom Geschlecht Null, Acta Math. 14 (1890) 217–224.
[8] Hillgarter E., Winkler F. – Points on algebraic curves and the parametrization problem, in: Wang D. (Ed.), Automated Deduction in Geometry, Lecture Notes in Artificial Intelligence, vol. 1360, Springer-Verlag, Berlin, 1998, pp. 189–207.
[9] Hironaka H. – Resolution of singularities of an algebraic variety over a field of characteristic 0, Ann. Math. 79 (1964) 109–326.
[10] Hoffmann C.M. – Geometric & Solid Modeling, Morgan Kaufmann Publ., 1989.
[11] Hoffmann C.M., Sendra J.R., Winkler F. – Parametric algebraic curves and applications, J. Symbolic Comput. 23 (2&3) (1997) (special issue).
[12] Landsmann G., Schicho J., Winkler F., Hillgarter E. – Symbolic parametrization of pipe and canal surfaces, in: Traverso C. (Ed.), Proc. Internat. Symposium on Symbolic and Algebraic Computation (ISSAC'2000) (St. Andrews, Scotland, Aug. 2000), ACM Press, 2000, pp. 202–208.
[13] Mittermaier C., Schreiner W., Winkler F. – A parallel symbolic-numerical approach to algebraic curve plotting, in: Ganzha V.G., Mayr E.W., Vorozhtsov E.V. (Eds.), Computer Algebra in Scientific Computing, Proc. of CASC'2000, Springer-Verlag, Berlin, 2000, pp. 301–314.
[14] Peternell M., Pottmann H. – Computing rational parametrizations of canal surfaces, J. Symbolic Comput. 23 (2&3) (1997) 255–266.
[15] Poulakis D., Voskos E. – On the practical solution of genus zero diophantine equations, J. Symbolic Comput. 30 (5) (2000) 573–582.
[16] Schicho J. – Rational parametrization of surfaces, J. Symbolic Comput. 26 (1) (1998) 1–30.
[17] Schicho J. – A degree bound for the parametrization of a rational surface, J. Pure Appl. Algebra 145 (1999) 91–105.
[18] Sendra J.R., Winkler F. – Symbolic parametrization of curves, J. Symbolic Comput. 12 (6) (1991) 607–631.
[19] Sendra J.R., Winkler F. – Parametrization of algebraic curves over optimal field extensions, J. Symbolic Comput. 23 (2&3) (1997) 191–207.
[20] Sendra J.R., Winkler F. – Algorithms for rational real algebraic curves, Fund. Inform. 39 (1–2) (1999) 211–228.
[21] Sendra J.R., Winkler F. – Tracing index of rational parametrizations, RISC-Linz Report Series 0101, J. Kepler Univ., Linz, Austria, 2001.
[22] Stadelmeyer P. – On the Computational Complexity of Resolving Curve Singularities and Related Problems, Ph.D. thesis, RISC-Linz, J. Kepler Univ., Linz, Austria, Techn. Rep. RISC 0031, 2000.
138
F. Winkler
[23] van Hoeij M. – Computing parametrizations of rational algebraic curves, in: von zur Gathen J., Giesbrecht M. (Eds.), Proc. ISSAC'94, ACM Press, 1994, pp. 187–190.
[24] Villamayor O. – Introduction to the algorithm of resolution, in: Campillo López A., et al. (Eds.), Algebraic Geometry and Singularities (La Rábida 1991), Birkhäuser, 1996, pp. 123–154.
[25] Winkler F. – Polynomial Algorithms in Computer Algebra, Springer-Verlag, New York, 1996.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
A matrix method for finding the minimum or infimum of a polynomial
Bernard Hanzon a and Dorina Jibetean b
a School of Mathematical Sciences, University College, Cork, Ireland
b Technical University Eindhoven, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
E-mails: [email protected] (B. Hanzon), [email protected] (D. Jibetean)
ABSTRACT
A method is described for finding the global minimum or infimum of a polynomial in several variables. The method yields the value of the minimum or infimum and, if the minimum is attained, at least one point where it is attained. If the minimum is attained in finitely many points, the method finds them all. The method is based on the Stetter–Möller matrix method from the theory of constructive algebra, which translates problems in polynomial algebra into linear algebra problems. The linear algebra techniques that we use are well known in linear dynamical systems theory.
Key words and phrases: Optimization, Polynomials, Constructive algebra, Gröbner basis, Linear algebra, Eigenvalue problems, Linear dynamical systems theory, Polynomial matrices, Algebraic functions

1. INTRODUCTION
In systems theory, statistics and virtually all other branches of science that use mathematical modeling, many problems can be formulated as optimization problems. In linear systems theory the criterion function is often a rational function in several variables. In that case the first-order conditions for finding the optimum can be rewritten as a system of polynomial equations. This allows, at least in principle, for the application of techniques from the theory of constructive polynomial algebra, such as Gröbner basis techniques, to obtain the optimal criterion value and (some representation of) the critical points. An example of this approach is [6]. A problem that one has to face when following this approach is that the construction by computer of a Gröbner basis, using, e.g., the Buchberger algorithm, can take up a lot of memory space and calculation time; in fact so much so that (given the present state
140
B. Hanzon and D. Jibetean
of technology) often only small problems can be solved using such an approach. Another problem with such an approach is that if the number of critical points is infinite, one may run into trouble: the well-known method of calculating a lexicographical Gröbner basis and solving by back-substitution (as described, e.g., in Section 8 of [2]) can break down in that case. In [7] an alternative approach is presented for a special class of H2-optimal model reduction problems. The first-order conditions are rewritten such that they form a Gröbner basis with respect to total degree ordering (directly, i.e. without making use of anything like the Buchberger algorithm). From the form of the equations it is concluded that the number of solutions is finite, which makes it possible to apply the Stetter–Möller matrix method (which was actually reinvented independently in [7]). One of the main advantages of this approach is that the use of the Buchberger algorithm is avoided altogether. Instead the problem is now to solve a (large) common eigenvector problem for a set of commuting matrices. In this way a polynomial algebra problem is transformed into a well-known and well-studied (but large) linear algebra problem. In our approach to the infimization of a multivariate polynomial, which we discuss here, we try to apply an analogous line of thought. However, we cannot apply the idea directly to the polynomial; instead we apply it to a perturbation of the polynomial and study the limiting behavior. In this way the use of the Buchberger algorithm is again avoided and only linear algebra methods (including polynomial matrix methods) are used. The method works even if the original polynomial has an infinite number of critical points or when the polynomial does not attain a minimum. A detailed account of the approach is published elsewhere ([5]).
Here we intend to give a summary of some of the results that can be obtained in this way, and we will present a number of examples, which will allow us to make some remarks about issues of implementation of the method.

2. PROPOSED APPROACH
Let us start by presenting some definitions and explaining some of the notation that will be used. Let K[x1, x2, . . . , xn] denote the ring of polynomials with coefficients in the field K, where K can be the real field R, the complex field C, etc. The total degree of a monomial x1^{m1} x2^{m2} · · · xn^{mn} is the sum m1 + m2 + · · · + mn, and the total degree of a polynomial is the maximum of the total degrees of all the monomials that occur in the polynomial with nonzero coefficient. The total degree of a polynomial p will be denoted by tdeg(p). A monomial ordering ≻ is a complete ordering on the set of monomials which has the property that for any pair of monomials r1, r2 and for any i ∈ {1, . . . , n} the following implication holds: r1 ≻ r2 ⇒ xi r1 ≻ xi r2. If a monomial ordering is defined, the leading monomial of a polynomial is defined as the monomial in the polynomial which dominates all other monomials with respect to the monomial ordering. The leading monomial of a polynomial p is denoted by LM(p). A monomial ordering is called a total degree ordering if for any pair of monomials r1, r2 the following implication holds: tdeg(r1) > tdeg(r2) ⇒ r1 ≻ r2.
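These definitions can be made concrete in a small sketch (our own encoding, not notation from the paper: a polynomial is a dictionary mapping exponent tuples to coefficients, and the graded lexicographic order serves as one example of a total degree ordering):

```python
def tdeg_monomial(e):
    """Total degree of the monomial x1^e[0] * ... * xn^e[n-1]."""
    return sum(e)

def tdeg(p):
    """Total degree of a polynomial {exponent tuple: coefficient}."""
    return max(tdeg_monomial(e) for e, c in p.items() if c != 0)

def grlex_key(e):
    """Sort key for the graded lexicographic order: compare total
    degrees first, break ties lexicographically."""
    return (sum(e), e)

def leading_monomial(p):
    """LM(p) with respect to the graded lexicographic order."""
    return max((e for e, c in p.items() if c != 0), key=grlex_key)

p = {(2, 1): 1, (0, 2): 3}   # p = x1^2*x2 + 3*x2^2
print(tdeg(p))               # 3
print(leading_monomial(p))   # (2, 1)
```

One can also check the defining implication of a monomial ordering on samples: multiplying both monomials by xi preserves the comparison under grlex_key.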
Now we can give our problem formulation. Let p = p(x1, x2, . . . , xn) denote a polynomial in the variables x1, x2, . . . , xn. We would like to find the infimum of this polynomial over Rn. We know that this infimum is either finite or minus infinity, and if the infimum is attained it is in fact a (global) minimum. In that case we would like to know at least one point where this minimum is attained, and more points if possible. Consider the first-order conditions for this optimization problem:
(1)   ∂p/∂x1 = 0,   ∂p/∂x2 = 0,   . . . ,   ∂p/∂xn = 0.
Note that this forms a system of polynomial equations. If, accidentally, for each i ∈ {1, 2, . . . , n} the leading monomial of ∂p/∂xi is xi^{mi} for some positive integer mi, then the first-order conditions have a finite number of solutions in Cn, and ∂p/∂x1, . . . , ∂p/∂xn would in fact form a Gröbner basis (this follows from [2, Section 5.3, Thm. 6, Section 2.9, Thm. 3, and Prop. 4]; see also [5]). Then the system of equations can be solved by the Stetter–Möller matrix method, as will be explained below. Usually, however, the first-order equations will not have that form. Therefore we consider an associated family of polynomials, depending on the real positive parameter λ, given by

qλ(x1, . . . , xn) = p(x1, . . . , xn) + λ(x1^{2m} + x2^{2m} + · · · + xn^{2m}),

where m is a positive integer. When m is chosen such that 2m is strictly larger than the total degree of p, the leading monomial of ∂qλ/∂xi is xi^{2m−1} with respect
to any total degree ordering. It follows that ∂qλ/∂x1, . . . , ∂qλ/∂xn forms a Gröbner basis with respect to any total degree ordering and that the number of complex solutions of the corresponding system of polynomial equations is finite. In the language of algebraic geometry: the ideal I generated by the partial derivatives of qλ has a zero-dimensional variety. It is well known that in that case the quotient space C[x1, . . . , xn]/I is a finite-dimensional linear vector space. A basis for this vector space is formed by all monomials x1^{m1} x2^{m2} · · · xn^{mn} with mi a nonnegative integer (strictly) less than 2m − 1 for each i = 1, 2, . . . , n. It follows that the dimension of this linear space is N := (2m − 1)^n. Multiplication by a polynomial f modulo the ideal is a linear endomorphism of this finite-dimensional space. Using the basis just described, the endomorphism can be represented by a matrix, which we will denote by Af. Because the endomorphisms represent multiplication by f modulo the ideal, the matrices have a number of very interesting properties:
(i) For each pair of polynomials f, g ∈ C[x1, . . . , xn] the following equalities hold: Af Ag = Ag Af = Afg. It follows that these matrices commute.
(ii) A polynomial f ∈ C[x1, . . . , xn] is a polynomial from the ideal I if and only if Af = 0.
(iii) If f ∈ I is a polynomial from the ideal and Ax1, . . . , Axn represent multiplication by x1, . . . , xn respectively, then f(Ax1, . . . , Axn) = Af = 0. It follows that (Ax1, . . . , Axn) is a matrix solution of the set of polynomial equations that generates the ideal.
(iv) Suppose ζ ∈ C^N \ {0} is a common eigenvector of (Ax1, . . . , Axn) with Axi ζ = ξi ζ, i = 1, . . . , n.
Then the vector (ξ1, . . . , ξn) will be called the multi-eigenvalue corresponding to ζ. This multi-eigenvalue is in fact a zero of the ideal: for each f ∈ I we have 0 = f(Ax1, . . . , Axn)ζ = f(ξ1, . . . , ξn)ζ, which implies f(ξ1, . . . , ξn) = 0. In fact it can be shown that a converse of this property also holds: for each zero (ξ1, . . . , ξn) of the ideal there exists a common eigenvector of (Ax1, . . . , Axn) with (ξ1, . . . , ξn) as its multi-eigenvalue. (See, e.g., [9]; an independent proof is also provided in [7].) These facts constitute the core of what we will call the Stetter–Möller theory. Applying it we can construct, for each positive value of λ, the matrices Axi, i = 1, . . . , n. The multi-eigenvalues of these matrices form the solutions (over the complex field) of the critical point equations of qλ. The corresponding values of qλ, the so-called critical values, are the eigenvalues of Aqλ. This matrix will be called the critical value matrix of qλ. Analyzing the first-order equations for qλ, it is not too hard to see that the matrices Axi are in fact polynomial in 1/λ. To stress this fact we will use the notation Axi = Axi(1/λ) when we want to emphasize the dependence of Axi on λ (i = 1, . . . , n).

3. RESULTS ON THE LIMITING BEHAVIOR OF THE MINIMA AS λ ↓ 0
In this section a number of results are presented which show why it is useful to study the minimization problem of qλ, with λ approaching zero from above, when one is interested in the minimization of p. First let us note for further reference that qλ can be written as qλ(x) = p(x) + λ‖x‖_{2m}^{2m}, where ‖·‖_{2m} denotes the Minkowski norm ‖x‖_{2m} = (Σ_{i=1}^{n} x_i^{2m})^{1/(2m)} of the vector x ∈ Rn, and m is a positive integer as before. The first result about qλ is that it attains a global minimum on the set Rn for every positive value of λ. (For a proof we refer to [5].) This holds even if p does not have a global minimum on Rn. The number of points in Rn where qλ attains its global minimum is finite. This finite nonempty set will be denoted by xλ. (In fact the total number of critical points is finite for each positive value of λ, as we have argued in the previous section based on the Stetter–Möller theory.) It can be shown (see [5])
that the minimum of qλ approaches the infimum of p as λ approaches zero from above:

lim_{λ↓0} min_{x∈Rn} qλ(x) = inf_{x∈Rn} p(x).
Furthermore, when λ > 0 is small the values of p on the set xλ are close to the infimum; in fact:

lim_{λ↓0} p(xλ) = inf_{x∈Rn} p(x).
(For a proof we refer to [5].) Now consider the set

L := { x ∈ Rn | ∀ε > 0 ∃λε > 0: ∀λ ∈ (0, λε): xλ ∩ B(x, ε) ≠ ∅ }.
We will call this the limit set of xλ as λ ↓ 0. It is a finite set, possibly empty. It is nonempty if and only if p attains a global minimal value on Rn. If it is nonempty, then p attains its global minimal value at each point of L. (For a proof of these properties we refer to [5].)

4. COMPUTATION OF THE SOLUTION USING MATRIX METHODS
From the previous sections we know that we can calculate the global minimum of qλ for each positive value of λ and the finite set of points where this minimum is attained. We also know that from the behavior, as λ ↓ 0, of this global minimum and the corresponding set of minimizing points, we can find the infimum of p and a finite set of points where this infimum is attained if such points exist (in which case the infimum is a global minimum). If there are finitely many such points, then all these points can be found using this approach (see [5]). In the present section it will be explained how we perform the calculations. We will make use of matrix methods, including the solution of generalized eigenproblems and degree reduction algorithms for polynomial matrices. We will also use some algebraic function theory. Recall that the matrix Aqλ that corresponds to multiplication by qλ modulo the ideal generated by the critical point equations of qλ has a spectrum that contains all the critical values of qλ : Rn → R. Similarly, the corresponding values of xi form a subset of the spectrum of Axi, i = 1, 2, . . . , n. Each critical value of qλ together with the components x1, x2, . . . , xn of the corresponding critical point of qλ forms a multi-eigenvalue of the n + 1 commuting matrices Aqλ, Ax1, . . . , Axn. All these matrices are polynomial matrices in the variable 1/λ. For any matrix A(1/λ), polynomial in 1/λ, consider the spectrum as a function of λ:

det(A(1/λ) − zI) = 0,   λ > 0, z ∈ C.
Multiplying both sides by a sufficiently high power of λ, this becomes an equation of the form

f0(z) + λf1(z) + λ^2 f2(z) + · · · + λ^k fk(z) = 0,   with f0 ≢ 0,

where f0(z) + λf1(z) + λ^2 f2(z) + · · · + λ^k fk(z) is polynomial in the variables λ and z. This defines an algebraic multivalued function z(λ). Let ζi(λ), i = 1, . . . , N, denote the N branches of z(λ). It can be shown using results from algebraic function theory that the set of finite limits of the branches for λ ↓ 0 is precisely the set of zeros of f0(z). Our purpose is now to calculate the univariate polynomial f0(z) and its zeros. We use matrix methods as follows. Write A(1/λ) − zI as a product of two matrices,

A(1/λ) − zI = U(1/λ, z) F(1/λ, z),

where U is a matrix which is unimodular over R(z) (in other words, its determinant does not depend on 1/λ and is not identically zero in z) and F is a matrix which has the lowest possible sum of row degrees as a polynomial matrix in 1/λ over the field R(z). The algorithm that we use to obtain this factorization is analogous to the algorithm of Forney for rectangular polynomial matrices (see [4]). If we consider in each row the coefficients in R(z) corresponding to the highest degree monomials of 1/λ in that row, then we obtain the so-called high-order coefficient matrix (HOCM). The HOCM depends on z, not on 1/λ. So its determinant is a function of z. The product

det(U) det(HOCM)
is the univariate polynomial in z that we are looking for. This technique can now be applied to Aqλ and to the matrices Axi, i = 1, 2, . . . , n, to obtain the infimum of p and a finite number of points where the infimum is attained if such points exist. The method can also be applied to determine whether p has a minimum or not, and if not, whether the infimum is finite or minus infinity. For further details we refer to [5].

5. EXAMPLES
We consider here rather small examples, for a few reasons. The first is that the method we have proposed requires a number of calculations that increases rapidly with the degree of the polynomial and the number of variables. The second, and more important, reason is that in these cases we already know the minimum and the set of points where it is attained; therefore it is possible to analyze the algorithm in these specific examples. We consider the case of an infinite number of critical points to be of particular interest; in the finite case we know from the theory that the algorithm finds all the points.
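Before turning to the matrix computations, the limiting behavior described in Section 3 can already be observed numerically. The sketch below is our own illustration, not the matrix method: a crude grid search for the global minimum of qλ = p + λ(x1^6 + x2^6), with p(x1, x2) = (x1^2 + x2^2 − 1)^2, whose infimum is 0:

```python
def p(x1, x2):
    return (x1**2 + x2**2 - 1)**2

def q(lam, x1, x2):
    return p(x1, x2) + lam * (x1**6 + x2**6)

def grid_min(lam, n=201, r=2.0):
    """Crude global minimization of q_lambda by exhaustive grid
    search over [-r, r]^2 (a stand-in for exact computation)."""
    pts = [-r + 2 * r * i / (n - 1) for i in range(n)]
    return min(q(lam, a, b) for a in pts for b in pts)

mins = [grid_min(lam) for lam in (1.0, 0.1, 0.01, 0.001)]
print(mins)   # non-increasing, approaching inf p = 0 as lambda decreases
```

Since the perturbation term is nonnegative, min qλ is non-increasing in λ, and for λ = 0.001 the grid minimum is already below 0.01.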
5.1. Example 1
Let p(x1, x2) = (x1^2 + x2^2 − 1)^2.
It is easy to see that the minimum of p is zero and that it is attained at all points of the circle of radius 1 centered at the origin. First we construct the family of polynomials qλ = (x1^2 + x2^2 − 1)^2 + λ(x1^6 + x2^6). The power in the extra term was chosen to be an even number strictly larger than 4, the total degree of p. Next we construct our matrices using the Stetter–Möller method. In our experience it is advisable, for computational reasons, to work with the matrices associated with the variables rather than with the matrix associated with the polynomial qλ. Start with the matrix associated with x1. This matrix can be seen in the Appendix, together with other matrices resulting from running the algorithm. One can easily see that the total row degree of the matrix equals 7. However, it is not minimal: the highest power of 1/λ appearing in the determinant of Ax1(1/λ, z) := Ax1(1/λ) − zI is actually 5, as results from running the total row degree reduction algorithm of Forney on Ax1(1/λ, z), which returns the matrix A¯x1(1/λ, z). At this point we have also obtained the coefficient of the highest power of 1/λ in the expression det(Ax1(1/λ, z)): this is the determinant of the HOCM of A¯x1(1/λ, z). Computing the eigenvalues of the HOCM, i.e. the zeros of the determinant of the HOCM, we get

0, 0, 0, 0, 0, 1, −1, 1/√2, −1/√2, 1/√2, −1/√2.

For eigenvalues with multiplicity one we can find the values corresponding to the other variables using the eigenvectors. There are at least two methods for dealing with eigenvalues with multiplicity higher than one. The first is to use the eigenspace and apply it successively to the matrices corresponding to the remaining variables in order to find the subspaces which are invariant for all matrices Axi. The second, which we apply in our examples, is to use a so-called discriminating polynomial as in [3], that is, a polynomial, normally a linear combination of the variables, whose eigenspaces are 1-dimensional.
The way to choose such a polynomial is discussed in [3]. Using x1 + 2x2 as discriminating polynomial we obtain the set of critical points

{(0, 0), (±1/√2, ±1/√2), (±1, 0), (0, ±1)}.

Evaluating the polynomial p at these points, we conclude that the minimum is 0 and that it is attained at all the points above except (0, 0). To be completely safe, we should also check that p indeed has a minimum, although we do not carry out those calculations here. Remark that among the points where the minimum of p is attained we find points of minimal and of maximal Minkowski norm. For related theoretical results see [5].
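For this example the Stetter–Möller construction can be reproduced in a short, exact computation. The sketch below is our own encoding (λ fixed at 1/10 for illustration); it uses only the reduction x_i^5 ≡ −(1/(6λ)) ∂p/∂x_i, which follows from ∂qλ/∂x_i = ∂p/∂x_i + 6λ x_i^5 = 0, to build the 25 × 25 matrices A_{x1} and A_{x2} and to verify properties (i) and (iii) of Section 2:

```python
from fractions import Fraction as F
from itertools import product

lam = F(1, 10)                           # a fixed positive value of lambda
B = list(product(range(5), repeat=2))    # basis monomials x1^a * x2^b, 0 <= a, b <= 4

def padd(p, q, c):
    """In place: p += c*q, for polynomials stored as {(a, b): coefficient}."""
    for e, v in q.items():
        p[e] = p.get(e, F(0)) + c * v
        if p[e] == 0:
            del p[e]

# Partial derivatives of p = (x1^2 + x2^2 - 1)^2.
dp = [{(3, 0): F(4), (1, 2): F(4), (1, 0): F(-4)},
      {(0, 3): F(4), (2, 1): F(4), (0, 1): F(-4)}]

def reduce_poly(p):
    """Normal form modulo the ideal of the partial derivatives of
    q_lambda, repeatedly rewriting x_i^5 -> -(1/(6*lam)) * dp_i."""
    while True:
        bad = next((e for e in p if max(e) >= 5), None)
        if bad is None:
            return p
        i = 0 if bad[0] >= 5 else 1
        c = p.pop(bad)
        rest = (bad[0] - 5 * (i == 0), bad[1] - 5 * (i == 1))
        for e, v in dp[i].items():
            padd(p, {(e[0] + rest[0], e[1] + rest[1]): v}, -c / (6 * lam))

def mult_matrix(i):
    """Matrix of multiplication by x_i on C[x1, x2]/I in the basis B."""
    cols = []
    for a, b in B:
        r = reduce_poly({(a + (i == 0), b + (i == 1)): F(1)})
        cols.append([r.get(e, F(0)) for e in B])
    return [[cols[j][k] for j in range(25)] for k in range(25)]

def matmul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(25)) for c in range(25)]
            for r in range(25)]

def matpow(X, k):
    R = X
    for _ in range(k - 1):
        R = matmul(R, X)
    return R

A1, A2 = mult_matrix(0), mult_matrix(1)
print(matmul(A1, A2) == matmul(A2, A1))      # property (i): A1 and A2 commute

# Property (iii): dq/dx1 = 4*x1^3 + 4*x1*x2^2 - 4*x1 + 6*lam*x1^5 lies in
# the ideal, so evaluating it at (A1, A2) must give the zero matrix.
P3, P5, M = matpow(A1, 3), matpow(A1, 5), matmul(A1, matpow(A2, 2))
Z = [[4 * P3[r][c] + 4 * M[r][c] - 4 * A1[r][c] + 6 * lam * P5[r][c]
      for c in range(25)] for r in range(25)]
print(all(v == 0 for row in Z for v in row))
```

Note that the reduction reproduces entries such as −(2/3)(1/λ) seen in the appendix matrices: x1 · x1^4 ≡ −(2/(3λ))(x1^3 + x1 x2^2 − x1).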
5.2. Example 2
Let

p(x1, x2) = ((x1 − 3)^2 + 2(x2 + 2)^2 − 1)^2 ((x1 − 4)^2 + (x2 − 7)^2).
The critical set of this polynomial consists of two connected components: an ellipse and a point. We apply the same strategy, using the discriminating polynomial 10x1 + x2. By calculating the value of p at the obtained points we find that the minimal value is 0, attained at

{(4, 7), (3.9999, −2.0039), (2.0406, −1.8004)}.
Remark that we obtain (at least) one point in each connected component of the set of critical points. It is true in general that the algorithm finds at least one point in each connected component of the set of points where the minimum is attained, namely a point with minimal Minkowski norm in the component. The proof will not be given here; see [5].
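As a quick sanity check (ours, not part of the paper), one can evaluate p at the reported points, with p written in the factored form of this example; the two exact zeros correspond to the isolated component and to an exact point of the ellipse (x1 − 3)^2 + 2(x2 + 2)^2 = 1:

```python
def p(x1, x2):
    return ((x1 - 3)**2 + 2*(x2 + 2)**2 - 1)**2 * ((x1 - 4)**2 + (x2 - 7)**2)

print(p(4, 7))               # 0: the isolated critical point (4, 7)
print(p(4, -2))              # 0: an exact point on the ellipse, (4-3)^2 + 0 = 1
print(p(3.9999, -2.0039))    # close to 0: a reported point near the ellipse
print(p(2.0406, -1.8004))    # close to 0: a reported point near the ellipse
```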
APPENDIX
The matrices Aλ, associated with x1, and A¯λ(z), obtained after running the Forney algorithm on Aλ − zI, are displayed below. The matrix corresponding to the variable x1 is:
[25 × 25 sparse matrix Aλ − zI; its entries are 0, 1 and −z, together with terms in 1/λ such as −(2/3)(1/λ), (2/3)(1/λ), −(1/λ), (4/9)(1/λ^2) and −(4/9)(1/λ^2).]
Its total row degree in 1/λ is 7. By multiplying on the left with a matrix that is unimodular over Q(z), we reduce it to its minimal degree, which is 5.

[25 × 25 sparse matrix A¯λ(z); its entries are 0, 1 and −z, together with terms in 1/λ and z such as −(2/3)(1/λ), (2/3)(1/λ), 1/λ, −z/λ and −(3/2)z.]
The HOCM is

[a 25 × 25 sparse matrix over Q(z); its entries are 0, 1, −1, −z, 2/3 and −2/3.]
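The role of the HOCM computation above — extracting the coefficient of the highest power of 1/λ in det(A(1/λ) − zI), i.e. the coefficient f0(z) of λ^0 after clearing denominators — can be illustrated on a hypothetical 2 × 2 example (our own, much smaller than the 25 × 25 matrices of this appendix):

```python
def pmul(p, q):
    """Multiply polynomials in (lam, z) stored as {(i, j): coeff} for lam^i * z^j."""
    r = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            k = (i1 + i2, j1 + j2)
            r[k] = r.get(k, 0) + c1 * c2
    return {k: c for k, c in r.items() if c != 0}

# Take A(1/lam) = [[1/lam, 1], [0, 2]].  Then det(A(1/lam) - z*I) =
# (1/lam - z)*(2 - z); multiplying by lam clears the denominator:
#   lam * det(A(1/lam) - z*I) = (1 - lam*z)*(2 - z) = f0(z) + lam*f1(z).
cleared = pmul({(0, 0): 1, (1, 1): -1},   # 1 - lam*z
               {(0, 0): 2, (0, 1): -1})   # 2 - z
f0 = {j: c for (i, j), c in cleared.items() if i == 0}
print(f0)   # {0: 2, 1: -1}, i.e. f0(z) = 2 - z
# Its zero z = 2 is the finite limit of the eigenvalue branches as lam -> 0;
# the other branch z = 1/lam diverges.
```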
REFERENCES
[1] Basu S., Pollack R., Roy M.F. – A new algorithm to find a point in every cell defined by a family of polynomials, in: Caviness B.F., Johnson J.R. (Eds.), Quantifier Elimination and Cylindrical Algebraic Decomposition, 1998.
[2] Cox D., Little J., O'Shea D. – Ideals, Varieties, and Algorithms, 2nd ed., Springer, New York, 1997.
[3] Cox D., Little J., O'Shea D. – Using Algebraic Geometry, Springer-Verlag, New York, 1998.
[4] Forney G.D. – Minimal bases of rational vector spaces, with applications to multivariable linear systems, SIAM J. Control 13 (3) (1975) 493–520.
[5] Hanzon B., Jibetean D. – Global minimization of a multivariate polynomial using matrix methods, J. Global Optim. 27 (2003) 1–23.
[6] Hanzon B., Maciejowski J.M. – Constructive algebra methods for the L2 problem for stable linear systems, Automatica 32 (12) (1996) 1645–1657.
[7] Hanzon B., Maciejowski J.M., Chou C.T. – Model reduction in H2 using matrix solutions of polynomial equations, Technical Report CUED/F-INFENG/TR314, Cambridge University Engineering Department, 1998.
[8] Möller H.M. – Systems of algebraic equations solved by means of endomorphisms, in: Cohen, Mora, Moreno (Eds.), Lecture Notes in Comput. Sci., vol. 673, Springer-Verlag, New York, 1993, pp. 43–56.
[9] Möller H.M., Stetter H.J. – Multivariate polynomial equations with multiple zeros solved by matrix eigenproblems, Numer. Math. 70 (1995) 311–329.
[10] Stetter H.J. – Matrix eigenproblems are at the heart of polynomial system solving, SIGSAM Bull. 30 (1996) 22–25.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Solution of polynomial Lyapunov and Sylvester equations
Ralf Peeters and Paolo Rapisarda
Department of Mathematics, Universiteit Maastricht, P.O. Box 616, 6200 MD Maastricht, The Netherlands
E-mails: [email protected] (R. Peeters), [email protected] (P. Rapisarda)
ABSTRACT
A two-variable polynomial approach to solving the one-variable polynomial Lyapunov and Sylvester equations is proposed. Lifting the problem from the one-variable to the two-variable context gives rise to associated lifted equations which live on finite-dimensional vector spaces. This allows for the design of an iterative solution method which is inspired by the method of Faddeev for the computation of matrix resolvents. The resulting algorithms are especially suitable for applications requiring symbolic or exact computation.
Key words and phrases: Two-variable polynomial matrices, Polynomial Lyapunov equation, Polynomial Sylvester equation, Method of Faddeev, Symbolic computation

1. INTRODUCTION
In various areas of mathematical systems and control theory, Lyapunov and Sylvester equations play an important role. For instance, they occur in the computation of certain performance criteria in control (see [1,17,18]), in stability theory (see [12,22]), and in relation to statistical quantities such as state covariance matrices and Fisher information (see [13]). In their classical form, their derivation and interpretation is usually most natural within the context of linear time-invariant state-space systems (A, B, C, D), both in the continuous-time case and in the discrete-time case. In the behavioral approach to systems theory ([21,16]), advocating the use of models derived from first principles which are typically described by systems of high order differential equations, a convenient generalization of the classical
152
R. Peeters and P. Rapisarda
Lyapunov equation attains the form of a structured polynomial matrix equation in a single variable, constituting the so-called polynomial Lyapunov equation (PLE):

(1)
R(−ξ)^T X(ξ) + X(−ξ)^T R(ξ) = Z(ξ).
Here R(ξ), X(ξ) and Z(ξ) are q × q real polynomial matrices in the indeterminate ξ, with R(ξ) nonsingular (i.e., det(R(ξ)) ≢ 0) and with X(ξ) denoting the polynomial matrix to solve for. From the symmetric structure of the left-hand side of this equation it directly follows that solutions to the PLE may exist only if Z(ξ) is a so-called para-Hermitian matrix, which means that Z(ξ) = Z(−ξ)^T. In many practical situations the PLE happens to attain the special form

(2)
R(−ξ)^T X(ξ) + X(−ξ)^T R(ξ) = Q(−ξ)^T Σ Q(ξ),
where R(ξ) is nonsingular, Σ is a p × p signature matrix (i.e., a diagonal matrix with entries ±1 on its main diagonal) and Q(ξ) is a p × q real polynomial matrix which moreover has the property of being R-canonical (i.e., Q(ξ)R(ξ)^{−1} is a strictly proper rational matrix in ξ). In this case one also restricts the search for a solution to the finite-dimensional subspace of R-canonical polynomial matrices X(ξ), which can be done without affecting the solvability properties of the equation, as will be shown in Section 4. We shall refer to Eq. (2) as the 'PLE in canonical form'. As it turns out, the problem of solving a PLE of the form (1) can always be reduced to that of solving a PLE in canonical form (2); see Section 5 for details. Therefore it is natural to focus attention exclusively on Eq. (2). A new solution method for this PLE in canonical form, based on lifting the problem to a two-variable polynomial setting and exploiting an algorithm inspired by the method of Faddeev for computing matrix resolvents, has recently been developed in [15]. Here, the results of that paper are briefly reviewed and then extended to deal with a more general type of polynomial matrix equation, which we propose to call the polynomial Sylvester equation (PSE). In its general form the PSE is defined as

(3)
R1(−ξ)^T X12(ξ) + X21(−ξ)^T R2(ξ) = Z(ξ),
with R1 (ξ ) and R2 (ξ ) nonsingular real polynomial matrices in ξ of size q1 × q1 and q2 × q2 , respectively, and Z(ξ ) a real polynomial matrix of size q1 × q2 . Here the polynomial matrices X21 (ξ ) of size q2 × q1 and X12 (ξ ) of size q1 × q2 constitute the pair of unknown quantities to solve for. As in the Lyapunov case, in many practical situations the PSE attains the following special form which shall be referred to as the ‘PSE in canonical form’: (4)
R1(−ξ)^T X12(ξ) + X21(−ξ)^T R2(ξ) = Q1(−ξ)^T Σ Q2(ξ),
where Σ is a p × p signature matrix, Q1(ξ) is a p × q1 real polynomial matrix which is R1-canonical and Q2(ξ) is a p × q2 real polynomial matrix which is R2-canonical. In addition, one now may restrict the search for a solution pair (X21(ξ), X12(ξ))
to the finite-dimensional subspace of pairs of R1-canonical matrices X21(ξ) and R2-canonical matrices X12(ξ). The problem of solving a PSE of the form (3) can always be reduced to that of solving a PSE in canonical form (4); see again Section 5 for details. The definition of the PLE and PSE in canonical form is motivated primarily by their connection with the problem of computing norms and inner products of the time signals produced by linear time-invariant autonomous systems in kernel form, which is demonstrated by a worked example in Section 7. A markedly distinguishing feature of the PSE when compared to the PLE is that it requires the determination of a solution pair of polynomial matrices (X21(ξ), X12(ξ)), while the solution of the PLE consists only of a single polynomial matrix X(ξ). The solution approach towards the PSE (4) presented here is similar to the approach of [15] to the solution of the PLE (2). By lifting the problem to a two-variable polynomial setting, a new equation is introduced which is called the lifted polynomial Sylvester equation (LPSE). In contrast to the PSE, the LPSE requires the determination of only a single two-variable polynomial matrix, from which a solution pair of one-variable polynomial matrices for the PSE can then be constructed. The proposed algorithm to solve the LPSE is again inspired by the method of Faddeev. It applies to the regular case where the associated Sylvester operator is nonsingular. The algorithm is again designed to be particularly suited for exact and symbolic computation. In contrast to the available algorithms described in the literature (see, e.g., [7–9]), it does not require substantial preprocessing or the transformation of any of the matrices involved into some canonical form. This chapter is organized as follows. In Section 2 we review several concepts from the literature regarding polynomial matrices and shifts in a single variable.
In Section 3 these notions are extended to the case of two-variable polynomial matrices and we define the Sylvester operator as a two-variable shift operator on a particular finite-dimensional vector space. The development of the two-variable framework for the study of the PSE is completed in Section 4. There, the PSE is lifted to a two-variable context, giving rise to the LPSE. Next we explore the intimate relationship that exists between the PSE and the LPSE. Section 5 constitutes an intermezzo where we address details of the reduction of the PLE (1) to the PLE in canonical form (2) and of the PSE (3) to the PSE in canonical form (4). We also show how these equations relate to the classical Lyapunov and Sylvester equations for state-space systems (A, B, C, D). In Section 6 the Sylvester operator is used to formulate an iterative algorithm to compute a solution Y to the LPSE which is inspired by the method of Faddeev for computing matrix resolvents and generalizes the algorithm of [15]. From this two-variable solution matrix Y a one-variable solution pair (X21, X12) to the PSE (4) is constructed. In Section 7 the algorithm is demonstrated by a worked example. A section containing final remarks concludes the chapter. Because of space limitations no proofs are included. Most of these proofs can be obtained as generalizations of the proofs employed in the Lyapunov case addressed in [15]; they will be given elsewhere.
R. Peeters and P. Rapisarda

2. R-EQUIVALENCE AND THE ONE-VARIABLE SHIFT OPERATOR
In this section we briefly review a number of well-known results on polynomial matrices in a single variable which are important in the sequel. The concepts and notions introduced in this section are not new, although the terminology used elsewhere may differ. See also [2,3,20] and [15]. Let R be an element of R^{q×q}[ξ], the set of q × q real polynomial matrices in the indeterminate ξ. Assume that R is nonsingular, i.e., det(R) does not vanish identically. Then R induces an equivalence relation on the set of polynomial row vectors R^{1×q}[ξ] as follows.

Definition 2.1. Two polynomial vectors D1, D2 ∈ R^{1×q}[ξ] are called R-equivalent if there exists a polynomial vector P ∈ R^{1×q}[ξ] such that D1 − D2 = PR. A polynomial vector D ∈ R^{1×q}[ξ] is called R-canonical if the rational vector DR^{−1} is strictly proper.

Every 1 × q polynomial vector D admits a unique R-canonical polynomial vector D̄ which is R-equivalent to D. This R-canonical representative D̄ of the R-equivalence class of D can be computed as D̄ = SR = D − PR, where P denotes the polynomial part and S the strictly proper part of DR^{−1} = P + S. We alternatively denote D̄ by D mod R. The subset of R^{1×q}[ξ] consisting of all 1 × q R-canonical polynomial vectors is denoted by C_R^{1×q}[ξ], for which the following proposition holds.

Proposition 2.2. The space C_R^{1×q}[ξ] is a finite-dimensional vector space over R of dimension n = deg(det(R)). It can be identified with the vector space of R-equivalence classes in R^{1×q}[ξ] in a natural way.

We proceed to define the polynomial shift operator σ on C_R^{1×q}[ξ].

Definition 2.3. The (one-variable) polynomial shift operator σ : C_R^{1×q}[ξ] → C_R^{1×q}[ξ] is the linear operator defined by the action: σD(ξ) := ξD(ξ) mod R(ξ).
Proposition 2.4. The characteristic polynomial χσ(z) of the operator σ on C_R^{1×q}[ξ] is given by χσ(z) = det(R(z))/r0, where r0 denotes the leading coefficient of det(R(z)).

The definition of the shift σ can obviously be extended from R^{1×q}[ξ] to R^{k×q}[ξ] in a row-by-row manner. The concepts of R-equivalence and R-canonicity are extended likewise. The subspace of R-canonical elements of R^{k×q}[ξ] is denoted by C_R^{k×q}[ξ].
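In the scalar case q = 1 the R-canonical representative is simply the remainder after polynomial division by R, and the shift σ is represented on the basis 1, ξ, . . . , ξ^{n−1} by the matrix of multiplication by ξ modulo R. The following self-contained sketch illustrates this (it is our own illustration, not the authors' implementation; the helper name `polydivmod` is an assumption):

```python
from fractions import Fraction

# Polynomials as ascending coefficient lists, e.g. xi^2 + 3 xi + 2 -> [2, 3, 1].

def polydivmod(num, den):
    """Quotient and remainder of polynomial division."""
    num = [Fraction(c) for c in num]
    q = [Fraction(0)] * max(1, len(num) - len(den) + 1)
    while len(num) >= len(den) and any(num):
        while num and num[-1] == 0:      # strip leading (highest-degree) zeros
            num.pop()
        if len(num) < len(den):
            break
        k = len(num) - len(den)
        c = num[-1] / Fraction(den[-1])
        q[k] = c
        for i, d in enumerate(den):
            num[i + k] -= c * d
        num.pop()
    return q, num                         # remainder has degree < deg(den)

r = [2, 3, 1]             # r(xi) = xi^2 + 3 xi + 2, so n = deg det(R) = 2
D = [1, 0, 0, 1]          # D(xi) = xi^3 + 1
_, Dbar = polydivmod(D, r)    # R-canonical representative: D mod r

# The shift sigma on C_r[xi]: column j holds the coefficients of xi^{j+1} mod r.
n = len(r) - 1
S = [[Fraction(0)] * n for _ in range(n)]
for j in range(n):
    _, rem = polydivmod([0] * (j + 1) + [1], r)
    rem += [Fraction(0)] * (n - len(rem))
    for i in range(n):
        S[i][j] = rem[i]
# For this 2x2 shift matrix, z^2 - tr(S) z + det(S) equals r(z)/r0, as Proposition 2.4 asserts.
tr = S[0][0] + S[1][1]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
```

Here D mod r = 7ξ + 7, and the shift matrix has characteristic polynomial z² + 3z + 2 = r(z)/r0.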
3. TWO-VARIABLE (R1, R2)-EQUIVALENCE AND THE SYLVESTER OPERATOR
In this section we study (R1, R2)-equivalence, (R1, R2)-canonicity and shift operators on spaces of symmetric and nonsymmetric polynomial matrices in two variables. The material of this section is in part a review of notions developed in the context of quadratic differential forms, see [20]. It extends the results of [15] on symmetric two-variable polynomial matrices and the Lyapunov operator to the nonsymmetric case, thereby introducing the Sylvester operator. The vector space of q1 × q2 real polynomial matrices in the two indeterminates ζ and η is denoted by R^{q1×q2}[ζ, η]. A (square) polynomial matrix Y ∈ R^{q×q}[ζ, η] is called symmetric if Y(ζ, η) = Y(η, ζ)^T. The subspace of all symmetric polynomial matrices in R^{q×q}[ζ, η] is denoted by R_sym^{q×q}[ζ, η]. Let R1 ∈ R^{q1×q1}[ξ] and R2 ∈ R^{q2×q2}[ξ] be nonsingular. Then R1 and R2 together induce an equivalence relation on R^{q1×q2}[ζ, η] in the following way.

Definition 3.1. Two q1 × q2 polynomial matrices Y1, Y2 ∈ R^{q1×q2}[ζ, η] are called (R1, R2)-equivalent if there exist two polynomial matrices P1 ∈ R^{q1×q2}[ζ, η] and P2 ∈ R^{q2×q1}[ζ, η] such that Y1(ζ, η) − Y2(ζ, η) = R1(ζ)^T P1(ζ, η) + P2(η, ζ)^T R2(η). A polynomial matrix Y ∈ R^{q1×q2}[ζ, η] is called (R1, R2)-canonical if the rational two-variable matrix R1(ζ)^{−T} Y(ζ, η) R2(η)^{−1} is strictly proper in ζ and in η.

Every Y ∈ R^{q1×q2}[ζ, η] admits a unique (R1, R2)-canonical two-variable polynomial matrix Ȳ which is (R1, R2)-equivalent to Y. Computation of this (R1, R2)-canonical representative Ȳ of the (R1, R2)-equivalence class of Y may proceed as follows. First determine a factorization of Y of the form Y(ζ, η) = M(ζ)^T N(η). Note that this can always be achieved with M and N not necessarily square; see also [20] and [14]. Then Ȳ(ζ, η) = M̄(ζ)^T N̄(η), where M̄ = M mod R1 and N̄ = N mod R2 (in the sense of one-variable R1-equivalence and R2-equivalence, respectively).
The (R1, R2)-canonical representative Ȳ of the (R1, R2)-equivalence class of Y ∈ R^{q1×q2}[ζ, η] is alternatively denoted by Ȳ = Y mod(R1, R2). The subset of R^{q1×q2}[ζ, η] of all (R1, R2)-canonical two-variable polynomial matrices is denoted by C_{R1,R2}^{q1×q2}[ζ, η].

Proposition 3.2. The space C_{R1,R2}^{q1×q2}[ζ, η] is a finite-dimensional vector space over R of dimension n1 n2, where n1 = deg(det(R1)) and n2 = deg(det(R2)). It can be identified with the vector space of (R1, R2)-equivalence classes in R^{q1×q2}[ζ, η] in a natural way.

We proceed to define the two-variable shift operator S_{R1,R2} acting on the space C_{R1,R2}^{q1×q2}[ζ, η] of (R1, R2)-canonical two-variable polynomial matrices. This linear operator will be referred to as the Sylvester operator associated with R1 and R2 for reasons that will become clear in the next section.
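In the scalar case the (R1, R2)-canonical representative can also be computed without factorising Y: taking the polynomial remainder modulo r1 in ζ (column-wise on the coefficient matrix) and modulo r2 in η (row-wise) suffices, since the two reductions act on different variables and commute. A small sketch of this (our own illustration; coefficient matrices are indexed Y[i][j] ↔ ζ^i η^j):

```python
from fractions import Fraction

def polyrem(num, den):
    """Remainder of polynomial division (ascending coefficient lists)."""
    num = [Fraction(c) for c in num]
    while True:
        while num and num[-1] == 0:
            num.pop()
        if len(num) < len(den):
            return num + [Fraction(0)] * (len(den) - 1 - len(num))
        c = num[-1] / Fraction(den[-1])
        k = len(num) - len(den)
        for i, dcoef in enumerate(den):
            num[i + k] -= c * dcoef
        num.pop()

def canonical(Y, r1, r2):
    """(r1, r2)-canonical representative of a scalar two-variable polynomial:
    reduce mod r1(zeta) down the columns, then mod r2(eta) along the rows."""
    n1 = len(r1) - 1
    cols = len(Y[0])
    # reduce in zeta: each column of coefficients is a polynomial in zeta
    Yc = [polyrem([row[j] for row in Y], r1) for j in range(cols)]
    Y = [[Yc[j][i] for j in range(cols)] for i in range(n1)]
    # reduce in eta: each row is a polynomial in eta
    return [polyrem(row, r2) for row in Y]

r1 = [1, 0, 1]        # r1(zeta) = zeta^2 + 1
r2 = [2, 1]           # r2(eta)  = eta + 2
# Y(zeta, eta) = (zeta^2 + 1) eta^3 + zeta eta + 1
Y = [[1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]
Ybar = canonical(Y, r1, r2)   # 1 - 2 zeta, strictly proper against r1 and r2
```

The (ζ²+1)η³ part is annihilated by the ζ-reduction, and ζη reduces to −2ζ, so Ȳ(ζ, η) = 1 − 2ζ.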
Definition 3.3. The Sylvester operator S_{R1,R2} : C_{R1,R2}^{q1×q2}[ζ, η] → C_{R1,R2}^{q1×q2}[ζ, η] is defined by the action

(5) S_{R1,R2}(Y)(ζ, η) := (ζ + η)Y(ζ, η) mod(R1, R2).

Proposition 3.4. The characteristic polynomial χ_{S_{R1,R2}}(z) of the Sylvester operator S_{R1,R2} acting on C_{R1,R2}^{q1×q2}[ζ, η] is given by

(6) χ_{S_{R1,R2}}(z) = ∏_{i=1}^{n1} ∏_{j=1}^{n2} (z − (λi + µj)),

where n1 = deg(det(R1)) and n2 = deg(det(R2)), and where λ1, . . . , λn1 and µ1, . . . , µn2 denote the zeros of det(R1) and det(R2) respectively (including multiplicities).
In [15] attention has been focused exclusively on the symmetric case R_sym^{q×q}[ζ, η]. There, the concept of two-variable R-equivalence was introduced, which can be shown to coincide on this subspace with the concept of (R1, R2)-equivalence introduced above when R1 = R2 = R. Also the Lyapunov operator L_R was introduced as the two-variable shift operator on the space C_{R,sym}^{q×q}[ζ, η] of R-canonical two-variable symmetric polynomial matrices, which is readily seen to be a subspace of C_{R,R}^{q×q}[ζ, η]. On this subspace the Lyapunov operator coincides with the restriction of the Sylvester operator S_{R,R}. Note that the subspace C_{R,sym}^{q×q}[ζ, η] has dimension n(n + 1)/2 instead of n² (with n = deg(det(R))), so that the characteristic polynomial of L_R is different from the characteristic polynomial of S_{R,R} (unrestricted), as expressed by the fact that the multiplicities of its zeros are lower. Since the degrees of the characteristic polynomials of these operators determine the number of iterations to be carried out in our solution algorithms, this shows how the implicit incorporation of symmetry in the Lyapunov case leads to a more efficient algorithm than the present nonsymmetric Sylvester approach would give on such a more structured problem.

4. THE LIFTED POLYNOMIAL SYLVESTER EQUATION
In this section we complete the framework for the study of the PSE. First we lift the problem of computing a solution to the PSE in canonical form (4) from the one-variable polynomial context in which it was formulated above, to a two-variable polynomial context. To this end we now introduce the following two-variable polynomial equation associated with the matrices R1, R2, Q1, Q2 and Σ which define the PSE (4). The equation

(7) (ζ + η)Y(ζ, η) mod(R1, R2) = Q1(ζ)^T Σ Q2(η)

in the unknown (R1, R2)-canonical two-variable polynomial matrix Y ∈ C_{R1,R2}^{q1×q2}[ζ, η] is called the lifted polynomial Sylvester equation (LPSE). As in the Lyapunov case
treated in [15], solvability of the PSE is equivalent to solvability of the LPSE, as the following proposition shows.

Proposition 4.1. Let R1 ∈ R^{q1×q1}[ξ] and R2 ∈ R^{q2×q2}[ξ] both be nonsingular, let Q1 ∈ R^{p×q1}[ξ] be R1-canonical, let Q2 ∈ R^{p×q2}[ξ] be R2-canonical and let Σ be a p × p signature matrix. Then the following two statements are equivalent.
(1) There exists a solution pair (X21, X12) ∈ R^{q2×q1}[ξ] × R^{q1×q2}[ξ] for the associated PSE (4).
(2) There exists a solution Y ∈ C_{R1,R2}^{q1×q2}[ζ, η] for the associated LPSE (7).

A solution pair (X21, X12) for the PSE is called (R1, R2)-canonical if X21 is R1-canonical and X12 is R2-canonical. The next proposition characterizes the solution space of the PSE (4) as a direct sum of (R1, R2)-canonical solution pairs and the solution space to the homogeneous PSE.

Proposition 4.2. Let R1 ∈ R^{q1×q1}[ξ] and R2 ∈ R^{q2×q2}[ξ] both be nonsingular, let Q1 ∈ R^{p×q1}[ξ] be R1-canonical, let Q2 ∈ R^{p×q2}[ξ] be R2-canonical and let Σ be a p × p signature matrix. Let X_{R1,R2} ⊂ C_{R1}^{q2×q1}[ξ] × C_{R2}^{q1×q2}[ξ] be the set of all (R1, R2)-canonical solution pairs of the PSE. Then the space of all solution pairs (X21, X12) of the PSE is given by X_{R1,R2} ⊕ {(S R1, −S^∼ R2) | S ∈ R^{q2×q1}[ξ]}, where S^∼(ξ) := S(−ξ)^T.

Observe that Proposition 4.2 implies that the PSE admits a solution pair if and only if it admits an (R1, R2)-canonical solution pair. Consequently, as a corollary, the search for a solution pair of the PSE can be restricted from the infinite-dimensional space R^{q2×q1}[ξ] × R^{q1×q2}[ξ] to the space C_{R1}^{q2×q1}[ξ] × C_{R2}^{q1×q2}[ξ] of finite dimension q2 n1 + q1 n2. From an arbitrary solution pair (X21, X12) for the PSE a two-variable solution Y of the LPSE can explicitly be constructed, and vice versa. Indeed, if (X21, X12) is a solution pair for the PSE, let Ỹ be defined by Ỹ(ζ, η) := [Q1(ζ)^T Σ Q2(η) − R1(ζ)^T X12(η) − X21(ζ)^T R2(η)]/(ζ + η). Observe that Ỹ is indeed a polynomial matrix (since the numerator matrix polynomial vanishes by construction when ζ is put equal to −η), which however need not be (R1, R2)-canonical. Now let Y be defined as the (R1, R2)-canonical representative Y := Ỹ mod(R1, R2) of the (R1, R2)-equivalence class of Ỹ. It can then be verified directly that Y solves the LPSE. Conversely, and more important for our purposes, if Y is a solution to the LPSE then by definition of (R1, R2)-equivalence there exist two polynomial matrices P1 ∈ R^{q1×q2}[ζ, η] and P2 ∈ R^{q2×q1}[ζ, η] such that

(8) (ζ + η)Y(ζ, η) + R1(ζ)^T P1(ζ, η) + P2(η, ζ)^T R2(η) = Q1(ζ)^T Σ Q2(η).
A solution to the PSE is then obtained from P1 and P2 by substituting ζ = −ξ and η = ξ, yielding X21(ξ) := P2(−ξ, ξ) and X12(ξ) := P1(−ξ, ξ). This, however, is an indirect way of computing a solution pair (X21, X12) from Y, requiring determination of the two-variable polynomial matrices P1 and P2. The following proposition shows how an (R1, R2)-canonical solution pair (X21, X12) for the PSE can in fact be expressed directly in terms of a solution Y to the LPSE.

Proposition 4.3. Let Y ∈ C_{R1,R2}^{q1×q2}[ζ, η] be a solution of the LPSE. Then an (R1, R2)-canonical solution pair (X21, X12) ∈ C_{R1}^{q2×q1}[ξ] × C_{R2}^{q1×q2}[ξ] for the PSE is given by

(9) X21(ξ) := −lim_{µ→∞} µ R2(µ)^{−T} Y(ξ, µ)^T,
    X12(ξ) := −lim_{µ→∞} µ R1(µ)^{−T} Y(µ, ξ).

Moreover, for such (X21, X12) it holds that (ζ + η)Y(ζ, η) + R1(ζ)^T X12(η) + X21(ζ)^T R2(η) = Q1(ζ)^T Σ Q2(η).

Note that the last statement of this proposition makes clear that the two-variable polynomial matrices P1 and P2 required in the indirect computation of (X21, X12) from Y based on Eq. (8) can in fact be chosen to be one-variable polynomials in η and ζ, respectively. Propositions 4.1–4.3 show that to solve the PSE one can first solve the LPSE and then construct an (R1, R2)-canonical solution pair for the PSE from the solution of the LPSE. If we denote the right-hand side of the LPSE by Λ(ζ, η) := Q1(ζ)^T Σ Q2(η), then the LPSE can be written compactly as S_{R1,R2}(Y) = Λ, with S_{R1,R2} the Sylvester operator. From Proposition 3.4 a necessary and sufficient condition for the existence of a unique solution to the LPSE is immediate. It is remarkable that the same condition also characterizes the existence of a unique (R1, R2)-canonical solution pair for the PSE.

Proposition 4.4. Let R1 ∈ R^{q1×q1}[ξ] and R2 ∈ R^{q2×q2}[ξ] be nonsingular, let Q1 ∈ R^{p×q1}[ξ] be R1-canonical, let Q2 ∈ R^{p×q2}[ξ] be R2-canonical and let Σ be a p × p signature matrix. Let n1 = deg(det(R1)), n2 = deg(det(R2)) and let λ1, . . . , λn1 and µ1, . . . , µn2 be the zeros of det(R1) and det(R2), respectively. Then the following three statements are equivalent.
(1) The following condition is satisfied:

(10) λi + µj ≠ 0 for all i = 1, 2, . . . , n1; j = 1, 2, . . . , n2.

(2) The LPSE has a unique solution (which is (R1, R2)-canonical).
(3) The PSE has a unique (R1, R2)-canonical solution pair.

For obvious reasons we call condition (10) the invertibility condition for the operator S_{R1,R2}. Observe that this condition is certainly satisfied when R1 and R2
are Hurwitz, i.e., when all λi and µj are in the open left half of the complex plane. The invertibility condition is similar to well-known sufficient conditions for the existence of a solution to the classical matrix Lyapunov and Sylvester equations (see, for example, [4, Section VIII.3]).

5. REDUCTION OF THE PSE TO THE PSE IN CANONICAL FORM AND ITS RELATIONSHIP WITH THE CLASSICAL SYLVESTER EQUATION
In this section we supply additional details on two topics. First we consider the issue of the reduction of a PLE (PSE) of the general form (1) (or (3)) to a PLE (PSE) in the canonical form (2) (or (4)). Next we investigate the relationship between the PLE (PSE) and the classical Lyapunov (Sylvester) equation which emerges as a special case associated with the conventional context of state-space systems (A, B, C, D). To start with the first issue, let Z(ξ) be the right-hand side of a PLE of the form (1). If Z(ξ) ≠ Z(−ξ)^T there are no solutions to the PLE. Otherwise put p = 2q and define Q(ξ) and Σ as follows:

(11) Q(ξ) := col( (Z(ξ) + Iq)/2, (Z(ξ) − Iq)/2 ),    Σ := diag( Iq, −Iq ).

It then is straightforward to verify that the associated PLE of the form (2) is equivalent to the PLE (1). In case of a PSE of the form (3) the situation is even easier because symmetry aspects do not play a role. Here one may simply put p = q2 and define

(12) Q1(ξ) := Z(−ξ)^T,    Q2(ξ) := I_{q2},    Σ := I_{q2}.

Alternatively, depending on the dimensions q1 and q2, it may be preferable to put p = q1 and to define

(13) Q1(ξ) := I_{q1},    Q2(ξ) := Z(ξ),    Σ := I_{q1}.
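The construction (11) can be checked mechanically in the scalar case q = 1: for any Z with Z(ξ) = Z(−ξ) one gets Q(−ξ)^T Σ Q(ξ) = ((Z+1)² − (Z−1)²)/4 = Z. A small sketch of this check (our own illustration, with ad-hoc helper names):

```python
from fractions import Fraction

def polymul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyneg_arg(a):          # p(xi) -> p(-xi)
    return [c if i % 2 == 0 else -c for i, c in enumerate(a)]

def polyscale(a, s):
    return [Fraction(s) * Fraction(c) for c in a]

def polyadd(a, b):
    n = max(len(a), len(b))
    a = list(a) + [Fraction(0)] * (n - len(a))
    b = list(b) + [Fraction(0)] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

Z = [5, 0, -1]               # Z(xi) = -xi^2 + 5: even, so Z(xi) = Z(-xi)
q_plus = polyscale(polyadd(Z, [1]), Fraction(1, 2))    # (Z + Iq)/2
q_minus = polyscale(polyadd(Z, [-1]), Fraction(1, 2))  # (Z - Iq)/2
# Q(-xi)^T Sigma Q(xi) with Sigma = diag(1, -1):
lhs = polyadd(polymul(polyneg_arg(q_plus), q_plus),
              polyscale(polymul(polyneg_arg(q_minus), q_minus), -1))
```

The result `lhs` reproduces Z(ξ) exactly, confirming the equivalence of the constructed canonical PLE with the original one in this instance.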
Other solutions are obviously possible. Once a PSE in the form (4) has been obtained according to the recipe given above, the next step is to enforce R1-canonicity of Q1 and R2-canonicity of Q2. To this end, let Q̄1 be the R1-canonical representative of the R1-equivalence class of Q1, with T1 a polynomial matrix such that Q1 = Q̄1 + T1 R1. Likewise let Q̄2 be the R2-canonical representative of the R2-equivalence class of Q2, with T2 a polynomial matrix such that Q2 = Q̄2 + T2 R2. Then the right-hand side of the PSE can be expanded into a sum of four terms, yielding

(14) Q1(−ξ)^T Σ Q2(ξ) = Q̄1(−ξ)^T Σ Q̄2(ξ) + Q̄1(−ξ)^T Σ T2(ξ)R2(ξ) + R1(−ξ)^T T1(−ξ)^T Σ Q̄2(ξ) + R1(−ξ)^T T1(−ξ)^T Σ T2(ξ)R2(ξ).

Because of linearity of the PSE, individual solution pairs with respect to each term may be superimposed. The first term of the expansion above corresponds to a PSE exactly in the canonical form (4) that we are reducing to, with all the required properties. A particular solution pair for the PSE corresponding to the remaining terms is easily verified to be given by

(15) ( T2(−ξ)^T Σ [Q̄1(ξ) + T1(ξ)R1(ξ)/2], T1(−ξ)^T Σ [Q̄2(ξ) + T2(ξ)R2(ξ)/2] ).
In case of a PLE in the form (2) a similar procedure can be adopted with all the indices dropped, to obtain a PLE with R-canonicity holding for Q. For the second issue, it is natural to associate the following two linear time-invariant autonomous systems Σ1 and Σ2 with the polynomial matrices R1, Q1, R2 and Q2 in the PSE:

Σ1 := { R1(d/dt) w1 = 0,  y1 = Q1(d/dt) w1 }

and

Σ2 := { R2(d/dt) w2 = 0,  y2 = Q2(d/dt) w2 }.

It is clear that the output signals y1 and y2 remain unaffected by replacement of Q1 and Q2 by arbitrary R1-equivalent and R2-equivalent matrices, respectively. Thus the requirement of R1-canonicity of Q1 and R2-canonicity of Q2 appears naturally in such a context. In the classical situation of state-space systems (A, B, C, D), the quantities w1 and w2 serve as state vectors and the systems consist of first-order differential equations where the polynomial matrices R1 and R2 attain the special form

(16) R1(ξ) = ξ I_{q1} − A1,    R2(ξ) = ξ I_{q2} − A2.

The properties of R1-canonicity of Q1 and R2-canonicity of Q2 then amount to these matrices being constant:

(17) Q1(ξ) = C1,    Q2(ξ) = C2.

Thus, with Σ = Ip, the PSE attains the form

(18) (−ξ I_{q1} − A1^T) X12 + X21^T (ξ I_{q2} − A2) = C1^T C2.

Here, (R1, R2)-canonicity of the solution pair (X21, X12) also implies that the matrices X21 and X12 are constant. By comparing the terms that are linear in ξ it is obtained that X12 = X21^T =: X, say. Then the remaining terms yield the equation

(19) A1^T X + X A2 = −C1^T C2,

which is precisely the classical Sylvester equation for X. The Lyapunov case can be handled in an entirely analogous fashion by dropping the indices.
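The reduction to (19) can be checked on a tiny instance; the sketch below (our own illustration, not the chapter's algorithm) solves A1^T X + X A2 = −C1^T C2 for a 2 × 2 / 1 × 1 pair by plain vectorisation over the rationals. Since the eigenvalues of A1 (−1, −2) and of A2 (−3) satisfy λi + µj ≠ 0, the solution is unique:

```python
from fractions import Fraction

A1 = [[Fraction(-1), Fraction(1)], [Fraction(0), Fraction(-2)]]
a2 = Fraction(-3)                      # A2 is 1 x 1
C1 = [[Fraction(1), Fraction(0)]]      # p = 1
C2 = [[Fraction(1)]]

# A1^T X + X a2 = -C1^T C2 with X a 2 x 1 matrix: (A1^T + a2 I) x = b.
M = [[A1[j][i] + (a2 if i == j else 0) for j in range(2)] for i in range(2)]
b = [-C1[0][i] * C2[0][0] for i in range(2)]

# Solve the 2 x 2 system by Gaussian elimination (exact arithmetic).
if M[0][0] == 0:
    M[0], M[1] = M[1], M[0]
    b[0], b[1] = b[1], b[0]
f = M[1][0] / M[0][0]
M[1] = [M[1][j] - f * M[0][j] for j in range(2)]
b[1] = b[1] - f * b[0]
x2 = b[1] / M[1][1]
x1 = (b[0] - M[0][1] * x2) / M[0][0]
X = [x1, x2]

# Residual: A1^T X + X a2 + C1^T C2 should vanish entrywise.
res = [sum(A1[j][i] * X[j] for j in range(2)) + a2 * X[i] + C1[0][i] * C2[0][0]
       for i in range(2)]
```

For these data the unique solution is X = (1/4, 1/20)^T, with zero residual.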
6. A RECURSIVE ALGORITHM TO SOLVE THE PSE
In this section we present a recursive procedure to solve the PSE (4) under the assumption that the invertibility condition (10) is satisfied. It generalizes the procedure of [15] for the solution of the PLE (2). The method is conceptually and computationally transparent in the sense that the matrices R1 and R2 need not be transformed to some desired canonical representation, and that the amount of bookkeeping is kept to a minimum. The algorithm is particularly suited for computation in an exact or symbolic context. The method is inspired by the Faddeev algorithm for computing the resolvent (z In − A)^{−1} of an n × n matrix A. (See, for example, [6] and [4, Section IV.4] for a more detailed exposition.) Assume that A is invertible and let χA(z) = det(z In − A) = z^n + χ1 z^{n−1} + · · · + χ_{n−1} z + χn be the characteristic polynomial of A. Then χn = (−1)^n det(A) ≠ 0 and also χA(A) = 0 according to the well-known theorem of Cayley and Hamilton. Note that it follows that A(A^{n−1} + χ1 A^{n−2} + · · · + χ_{n−1} In) = −χn In, whence the inverse of A is given by A^{−1} = −(1/χn)(A^{n−1} + χ1 A^{n−2} + · · · + χ_{n−1} In). Observe that the unique solution x̂ = A^{−1} b to the linear system of equations Ax = b can therefore be computed by the following iterative procedure:

(20) x0 := b,
(21) xk := A x_{k−1} + χk b   (k = 1, 2, . . . , n − 1),
(22) x̂ := −(1/χn) x_{n−1}.
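The iteration (20)–(22) is straightforward to implement exactly over the rationals; the sketch below (an illustration under our own naming, not the authors' implementation) obtains the coefficients χk via the classical Faddeev–LeVerrier recursion and then runs (20)–(22):

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def charpoly(A):
    """[chi_1, ..., chi_n] of chi_A(z) = z^n + chi_1 z^{n-1} + ... + chi_n,
    by the Faddeev-LeVerrier recursion M_{k+1} = A(M_k + c_k I), c_k = -tr(M_k)/k."""
    n = len(A)
    M = [row[:] for row in A]                      # M_1 = A
    chi = []
    for k in range(1, n + 1):
        c = -sum(M[i][i] for i in range(n)) / Fraction(k)
        chi.append(c)
        if k < n:
            for i in range(n):
                M[i][i] += c
            M = mat_mul(A, M)
    return chi

def faddeev_solve(A, b):
    """Solve A x = b by the recursion (20)-(22)."""
    chi = charpoly(A)
    x = b[:]                                       # x_0 := b
    for k in range(1, len(A)):                     # x_k := A x_{k-1} + chi_k b
        Ax = mat_vec(A, x)
        x = [Ax[i] + chi[k - 1] * b[i] for i in range(len(b))]
    return [-xi / chi[-1] for xi in x]             # xhat := -(1/chi_n) x_{n-1}

A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(3)]]
b = [Fraction(1), Fraction(0)]
xhat = faddeev_solve(A, b)
```

For this A one finds χ1 = −5, χ2 = 5 and x̂ = (3/5, −1/5)^T, which indeed satisfies A x̂ = b.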
Prior knowledge of the coefficients χk of the characteristic polynomial of the matrix A is fundamental for applicability of this procedure. In case of the LPSE, we are dealing with a linear system of equations on a finite-dimensional vector space, namely S_{R1,R2}(Y) = Λ. The characteristic polynomial of the Sylvester operator S_{R1,R2} is available and described by Eq. (6). In order to come up with a procedure to solve the PSE we therefore only need to adapt the recursion (20)–(22) to the case at hand. This yields the main result of this section.

Proposition 6.1. Let R1 ∈ R^{q1×q1}[ξ] and R2 ∈ R^{q2×q2}[ξ] both be nonsingular, let Q1 ∈ R^{p×q1}[ξ] be R1-canonical, let Q2 ∈ R^{p×q2}[ξ] be R2-canonical and let Σ be a p × p signature matrix. Let n1 = deg(det(R1)), n2 = deg(det(R2)) and let λ1, . . . , λn1 and µ1, . . . , µn2 be the zeros of det(R1) and det(R2), respectively. Assume that the invertibility condition (10) holds. Let χ_{S_{R1,R2}}(z) = z^d + γ1 z^{d−1} + · · · + γ_{d−1} z + γd be the characteristic polynomial of the Sylvester operator S_{R1,R2} as given by Eq. (6), with d = n1 n2. Denote the right-hand side of the LPSE by Λ(ζ, η) := Q1(ζ)^T Σ Q2(η). Consider the recursion:

(23) Y0 := Λ,
(24) Yk := S_{R1,R2}(Y_{k−1}) + γk Λ,

for k = 1, 2, . . . , d − 1. Then the two-variable polynomial matrix

(25) Y := −(1/γd) Y_{d−1}

yields the unique solution of the LPSE. From Y the unique (R1, R2)-canonical solution pair (X21, X12) for the PSE is computed as:

(26) X21(ξ) := −lim_{µ→∞} µ R2(µ)^{−T} Y(ξ, µ)^T,
     X12(ξ) := −lim_{µ→∞} µ R1(µ)^{−T} Y(µ, ξ).
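In the scalar case (q1 = q2 = p = 1, Σ = 1) the whole procedure can be sketched compactly: a canonical two-variable polynomial is an n1 × n2 coefficient matrix, the Sylvester operator acts as Y ↦ C1 Y + Y C2^T with C1, C2 the matrices of multiplication by ξ modulo r1, r2, and the limits in (26) reduce to extracting top-degree coefficients. The following illustration (our own, with assumed helper names; the characteristic polynomial is obtained here by Faddeev–LeVerrier on the operator matrix rather than by Proposition 6.2) solves r1(−ξ)x12(ξ) + x21(−ξ)r2(ξ) = q1(−ξ)q2(ξ) for r1 = ξ² + 3ξ + 2, r2 = ξ + 3, q1 = ξ, q2 = 1 and verifies the result:

```python
from fractions import Fraction as F

r1 = [F(2), F(3), F(1)]      # r1(xi) = xi^2 + 3 xi + 2  (zeros -1, -2)
r2 = [F(3), F(1)]            # r2(xi) = xi + 3           (zero -3)
q1 = [F(0), F(1)]            # q1(xi) = xi (r1-canonical), q2(xi) = 1 (r2-canonical)
q2 = [F(1)]
n1, n2 = len(r1) - 1, len(r2) - 1
d = n1 * n2

def companion(r):
    """Matrix of multiplication by xi on R[xi] mod r."""
    n = len(r) - 1
    C = [[F(0)] * n for _ in range(n)]
    for j in range(n - 1):
        C[j + 1][j] = F(1)
    for i in range(n):
        C[i][n - 1] = -r[i] / r[n]
    return C

C1, C2 = companion(r1), companion(r2)

def S(Y):  # Sylvester operator on coefficient matrices: (zeta+eta)Y mod (r1, r2)
    return [[sum(C1[i][k] * Y[k][j] for k in range(n1))
             + sum(Y[i][k] * C2[j][k] for k in range(n2))
             for j in range(n2)] for i in range(n1)]

# d x d matrix of S (columns are images of the basis), then its char poly.
def flat(Y):
    return [Y[i][j] for i in range(n1) for j in range(n2)]
images = []
for i in range(n1):
    for j in range(n2):
        E = [[F(0)] * n2 for _ in range(n1)]
        E[i][j] = F(1)
        images.append(flat(S(E)))
A = [[images[c][r] for c in range(d)] for r in range(d)]
gamma, M = [], [row[:] for row in A]
for k in range(1, d + 1):                          # Faddeev-LeVerrier
    c = -sum(M[i][i] for i in range(d)) / F(k)
    gamma.append(c)
    if k < d:
        for i in range(d):
            M[i][i] += c
        M = [[sum(A[i][l] * M[l][j] for l in range(d)) for j in range(d)] for i in range(d)]

# Recursion (23)-(25): Y0 = Lam, Yk = S(Y_{k-1}) + gamma_k Lam, Y = -Y_{d-1}/gamma_d.
Lam = [[q1[i] * q2[j] for j in range(n2)] for i in range(n1)]
Y = [row[:] for row in Lam]
for k in range(1, d):
    SY = S(Y)
    Y = [[SY[i][j] + gamma[k - 1] * Lam[i][j] for j in range(n2)] for i in range(n1)]
Y = [[-Y[i][j] / gamma[d - 1] for j in range(n2)] for i in range(n1)]

# Extraction (26), scalar case: top coefficients over leading coefficients.
x21 = [-Y[i][n2 - 1] / r2[n2] for i in range(n1)]  # coefficients in xi, ascending
x12 = [-Y[n1 - 1][j] / r1[n1] for j in range(n2)]

# Verify the PSE: r1(-xi) x12(xi) + x21(-xi) r2(xi) = q1(-xi) q2(xi).
def polymul(a, b):
    out = [F(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out
def polyneg_arg(a):
    return [c if i % 2 == 0 else -c for i, c in enumerate(a)]
def polyadd(a, b):
    n = max(len(a), len(b))
    a = list(a) + [F(0)] * (n - len(a))
    b = list(b) + [F(0)] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

lhs = polyadd(polymul(polyneg_arg(r1), x12), polymul(polyneg_arg(x21), r2))
rhs = polymul(polyneg_arg(q1), q2)
```

Here d = 2, the operator has characteristic polynomial z² + 9z + 20 (zeros −4 and −5, i.e. the sums λi + µj), the LPSE solution is Y(ζ, η) = 1/10 − (3/20)ζ, and the extracted pair x21(ξ) = −1/10 + (3/20)ξ, x12(ξ) = 3/20 satisfies the PSE exactly.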
As stated above, knowledge of the characteristic polynomial of S_{R1,R2} is fundamental for applicability of the algorithm above. Observe that in the context of symbolic or exact computation it is not advisable to compute the characteristic polynomial of S_{R1,R2} from the zeros λi and µj of det(R1) and det(R2) as might be suggested by Eq. (6). An efficient rational algorithm to compute the coefficients of χ_{S_{R1,R2}} directly from the coefficients of the polynomials det(R1) and det(R2) can be designed using Faddeev-type recursions analogous to those of [6, Section 5]. This generalizes the corresponding algorithm of [15] for the Lyapunov case:

Proposition 6.2. Let R1 ∈ R^{q1×q1}[ξ] and R2 ∈ R^{q2×q2}[ξ] both be nonsingular. Let n1 = deg(det(R1)), n2 = deg(det(R2)) and let λ1, . . . , λn1 and µ1, . . . , µn2 be the zeros of det(R1) and det(R2), respectively. Define α(z) = z^{n1} + α1 z^{n1−1} + · · · + α_{n1−1} z + α_{n1} := ∏_{i=1}^{n1} (z − λi) and put αk := 0 for all k > n1. Likewise, define β(z) = z^{n2} + β1 z^{n2−1} + · · · + β_{n2−1} z + β_{n2} := ∏_{j=1}^{n2} (z − µj) and define βk := 0 for all k > n2. Let d = n1 n2 and consider the following four recursions that define the quantities tk, sk, uk and γk for k = 1, 2, . . . , d:

(27) tk := −( k αk + Σ_{ℓ=1}^{k−1} tℓ α_{k−ℓ} ),    with t1 := −α1,
(28) sk := −( k βk + Σ_{ℓ=1}^{k−1} sℓ β_{k−ℓ} ),    with s1 := −β1,
(29) uk := n1 sk + n2 tk + Σ_{ℓ=1}^{k−1} (k choose ℓ) tℓ s_{k−ℓ},    with u1 := −n1 β1 − n2 α1,
(30) γk := −( uk + Σ_{ℓ=1}^{k−1} uℓ γ_{k−ℓ} )/k,    with γ1 := n1 β1 + n2 α1.

Then the characteristic polynomial χ_{S_{R1,R2}}(z) of the Sylvester operator S_{R1,R2} is given by

(31) χ_{S_{R1,R2}}(z) = z^d + γ1 z^{d−1} + · · · + γ_{d−1} z + γd.
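The recursions (27)–(30) are Newton-type identities: tk and sk are the power sums of the λi and µj, uk = Σ_{i,j}(λi + µj)^k follows by the binomial theorem, and (30) rebuilds the coefficients γk from the power sums uk. A compact sketch (our own illustration; in particular the binomial coefficients in the u-recursion are part of our reading of the proposition):

```python
from fractions import Fraction
from math import comb

def power_sums(coeffs, d):
    """Newton's identity: t_k = -(k a_k + sum_{l=1}^{k-1} t_l a_{k-l}), a_k = 0 for k > n."""
    a = {i + 1: Fraction(c) for i, c in enumerate(coeffs)}
    t = {}
    for k in range(1, d + 1):
        t[k] = -(k * a.get(k, Fraction(0))
                 + sum(t[l] * a.get(k - l, Fraction(0)) for l in range(1, k)))
    return t

def sylvester_charpoly(alpha, beta):
    """Coefficients [gamma_1, ..., gamma_d] of prod_{i,j} (z - (lambda_i + mu_j))."""
    n1, n2 = len(alpha), len(beta)
    d = n1 * n2
    t = power_sums(alpha, d)   # power sums of the zeros of det R1
    s = power_sums(beta, d)    # power sums of the zeros of det R2
    # u_k = sum_{i,j} (lambda_i + mu_j)^k, via the binomial theorem
    u = {k: n1 * s[k] + n2 * t[k]
            + sum(comb(k, l) * t[l] * s[k - l] for l in range(1, k))
         for k in range(1, d + 1)}
    g = {}
    for k in range(1, d + 1):
        g[k] = -(u[k] + sum(u[l] * g[k - l] for l in range(1, k))) / k
    return [g[k] for k in range(1, d + 1)]

# det R1 = (z+1)(z+2) = z^2 + 3z + 2, det R2 = z + 3:
gamma = sylvester_charpoly([3, 2], [3])
```

With zeros −1, −2 and −3, the sums are −4 and −5, so the routine must return the coefficients of (z+4)(z+5) = z² + 9z + 20, i.e. γ = (9, 20); no zero of det(R1) or det(R2) is ever computed.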
Note that the above result shows that the exact computation of the coefficients of the characteristic polynomial of the Sylvester operator is possible even in cases where the computation of the zeros of det(R1) or of det(R2) is infeasible, such as when these depend on symbolic, unspecified parameters.

Remark 1. The algorithm (23)–(26) involves the computation of the (R1, R2)-canonical representatives of (ζ + η)Y_{k−1}(ζ, η) for k = 1, 2, . . . , d − 1. It is easy to see that if one defines the matrices Y_{1,k}(ξ) := −lim_{µ→∞} µ R1(µ)^{−T} Yk(µ, ξ) and Y_{2,k}(ξ) := −lim_{µ→∞} µ R2(µ)^{−T} Yk(ξ, µ)^T, it holds that (ζ + η)Y_{k−1}(ζ, η) mod(R1, R2) = (ζ + η)Y_{k−1}(ζ, η) + R1(ζ)^T Y_{1,k−1}(η) + Y_{2,k−1}(ζ)^T R2(η). The authors have devised a Faddeev-type recursion that enables the computation of Y_{1,k−1} and Y_{2,k−1} with polynomial operations only and which only requires division between the highest-power coefficients of certain univariate polynomials. Such implementation details will be discussed elsewhere; see also [14] for similar considerations in the Lyapunov case.

Remark 2. In many cases the matrices R1(ξ) and R2(ξ) have the property that their leading coefficient matrices are nonsingular. For example, this always happens for the scalar PSE: r1(−ξ)x12(ξ) + x21(−ξ)r2(ξ) = q1(−ξ)q2(ξ), where r1, r2, q1, q2, x12 and x21 ∈ R[ξ]. An algorithm can then be developed that takes advantage of this property. Full details will again be presented elsewhere; see also [14].

7. EXAMPLE

In this section we demonstrate the algorithm of this chapter by means of a worked example. We also present an interpretation of the PSE by addressing the case of a PSE derived from a PLE in canonical form (2) with a block-diagonal matrix R. Let the polynomial matrices R and Q be defined by

R(ξ) = [ R1(ξ)    0
          0      R2(ξ) ],    Q(ξ) = ( Q1(ξ)  Q2(ξ) ),

with R1, R2, Q1 and Q2 given by
R1(ξ) = [ 2ξ+1        1       ξ
          1           2       1
          ξ^2+2ξ+3   ξ^2−1    1 ],      Q1(ξ) = [ −ξ+1   3ξ−2    −1
                                                  ξ+1     1     ξ+1 ],

R2(ξ) = [ ξ−2    ξ^2+4
          −ξ−4     4   ],               Q2(ξ) = [ −1     ξ
                                                   2    −6 ].
Then Q is easily verified to be R-canonical, which due to the block-diagonal structure of R is equivalent to Q1 being R1-canonical and Q2 being R2-canonical. The PLE associated with R, Q and Σ = I2 is given by

R(−ξ)^T X(ξ) + X(−ξ)^T R(ξ) = Q(−ξ)^T Σ Q(ξ),
which is to be solved for the R-canonical matrix X(ξ). If X(ξ) is block-partitioned as

X(ξ) = [ X11(ξ)  X12(ξ)
         X21(ξ)  X22(ξ) ],

then the PLE gives rise to an equivalent set of three matrix equations of reduced size:

R1(−ξ)^T X11(ξ) + X11(−ξ)^T R1(ξ) = Q1(−ξ)^T Q1(ξ),
R1(−ξ)^T X12(ξ) + X21(−ξ)^T R2(ξ) = Q1(−ξ)^T Q2(ξ),
R2(−ξ)^T X22(ξ) + X22(−ξ)^T R2(ξ) = Q2(−ξ)^T Q2(ξ).
Note that the first and third equations are both PLEs while the second equation is a PSE. The R-canonicity property of X is equivalent to X11 and X21 being R1-canonical and X12 and X22 being R2-canonical. Thus, these three equations are all in canonical form. The autonomous system associated with R and Q is described by the set of equations

R(d/dt) w = 0,    y = Q(d/dt) w,

which represents a parallel connection of the two autonomous subsystems associated with R1, Q1, R2 and Q2, where w = col(w1, w2) and y = y1 + y2. The PLE (see, e.g., [1]) can be associated with a quadratic cost integral:

J = ∫_0^∞ ||y(t)||² dt.

Since ||y||² = ||y1||² + 2⟨y1, y2⟩ + ||y2||², this may be decomposed into a sum of three cost integrals involving the two individual subsystems: J = J1 + 2 J12 + J2 with

J1 = ∫_0^∞ ||y1(t)||² dt,    J12 = ∫_0^∞ y1(t)^T y2(t) dt,    J2 = ∫_0^∞ ||y2(t)||² dt.
Here the cost integrals J1 and J2 are associated with the two reduced-size PLEs while the cost integral J12 relates to the PSE. The polynomials det(R1(ξ)) and det(R2(ξ)) are easily computed as

det(R1(ξ)) = −2(ξ^4 + 3ξ^3 + 6ξ^2 + 3ξ + 2),    det(R2(ξ)) = ξ^3 + 4ξ^2 + 8ξ + 8,

having degrees 4 and 3 respectively. They are both easily verified to be Hurwitz, whence the cost integrals all converge regardless of the specific initial conditions. The algorithm of Proposition 6.2 then produces the following characteristic polynomial χ_{S_{R1,R2}}(ξ) of degree 12 for the Sylvester operator associated with the LPSE:

ξ^12 + 25ξ^11 + 305ξ^10 + 2376ξ^9 + 13066ξ^8 + 53157ξ^7 + 163553ξ^6 + 382761ξ^5 + 675150ξ^4 + 874127ξ^3 + 788370ξ^2 + 445740ξ + 120096.
Using this polynomial, the algorithm (23)–(25) produces the solution Y(ζ, η) to the LPSE in 12 iteration steps:

Y(ζ, η) = [ 365/139                (1924 + 2179η)/834
            2(109 + 151ζ)/139      (−2784 + 1336ζ + 1167η + 1090ζη)/834
            −437/417               −2(−185 + 7η)/1251 ].

According to Eq. (26) this solution Y gives rise to the following unique (R1, R2)-canonical solution pair (X21(ξ), X12(ξ)) for the PSE:

X21(ξ) = [ −2179/834    (−1167 − 1090ξ)/834     14/1251
           11/834       (141 + 722ξ)/834       −1297/1251 ],

X12(ξ) = [ −766/417     (−5032 − 6565ξ)/5004
           −140/417     (−2984 + 25ξ)/5004
           437/417      2(−185 + 7ξ)/1251 ].
8. CONCLUSIONS

In this chapter we have introduced and studied the polynomial Sylvester equation by exploring analogies with the polynomial Lyapunov equation and generalizing the results of [15]. The algorithm for solving the PSE presented here is an extension of the algorithm developed for the PLE in [15] and works directly with the polynomial matrices that constitute the PSE. No preprocessing or transformations to canonical forms are required. The amount of bookkeeping necessary to perform the computations is kept to a minimum and the procedure is straightforward to implement. Moreover, the methods employed make the algorithm especially suitable for exact and symbolic computation purposes, as has been illustrated by a worked example. An implementation of the algorithm as a Mathematica Notebook is available upon request from the authors. The application of the two-variable polynomial framework proposed in this chapter to the solution of other polynomial equations relevant for systems and control applications is currently being studied. Another issue under investigation concerns the case of singular Lyapunov and Sylvester operators.
REFERENCES

[1] Brockett R.W. – Finite Dimensional Linear Systems, John Wiley and Sons, 1970.
[2] Fuhrmann P.A. – Linear Systems and Operators in Hilbert Space, McGraw-Hill, New York, 1981.
[3] Fuhrmann P.A. – A Polynomial Approach to Linear Algebra, Springer-Verlag, Berlin, 1996.
[4] Gantmacher F.R. – The Theory of Matrices, vol. I, Chelsea Publ. Co., New York, 1959.
[5] Hanzon B. – Some new results on and applications of an algorithm of Agashe, in: Curtain R.F. (Ed.), Modelling, Robustness and Sensitivity Reduction in Control Systems, NATO ASI Series, vol. F34, Springer-Verlag, Berlin, 1987.
[6] Hanzon B., Peeters R.L.M. – A Faddeev sequence method for solving Lyapunov and Sylvester equations, Linear Algebra Appl. 241–243 (1996) 401–430.
[7] Ježek J. – New algorithm for minimal solution of linear polynomial equations, Kybernetika 18 (6) (1982) 505–516.
[8] Ježek J. – Conjugated and symmetric polynomial equations – I: Continuous time systems, Kybernetika 19 (2) (1983) 121–130.
[9] Ježek J. – Symmetric matrix polynomial equations, Kybernetika 22 (1) (1986) 19–30.
[10] Ježek J., Kučera V. – Efficient algorithm for matrix spectral factorization, Automatica 21 (6) (1985) 663–669.
[11] Kailath T. – Linear Systems, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[12] Kalman R.E. – Algebraic characterization of polynomials whose zeros lie in certain algebraic domains, Proc. Nat. Acad. Sci. 64 (1969) 818–823.
[13] Peeters R.L.M., Hanzon B. – Symbolic computation of Fisher information matrices for parametrized state-space systems, Automatica 35 (1999) 1059–1071.
[14] Peeters R.L.M., Rapisarda P. – A new algorithm to solve the polynomial Lyapunov equation, Report M 9903, Department of Mathematics, Universiteit Maastricht, 1999.
[15] Peeters R.L.M., Rapisarda P. – A two-variable approach to solve the polynomial Lyapunov equation, Systems Control Lett. 42 (2) (2001) 117–126.
[16] Polderman J.W., Willems J.C. – Introduction to Mathematical System Theory: A Behavioral Approach, Springer-Verlag, Berlin, 1997.
[17] Talbot A. – The evaluation of integrals of products of linear system responses – Part I, Quart. J. Mech. Appl. Math. 12 (4) (1959) 488–503.
[18] Talbot A. – The evaluation of integrals of products of linear system responses – Part II. Continued-fraction expansion, Quart. J. Mech. Appl. Math. 12 (4) (1959) 504–520.
[19] Trentelman H.L., Rapisarda P. – New algorithms for polynomial J-spectral factorization, Math. Control Signals Systems 12 (1999) 24–61.
[20] Trentelman H.L., Willems J.C. – On quadratic differential forms, SIAM J. Control Optim. 36 (5) (1998) 1703–1749.
[21] Willems J.C. – Paradigms and puzzles in the theory of dynamical systems, IEEE Trans. Automat. Control AC-36 (1991) 259–294.
[22] Willems J.C., Fuhrmann P.A. – Stability theory for high order equations, Linear Algebra Appl. 167 (1992) 131–149.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Constructive multidimensional systems theory with applications
H. Pillai^a, J. Wood^b and E. Rogers^b

^a Department of Electrical Engineering, Indian Institute of Technology, Bombay, India
^b Department of Electronics and Computer Science, University of Southampton, UK
ABSTRACT. In this chapter we first review the importance of the Gröbner basis as a computational tool in the behavioural approach to nD systems. In particular, we explain Oberst's solution to the Cauchy problem for discrete systems, in which a Gröbner basis is used to construct an initial solution set and generate trajectories from it. We also develop algorithms for the following problems: testing for observability and controllability, construction of the controllable part, elimination of latent variables, and the construction and realisation of transfer matrices. Finally, we show how the behavioural approach can be used to define and characterise a pole for the special case of discrete linear repetitive processes, which are a distinct subclass of 2D discrete linear systems of both systems-theoretic and applications interest. A key feature here is that a pole has an exponential trajectory interpretation which can be regarded as the 'true' generalisation of the same concept for 1D linear systems.
1. INTRODUCTION

Gröbner bases are becoming well established as a generally useful tool in multidimensional systems theory (see, for example, [3,10]). In nD behavioural theory, we find that Gröbner bases are invaluable for almost any nontrivial constructive exercise and that they also admit a natural interpretation in terms of the Cauchy problem. In this chapter we review both these applications of Gröbner bases. For a formal definition and discussion of Gröbner bases in the original context of commutative algebra, see, for example, [4,6]. The second area which this chapter covers is discrete linear repetitive processes, which are a distinct class of 2D linear systems of both systems-theoretic and applications interest (see, for example, [2,14]). The key unique feature of these processes is that information propagation in one of the two distinct directions only occurs over a finite duration; in particular,
that the pass length α (i.e. the duration of a pass of the processing tool) is finite and constant, and that the output, or pass profile, y_k(t), 0 ≤ t ≤ α (t being the independent spatial or temporal variable), produced on the k-th pass acts as a forcing function on the next pass and hence contributes to the dynamics of the new pass profile y_{k+1}(t), 0 ≤ t ≤ α, k ≥ 0. Industrial examples of repetitive processes include long-wall coal cutting and metal rolling operations [5]. Cases also exist where adopting a repetitive process setting has major advantages over alternatives – so-called algorithmic examples. This is especially true for classes of iterative learning control schemes [1] and iterative solution algorithms for nonlinear dynamic optimal control problems based on the maximum principle [18]. Repetitive processes clearly have a two-dimensional, or 2D, structure, i.e. information propagation occurs along a given pass (t direction) and from pass to pass (k direction). They are distinct from, in particular, the extensively studied 2D linear systems described by the Roesser [20] and Fornasini–Marchesini [7] state space models by the fact that information propagation along the pass only occurs over a finite and fixed interval – the pass length α. Moreover, this is an intrinsic feature of the process dynamics and not an assumption introduced for analysis purposes. The basic unique control problem for repetitive processes is that the output sequence of pass profiles can contain oscillations that increase in amplitude in the pass-to-pass direction (i.e. in the k direction in the notation used here). Early approaches to stability analysis and controller design for (linear single-input single-output (SISO)) repetitive processes and, in particular, long-wall coal cutting [5] were based on first converting the process into an infinite-length single-pass process.
This resulted, for example, in a scalar algebraic/delay system to which standard scalar inverse-Nyquist stability criteria could then be applied. In general, however, it was soon established that this approach to stability analysis and controller design would, except in a few very restrictive special cases, lead to incorrect conclusions [13]. The basic reason is that such an approach effectively neglects the finite pass length, the repeatable nature of these processes, and the effects of resetting the initial conditions before the start of each pass. To remove these difficulties, a rigorous stability theory has been developed [13,21], based on an abstract model in a Banach space setting which includes all linear dynamics constant pass length processes as special cases. This chapter continues the development of a 'mature' systems theory for the subclass of so-called discrete linear repetitive processes, which are of both theoretical and practical interest, by using recent results from the behavioural approach to nD linear systems to define and characterise the concept of a pole. The state space model of the subclass of discrete linear repetitive processes considered here has the following form over 0 ≤ t ≤ α, k ≥ 0:

(1)  x_{k+1}(t + 1) = A x_{k+1}(t) + B u_{k+1}(t) + B_0 y_k(t),
     y_{k+1}(t) = C x_{k+1}(t) + D_0 y_k(t).
Here, on pass k, x_k(t) is the n × 1 state vector, y_k(t) is the m × 1 pass profile vector, and u_k(t) is the l × 1 vector of control inputs. To complete the process description it is necessary to specify the initial conditions, termed the boundary conditions here, i.e. the state initial vector on each pass and the initial pass profile. The form considered here is

x_{k+1}(0) = d_{k+1}, k ≥ 0,
y_0(t) = y(t), 0 ≤ t ≤ α,
where the n × 1 vector d_{k+1} has constant entries and the entries in the m × 1 vector y(t) are known functions of t ∈ [0, α]. For ease of notation, we will make no further explicit reference to the boundary conditions in this chapter. The stability theory for linear constant pass length repetitive processes is based on the following abstract model of the underlying dynamics, where E_α is a suitably chosen Banach space with norm ‖·‖ and W_α is a linear subspace of E_α:

y_{k+1} = L_α y_k + b_{k+1}, k ≥ 0.
In this model y_k is the pass profile on pass k and L_α is a bounded linear operator mapping E_α into itself. The term L_α y_k represents the contribution from pass k to pass k + 1, and b_{k+1} represents known initial conditions, disturbances and control input effects. We denote this model by S. In the case of (1), we choose E_α = ℓ_2^m[0, α], i.e. the space of sequences of real m × 1 vectors of length α (corresponding to t = 1, 2, ..., α in (1)). Also write this state space model in the following equivalent form over 0 ≤ t ≤ α:

y_k(0) = C x_k(0) + D_0 y_{k-1}(0),
y_k(t) = C A^t d_k + Σ_{j=0}^{t-1} C A^{t-j-1} [B_0 y_{k-1}(j) + B u_k(j)] + D_0 y_{k-1}(t), 1 ≤ t ≤ α.
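The pass-to-pass recursion just written out is straightforward to simulate. The following sketch (plain Python, scalar case n = m = l = 1, with illustrative parameter values of our own choosing, not taken from the chapter) iterates model (1) and illustrates that pass-to-pass boundedness is governed by D_0 rather than by A:

```python
# Pass-to-pass simulation of model (1) in the scalar case n = m = l = 1.
# Parameter values are illustrative choices only.  Note that here r(A) > 1,
# yet the pass profiles converge because r(D0) < 1.

def simulate(A, B, B0, C, D0, d, u, y0, alpha, passes):
    """Iterate x_{k+1}(t+1) = A x_{k+1}(t) + B u_{k+1}(t) + B0 y_k(t),
       y_{k+1}(t) = C x_{k+1}(t) + D0 y_k(t), with x_{k+1}(0) = d
       and constant input u and initial pass profile y_0(t) = y0."""
    y_prev = [y0] * (alpha + 1)
    profiles = []
    for _ in range(passes):
        x = d
        y = []
        for t in range(alpha + 1):
            y.append(C * x + D0 * y_prev[t])
            x = A * x + B * u + B0 * y_prev[t]
        profiles.append(y)
        y_prev = y
    return profiles

profs = simulate(A=1.5, B=1.0, B0=0.1, C=1.0, D0=0.5,
                 d=0.0, u=1.0, y0=1.0, alpha=5, passes=100)
```

With these values the sequence of pass profiles settles to a limit profile; replacing D_0 by a value of modulus at least 1 typically makes the pass-to-pass sequence diverge.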
Now define the bounded linear operator L_α on E_α by

(L_α y)(t) = D_0 y(0),  t = 0,
(L_α y)(t) = D_0 y(t) + Σ_{j=0}^{t-1} C A^{t-j-1} B_0 y(j),  1 ≤ t ≤ α,
and also

b_{k+1}(t) = C d_{k+1},  t = 0,
b_{k+1}(t) = C A^t d_{k+1} + Σ_{j=0}^{t-1} C A^{t-j-1} B u_{k+1}(j),  1 ≤ t ≤ α.
Then we clearly have a special case of S . The linear repetitive process S is said to be asymptotically stable [21] if ∃ a real scalar δ > 0 such that, given any initial profile y0 and any disturbance sequence
{b_k}_{k≥1} ⊂ W_α bounded in norm (i.e. ‖b_k‖ ≤ c_1 for some real constant c_1 ≥ 0 and all k ≥ 1), the output sequence generated by the perturbed process

y_{k+1} = (L_α + γ)y_k + b_{k+1}, k ≥ 0,

is bounded in norm whenever ‖γ‖ ≤ δ. This definition is easily shown to be equivalent to the requirement that ∃ finite real scalars M_α ≥ 0 and λ_α ∈ (0, 1) such that

‖L_α^k‖ ≤ M_α λ_α^k, k ≥ 0

(where ‖·‖ is also used to denote the induced operator norm). A necessary and sufficient condition [21] for the existence of such scalars is that the spectral radius, r(L_α), of L_α satisfies r(L_α) < 1.
In the special case of processes described by (1), the necessary and sufficient condition for asymptotic stability is (for the proof again see [21]) that r(D_0) < 1. This result is counterintuitive in the sense that asymptotic stability is essentially independent of the process dynamics and, in particular, of the eigenvalues of the matrix A. This is due entirely to the fact that the pass length α is finite and of constant value for all passes. The situation changes drastically if (as below) we let α → +∞. The above analysis provides necessary and sufficient conditions for asymptotic stability but no really 'useful' information concerning transient behaviour and, in particular, about the behaviour of the output sequence of pass profiles as the process evolves from pass to pass (i.e. in the k direction). The limit profile provides a characterisation of process behaviour after a 'large number' of passes have elapsed. Suppose that the abstract model S is asymptotically stable and let {b_k}_{k≥1} be a disturbance sequence that converges strongly to a disturbance b_∞. Then the strong limit

y_∞ := lim_{k→+∞} y_k
is termed the limit profile corresponding to this disturbance sequence. Also, it can be shown [21] that y_∞ is uniquely given by

y_∞ = (I − L_α)^{-1} b_∞.
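This fixed-point characterisation is easy to check numerically in a finite-dimensional toy setting. In the sketch below (plain Python), L is an arbitrary illustrative 2 × 2 matrix with spectral radius less than 1, not one derived from a repetitive process:

```python
# Check that the strong limit of y_{k+1} = L y_k + b (constant disturbance b,
# r(L) < 1) is the unique solution of (I - L) y = b.  L and b are arbitrary
# illustrative choices with row sums < 1, which forces r(L) < 1.

L = [[0.4, 0.2],
     [0.1, 0.3]]
b = [1.0, 2.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Iterate the abstract model to (near) convergence.
y = [0.0, 0.0]
for _ in range(200):
    y = [Ly_i + b_i for Ly_i, b_i in zip(matvec(L, y), b)]

# Solve (I - L) y_inf = b directly via the 2x2 inverse.
a11, a12 = 1 - L[0][0], -L[0][1]
a21, a22 = -L[1][0], 1 - L[1][1]
det = a11 * a22 - a12 * a21
y_inf = [(a22 * b[0] - a12 * b[1]) / det,
         (-a21 * b[0] + a11 * b[1]) / det]
```

The iterated value y agrees with the direct solution y_inf to machine precision.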
Note also that the expression y_∞ = (I − L_α)^{-1} b_∞ can be obtained from the equation which describes the dynamics of S by replacing all variables by their strong limits. In the case considered here, the limit profile is described by the following result.

Proposition 1. In the case when S described by (1) is asymptotically stable, the resulting limit profile is
x_∞(t + 1) = [A + B_0 (I_m − D_0)^{-1} C] x_∞(t) + B u_∞(t),
y_∞(t) = (I_m − D_0)^{-1} C x_∞(t),
x_∞(0) = d_∞,
where d_∞ is the strong limit of {d_k}_{k≥1}. Asymptotic stability of processes described by (1) guarantees the existence of a limit profile which is described by a standard, or 1D, linear systems state space model. Hence, in effect, if the process under consideration is asymptotically stable, then its repetitive dynamics can, after a 'sufficiently large' number of passes, be replaced by those of a 1D linear time-invariant system. This result has obvious implications for the design of control schemes for these processes. Owing to the finite pass length (over which duration even an unstable 1D linear system can only produce a bounded output), asymptotic stability cannot guarantee that the resulting limit profile has 'acceptable' along-the-pass dynamics, where in this case the basic requirement is stability as a 1D linear system. As a simple example to demonstrate this fact, consider the case of A = −0.5, B = 1, B_0 = 0.5 + β, C = 1, D_0 = 0, x_{k+1}(0) = 0, k ≥ 0, where β > 0 is a real scalar. Then the resulting limit profile dynamics are described by the unstable 1D linear system

y_∞(t + 1) = β y_∞(t) + u_∞(t), 0 ≤ t ≤ α,

when β ≥ 1. The natural definition of stability along the pass for the above example is to ask that the limit profile is stable in the sense that β < 1 if we let the pass length α become infinite. This intuitively appealing idea is, however, not applicable to cases where the limit profile resulting from asymptotic stability is not described by a 1D linear systems state-space model. Consequently, stability along the pass for the general model S has been defined in terms of the rate of approach to the limit profile as the pass length α becomes infinitely large. One of several equivalent formulations of this property is that S is said to be stable along the pass if, and only if, ∃ real numbers M_∞ > 0 and λ_∞ ∈ (0, 1), independent of α, which satisfy

‖L_α^k‖ ≤ M_∞ λ_∞^k, ∀α > 0, ∀k ≥ 0.

Necessary and sufficient conditions [21] for the existence of such scalars are that

r_∞ := sup_{α>0} r(L_α) < 1
and

M_0 := sup_{α>0} sup_{|z|≥λ} ‖(zI − L_α)^{-1}‖ < +∞

for some real number λ ∈ (r_∞, 1). In the case of processes described by (1), the following [21] is a set of necessary and sufficient conditions for stability along the pass.
Theorem 1. Suppose that the pair {A, B_0} is controllable and the pair {C, A} is observable. Then S generated by (1) is stable along the pass if, and only if,
(a) r_∞ = r(D_0) < 1 and r(A) < 1, and
(b) the two-variable polynomial

ρ(z_1, z) = det [ z_1 I_n − A    −B_0
                  −C             z I_m − D_0 ]

satisfies ρ(z_1, z) ≠ 0 for |z_1| ≥ 1 and |z| ≥ 1.

3. GRÖBNER BASES AND TRAJECTORY GENERATION
The Cauchy problem for discrete systems (A = k^{N^n}) is as follows: consider the trajectories in a given behaviour B ⊆ A^q as functions from N_q to k, where N_q = ⊔_{i=1}^{q} N^n denotes the disjoint union of q copies of the lattice N^n. With this interpretation, the Cauchy problem is to find a set S ⊆ N_q such that the values w(s) for s ∈ S are completely independent and also such that these values determine the rest of the trajectory. A related problem is the construction of a scheme for generating the rest of the trajectory from the initial conditions. In this section, we describe Oberst's solution to the Cauchy problem (see [12, §5]; see also [8] for more details on the computational aspects of this theory). Finding a ('small') set S ⊆ N_q such that w|_S determines w is not that hard. It is necessary only that there should be some total ordering of the points of N_q which enables computation of each value w(t), t ∈ N_q, from w|_S together with previously evaluated values. Two further conditions on this total ordering of N_q are not necessary but are convenient:
(a) If t^{(0)} is the origin in some component of N_q and t^{(1)} is any point in the same component, then t^{(0)} ≤ t^{(1)}. In other words, computation should start from the origin and work outwards.
(b) For any t^{(1)}, t^{(2)} ∈ N_q, t^{(1)} ≤ t^{(2)} must imply t^{(1)'} ≤ t^{(2)'} for any equivalent shifts t^{(1)'}, t^{(2)'} of t^{(1)} and t^{(2)} in their respective components. In other words, the relative order of two points depends on their relative and not their absolute positions in N_q.

A difference equation v · w = 0, v ∈ k[z]^{1×q}, applied at t = 0 gives a relation between the values w(t) for certain values of t. These 'certain values' are determined by the monomials of v, i.e. the terms z_1^{a_1} ··· z_n^{a_n} e_i comprising v, where a_1, ..., a_n ∈ N and e_i is a natural basis vector, i = 1, ..., q. In this way, we have a natural correspondence between the points of N_q and the monomials of k[z]^{1×q}. Under this identification, the class of orderings on N_q satisfying conditions (a)–(b) above are precisely the well-known monomial orderings [4,6]. One example is homlex, the homogeneous lexicographic ordering, which orders the monomials
Figure 1. First attempt at identifying initial conditions, (a) masks of two difference equations, (b) the set S .
first by the highest total degree Σ a_i, then by z_1 degree, then by z_2 degree, ..., then by z_n degree, and finally by component (e_1 > e_2 > ··· > e_q). Other important orderings are the reverse lexicographic ordering, which differs subtly from homlex, and the lexicographic ordering, which omits the stage 'order by total degree' in the description of homlex. Given a monomial ordering and a set of equations, we can make a first attempt at finding an initial condition set. Set S = N_q initially. Now for each equation v ∈ k[z]^{1×q}, find the initial term in(v) of v, i.e. the term of v corresponding to the highest monomial under the given ordering. Exclude from S all points of N_q corresponding to monomials which are multiples of in(v). Repeat this process for all equations to obtain the final S. As an example, consider the following 2D equations in a single dependent variable w (hence q = 1):

w(t_1 + 2, t_2) − w(t_1, t_2) = 0,
w(t_1 + 1, t_2 + 1) − w(t_1 + 1, t_2) + w(t_1, t_2) = 0.

The solution set of these equations in A = R^{N^2} is the behaviour B given by

B = ker_A [g_1; g_2],  [g_1; g_2] := [z_1^2 − 1;  z_1 z_2 − z_1 + 1].
g1 g1 z12 − 1 . , := B = kerA z1 z2 − z1 + 1 g2 g2 The initial terms under homlex of these equations are z12 and z1 z2 respectively (since q = 1 we can drop the ‘ei ’ notation), and the region S is as shown in Fig. 1. Clearly we can generate w along the line (1, t2 ) from the values of wS using g2 , and we can then use the first equation g1 to generate the rest of w . Hence wS indeed determines the whole of w . This will be the case for any set of equations and any monomial ordering: as we go through the lattice points according to the monomial ordering, a given point t is either an initial condition, or it corresponds to a monomial which is a multiple of an initial term of some system equation. In the latter case, we can apply the equation to obtain an expression for w(t) in terms of the values of w at some points previously evaluated. This enables us to inductively compute w .
Figure 2. Second set S .
Unfortunately, this does not solve the Cauchy problem. Returning to the example, we see that w(2, 1) can be evaluated using either equation; we have both w(2, 1) = w(0, 1)
and w(2, 1) = w(2, 0) − w(1, 0) = w(0, 0) − w(1, 0).
This means that there is an equation relating our 'initial conditions':

w(1, 0) + w(0, 1) − w(0, 0) = 0
and these conditions are not free at all. By a corresponding derivation, the same relation holds when shifted, so we have identified a new system equation which we may write as g_3 w = 0, where g_3 = z_1 + z_2 − 1. The initial term of this polynomial is z_1, so we might reasonably redefine our set S to be the region given in Fig. 2. This is still not good enough. We can apply the same argument again, for we find that w(1, 1) can be computed using g_3 as w(0, 1) − w(0, 2), or using g_2 and then g_3 as −w(0, 1), leading to yet another system equation g_4 w = 0, g_4 = z_2^2 − 2z_2, and a still smaller initial condition set as shown in Fig. 3. Fortunately this process ends here; for example, two different computations of w(1, 2) now give

w(1, 2) = 2w(1, 1) = 2w(1, 0) − 2w(0, 0) = −2w(0, 1)
and w(1, 2) = w(1, 1) − w(0, 1) = w(1, 0) − w(0, 1) − w(0, 0) = −2w(0, 1).
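These hand computations can be reproduced mechanically. The sketch below (our own illustration) generates any w(t_1, t_2) from the two remaining initial conditions using the derived equations g_3 and g_4, and then uses sympy to confirm that {g_3, g_4} is exactly the reduced Gröbner basis of the ideal generated by g_1 and g_2 under the graded (homlex-type) ordering:

```python
# Trajectory generation for the worked example, using
#   g3: w(t1+1, t2) = w(t1, t2) - w(t1, t2+1)
#   g4: w(t1, t2+2) = 2 * w(t1, t2+1)
# to reduce every value to the initial conditions w(0,0) and w(0,1).
from functools import lru_cache

import sympy as sp

def make_w(w00, w01):
    @lru_cache(maxsize=None)
    def w(t1, t2):
        if t1 > 0:
            return w(t1 - 1, t2) - w(t1 - 1, t2 + 1)   # apply g3
        if t2 >= 2:
            return 2 * w(t1, t2 - 1)                   # apply g4
        return w00 if t2 == 0 else w01                 # initial conditions
    return w

w = make_w(w00=1, w01=3)
# The two computations of w(1, 2) in the text agree: both give -2 w(0, 1).
assert w(1, 2) == -2 * 3
# The original equations g1, g2 also hold on the generated trajectory:
assert w(2, 1) - w(0, 1) == 0               # g1 applied at (0, 1)
assert w(2, 2) - w(2, 1) + w(1, 1) == 0     # g2 applied at (1, 1)

# Cross-check: under the graded ordering, the reduced Groebner basis of
# the ideal (g1, g2) is exactly {g3, g4}.
z1, z2 = sp.symbols('z1 z2')
G = sp.groebner([z1**2 - 1, z1*z2 - z1 + 1], z1, z2, order='grlex')
```

The initial terms of the computed basis are z_1 and z_2^2, so the final initial condition set consists of the points (0, 0) and (0, 1), matching Fig. 3.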
In fact, at this stage we will obtain the same expression for any trajectory value in terms of the initial conditions, regardless of the computational method applied, i.e.
Figure 3. Third set S .
there are no further equations relating the initial conditions. This solves the Cauchy problem for the given example. At each stage of the computation, we have a set of system equations from which we construct the set S. Our second requirement for S to be an initial condition set is that there should be no system equation relating the values w(s), s ∈ S. For this it is sufficient, and in fact also necessary, that no system equation should have an initial term corresponding to a point of S. This leads us to a definition: let in(B^⊥) denote the k-span of all initial terms of elements of B^⊥ under the given monomial ordering. Then our condition for independence of the values w(s), s ∈ S, is that every element of B^⊥ should have an initial term corresponding to a point outside S, i.e. an initial term which is a multiple of the initial term of one of the given system equations. In conclusion, there needs to be a monomial ordering such that the initial terms of the given system equations generate in(B^⊥). This is precisely what is meant by saying that the system equations form a Gröbner basis of the module B^⊥. The process we have informally described above for generating new equations on the current set {w(s): s ∈ S} is essentially the Buchberger algorithm for constructing a Gröbner basis. The initial condition set finally constructed is the set of points in N_q corresponding to monomials which lie outside in(B^⊥), i.e. which are not multiples of initial terms of elements of the Gröbner basis. The trajectory can be computed from the initial conditions using the Gröbner basis equations, which solves the Cauchy problem. Gröbner bases have been adapted for solving the Cauchy problem over the signal space A = k^{Z^n} in [19,26,24]. An analogous tool was developed by Riquier and Janet for the study of formal power series solutions to PDEs, as explained in [16]; this work in fact predates the Gröbner basis.

4. ELIMINATION OF LATENT VARIABLES
The specification of a behaviour often involves auxiliary 'latent variables'. Thus one might obtain a behaviour from a set of equations of the form Rw = Ml, where R ∈ k[z]^{g×q} and M ∈ k[z]^{g×m} are polynomial matrices with the same number of rows. The behaviour B is given by the set formed from those w for which there exists
some l such that Rw = Ml. Such a representation of B is called a latent variable representation. Given such a representation, can one find a kernel representation for the behaviour? The answer is yes. The method described in [9], [12, Cor. 2.38], is as follows: first find all the relations ('syzygies') between the rows of the matrix M (i.e. the kernel of the left-hand action of M on polynomial vectors). This is easily done through a Gröbner basis calculation [4,6]. Collecting together a generating set for these relations as the rows of a matrix E, we obtain what is called a minimal left annihilator (MLA) E ∈ k[z]^{•×g} of the matrix M. Minimal right annihilators (MRAs) are defined by transposition, i.e. they give relations on the columns. Having obtained E, a kernel representation of B is given by the matrix ER, i.e. B = ker ER. We now propose an alternative construction. Consider the matrix [R −M] ∈ k[z]^{g×(q+m)}, where R and M are the polynomial matrices specified in the latent variable representation. We will now find a Gröbner basis for the submodule of k[z]^{1×(q+m)} generated by the rows of the matrix [R −M]. Consider any monomial ordering with the following property: it should order monomials first by component, such that components corresponding to the latent variables have higher priority, e.g. e_{q+m} > e_{q+m−1} > ··· > e_1, followed by some monomial ordering on the ring k[z]. Such an ordering has the property that if the initial term in(v) of some v ∈ k[z]^{1×(q+m)} corresponds to the component e_i, then all components of v corresponding to j > i are zero. We term such an ordering a component-priority ordering.

Lemma 1. Suppose that Rw = Ml is a latent variable representation of a behaviour B and let v_1, v_2, ..., v_r form a Gröbner basis of the rows of [R −M] under a component-priority ordering with e_{q+i} > e_j for all i ∈ {1, ..., m} and j ∈ {1, ..., q}.
Consider the elements {v_i} in the Gröbner basis whose last m components (those corresponding to the latent variables) are zero. Then by truncating the last m components of these v_i we obtain the rows of a kernel representation of B.

Proof. Consider the v's with the last m components zero, as indicated. We can think of these v's as elements of k[z]^{1×q} by truncating the last m components. Our claim is that these v's generate the submodule B^⊥. In fact, these v's form a Gröbner basis of B^⊥ with the monomial ordering that k[z]^{1×q} inherits from the monomial ordering already specified on k[z]^{1×(q+m)}. Since each v_i is in the module of equations on the system variables w, l, but in fact restricts only the variables w, it must be a system equation of B, i.e. v ∈ B^⊥. Hence all k[z]-linear combinations of the v's are in B^⊥. Conversely, by the first algorithm for elimination of latent variables, any element of B^⊥ is of the form v = xER, where E is an MLA of M and x is some polynomial vector. Hence [v 0] = y[R −M] for some polynomial vector y (necessarily a relation on the rows of M). In other words, [v 0] is in the span of the rows of [R −M]. Now, by the very nature of the monomial ordering that was chosen, every element in the module generated by the rows of [R −M] whose last m components are zero
has to be obtained as a k[z]-linear combination of the Gröbner basis elements whose last m components are zero. This completes the proof. □

The proof of Lemma 1 relies on the first algorithm for latent variable elimination, which in turn relies on the injectivity of the signal space A. This result is new, although an equivalent method was proposed by Miri and Aplevich [11], based on the technique of algebraic variable elimination (see, e.g., [6]). Several interesting results follow from the above algorithm for obtaining a kernel representation of a behaviour. For example, one can calculate the MLA of a matrix M ∈ k[z]^{g×m} by considering the latent variable representation v = Ml of a behaviour B and then finding its kernel representation. In fact, this method also yields a generating set of the syzygy module of the rows of M, since the MLA of a matrix generates the syzygy module.

5. CONTROLLABILITY AND OBSERVABILITY
Of particular importance to control theory are the controllable behaviours [15,19,22]. For our purposes, it suffices to say that these are the behaviours that have a latent variable representation of the special form w = Ml. Given any behaviour, the largest controllable subbehaviour is well-defined and is called the controllable part. Given a behaviour B, one is interested in finding whether the behaviour is controllable, i.e. in checking whether the behaviour is equal to its controllable part. Let B = ker_A R and consider the behaviour given by the latent variable representation w = Ml, where M ∈ k[z]^{q×•} is such that RM = 0. Since Rw = 0 by the definition of M, this behaviour is a subbehaviour of B. Note that every column of such an M is a relation between the columns of R. Hence if one chooses a matrix M whose columns generate all the relations between the columns of R, then the behaviour defined by the latent variable representation w = Ml has the smallest set of system equations among such subbehaviours. Equivalently, this behaviour must be the largest controllable subbehaviour of the given behaviour B, i.e. the controllable part of B. An M specified as above is a minimal right annihilator of the matrix R. Thus the controllable part of a given behaviour B = ker_A R can be found by first finding an MRA M of R and then finding an MLA of M. This is already known (see, e.g., [8], [12, Thm. 2.24], [25]; see also the 'torsion-free test' given in [17, Section 4.1]). Thus we can find the controllable part of a given behaviour using the technique for computing MLAs/MRAs described at the end of the last section. The steps are as follows: first find a Gröbner basis for the matrix [I R^T] with the appropriate monomial ordering, then choose the relevant Gröbner basis elements and truncate them appropriately to obtain M^T.
Now find a Gröbner basis for [I M] and then again choose the relevant Gröbner basis elements to obtain a kernel representation of the controllable part. We call this matrix R_c. Thus B_c = ker_A R_c is the largest controllable subbehaviour of B.
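For a minimal concrete instance of these steps, take one equation in two variables, B = ker_A R with R = [r_1 r_2] ∈ k[z_1, z_2]^{1×2}. For a single row (or column) of two polynomials, both annihilator computations reduce to the two-polynomial syzygy (p_1, p_2) ↦ (p_2/g, −p_1/g) with g = gcd(p_1, p_2), and the recipe returns R_c = R/gcd(r_1, r_2). The sketch below (sympy, with our own illustrative choice of R) uses this shortcut in place of the general Gröbner basis computation, which is only needed for larger matrices:

```python
# Controllable part of B = ker_A R for R = [r1, r2] in k[z1, z2]^{1 x 2}.
# MRA step: M generates the relations between the columns of R.
# MLA step: a kernel representation of the image of M; here it comes out
# as R divided by gcd(r1, r2) (up to sign).
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
r1, r2 = z1 * (z1 + z2), z2 * (z1 + z2)   # common factor => not controllable

g = sp.gcd(r1, r2)
M = sp.Matrix([[sp.cancel(r2 / g)],
               [sp.cancel(-r1 / g)]])     # an MRA of R: R * M = 0
Rc = sp.Matrix([[sp.cancel(r1 / g), sp.cancel(r2 / g)]])  # MLA of M, up to sign

R = sp.Matrix([[r1, r2]])
controllable = g.is_constant()            # B = Bc iff the gcd is a unit
```

Here R_c = [z_1 z_2], so the common factor z_1 + z_2 is removed and the behaviour is not controllable.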
The given behaviour B is controllable if, and only if, it is equal to the largest controllable subbehaviour, i.e. B = B_c. Hence the above algorithm can be used to check if a given behaviour is controllable [12, Alg. 2.25], [25, Lemma 3]. We first find the matrix R_c by the algorithm mentioned above. Now we check for membership of each row of R_c in the module generated by the rows of R; this test is equivalent to the condition B ⊆ B_c [12, Thm. 2.61]. Testing for submodule membership can again be done using a Gröbner basis for R (e.g., [6, Section 15.10.1]). Given a behaviour B = ker_A R, one can subdivide the components of the elements w ∈ B into w = (w_1, w_2). The variables w_2 are said to be observable from w_1 if, given two elements (w_1, w_2) and (w_1', w_2') in B, w_1 = w_1' implies that w_2 = w_2'. This is equivalent to saying that there exists an operator F such that w_2 = F w_1. We will now outline an algorithm to check observability. For this algorithm we need a reduced Gröbner basis, that is, a Gröbner basis in which no initial term of a Gröbner basis element divides any other term of any other element. We generally assume that the coefficient of the initial term of each reduced Gröbner basis element is 1. Let B = ker_A R and a division of the components of B into w_1 and w_2 be given. We first find the reduced Gröbner basis of R using a component-priority monomial ordering which gives highest priority to the components corresponding to the w_2 variables. Let [G_1, G_2] be the matrix obtained from the reduced Gröbner basis elements, with
G_1 acting on the variables w_1 and G_2 acting on w_2. Check whether G_2 is of the form [I; 0] (or some permutation of the rows of this matrix). If so, then w_2 is observable from w_1. In fact, if we pick the part of G_1 corresponding to the Gröbner basis elements that form the identity part of G_2, then the matrix F so obtained gives us w_2 = −F w_1. It is not hard to see how the above algorithm works. If w_2 is observable from w_1, then the module B^⊥ contains equations of the form w_i = Σ_j p_j w̄_j, where the w_i are components of w_2 and the w̄_j are components of w_1. The p_j belong to k[z]. By the very nature of the monomial ordering we have used, the equations mentioned above have to be in the reduced Gröbner basis.

6. CONSTRUCTION AND REALISATION OF TRANSFER MATRICES
The variables of a behaviour B are often considered to be divided into inputs and outputs. It is convenient and customary to assume that the inputs are free, i.e. that they can independently take on any value in the signal space, and that the outputs contain no such free variables once the inputs are fixed. Once such a partition of variables is decided upon, the behaviour can be represented by the equation

Qu = Py,  B = ker_A [−Q  P],

and the properties above on the freedom of the variables translate into the condition

(2)  rank [−Q  P] = rank P = p,
where p is the number of outputs, i.e. the number of columns of P. The transfer (function) matrix is then the unique rational function matrix G satisfying PG = Q [12, Thm. 2.69]. In the case when P is square, G = P^{-1}Q trivially. However, for an nD system (even a controllable system for n ≥ 3), this is often not the case. The transfer matrix may, however, be computed as detailed next. First note that by (2) the columns of P form a maximal linearly independent subset of the columns of [−Q P]. Hence each column of Q is linearly dependent on the columns of P, i.e. there is a nonsingular diagonal polynomial matrix D and a polynomial matrix X with QD = PX. The transfer matrix is now simply XD^{-1}, and symbolic computation of this matrix is trivial given D and X. Each column of Q can be dealt with separately. Suppose that v is a column of Q; then we are looking for the relations on the columns of the augmented matrix [P −v]. Of course, these can be computed as syzygies via a Gröbner basis of the columns of this matrix. Since P has full column rank, all syzygies will have a nonzero entry in the last position. If [g; γ] is such a syzygy, with g ∈ k[z]^p and γ ∈ k[z], then g/γ is the required column of the transfer matrix. By arguing that the syzygy matrix must have rank 1, we can confirm that the same transfer matrix column is arrived at regardless of the syzygy chosen, so the transfer matrix is indeed unique. Many behaviours have the same transfer matrix; however, there is a unique behaviour which is minimal in this set, and it is in fact the controllable part of any behaviour in this class [22, Lemma IV.14]. This controllable behaviour is therefore called the minimal realisation of G, and computing it is immediate from the construction of the controllable part [12, Cor. 2.29]. Given G, find a common denominator d of its entries to get the factorisation G = (dI)^{-1}Q for some polynomial matrix Q.
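As a small symbolic illustration of the transfer matrix as the unique rational solution of PG = Q (our own example; here we exploit the full column rank of P and do linear algebra over the rational function field rather than the syzygy computation described above):

```python
# Transfer matrix G as the unique rational solution of P G = Q, for a
# non-square P of full column rank (two equations, p = 1 output).
# P and Q are illustrative choices.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
P = sp.Matrix([[z1], [z2]])
Q = sp.Matrix([[z1 * (z1 + z2)],
               [z2 * (z1 + z2)]])

# Since P has full column rank over the rational function field,
# (P^T P)^{-1} P^T is a left inverse of P, and G = (P^T P)^{-1} P^T Q.
G = ((P.T * P).inv() * P.T * Q).applyfunc(sp.cancel)
```

For this choice the unique solution is G = z_1 + z_2, and one checks directly that PG = Q.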
Now use the algorithm of the last section to compute the controllable part of the behaviour ker_A [−Q  dI], which has transfer matrix G. Alternatively, a kernel representation can in fact be derived directly by computing a minimal left annihilator of [dI; Q], again using syzygies.

7. POLES OF DISCRETE LINEAR REPETITIVE PROCESSES
It has long remained an open question in 2D (or, more generally, nD) linear systems theory as to what (if anything) is meant by a pole. In particular, the question asked is: does there exist a definition of a pole for these systems which is the natural generalisation of the 'exponential' trajectory interpretation of a pole for 1D linear systems? Next we establish that the answer to this question is yes for the discrete linear repetitive processes considered in this chapter. Since the state in pass 0 plays no role in what follows, it is convenient to relabel the state trajectories in (1) using xk+1(p) → xk(p) (keeping of course the same interpretation). The pole concept is defined in the behavioural approach and proceeds as follows (for background on the behavioural approach to the poles of a 1D linear system see, for example, the relevant cited references in [23]).
Constructive multidimensional systems theory with applications
181
The behaviour Bx,u,y of the discrete linear repetitive process under consideration (see (1)) is given by the kernel representation

    [ z1 In − A    −zB       −B0    ]
    [    −C         0     zIm − D0  ] col(x, u, y) = 0,

where now z1 denotes the shift operator along the pass, applied, e.g., to xk(p) as follows: (z1 xk)(p) := xk(p + 1)
and z the passtopass shift, applied, e.g., to yk (p) as follows: (zyk )(p) := yk+1 (p).
The components of the solutions of the system can be considered as functions from N2 to R, though for purposes of interpretation they are cut off in one dimension at the pass length α. The poles of the system/repetitive process are defined as the characteristic points of the zero-input behaviour Bx,0,y, i.e. the set of all trajectories which can arise when the input vanishes. The zero-input behaviour is given to within trivial isomorphism by

(3)    [ z1 In − A     −B0    ]
       [    −C      zIm − D0  ] col(x, y) = 0.

Applying Theorem/Definition 4.4 from [23], we can define the poles as follows.

Definition 1. The poles of the linear repetitive process (1) are the points in 2D complex space where the matrix on the left-hand side of (3) fails to have full rank; i.e. they are given by the set

(4)    V(Bx,0,y) = {(a1, a) ∈ C2 | p(a1, a) = 0},

where
p(z1, z) = det [ z1 In − A     −B0    ]
               [    −C      zIm − D0  ].
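For a concrete feel for this determinant, the pole polynomial of a toy scalar process (n = m = 1; the numerical values below are invented for illustration) can be computed directly, e.g. with sympy:

```python
import sympy as sp

z1, z = sp.symbols('z1 z')

# Toy repetitive process with n = m = 1; numerical values are illustrative only.
A, B0, C, D0 = sp.Rational(1, 2), sp.Rational(1, 4), sp.Integer(1), sp.Rational(3, 10)

M = sp.Matrix([[z1 - A, -B0],
               [-C, z - D0]])
p = sp.expand(M.det())   # pole polynomial p(z1, z) = (z1 - A)(z - D0) - C*B0
```

For these values, (a1, a) = (1, 4/5) is a real zero of p, so it is a pole of the toy process.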
The set V is called the pole variety of the system. Also it can be shown (see [21]) that stability along the pass requires that the characteristic variety (4) of the zero-input behaviour lies in the closed unit polydisc

(5)    P = {(a1, a) ∈ C2 | |a1| ≤ 1, |a| ≤ 1}.

Since in this case the pole variety is given by the vanishing of a single 2D nonunit polynomial, it is guaranteed to be a one-dimensional geometric set in 2D complex
182
H. Pillai et al.
space, i.e. a curve. In particular, the pole variety cannot be zero-dimensional (i.e. finite). This corresponds to the fact that proper principal ideals in the ring C[z1, z] have codimension 1. Note also that the pole variety is a complex variety, even though the entries of the matrices A, B0, C and D0 are generally assumed to be real. This is essential in order to capture the full exponential-type dynamics of the process. Poles can be interpreted in terms of exponential trajectories [23], which in the case of discrete linear repetitive processes have a clear physical interpretation. Assume therefore that (a1, a) ∈ C2 is a zero of p(z1, z), and write it in the form a1 = r1 e^{iθ1}, a = r e^{iθ} (with θ1 = 0 for a1 = 0 and θ = 0 for a = 0). The existence of such a zero guarantees [23] the existence of an 'exponential trajectory' in the system having the form

(6)    xk(p) = x00^1 r1^p r^k cos(θ1 p + θk) + x00^2 r1^p r^k sin(θ1 p + θk),
(7)    yk(p) = y00^1 r1^p r^k cos(θ1 p + θk) + y00^2 r1^p r^k sin(θ1 p + θk),
(8)    uk(p) = 0,
where x00^1, x00^2 ∈ Rn, y00^1, y00^2 ∈ Rm, and at least one of these four is nonzero. This form of exponential trajectory has been characterised algebraically (see the relevant references in [23]). Conversely, the existence of such a trajectory implies that p(r1 e^{iθ1}, r e^{iθ}) = 0, i.e. the 'frequency' (r1 e^{iθ1}, r e^{iθ}) is a pole of the repetitive process. In the case where (a1, a) ∈ R2, it is straightforward to construct such trajectories from the zeroes. Take a1 and a to be real numbers satisfying p(a1, a) = 0. There must then exist a nonzero vector (x00, y00) ∈ R^{n+m} satisfying

    [ a1 In − A     −B0    ]
    [    −C      aIm − D0  ] col(x00, y00) = 0.

Now extend (x00, y00) to a system trajectory by

(9)     xk(p) = x00 a1^p a^k,
(10)    yk(p) = y00 a1^p a^k,
(11)    uk(p) = 0.
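This construction is easy to verify numerically. The sketch below (pure Python, with hand-picked illustrative values for the scalar case n = m = 1) checks that a real zero of p and a corresponding nullvector do produce a trajectory satisfying the relabelled process equations xk(p + 1) = A xk(p) + B0 yk(p) and yk+1(p) = C xk(p) + D0 yk(p):

```python
from fractions import Fraction as F

# Illustrative scalar process (n = m = 1); all values chosen by hand.
A, B0, C, D0 = F(1, 2), F(1, 4), F(1), F(3, 10)

a1, a = F(1), F(4, 5)      # real zero of p: (a1 - A)(a - D0) - C*B0 = 0
assert (a1 - A)*(a - D0) - C*B0 == 0

x00, y00 = F(1), F(2)      # nullvector of the matrix evaluated at (a1, a)
assert (a1 - A)*x00 - B0*y00 == 0 and -C*x00 + (a - D0)*y00 == 0

x = lambda k, p: x00 * a1**p * a**k    # trajectory (9)
y = lambda k, p: y00 * a1**p * a**k    # trajectory (10), input u identically 0

# the trajectory satisfies the process equations for all k, p checked
for k in range(4):
    for p in range(4):
        assert x(k, p + 1) == A*x(k, p) + B0*y(k, p)
        assert y(k + 1, p) == C*x(k, p) + D0*y(k, p)
```

Since |a| = 4/5 < 1 here, this particular trajectory decays from pass to pass.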
A routine computation now shows that (9)–(11) indeed describes a solution of the system. Returning to the general case (6)–(8), we see that if |a| = r > 1 then we have a nonzero exponential (or sinusoidal) state-output trajectory in the system, which tends towards infinity as the pass number increases (but may remain stable along any given pass). Conversely, if |a| = r ≤ 1 for all poles (a1, a), then no trajectory tends to infinity for a given value of p as the pass number increases, but there may be trajectories tending to infinity along the pass. Thus here we have the distinction between asymptotic stability and stability along the pass in that, in order to avoid having trajectories of the form (6)–(8) which are unstable either along the pass
or in the pass-to-pass direction, we also need to avoid poles (a1, a) with |a1| > 1. In other words, we need that the characteristic variety (4) of the zero-input behaviour lies in the closed unit polydisc P defined by (5). Equivalently, with zero input there should be no exponential/sinusoidal state-output trajectories which tend to infinity either in the pass-to-pass direction or along the pass. Overall, the trajectory-based interpretation of the poles of a discrete linear repetitive process provides, as detailed above, 'physical' insight into what dynamics a process with one or both of the stability properties will exhibit.

REFERENCES

[1] Amann N., Owens D.H., Rogers E. – Predictive optimal iterative learning control, Internat. J. Control 69 (2) (1998) 203–226.
[2] Benton S.E. – Analysis and Control of Linear Repetitive Processes, Ph.D. thesis, University of Southampton, UK, 2000.
[3] Bose N.K., Charoenlarpnopparut C. – Gröbner bases in nD FIR filter bank design: A synopsis, in: Galkowski K., Wood J. (Eds.), Recent Developments in nD Systems, Taylor and Francis, 2000, Chap. 10.
[4] Buchberger B. – Gröbner bases: An algorithmic method in polynomial ideal theory, in: Bose N.K. (Ed.), Multidimensional Systems Theory, D. Reidel, 1985, pp. 184–232.
[5] Edwards J.B. – Stability problems in the control of multipass processes, Proc. Inst. Electr. Eng. 121 (1974) 1425–1431.
[6] Eisenbud D. – Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts in Math., vol. 150, Springer-Verlag, 1995.
[7] Fornasini E., Marchesini G. – Doubly indexed dynamical systems: State space models and structural properties, Math. Systems Theory 12 (1978) 59–72.
[8] Kleon S., Oberst U. – Transfer operators and state spaces for discrete multidimensional linear systems, Acta Appl. Math. 57 (1) (1999) 1–82.
[9] Komornik J.P., Rocha P., Willems J.C. – Closed subspaces, polynomial operators in the shift, and ARMA representations, Appl. Math. Lett. 4 (3) (1991) 15–19.
[10] Lin Z.P. – Output feedback stabilizability and output stabilization of linear nD systems, in: Galkowski K., Wood J. (Eds.), Recent Developments in nD Systems, Taylor and Francis, 2000, Chap. 4.
[11] Miri S.A., Aplevich J.D. – Relationship between representations of n-dimensional linear discrete-time systems, in: Proceedings of 36th IEEE Conference on Decision and Control, 1997, pp. 1457–1462.
[12] Oberst U. – Multidimensional constant linear systems, Acta Appl. Math. 20 (1990) 1–175.
[13] Owens D.H. – Stability of linear multipass processes, Proc. Inst. Electr. Eng. 124 (1977) 1079–1082.
[14] Owens D.H., Amann N., Rogers E., French M. – Analysis of linear iterative learning control schemes – A 2D systems/repetitive processes approach, Multidimens. Systems Signal Process. 11 (2000) 125–177.
[15] Pillai H., Shankar S. – A behavioral approach to the control of distributed systems, SIAM J. Control Optim. 37 (2) (1999) 388–408.
[16] Pommaret J.F. – Partial Differential Equations and Group Theory: New Perspectives for Applications, Math. Appl., vol. 293, Kluwer, Dordrecht, 1994.
[17] Pommaret J.F., Quadrat A. – Generalized Bezout identity, Appl. Algebra Engrg. Comm. Comput. 9 (2) (1998) 91–116.
[18] Roberts P.D. – Two-dimensional analysis of an iterative nonlinear optimal control algorithm, IEEE Trans. Circuits Systems I Fund. Theory Appl. 49 (6) (2000) 872–878.
[19] Rocha P. – Structure and Representation of 2D Systems, Ph.D. thesis, University of Groningen, The Netherlands, 1990.
[20] Roesser R.P. – A discrete state space model for linear image processing, IEEE Trans. Automat. Control AC-20 (1) (1975) 1–10.
[21] Rogers E., Owens D.H. – Stability Analysis for Linear Repetitive Processes, Lecture Notes in Control and Inform. Sci., vol. 175, Springer-Verlag, Berlin, 1992.
[22] Wood J., Rogers E., Owens D.H. – Controllable and autonomous nD linear systems, Multidimens. Syst. Signal Process. 10 (1) (1999) 33–69.
[23] Wood J., Oberst U., Rogers E., Owens D.H. – A behavioural approach to the pole structure of one-dimensional and multidimensional linear systems, SIAM J. Control Optim. 38 (2) (2000) 627–661.
[24] Zampieri S. – A solution of the Cauchy problem for multidimensional discrete linear shift-invariant systems, Linear Algebra Appl. 202 (1994) 143–162.
[25] Zerz E. – Primeness of multivariate polynomial matrices, Systems Control Lett. 29 (3) (1996) 139–146.
[26] Zerz E., Oberst U. – The canonical Cauchy problem for linear systems of partial difference equations with constant coefficients over the complete r-dimensional lattice Zr, Acta Appl. Math. 31 (3) (1993) 249–273.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
A test for state/drivingvariable representability of 2D behaviors
Isabel Brás and Paula Rocha Department of Mathematics, University of Aveiro, Campo de Santiago, 3810193 Aveiro, Portugal Emails:
[email protected] (I. Brás),
[email protected] (P. Rocha)
ABSTRACT

We consider discrete 2D behaviors with kernel representation and investigate the question of representability by means of a 2D state/driving-variable (SDV) model. It turns out that a 2D behavior B is SDV-representable if and only if it has a full row rank kernel representation and, additionally, the gcd's of the maximal order minors of any full row rank kernel representation of B are unimodularly equivalent to a 2D Laurent-polynomial with 2D-proper inverse.
Key words and phrases: Kernel behavior, 2D first-order representation, 2D-properness, State/driving-variable model, 2D Laurent-polynomial

1. INTRODUCTION

In this chapter we consider discrete 2D behaviors that can be described by a kernel representation of the form R(σ1, σ2, σ1−1, σ2−1)w = 0, where R(s1, s2, s1−1, s2−1) is a 2D Laurent-polynomial shift operator and the system variable w is a vector-valued signal whose components are not divided into inputs and outputs. For short, we will refer to such behaviors as kernel behaviors. The question that we investigate is the existence of an alternative first-order description of the form

(1)    σ1 x = A(σ)x + B(σ)v,
       w = Cx + Dv,

where σ = σ1 σ2−1, A(s) = A1 s + A0, B(s) = B1 s + B0 and A0, A1, B0, B1, C and D are real matrices of suitable dimensions. Eqs. (1) are said to be a
186
I. Brás and P. Rocha
state/driving-variable (SDV) representation of B = ker R if B = {w : Z2 → Rq | ∃x, v such that (1) holds}. In this case we say that B is SDV-representable. From the formal point of view, SDV models are very similar to the well-known Fornasini–Marchesini models [3], but with the essential difference that here the role of the input is taken by the driving variable, while the system variable w (which contains both inputs and outputs) plays the role of an output. Here we start from a previous characterization of the SDV-representable kernel behaviors given in [2], according to which SDV-representability is equivalent to the existence of a kernel representation that can be factored as a product of a left-prime 2D L-polynomial matrix by a square 2D L-polynomial matrix with 2D-proper inverse. It turns out that this condition is not very easy to check. However, it can be taken as a starting point for an alternative test for state/driving-variable representability. Indeed, we will show that a 2D behavior B is SDV-representable if and only if it has a full row rank kernel representation and, additionally, the gcd's of the maximal order minors of any full row rank kernel representation of B are unimodularly equivalent to a 2D Laurent-polynomial with 2D-proper inverse. These alternative conditions are easier to check.

2. PRELIMINARIES

In the following we consider discrete 2D systems Σ = (Z2, Rq, B) that admit a kernel representation, i.e. for which there exists a 2D L-polynomial matrix

R(s1, s2, s1−1, s2−1) ∈ Rg×q[s1, s2, s1−1, s2−1],
such that

B = {w : Z2 → Rq | R(σ1, σ2, σ1−1, σ2−1)w = 0} = ker R(σ1, σ2, σ1−1, σ2−1),
where σ1, σ2 are the usual 2D shifts (i.e. σ1 w(i, j) = w(i + 1, j) and σ2 w(i, j) = w(i, j + 1) for w : Z2 → Rq). The equation

(2)    R(σ1, σ2, σ1−1, σ2−1)w = 0

is called a kernel representation of Σ (and B) and the matrix R(σ1, σ2, σ1−1, σ2−1) is simply said to be a representation of Σ (and B). We denote the set of all 2D systems Σ = (Z2, Rq, B) with kernel representation by Lq.

Definition 1. Let Σ = (Z2, Rq, B) ∈ Lq. The system of equations
(3)    σ1 x = A(σ)x + B(σ)v,
       w = Cx + Dv,
where σ := σ1 σ2−1, σ1, σ2 are the usual 2D shifts, A(s) = A1 s + A0 ∈ Rn×n[s], B(s) = B1 s + B0 ∈ Rn×m[s], C ∈ Rq×n, and D ∈ Rq×m, is called a state/driving-variable (SDV) representation of Σ (of B) if B = {w : Z2 → Rq | ∃x, v such that (3) holds}.
In this case B is said to be the manifest behavior associated with (3) and B is said to be SDV-representable. The question of the existence of such a representation has already been studied in [2]. Let us recall some basic facts already shown in that previous work which are here our starting point. It is a well-known fact that every 2D behavior is decomposable as a sum of its controllable part, B^c, with an autonomous part. Taking this into account, together with the fact that controllable behaviors are always SDV-representable [5], it is possible to conclude the following.

Proposition 1 ([2]). Let B be a 2D behavior with a kernel representation. B is SDV-representable if and only if B has an SDV-representable autonomous part.

The representability of an autonomous behavior is characterized in [2] in terms of the notion of 2D-properness. We say that a 2D rational function f is 2D-proper if f = p/q, where p, q ∈ R[s1−1, s2−1] and the zero-degree coefficient of q is nonzero. A 2D rational matrix will be called 2D-proper if all its entries are 2D-proper rational functions. Recall that every 2D L-polynomial matrix R(s1, s2, s1−1, s2−1) can be written as

R(s1, s2, s1−1, s2−1) = Σ_{(i,j)∈S} Rij s1^i s2^j,
where S is a finite subset of Z2 and Rij is a nonzero constant matrix, (i, j) ∈ S. The set S is the support of R, usually denoted by supp(R). Notice that, with this notation, the definition of 2D-properness given above may be reformulated by saying that p and q must have their supports in the third quarter plane and moreover the zero-degree coefficient of q is nonzero.

Proposition 2 ([2]). Let Σ = (Z2, Rq, B) ∈ Lq be an autonomous system. B is SDV-representable if and only if there exists a kernel representation of B with 2D-proper inverse.

Based on the two previous propositions it is possible to conclude the following.

Theorem 1 ([2]). A 2D kernel behavior B is SDV-representable if and only if it can be described as B = ker(R̄R^a), where R̄ is (factor) left-prime and R^a is unimodularly equivalent to a square L-polynomial matrix with 2D-proper inverse.
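The zero-degree-coefficient condition in the definition of 2D-properness is mechanical to check once p and q are written as polynomials in s1−1 and s2−1. A small sketch (our own helper, written in sympy, with t1, t2 standing for s1−1, s2−1):

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')   # t1 = s1^{-1}, t2 = s2^{-1}

def is_2d_proper(p, q):
    """f = p/q is 2D-proper when p and q are polynomial in s1^{-1}, s2^{-1}
    (i.e. in t1, t2 here) and the zero-degree coefficient of q is nonzero."""
    sp.Poly(p, t1, t2)                 # raises if p is not polynomial in t1, t2
    q_poly = sp.Poly(q, t1, t2)
    return q_poly.nth(0, 0) != 0       # coefficient of t1^0 t2^0

# 1/(1 - s1^{-1} - s2^{-1}) is 2D-proper; 1/(s1^{-1} + s2^{-1}) is not.
assert is_2d_proper(sp.Integer(1), 1 - t1 - t2)
assert not is_2d_proper(sp.Integer(1), t1 + t2)
```

The helper name and the two sample fractions are ours, chosen only to exercise the definition.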
3. A STRAIGHTFORWARD REFORMULATION
In this section we will reformulate Theorem 1 by only imposing conditions on the determinant of the square matrix R^a. For that purpose let us prove the next lemma.

Lemma 1. A 2D L-polynomial matrix M(s1, s2, s1−1, s2−1) is unimodularly equivalent to a square L-polynomial matrix with 2D-proper inverse if and only if det M(s1, s2, s1−1, s2−1) is unimodularly equivalent to a 2D L-polynomial with 2D-proper inverse.

Proof. Let U be a unimodular 2D L-polynomial matrix such that N = UM has a 2D-proper inverse. Then it is possible to show, [1], that its determinant is a 2D L-polynomial with 2D-proper inverse. Since det M = (1/det U) det N, it follows that det M is unimodularly equivalent to a 2D L-polynomial with 2D-proper inverse (det N). Reciprocally, let us suppose that det M = up, where u is a unimodular 2D L-polynomial and p is a 2D L-polynomial with 2D-proper inverse. So,

(4)    uM−1 = (1/p) adj M.
Note that there exists a unimodular matrix V−1 such that the polynomial matrix adj M V−1 has its support in the third-quarter plane. This, together with the fact that p has a 2D-proper inverse, allows us to conclude that the rational matrix (1/p) adj M V−1 is 2D-proper. Then, according to (4), uM−1V−1 is 2D-proper. Therefore, we have shown that the rational matrix (uVM)−1 is 2D-proper. Since uV is unimodular we may conclude that M is unimodularly equivalent to a 2D L-polynomial matrix with 2D-proper inverse. □

If p(s1, s2, s1−1, s2−1) is unimodularly equivalent to a 2D L-polynomial with 2D-proper inverse we will say that it is quasi properly invertible. From Theorem 1 and Lemma 1 we obtain the following corollary.

Corollary 1. A 2D kernel behavior B is SDV-representable if and only if it can be described as B = ker(R̄R^a), where R̄ is (factor) left-prime, R^a is square and det R^a is quasi properly invertible.

4. AN ALTERNATIVE CHARACTERIZATION
The characterization given by Corollary 1 rests on the existence of an autonomous part, B^a = ker R^a, with suitable properties. Therefore, due to the non-uniqueness of the autonomous parts, the obtained condition may be difficult to check. This is illustrated in the following example.
Example 1. Let B = ker R(σ1, σ2, σ1−1, σ2−1), with

R(s1, s2, s1−1, s2−1) = [ (s1 − 1)(s2 − 1)    (s1 − 1)(s1 − s2) ].

Since

R(s1, s2, s1−1, s2−1) = [ s2 − 1    s1 − s2 ] [ s1 − 1     0    ]
                                              [   0      s1 − 1 ]

it follows from Corollary 1 that B = ker R is SDV-representable, ker((s1 − 1)I2) being an SDV-representable autonomous part of B. However, factoring R as

R(s1, s2, s1−1, s2−1) = [ (s2 − 1)(s1 − 1)    1 ] R^a(s1, s2, s1−1, s2−1)

with

R^a(s1, s2, s1−1, s2−1) = [ 1            0          ]
                          [ 0    (s1 − s2)(s1 − 1)  ]
we can conclude that ker R^a(s1, s2, s1−1, s2−1) is an autonomous part of B which is not SDV-representable.

This example shows that different autonomous parts of the same behavior may have different representability properties, which makes the application of Corollary 1 (and of Theorem 1) a difficult issue. To overcome this problem we propose a characterization in terms of an alternative factorization of the representation matrices that singles out the controllable part, B^c, of the behavior, which (contrary to autonomous parts) is unique. In fact, it is possible to show the following proposition.

Proposition 3. A 2D L-polynomial matrix R has a factorization

(5)    R = R̄R^a,

where R̄ is (factor) left-prime, R^a is a nonsingular square matrix and det R^a is quasi properly invertible, if and only if R has a factorization

(6)    R = R̃R^c,

where R^c is (factor) left-prime, R̃ is a nonsingular square matrix and det R̃ is quasi properly invertible.
Proof. If R = R̄R^a with R̄ left-prime and R^a square and nonsingular, then B^a = ker R^a is an autonomous part of B = ker R. From this fact it is possible to show that

(7)    R^a(B^c) = ker R̄,

where B^c is the controllable part of B. But

R^a(B^c) = {v : v = R^a w ∧ R^c w = 0}.

So, col(I, O)v = col(R^a, R^c)w is a latent variable representation of R^a(B^c). Let M = [M1 M2] be a minimal left annihilator of col(R^a, R^c); then R^a(B^c) = ker M1. Thus, according to (7), since R̄ has full row rank (because it is left-prime), there exists a unimodular 2D L-polynomial matrix U such that

(8)    M1 = UR̄.

Then the matrix M̄ = [R̄  −R̃], where R̃ = −U−1M2, is a (minimal) left annihilator of col(R^a, R^c). Therefore R = R̃R^c, where R̃ is a nonsingular square matrix and R^c is left-prime. Moreover, det R̃ must divide det R^a. It is not difficult to check that this implies that if det R^a is quasi properly invertible then so is det R̃.

Reciprocally, let us suppose that R = R̃R^c, where R^c is left-prime and R̃ is a nonsingular square matrix. Then it follows from [6] and [4] that B = ker R has an autonomous part B^a = ker R^a with

(9)    R^a = [ R̃   0 ] [ R^c ]
             [ 0    I ] [ C   ],

where I is an identity matrix of suitable dimensions, R^c is a (left-prime) representation of the controllable part B^c and C is a matrix that row-borders R^c up to a nonsingular square matrix whose determinant is a 1D L-polynomial, say, π(s1, s1−1). From this it is possible to prove that there exists a left-prime matrix R̄ such that R = R̄R^a. Moreover, since det R^a = π det R̃, if det R̃ is quasi properly invertible so will be det R^a (note that quasi proper invertibility is invariant under multiplication by a 1D L-polynomial). □

As a consequence of the previous result and of Corollary 1 we give an alternative test for SDV-representability.

Theorem 2. A 2D kernel behavior B is SDV-representable if and only if it can be described as B = ker(R̃R^c), where R^c is (factor) left-prime, R̃ is square and det R̃ is quasi properly invertible.

Notice that the matrix R̃R^c of Theorem 2 has full row rank. Therefore, any other full row rank representation of the behavior is unimodularly equivalent to this matrix. This together with Theorem 2 allows us to show the next result.
Corollary 2. A 2D kernel behavior B is SDV-representable if and only if the following conditions hold:
(i) B has a full row rank kernel representation.
(ii) The gcd's of the maximal order minors of any full row rank kernel representation of B are quasi properly invertible.
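For instance, for the 1 × 2 representation of Example 1 (which has full row rank, so its maximal order minors are just its two entries), both conditions can be checked mechanically. A sympy sketch, where the quasi-proper-invertibility of the gcd s1 − 1 is verified by exhibiting the unimodular factor s1−1 by hand:

```python
import sympy as sp

s1, s2 = sp.symbols('s1 s2')

# Example 1: R = [(s1-1)(s2-1)  (s1-1)(s1-s2)] has full row rank (condition (i)),
# and the maximal order minors are the two entries themselves.
g = sp.gcd((s1 - 1)*(s2 - 1), (s1 - 1)*(s1 - s2))
assert sp.expand(g - (s1 - 1)) == 0 or sp.expand(g + (s1 - 1)) == 0

# Condition (ii): s1 - 1 is quasi properly invertible, since multiplying by the
# unimodular factor s1^{-1} gives 1 - s1^{-1}, whose zero-degree coefficient is 1.
t1 = sp.Symbol('t1')          # t1 stands for s1^{-1}
q = 1 - t1                    # (s1 - 1) * s1^{-1} written in t1
assert sp.Poly(q, t1).nth(0) == 1
```

This agrees with the remark below that the behavior of Example 1 satisfies the conditions of the corollary.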
Note that it is easy to check that the behavior B = ker R of Example 1 satisfies the conditions of this corollary.

Given a behavior B = ker R, the following procedure can be applied in order to check whether the conditions of Corollary 2 hold.

(1) Factor the representation matrix R as

R = PF,

where P is factor right-prime and F has full row rank.
(2) If P is also zero prime, or equivalently, if ker P = {0}, then B has a full row rank representation F.
  (a) Compute a gcd of the maximal order minors of F.
    (i) If that gcd is quasi properly invertible, B is SDV-representable.
    (ii) If not, B is not SDV-representable.
(3) If ker P ≠ {0}, then B has no full row rank representation (this is a consequence of [6, Lemma A.1]). Thus B is not SDV-representable.

The factorization R = PF of (1) can be obtained as follows. The matrix P can be obtained as a minimal right annihilator of a minimal left annihilator of R. If P† is a (rational) left inverse of P then F = P†R. Notice that, in case P is zero right prime, P† may always be taken to be polynomial.

Example 2. Let B = ker R(σ1, σ2, σ1−1, σ2−1) with
R = [ (−1 + 2s2)s1² + s1³ + (−1 + s1)s2     (1 + s2)s1² + (2s2 − 1)s1 − s2     s1(−2 + 2s1 + s1² + s2) ]
    [ s2(2s1 + s1² + s2)                    s2(1 + 2s1 + s2)                   s1² + s2 + 2s1s2        ]
    [ s1² + s2 + 2s1s2                      s1 + 2s2 + s1s2                    2s1 + s1² + s2          ].

Notice that R does not have full row rank. The following matrix is a minimal left annihilator of R:

M = [ s2 − 1    1    −s1s2 + s1 − 1 ].

Computing a minimal right annihilator of M we have obtained

P = [ s1 − 1    s1 ]
    [   s2       1 ]
    [   1        1 ].

Thus, taking a left inverse

P† = [ −1    0      s1   ]
     [  1    0    1 − s1 ]

of P, the matrix

F = P†R = [ s1² + s2    s1 + s2       2s1      ]
          [ 2s1s2       (1 + s1)s2    s1² + s2 ]

is a full row rank representation of B, since the matrix P is zero right prime. After a few computations we can conclude that s1² − s2 is a gcd of the maximal order minors of F. Since this polynomial is quasi properly invertible, B is SDV-representable.
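The computations of this example can be reproduced symbolically; the sympy sketch below re-checks the annihilator M, the left inverse P†, the representation F and the gcd of the maximal order minors:

```python
from functools import reduce
import sympy as sp

s1, s2 = sp.symbols('s1 s2')

R = sp.Matrix([
    [(-1 + 2*s2)*s1**2 + s1**3 + (-1 + s1)*s2,
     (1 + s2)*s1**2 + (2*s2 - 1)*s1 - s2,
     s1*(-2 + 2*s1 + s1**2 + s2)],
    [s2*(2*s1 + s1**2 + s2), s2*(1 + 2*s1 + s2), s1**2 + s2 + 2*s1*s2],
    [s1**2 + s2 + 2*s1*s2,   s1 + 2*s2 + s1*s2,  2*s1 + s1**2 + s2]])

M = sp.Matrix([[s2 - 1, 1, -s1*s2 + s1 - 1]])     # minimal left annihilator of R
assert (M*R).expand() == sp.zeros(1, 3)

P = sp.Matrix([[s1 - 1, s1], [s2, 1], [1, 1]])
Pdag = sp.Matrix([[-1, 0, s1], [1, 0, 1 - s1]])   # a left inverse of P
assert (Pdag*P).expand() == sp.eye(2)

F = (Pdag*R).expand()                             # full row rank representation

minors = [F[:, [i, j]].det() for i, j in [(0, 1), (0, 2), (1, 2)]]
g = reduce(sp.gcd, [m.expand() for m in minors])
# the gcd is s1^2 - s2 up to a nonzero constant
assert sp.cancel(g / (s1**2 - s2)).free_symbols == set()
```

This confirms the value of the gcd stated above without any hand computation.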
5. CONCLUSIONS

We considered the question of the SDV-representability of a 2D kernel behavior. Starting from a previous characterization of the SDV-representable kernel behaviors given in [2], we have given an alternative test for state/driving-variable representability. Indeed, we have shown that a 2D behavior B is SDV-representable if and only if it has a full row rank kernel representation and, additionally, the gcd's of the maximal order minors of any full row rank kernel representation of B are unimodularly equivalent to a 2D Laurent-polynomial with 2D-proper inverse (Corollary 2). These conditions are easier to check than the ones of Theorem 1.

REFERENCES

[1] Brás I. – On the Representation of 2D Systems by First Order Models (in Portuguese), Ph.D. thesis, University of Aveiro, 2000.
[2] Brás I., Rocha P. – From 2D kernel representations to state/driving-variable models, in: Proceedings CD of the 14th International Symposium on Mathematical Theory of Networks and Systems (Perpignan, France, June 2000).
[3] Fornasini E., Marchesini G. – Doubly-indexed dynamical systems: State-space models and structural properties, Math. Systems Theory 12 (1978) 59–72.
[4] Lévy B.C. – 2D Polynomial and Rational Matrices, and Their Applications for the Modeling of 2D Dynamical Systems, Ph.D. thesis, Stanford University, 1981.
[5] Rocha P. – Structure and Representation of 2D Systems, Ph.D. thesis, Rijksuniversiteit Groningen, 1990.
[6] Valcher M.E. – On the decomposition of two-dimensional behaviors, Multidimens. Systems Signal Process. 11 (1–2) (2000) 49–65.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Storage functions for systems described by PDE’s
Jan C. Willems^a and Harish K. Pillai^b

^a ESAT, K.U. Leuven, 3000 Leuven, Belgium
^b Department of Electrical Engineering, Indian Institute of Technology, Mumbai 400076, India
1. INTRODUCTION

The notion of a dissipative system is one of the useful concepts in systems theory. Many results involving stability of systems and design of robust controllers make use of this notion. Until now, the theory of dissipative systems has been developed for systems that have time as their only independent variable (1D systems). However, models of physical systems often have several independent variables (i.e., they are nD systems), for example, time and space variables. In this chapter we develop the theory of dissipative systems for nD systems. The central problem in the theory of dissipative systems is the construction of a storage function. Examples of storage functions include Lyapunov functions in stability analysis, internal energy and entropy in thermodynamics, etc. The construction of storage functions for 1D systems is well understood, for general nonlinear systems [4, Part 1] and for linear systems with quadratic supply rates [4, Part 2], [7]. In this chapter, we obtain analogous results for nD systems described by linear constant coefficient partial differential equations with quadratic differential forms as supply rates. However, there are some important differences between the 1D and the nD case. The most important one is the dependence of storage functions on the unobservable (or hidden) latent variables.

A few words about notation. We use the standard notation Rn, Rn1×n2, etc., for finite-dimensional vectors and matrices. When the dimension is not specified (but, of course, finite), we write R•, Rn×•, R•×•, etc. In order to enhance readability, we typically use the notation Rw when functions taking their values in that vector space are denoted by w. Real polynomials in the indeterminates ξ = (ξ1, ξ2, . . . , ξn) are denoted by R[ξ] and real rational functions by R(ξ), with obvious modifications for the matrix case.
The space of infinitely differentiable functions with domain Rn and codomain Rw is denoted by C∞ (Rn , Rw ), and its subspace consisting of elements with compact support by D(Rn , Rw ).
2. nD SYSTEMS
We view a system as a triplet Σ = (T, W, B). Here T is the indexing set and stands for the set of "independent" variables (for example, time, space, time and space). W stands for the set of "dependent" variables, i.e., where the variables take on their values – these are often called the signal space or the space of field variables. Finally the "behavior" B is viewed as a subset of the family of all trajectories that map the set of independent variables into the set of dependent variables. In fact, the behavior B consists of the set of admissible trajectories that satisfy the system laws (for example, the set of partial differential equations that constitute the system laws). In this chapter, we consider systems with T = Rn (nD systems). We assume throughout that W is a finite-dimensional real vector space, W = Rw. In this chapter, we look at behaviors that arise from a system of partial differential equations. More precisely, if there exists a real polynomial matrix R ∈ R•×w[ξ] in n indeterminates, ξ = (ξ1, . . . , ξn), then we consider B to be the C∞(Rn, Rw)-solutions of

(1)    R(d/dx)w = 0,

where d/dx = (∂/∂x1, . . . , ∂/∂xn). The assumption about C∞ solutions is made for ease of exposition. The results of this chapter also hold for other solution concepts like distributions, though the mathematics needed is more involved. Systems Σ = (Rn, Rw, B) that are defined by a set of constant coefficient partial differential equations (equivalently, behaviors that arise as a consequence of a set of constant coefficient partial differential equations) will be called differential systems and denoted as Lwn. We often abuse the notation by stating B ∈ Lwn, as the indexing set and the signal space are then obvious. Whereas we have defined the behavior of a system in Lwn as the set of solutions of a system of PDE's in the system variables, often, in applications, the specification of the behavior involves other, auxiliary variables, which we call latent variables.
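As a minimal illustration of such a behavior (our own toy example, not one from the text): for n = 2, w = 1 and R(ξ) = ξ1² + ξ2², the behavior is the set of harmonic functions, and membership of a given smooth trajectory is checked by differentiation:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# R(d/dx) = d^2/dx1^2 + d^2/dx2^2 (the Laplacian); w is a candidate trajectory.
w = x1**2 - x2**2
residual = sp.diff(w, x1, 2) + sp.diff(w, x2, 2)
assert residual == 0    # w belongs to the behavior ker R(d/dx)
```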
Specifically, consider the system of PDE's

(2)    R(d/dx)w = M(d/dx)ℓ

with w ∈ C∞(Rn, Rw) and ℓ ∈ C∞(Rn, Rℓ) and with R ∈ R•×w[ξ] and M ∈ R•×ℓ[ξ] polynomial matrices with the same number of rows. The set

(3)    Bf = {(w, ℓ) ∈ C∞(Rn, Rw+ℓ) | (2) holds}

obviously belongs to Ln^{w+ℓ}. It follows from a classical result in the theory of PDEs, the fundamental principle, that the set

(4)    {w ∈ C∞(Rn, Rw) | ∃ℓ ∈ C∞(Rn, Rℓ): (w, ℓ) ∈ Bf}

belongs to Lwn. We call (2) a latent variable representation, with manifest variables w and latent variables ℓ, of the system with full behavior (3) and manifest
Storage functions for systems described by PDE’s
195
behavior (4). Correspondingly, we call (1) a kernel representation of the system with the behavior ker(R(d/dx)). We shall soon meet another sort of representation, the image representations, in the context of controllability.

3. CONTROLLABILITY AND OBSERVABILITY

Two very influential classical properties of dynamical systems are those of controllability and observability. These properties were defined for 1D systems in a behavioral setting in [5], and in [3] generalizations to nD systems have been introduced. We discuss these concepts here exclusively in the context of systems described by linear constant coefficient PDE's.

Definition 1. A system B ∈ Lwn is said to be controllable if for all w1, w2 ∈ B and for all sets U1, U2 ⊂ Rn with disjoint closure, there exists a w ∈ B such that w|U1 = w1|U1 and w|U2 = w2|U2.

Thus controllable PDE's are those in which the solutions can be 'patched up' from solutions on subsets. Though there are several characterizations of controllability, the characterization that is important for the purposes of this chapter is the equivalence of controllability with the existence of an image representation. Consider the following special latent variable representation

(5)    w = M(d/dx)ℓ

with M ∈ Rw×ℓ[ξ]. Obviously, by the elimination theorem, its manifest behavior B ∈ Lwn. Such special latent variable representations often appear in physics, where the latent variables involved in such a representation are called potentials. Obviously B = im(M(d/dx)) with M(d/dx) viewed as a map from C∞(Rn, Rℓ) to C∞(Rn, Rw). For this reason, we call (5) an image representation of its manifest behavior. Whereas every B ∈ Lwn allows (by definition) a kernel representation and hence trivially a latent variable representation, not every B ∈ Lwn allows an image representation. In fact:

Theorem 2. B ∈ Lwn admits an image representation if and only if it is controllable.

We denote the set of controllable systems in Lwn by Lwn,cont. Observability is the property of systems that have two kinds of variables – the first set of variables are the 'observed' set of variables, and the second set of variables are the ones that are 'to-be-deduced' from the observed variables.
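A standard instance (our choice of example, not one from the text): for n = 2 the divergence-free behavior ker[∂/∂x1  ∂/∂x2] is controllable, with image representation w = (∂ℓ/∂x2, −∂ℓ/∂x1) via a scalar potential ℓ. That every trajectory in the image satisfies the kernel law can be checked symbolically:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
ell = sp.Function('ell')(x1, x2)    # scalar potential (latent variable)

# image representation w = M(d/dx) ell of the divergence-free behavior
w1 = sp.diff(ell, x2)
w2 = -sp.diff(ell, x1)

# every such w satisfies the kernel law dw1/dx1 + dw2/dx2 = 0,
# because the mixed partial derivatives of ell cancel
residual = sp.diff(w1, x1) + sp.diff(w2, x2)
assert sp.simplify(residual) == 0
```

This is the image-representation analogue, for an arbitrary smooth potential, of the fact that R(ξ)M(ξ) = 0 for R = [ξ1  ξ2] and M = col(ξ2, −ξ1).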
Every variable that can be deduced uniquely from the manifest variables of a given behavior will be called an observable. So observability is not an intrinsic property of a given behavior. One has to be given a partition of the variables in the behavior into two
196
J.C. Willems and H.K. Pillai
classes before one can say whether one class of variables in the behavior can actually be deduced from the other (observed) class of variables.

Definition 3. Let w = (w_1, w_2) be a partition of the variables in Σ = (R^n, R^{w_1+w_2}, B). Then w_2 is said to be observable from w_1 in B if for any two trajectories (w_1, w_2), (w_1′, w_2′) ∈ B with w_1 = w_1′ it follows that w_2 = w_2′.

A natural situation in which to use observability is when one looks at a latent variable representation of a behavior. One may then ask whether the latent variables are observable from the manifest variables. If this is the case, we call the latent variable representation observable. As we have already mentioned, every controllable behavior has an image representation. Whereas in 1D every controllable behavior has an observable image representation, this is no longer true for nD systems.

4. QUADRATIC DIFFERENTIAL FORMS

It was shown in [6,7] that for systems described by one-variable polynomial matrices, the appropriate tool to express quadratic functionals is two-variable polynomial matrices. In this chapter we will use polynomial matrices in 2n variables to express quadratic functionals for functions of n variables. For convenience, let ζ denote (ζ_1, ..., ζ_n) and η denote (η_1, ..., η_n). Let R^{w_1×w_2}[ζ, η] denote the set of real polynomial matrices in the 2n indeterminates ζ and η. We will consider quadratic forms of the type Φ ∈ R^{w_1×w_2}[ζ, η]. Explicitly,

Φ(ζ, η) = Σ_{k,l} Φ_{k,l} ζ^k η^l.

This sum ranges over the nonnegative multi-indices k = (k_1, k_2, ..., k_n), l = (l_1, ..., l_n) ∈ N^n and is assumed to be finite. Moreover, Φ_{k,l} ∈ R^{w_1×w_2}. The polynomial matrix Φ induces a bilinear differential form (BLDF), that is, the map L_Φ : C^∞(R^n, R^{w_1}) × C^∞(R^n, R^{w_2}) → C^∞(R^n, R) defined by

L_Φ(v, w)(x) := Σ_{k,l} ( (d^k v / dx^k)(x) )^T Φ_{k,l} ( (d^l w / dx^l)(x) ),

where

d^k / dx^k := ∂^{k_1}/∂x_1^{k_1} · ∂^{k_2}/∂x_2^{k_2} ··· ∂^{k_n}/∂x_n^{k_n},

and analogously for d^l / dx^l. Note that ζ corresponds to differentiation of the terms to the left and η refers to differentiation of the terms to the right. If w_1 = w_2 = w, then Φ induces the quadratic differential form (QDF)

Q_Φ : C^∞(R^n, R^w) → C^∞(R^n, R)
defined by Q_Φ(w) := L_Φ(w, w). Define the operator

∗ : R^{w×w}[ζ, η] → R^{w×w}[ζ, η]

by Φ^∗(ζ, η) := Φ^T(η, ζ). If Φ = Φ^∗, then Φ is called symmetric. For the purposes of QDF's induced by polynomial matrices, it suffices to consider the symmetric quadratic differential forms, since Q_Φ = Q_{Φ^∗} = Q_{(Φ+Φ^∗)/2}.

We also consider vectors Ψ ∈ (R^{w×w}[ζ, η])^n, i.e. Ψ = (Ψ_1, ..., Ψ_n). Analogous to the quadratic differential form, Ψ induces a vector of quadratic differential forms (VQDF)

Q_Ψ : C^∞(R^n, R^w) → (C^∞(R^n, R))^n

defined by Q_Ψ = (Q_{Ψ_1}, ..., Q_{Ψ_n}). Finally, we define the "div" (divergence) operator that associates with the VQDF induced by Ψ the scalar QDF

(div Q_Ψ)(w) := ∂/∂x_1 Q_{Ψ_1}(w) + ··· + ∂/∂x_n Q_{Ψ_n}(w).
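To make this machinery concrete, here is a small computational sanity check in the scalar 1D case (n = w = 1), not taken from the chapter: the helper `qdf_apply` (a hypothetical name) evaluates Q_Φ(w) for a scalar two-variable polynomial Φ, so that Φ(ζ, η) = ζη yields Q_Φ(w) = (dw/dx)², while Φ(ζ, η) = ζ + η yields Q_Φ(w) = 2w(dw/dx) = d(w²)/dx, i.e. the 1D divergence of the QDF induced by Ψ = 1.

```python
import sympy as sp

x, zeta, eta = sp.symbols('x zeta eta')

def qdf_apply(Phi, w):
    """Evaluate Q_Phi(w) = sum_{k,l} Phi_{k,l} * w^(k) * w^(l) for a scalar
    two-variable polynomial Phi(zeta, eta) acting on a 1D signal w(x)."""
    Q = sp.Integer(0)
    for (k, l), c in sp.Poly(Phi, zeta, eta).terms():
        Q += c * sp.diff(w, x, k) * sp.diff(w, x, l)
    return sp.simplify(Q)

# Phi = zeta*eta induces Q_Phi(w) = (w')^2:
assert sp.simplify(qdf_apply(zeta*eta, sp.sin(x)) - sp.cos(x)**2) == 0

# Phi = zeta + eta induces Q_Phi(w) = 2*w*w' = d/dx (w^2):
w = sp.exp(2*x)
assert sp.simplify(qdf_apply(zeta + eta, w) - sp.diff(w**2, x)) == 0
```

The second check is the simplest instance of a conservation equation: the supply Q_{ζ+η}(w) is exactly the derivative of the stored quantity w².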
The theory of QDF's has been developed in much detail in [6,7] for 1D systems.

5. LOSSLESS AND DISSIPATIVE SYSTEMS
Quadratic functionals play an important role in control theory. Quite often, the rate of supply of some physical quantity (for example, the rate of energy, i.e., the power) delivered to a system is given by a quadratic functional. We make use of the quadratic differential forms defined earlier to define such supply rates for controllable systems B ∈ L^w_{n,cont}. Let Φ = Φ^∗ ∈ R^{w×w}[ζ, η] and B ∈ L^w_{n,cont}. We consider the quadratic differential form Q_Φ(w) as a supply rate for trajectories w ∈ B. More precisely, we consider Q_Φ(w)(x) (with x ∈ R^n) as the rate of supply of some physical quantity delivered to the system at the point x. Thus, Q_Φ(w)(x) being positive implies that the system absorbs the physical quantity that is being supplied.

Definition 4. The system B ∈ L^w_{n,cont} is said to be lossless with respect to the supply rate Q_Φ induced by Φ = Φ^∗ ∈ R^{w×w}[ζ, η] if ∫_{R^n} Q_Φ(w) dx = 0 for all w ∈ B ∩ D(R^n, R^w) (i.e., trajectories in the behavior B with compact support).
The system B ∈ L^w_{n,cont} is said to be dissipative with respect to Q_Φ (briefly, Φ-dissipative) if

∫_{R^n} Q_Φ(w) dx ≥ 0

for all w ∈ B ∩ D(R^n, R^w).

We now explain the physical interpretation of the definition above. ∫_{R^n} Q_Φ(w) dx denotes the net amount of supply that the system absorbs, integrated over "time" and "space". So the system is lossless with respect to the quadratic differential form if this integral is zero, since any supply absorbed at some time or place is temporarily stored but eventually recovered (perhaps at some other time or place). On the other hand, if the integral is nonnegative, then a net amount of supply is absorbed (at least for some of the trajectories) by the system; the system is then dissipative. Note that the conditions in the above definitions are imposed only on compactly supported trajectories in the behavior B. The intuitive reason for this restriction is that for controllable systems, compactly supported trajectories are completely representative of all trajectories. It also avoids the technical difficulty that would arise if the integral ∫_{R^n} Q_Φ(w) dx were not well defined; since we consider only compactly supported trajectories, such a complication does not arise.

We shall first look at lossless systems. The following theorem gives some equivalent conditions for a system to be lossless.

Theorem 5. Let B ∈ L^w_{n,cont}. Let R ∈ R^{•×w}[ξ] and M ∈ R^{w×•}[ξ] induce respectively a kernel and an image representation of B, i.e. B = ker(R(d/dx)) = im(M(d/dx)). Let Φ = Φ^∗ ∈ R^{w×w}[ζ, η] induce a QDF on B. Then the following conditions are equivalent:
(1) B is lossless with respect to the QDF Q_Φ;
(2) Φ′(−ξ, ξ) = 0, where Φ′(ζ, η) := M^T(ζ) Φ(ζ, η) M(η);
(3) there exists a VQDF Q_Ψ, with Ψ ∈ (R^{m×m}[ζ, η])^n, where m is the number of columns of M, such that

(6)  div Q_Ψ(ℓ) = Q_{Φ′}(ℓ) = Q_Φ(w)

for all ℓ ∈ C^∞(R^n, R^m) and w = M(d/dx) ℓ.

Note that condition (1) in the above theorem states that B is lossless with respect to Q_Φ, i.e. that

(7)  ∫_{R^n} Q_Φ(w) dx = 0
for all w ∈ B ∩ D(R^n, R^w). This is a global statement about the trajectory w ∈ B concerned. On the other hand, condition (3) of the above theorem states that B admits an image representation w = M(d/dx) ℓ and that there exists some VQDF Ψ such that

(8)  div Q_Ψ(ℓ) = Q_Φ(w)

for all w ∈ B and ℓ such that w = M(d/dx) ℓ. This statement gives a local characterization of losslessness. The equivalence of the global version of losslessness (7) with the local version (8) is a recurrent theme in the theory of dissipative systems. The local version states that there is a function, Q_Ψ(ℓ)(x), that plays the role of the amount of supply stored at x ∈ R^n. Thus (8) says that for lossless systems it is possible to define a storage function Q_Ψ such that the conservation equation

(9)  div Q_Ψ(ℓ) = Q_Φ(w)

is satisfied for all w and ℓ such that w = M(d/dx) ℓ.

At this point it is worth emphasizing some basic differences between 1D and nD systems. Since every controllable 1D behavior has an observable image representation, it can be shown that in the 1D case the conservation equation can be rewritten in the form

d/dt Q_Ψ(w) = Q_Φ(w)

with some quadratic differential form Q_Ψ that acts on the manifest variables; here t is the independent variable of the 1D behavior concerned. On the other hand, since a controllable nD behavior need not have an observable image representation, there may not exist any storage function of the form Q_Ψ(w) that depends only on the manifest variables. Thus, the storage function in the conservation equation (9) may involve "hidden" (i.e., non-observable) variables. Another important difference between 1D and nD behaviors is the non-uniqueness of the vector of quadratic differential forms Q_Ψ involved in the conservation equation (9) in the nD case. As a result of this non-uniqueness, there will be several possible storage functions in the nD case that satisfy the conservation equation.

We now formally define the concept of a storage function and the associated notion of a dissipation rate. As we have already seen in the context of lossless systems, the storage function is in general a function of the unobservable latent variables that appear in an image representation of the behavior B. We now incorporate this in the definition and show later that the function Q_Ψ defined in the conservation equation (9) is indeed a storage function.

Definition 6. Let B ∈ L^w_{n,cont}, Φ = Φ^∗ ∈ R^{w×w}[ζ, η], and let w = M(d/dx) ℓ be an image representation of B with M ∈ R^{w×l}[ξ]. Let Ψ = (Ψ_1, Ψ_2, ..., Ψ_n) with Ψ_k = Ψ_k^∗ ∈ R^{l×l}[ζ, η] for k = 1, 2, ..., n. The VQDF Q_Ψ is said to be a storage function for B with respect to Q_Φ if

(10)  div Q_Ψ(ℓ) ≤ Q_Φ(w)

for all ℓ ∈ D(R^n, R^l) and w = M(d/dx) ℓ.
Δ = Δ^∗ ∈ R^{l×l}[ζ, η] is said to be a dissipation rate for B with respect to Q_Φ if Q_Δ ≥ 0 and

∫_{R^n} Q_Δ(ℓ) dx = ∫_{R^n} Q_Φ(w) dx

for all ℓ ∈ D(R^n, R^l) and w = M(d/dx) ℓ.

We define Q_Δ ≥ 0 to mean that Q_Δ(ℓ)(x) ≥ 0 for all ℓ, evaluated at every x ∈ R^n. This is a pointwise positivity condition; thus ∫_Ω Q_Δ(ℓ) dx ≥ 0 for every Ω ⊂ R^n if Q_Δ ≥ 0. In the case of lossless systems, we had obtained the conservation equation div Q_Ψ(ℓ) = Q_Φ(w). Clearly, this Q_Ψ qualifies as a storage function, as it satisfies the inequality stated in the definition above. From the above definitions, it is also easy to see that a storage function Q_Ψ and a dissipation rate Q_Δ for B with respect to Q_Φ are related by

(11)  div Q_Ψ(ℓ) = Q_Φ(M(d/dx) ℓ) − Q_Δ(ℓ).

The above definitions of the storage function and the dissipation rate, combined with (11), yield intuitive interpretations. The dissipation rate can be thought of as the rate of supply that is dissipated in the system, and the storage function as the rate of supply stored in the system. Intuitively, we could think of the QDF Q_Φ as measuring the power going into the system. Φ-dissipativity would then imply that the net power flowing into the system is nonnegative, which in turn implies that the system dissipates energy. Of course, locally the flow of energy could be positive or negative, leading to variations in Q_Ψ(ℓ) (in many practical situations the Q_{Ψ_k}(ℓ) play the role of energy density and fluxes). If the system is dissipative, then the rate of change of the energy density and fluxes cannot exceed the power delivered into the system. This is captured by the inequality (10) in Definition 6; the excess is precisely what is lost (or dissipated). This interaction between supply, storage and dissipation is formalized by Eq. (11). When the independent variables are time and space, we can rewrite (11) as

(12)  ∂U(ℓ)/∂t = Q_Φ(M(d/dx) ℓ) − ∇ · S(ℓ) − Q_Δ(ℓ),

where we substitute Q_Ψ = (U, S), with U = Q_{Ψ_t} the stored energy and S = (Q_{Ψ_x}, Q_{Ψ_y}, Q_{Ψ_z}) the flux; moreover w = M(d/dx) ℓ. The above equation is reminiscent of the energy balance equations that appear in several fields like fluid mechanics, thermodynamics, etc. Thus (12) states that the change of the stored energy (∂U(ℓ)/∂t) in an infinitesimal volume is exactly equal to the difference between the energy supplied
(Q_Φ(w)) into the infinitesimal volume and the energy lost by the infinitesimal volume by means of the energy flux flowing out of the volume (∇ · S(ℓ)) and the energy dissipated (Q_Δ(ℓ)) within the volume.

The problem we now address is the equivalence of (i) dissipativity of B with respect to Q_Φ, (ii) the existence of a storage function and (iii) the existence of a dissipation rate. Note that this problem also involves the construction of an appropriate image representation. We first consider the case where B = C^∞(R^n, R^w). In this case, the definition of the dissipation rate requires that for all ℓ ∈ D(R^n, R^l)

(13)  ∫_{R^n} Q_Φ(w) dx = ∫_{R^n} Q_Δ(ℓ) dx

with w = M(d/dx) ℓ, M(d/dx) a surjective partial differential operator, and Q_Δ(ℓ) ≥ 0 for all ℓ ∈ D(R^n, R^l). By stacking the variables ℓ and their various derivatives to form a new vector of variables, this latter condition can be shown to be equivalent to the existence of a polynomial matrix D ∈ R^{•×l}[ξ] such that Δ(ζ, η) = D^T(ζ)D(η). Using Theorem 5, it follows that (13) is equivalent to the factorization equation

(14)  M^T(−ξ) Φ(−ξ, ξ) M(ξ) = D^T(−ξ) D(ξ).

A very well known problem in 1D systems is that of spectral factorization, which involves the factorization of a matrix Z(ξ) ∈ R^{w×w}[ξ] into the form

Z(ξ) = F^T(−ξ) F(ξ)

with F ∈ R^{w×w}[ξ] (the matrix F may have to satisfy some additional conditions, like being Hurwitz). It is well known that a polynomial matrix Z(ξ) in one variable ξ admits such a factor F ∈ R^{w×w}[ξ] if and only if Z^T(−ξ) = Z(ξ) and Z(iω) ≥ 0 for all ω ∈ R. The above factorization problem (14) for nD systems is very similar in flavor. We can reformulate the problem as follows: given Z ∈ R^{w×w}[ξ], a polynomial matrix in the n commuting variables ξ = (ξ_1, ..., ξ_n), is it possible to factorize it as

(15)  Z(ξ) = F^T(−ξ) F(ξ)

with F ∈ R^{•×w}[ξ] itself a polynomial matrix? Quite clearly, Z^T(−ξ) = Z(ξ) and Z(iω) ≥ 0 for all ω ∈ R^n are necessary conditions for the existence of a factor F ∈ R^{•×w}[ξ]. The important question is whether these conditions are also sufficient (as in the 1D case). If we consider the case w = 1 (the scalar case) and substitute iω for ξ, (15) reduces to finding F such that

Z(iω) = F^T(−iω) F(iω).

Separating the real and imaginary parts of the above equation, the problem further reduces to the case of finding a sum of "two" squares which add up to a given positive (or nonnegative) polynomial.
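In the scalar 1D case the factorization (15) can be carried out directly from the roots, since the roots of an even real polynomial come in pairs {r, −r}. The following sympy sketch (the helper `spectral_factor` is a hypothetical name, not from the text, and assumes no roots on the imaginary axis) collects one root from each pair into F:

```python
import sympy as sp

xi = sp.symbols('xi')

def spectral_factor(p):
    """Scalar 1D spectral-factorization sketch: write p(xi) = F(-xi)*F(xi)
    for an even polynomial p with p(i*omega) >= 0 and no imaginary-axis
    roots, by collecting the roots with negative real part into F."""
    poly = sp.Poly(p, xi)
    n = poly.degree()
    assert n % 2 == 0, "p must have even degree"
    F = sp.Integer(1)
    for r, mult in sp.roots(poly, xi).items():
        if sp.re(r) < 0:                     # one root from each pair {r, -r}
            F *= (xi - r)**mult
    a = sp.sqrt(poly.LC() * (-1)**(n // 2))  # match the leading coefficients
    return sp.expand(a * F)

Phi = sp.expand((1 - xi**2) * (4 - xi**2))   # Phi(-xi) = Phi(xi), Phi(i*w) >= 0
F = spectral_factor(Phi)                     # xi**2 + 3*xi + 2
assert sp.expand(F.subs(xi, -xi) * F - Phi) == 0
```

The constant a comes from comparing leading coefficients: if F(ξ) = a·∏(ξ − r_j) over m = n/2 chosen roots, then F(−ξ)F(ξ) has leading coefficient a²(−1)^m.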
This turns out to be a problem with a long history: it is Hilbert's 17th problem, which deals with the representation of positive definite functions as sums of squares [2]. This investigation of positive definite functions began in the year 1888 with the following "negative" result of Hilbert: if f(ξ) ∈ R[ξ] is a positive definite polynomial in n variables, then f need not be a sum of squares of polynomials in R[ξ], except in the case n = 1. Several examples of such positive definite polynomials which cannot be expressed as sums of squares of polynomials are available in the literature; for example, the polynomial

ξ_1² ξ_2² (ξ_1² + ξ_2² − 1) + 1

is not factorizable as a sum of squares of polynomials [1]. Thus the two conditions that we mentioned earlier (namely Z^T(−ξ) = Z(ξ) and Z(iω) ≥ 0 for all ω ∈ R^n) are not sufficient to guarantee a polynomial factor F ∈ R^{•×w}[ξ] (even in the scalar case). However, we have the following result.

Theorem 7. Assume that Z ∈ R^{w×w}[ξ] satisfies Z^T(−ξ) = Z(ξ) and Z(iω) ≥ 0 for all ω ∈ R^n. Then there exists an F ∈ R^{•×w}(ξ) such that Z(ξ) = F^T(−ξ) F(ξ).

Note that even when Z is a polynomial matrix, the entries of the matrix F are rational functions in the n indeterminates with real coefficients, whereas in the 1D case one can obtain an F with polynomial entries. Combining the result of Theorem 7 with the factorization problem (14), we obtain the following theorem.

Theorem 8. Let Φ = Φ^∗ ∈ R^{w×w}[ζ, η]. Then the following conditions are equivalent:
(1) ∫_{R^n} Q_Φ(w) dx ≥ 0 for all w ∈ D(R^n, R^w);
(2) there exists a polynomial matrix M ∈ R^{w×w}[ξ] such that M(d/dx) is surjective, and a Ψ = (Ψ_1, Ψ_2, ..., Ψ_n) with Ψ_k = Ψ_k^∗ ∈ R^{w×w}[ζ, η] for k = 1, 2, ..., n, such that the VQDF Q_Ψ is a storage function, i.e.,

div Q_Ψ(ℓ) ≤ Q_Φ(w)

for all ℓ ∈ D(R^n, R^w) and w = M(d/dx) ℓ;
(3) there exists a polynomial matrix M ∈ R^{w×w}[ξ] such that M(d/dx) is surjective, and a Δ = Δ^∗ ∈ R^{w×w}[ζ, η] such that Q_Δ is a dissipation rate, i.e.,

Q_Δ ≥ 0  and  ∫_{R^n} Q_Δ(ℓ) dx = ∫_{R^n} Q_Φ(w) dx

for all ℓ ∈ D(R^n, R^w) and w = M(d/dx) ℓ;
(4) there exists a polynomial matrix M ∈ R^{w×w}[ξ] such that M(d/dx) is surjective, a Ψ = (Ψ_1, Ψ_2, ..., Ψ_n) with Ψ_k = Ψ_k^∗ ∈ R^{w×w}[ζ, η] for k = 1, 2, ..., n, and a Δ = Δ^∗ ∈ R^{w×w}[ζ, η] such that Q_Δ ≥ 0 and

(16)  div Q_Ψ(ℓ) = Q_Φ(w) − Q_Δ(ℓ)

for all ℓ ∈ C^∞(R^n, R^w) and w = M(d/dx) ℓ. Note that this states that the VQDF Q_Ψ is a storage function and that Q_Δ is a dissipation rate.

The above theorem considers the case when B is all of C^∞(R^n, R^w), and it shows the equivalence of dissipativity of C^∞(R^n, R^w) with respect to Q_Φ, the existence of a storage function (Q_Ψ) and the existence of a dissipation rate (Q_Δ). The important message of this theorem is the unavoidable emergence of latent variables in the dissipation equation (16) for nD systems. Also note that the storage and dissipation functions that one obtains using the above theorem are not unique. Finally, for an arbitrary controllable nD behavior B ∈ L^w_{n,cont}, the above theorem can be modified to obtain the following.

Theorem 9. Let B ∈ L^w_{n,cont} and Φ = Φ^∗ ∈ R^{w×w}[ζ, η]. The following conditions are equivalent:
(1) B is Φ-dissipative, i.e., ∫_{R^n} Q_Φ(w) dx ≥ 0 for all w ∈ B ∩ D(R^n, R^w);
(2) there exist an integer l ∈ N, a polynomial matrix M ∈ R^{w×l}[ξ] such that w = M(d/dx) ℓ is an image representation of B, a Ψ = (Ψ_1, Ψ_2, ..., Ψ_n) with Ψ_k = Ψ_k^∗ ∈ R^{l×l}[ζ, η] for k = 1, 2, ..., n, and a Δ = Δ^∗ ∈ R^{l×l}[ζ, η] such that Q_Δ ≥ 0 and

(17)  div Q_Ψ(ℓ) = Q_Φ(w) − Q_Δ(ℓ)

with w = M(d/dx) ℓ.

6. CONCLUSIONS

In this chapter, we dealt with nD systems described by constant coefficient linear partial differential equations. We started by defining controllability for such systems, in terms of the patching up of feasible trajectories. We then explained that it is exactly the controllable systems which allow an image representation, i.e., a representation in terms of what in physics is called a potential function. Subsequently, we turned to lossless and dissipative systems.
For lossless systems, we proved the equivalence of losslessness with the existence of a conservation law involving the storage function. Important features of the storage function are (i) the fact that it depends on latent variables that are in general hidden (i.e., non-observable), and (ii) its non-uniqueness. For dissipative systems, we proved the equivalence with the existence of a storage function and a dissipation rate. The problem of constructing a dissipation rate led to the question of factorizability of certain polynomial matrices in n variables. We reduced this problem to Hilbert's 17th problem, the representation of a nonnegative rational function in n variables as a sum of squares of rational functions.

REFERENCES
[1] Berg C., Christensen J.P.R., Ressel P. – Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions, Graduate Texts in Math., vol. 100, Springer-Verlag, 1984.
[2] Pfister A. – Hilbert's seventeenth problem and related problems on definite forms, in: Browder F.E. (Ed.), Mathematical Developments Arising from Hilbert Problems, Proceedings of Symposia in Pure Mathematics, vol. XXVIII, Amer. Math. Soc., 1974, pp. 483–489.
[3] Pillai H.K., Shankar S. – A behavioural approach to control of distributed systems, SIAM J. Control Optim. 37 (1999) 388–408.
[4] Willems J.C. – Dissipative dynamical systems – Part I: General theory, Part II: Linear systems with quadratic supply rates, Arch. Rational Mech. Anal. 45 (1972) 321–351 and 352–393.
[5] Willems J.C. – Paradigms and puzzles in the theory of dynamical systems, IEEE Trans. Automat. Control 36 (1991) 259–294.
[6] Willems J.C., Trentelman H.L. – On quadratic differential forms, SIAM J. Control Optim. 36(5) (1998) 1703–1749.
[7] Willems J.C., Trentelman H.L. – Synthesis of dissipative systems using quadratic differential forms – Part I and Part II, IEEE Trans. Automat. Control 47 (2002) 53–69, 70–86.
The constructive solution of linear systems of partial difference and differential equations with constant coefficients
Ulrich Oberst Institut für Mathematik, Universität Innsbruck, Technikerstraße 25, A6020 Innsbruck, Austria Email:
[email protected] (U. Oberst)
ABSTRACT. This talk is based on a submitted preprint with the same title by the speaker and Franz Pauer. It gives a survey of past work on the subject treated and also presents several new results. We solve the Cauchy problem for linear systems of partial difference equations on general integral lattices by means of suitable transfer operators and show that these can easily be computed with the help of standard implementations of Gröbner basis algorithms. The Borel isomorphism makes it possible to transfer these results to systems of partial differential equations. We also solve the Cauchy problem for the function spaces of convergent power series and of entire functions of exponential type. The unique solvability of the Cauchy problem implies that the considered function spaces are injective cogenerators. They are even large cogenerators, for which the full duality between finitely generated modules and behaviors holds. Already in 1910(!!) C. Riquier considered and solved problems of the type discussed here in his book "Les systèmes d'équations aux dérivées partielles".
1. INTRODUCTION

In this talk, which is based on the preprint [11], we present a survey of the results of the papers [10,21,20,12] and [9], with partially new and simpler proofs, and expose several new results too. Their significance for multidimensional systems theory is explained in H. Pillai, J. Wood and E. Rogers' talk [14]. We also describe the relation of our results to the work of C. Riquier [16], which was fundamental, far ahead of its time and long forgotten. We owe the reference to and an introduction into Riquier's work to J.F. Pommaret and A. Quadrat (see also V. Gerdt's talk in this volume [5]). The formulation of the discrete Cauchy problem in [10] was influenced by an early version of J. Gregor's paper [6].

Key words and phrases: Partial difference equation, Partial differential equation, Cauchy problem, Gröbner basis, Fundamental principle, Transfer operator, Injective cogenerator, Multidimensional behavior

In Theorem 1 we solve the discrete Cauchy or initial value problem for linear systems of partial difference equations with constant coefficients on the lattice N^r by means of suitable transfer operators and show that these can easily be computed with the help of standard implementations of the Gröbner basis algorithm. The new algorithm uses less storage space and is faster than those of [10] and [9], which used a recursive solution method and recursive programming. Using ideas of Riquier, we show in Theorem 10 that the initial data can be given as a finite family of power series. The Borel isomorphism makes it possible to transfer the results on partial difference equations to linear systems of partial differential equations (Theorem 11). In Theorems 12, 13 and 15 they are also extended to the function spaces of convergent power series and of entire functions of exponential type. These new theorems on convergent solutions of the Cauchy problem show that the formal solutions constructed in Theorem 1 are indeed (locally) convergent, or even entire of exponential type, if the right side and the initial data have this property. The solution of the Cauchy problem for partial differential systems of (locally) convergent power series and its proof are a variant of, but different from, C. Riquier's existence theorem [16, Ch. VII, §115], which is valid for (even nonlinear) passive orthonomic systems. Theorems 21, 22 and 25 extend Theorems 1 and 12 to the integral lattices M = N^{r_1} × Z^{r_2} ⊂ Z^{r_1+r_2}, and extensions to even more general lattices are possible [12]. Predecessors of such extensions were contained in the papers [21,20] and [9]. The unique solvability of the Cauchy problem implies that all considered function spaces are injective cogenerators.
They are even large cogenerators and thus give rise to the categorical duality between finitely generated polynomial modules and behaviors [10]. In this survey of [11] the terminology and the constructive proofs are worked out with the necessary details. The proofs of the new results on convergent power series are long; their details are not needed for the constructive application of Theorems 12, 13, 15 and 25 and are hence omitted.

2. DATA

Let F be a field and r > 0. As ring of operators we introduce the polynomial algebra

(1)  D := F[s] = F[s_1, ..., s_r] = ⊕_{µ∈N^r} F s^µ.

The signal space is the space of multisequences or formal power series

(2)  A := F^{N^r} = F[[z]] = F[[z_1, ..., z_r]],  a = (a(µ))_{µ∈N^r} = Σ_{µ∈N^r} a(µ) z^µ.

For any l > 0 this gives rise to the space of signal vectors or vector signals
(3)  A^l = F[[z]]^l = (F^{N^r})^l = F^{[l]×N^r},  [l] := {1, ..., l},
w = (w_1, ..., w_l)^T,  w_j = (w_j(ν))_{ν∈N^r} = Σ_{ν∈N^r} w_j(ν) z^ν.

Two natural actions of F[s] on F[[z]] are that by

(4)  left shifts:  (s^µ ◦ a)(ν) := a(µ + ν),  µ, ν ∈ N^r, a ∈ F[[z]],

and, if the characteristic of F is zero, that by

(5)  partial differentiation:  s^µ • a := ∂^µ a = ∂^{|µ|} a / (∂z_1^{µ_1} ··· ∂z_r^{µ_r}),  µ ∈ N^r, a ∈ F[[z]].

In this fashion we obtain the F[s]-modules (F[[z]], ◦) and (F[[z]], •). Considerations with respect to the first, resp. the second, are called the discrete, resp. the continuous, case in systems theory. Those on (F[[z]], ◦) can be extended to more general monoids M ⊂ Z^r instead of N^r, for instance to M = N^{r_1} × Z^{r_2}, with the operator ring D := F[M] := ⊕_{µ∈M} F s^µ of Laurent polynomials and the signal space F^M.

3. LINEAR SYSTEMS OF PARTIAL DIFFERENCE, RESP. DIFFERENTIAL, EQUATIONS WITH CONSTANT COEFFICIENTS
Such a system has the form

(6)  R ◦ w = v,  resp.  R • w = R(∂)w = v,  R ∈ F[s]^{k×l}, w ∈ F[[z]]^l, v ∈ F[[z]]^k,

where the matrix R and the right side v are given and a solution w is sought. In detail these systems are written as

(7)  R = (R_{ij})_{i,j},  R_{ij} = Σ_{ν∈N^r} R_{ij}(ν) s^ν ∈ F[s],

Σ_{j=1}^{l} Σ_{ν∈N^r} R_{ij}(ν) w_j(µ + ν) = v_i(µ)  for i = 1, ..., k and all µ ∈ N^r,

resp.

Σ_{j=1}^{l} Σ_{ν∈N^r} R_{ij}(ν) ∂^ν w_j = v_i  for i = 1, ..., k.

The problem is to decide whether the systems (6) are solvable and, if this is the case, to find a solution w constructively, here by means of Gröbner bases.
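As a concrete illustration of the discrete case of (6)–(7) (this computation is not part of the text), the shift action (s^µ ◦ a)(ν) = a(µ + ν) can be modelled on a finite window of a multisequence with dictionaries; the geometric sequence a(µ) = 2^{µ_1} 3^{µ_2} is then annihilated by the scalar operator R = s_1 − 2:

```python
from itertools import product

# 2D multisequences a: N^2 -> F (here F = Q, via exact Python ints) on a
# finite window, with the left-shift action (s^mu o a)(nu) = a(mu + nu).

def shift(mu, a):
    """Left-shift action of the monomial s^mu on the windowed sequence a."""
    return {nu: a[(nu[0] + mu[0], nu[1] + mu[1])]
            for nu in a if (nu[0] + mu[0], nu[1] + mu[1]) in a}

def apply_operator(R, a):
    """Apply R = sum_mu R(mu) s^mu (given as a dict mu -> coefficient) to a,
    restricted to the sub-window where every shift is defined."""
    shifted = [(c, shift(mu, a)) for mu, c in R.items()]
    common = set(a)
    for _, s in shifted:
        common &= set(s)
    return {nu: sum(c * s[nu] for c, s in shifted) for nu in common}

# a(mu) = 2^{mu_1} * 3^{mu_2} on a 5x5 window ...
a = {(i, j): 2**i * 3**j for i, j in product(range(5), repeat=2)}

# ... satisfies (s_1 - 2) o a = 0, i.e. a(mu_1+1, mu_2) - 2 a(mu_1, mu_2) = 0:
R = {(1, 0): 1, (0, 0): -2}
v = apply_operator(R, a)
assert v and all(val == 0 for val in v.values())
```

Restricting to the common sub-window mirrors the fact that each equation in (7) is indexed by the µ for which all shifted samples are available.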
4. APPLICATION OF GRÖBNER BASES
We choose a term order ≤ on [l] × N^r. This gives rise to a degree function on F[s]^{1×l} \ {0} defined by

(8)  deg(ξ) = (j(ξ), d(ξ)) := max{(j, ν) ∈ [l] × N^r; ξ_j(ν) ≠ 0}  for ξ = (ξ_1, ..., ξ_l) = Σ_{j,ν} ξ_j(ν) s^ν δ_j,

where δ_j, j = 1, ..., l, denotes the standard basis of F[s]^{1×l}. The degree set of the row module U := F[s]^{1×k} R = Σ_{i=1}^{k} F[s] R_{i−} of the matrix R is defined as

(9)  deg(U) := {deg(ξ); 0 ≠ ξ ∈ U},  [l] × N^r = deg(U) ⊎ Γ.

Buchberger's algorithm for modules yields a Gröbner basis G ⊂ U of U with the defining property

(10)  deg(U) = ∪_{g∈G} (deg(g) + N^r) ⊂ [l] × N^r,  deg(U)_j := ∪_{g∈G, j(g)=j} (d(g) + N^r) ⊂ N^r,  N^r = deg(U)_j ⊎ Γ_j

for j = 1, ..., l. A picture of the decomposition N^r = deg(U)_j ⊎ Γ_j is given in Fig. 1. The set Γ is called the canonical initial region of the system R ◦ w = v and F^Γ its space of initial data or state space. The Gröbner basis algorithm also furnishes

(11)  g = ζ^g R,  ζ^g ∈ F[s]^{1×k}, g ∈ G,

Figure 1. The j-th component deg(U)_j and its complementary initial region Γ_j below the "staircase".
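For a scalar toy example (l = 1, r = 2) the staircase picture of Fig. 1 can be computed with sympy's Gröbner basis routines; this is only an illustrative sketch, not the implementation discussed in the talk. Here U is generated by s_1² − s_2 and s_2², the degree set is generated by the leading monomials s_1² and s_2², and the normal form of s_1³ lands in the initial region Γ = {1, s_1, s_2, s_1 s_2}:

```python
import sympy as sp

s1, s2 = sp.symbols('s1 s2')

# Scalar case: the row module U is the ideal generated by the rows of R.
rows = [s1**2 - s2, s2**2]
G = sp.groebner(rows, s1, s2, order='grevlex')

# Division algorithm as in (14): xi = xi_norm + H_xi * R.  reduced() returns
# the cofactors with respect to G and the normal form xi_norm.
H_xi, xi_norm = sp.reduced(s1**3, list(G.exprs), s1, s2, order='grevlex')

# s1**3 = s1*(s1**2 - s2) + s1*s2, and s1*s2 lies under the staircase:
assert xi_norm == s1*s2
assert sp.expand(sum(h*g for h, g in zip(H_xi, G.exprs)) + xi_norm - s1**3) == 0
```

The second assertion checks the defining identity of the division: cofactors times generators plus normal form reproduces the input.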
i.e. the vectors in G as linear combinations of the rows of R, and a universal left annihilator L ∈ F[s]^{h×k} of R, with the property LR = 0 and, more precisely,

(12)  F[s]^{1×h} L = {η ∈ F[s]^{1×k}; ηR = 0}.

This signifies that the rows of L generate the module of relations or syzygies of the rows of R. Since ≤ is a well-order on [l] × N^r, standard linear algebra furnishes the direct decomposition

(13)  F[s]^{1×l} = F^{(Γ)} ⊕ U = (⊕_{(i,µ)∈Γ} F s^µ δ_i) ⊕ F[s]^{1×k} R.

The decomposition

(14)  ξ = ξ_norm + H_ξ R,  ξ_norm ∈ F^{(Γ)}, H_ξ ∈ F[s]^{1×k},

with respect to (13) is computed by the division algorithm for the Gröbner basis G, and ξ_norm is called the normal form of ξ. For ξ = s^ν δ_j, (j, ν) ∈ [l] × N^r, this specializes to

(15)  s^ν δ_j = (s^ν δ_j)_norm + H_{(j,ν)} R,
(s^ν δ_j)_norm = Σ_{(i,µ)∈Γ} H_s((j, ν), (i, µ)) s^µ δ_i,
H_{(j,ν)} := H_{s^ν δ_j} = Σ_{(i,µ)∈[k]×N^r} H((j, ν), (i, µ)) s^µ δ_i,

where

H_s := (H_s((j, ν), (i, µ)))_{(j,ν)∈[l]×N^r, (i,µ)∈Γ}  and  H := (H((j, ν), (i, µ)))_{(j,ν)∈[l]×N^r, (i,µ)∈[k]×N^r}

are row-finite ([l] × N^r) × Γ-, resp. ([l] × N^r) × ([k] × N^r)-, matrices. They are called the 0-input, resp. 0-state, transfer operator of the system R ◦ w = v. The decomposition (13) also implies that

(16)  M := F[s]^{1×l} / F[s]^{1×k} R = ⊕_{(i,µ)∈Γ} F s^µ δ_i,

i.e. that the (residue classes of the) s^µ δ_i, (i, µ) ∈ Γ, are an F-basis of the factor module M.

5. THE DISCRETE CAUCHY PROBLEM
For the formulation of the next theorem we need the following algebraic notions, which were introduced into systems theory in the paper [10]. The dual module of an F[s]-module M is the F[s]-module

(17)  D(M) := Hom_{F[s]}(M, (F[[z]], ◦)) ≅ M^∗ := Hom_F(M, F),  φ ↔ ϕ,  ϕ(x) = Σ_{ν∈N^r} φ(s^ν ◦ x) z^ν,
of all F[s]-linear maps from M into the signal space A = (F[[z]], ◦) or, equivalently, of F-linear functions on M; especially, by identification,

(18)  D(F[s]^{1×l}) = F[[z]]^l ∋ w = (w_j(ν))_{(j,ν)∈[l]×N^r},  w(s^ν δ_j) = w_j(ν).

The dual map of a linear map f : M′ → M is defined as

(19)  D(f) := Hom(f, F[[z]]) : D(M) → D(M′),  φ ↦ φf : M′ --f--> M --φ--> F[[z]],

in particular, for ◦R : F[s]^{1×k} → F[s]^{1×l}, η ↦ ηR, R ∈ F[s]^{k×l},

D(◦R) = R◦ : D(F[s]^{1×l}) = F[[z]]^l → F[[z]]^k.

A sequence

(20)  M′ --f--> M --g--> M″  is exact if ker(g) = im(f).

An arbitrary D-module A is called injective if the functor D(−) := Hom_D(−, A) is exact, i.e. preserves the exactness of any sequence (20). An injective module is a cogenerator if

(21)  M = 0 ⇔ Hom_D(M, A) = 0  for all D-modules M

or, equivalently, if every D-module M admits a D-linear embedding (= monomorphism) M → A^I for some index set I. In slightly other terms, a D-module A is an injective cogenerator if the functor D(−) := Hom_D(−, A) preserves and reflects exactness. This signifies that a sequence (20) is exact if and only if its dual sequence

(22)  D(M″) --D(g)--> D(M) --D(f)--> D(M′)

has the same property. For the Noetherian ring D = F[s] and its module F[[z]] (and likewise for any Noetherian ring D and D-module A) this is also equivalent to the property that any sequence

(23)  F[s]^{1×h} --◦L--> F[s]^{1×k} --◦R--> F[s]^{1×l}

is exact if and only if

F[[z]]^l --R◦--> F[[z]]^k --L◦--> F[[z]]^h

is exact. The exactness of the first sequence in (23) signifies that the matrix L ∈ F[s]^{h×k} is a universal left annihilator of R ∈ F[s]^{k×l}. The exactness of the second sequence says that

(24)  R ◦ w = v is solvable for given v ⇔ L ◦ v = 0.

L. Ehrenpreis [2] called the "only if" part of (23), i.e. the injectivity of A = F[[z]], the fundamental principle for the module A. In the case of partial differential equations J.F. Pommaret [15] calls L ◦ v = 0 an integrability condition (see
already [16]) and the solution w of R ◦ w = v a potential of v . Finally an injective D module A is called a large cogenerator if every finitely generated D module M admits a D linear embedding M → Ak , k ∈ N. A large injective cogenerator is especially a cogenerator as the terminology indicates and is easily seen. The following theorem and its proof are improved versions of those presented in [10] and [9]. The new computation of the transfer operator H given in (15) is much better suited for computer implementations than the one described in [9]. We first observe that the equations R ◦ w = v and LR = 0 imply L ◦ v = L ◦ R ◦ w = LR ◦ w = 0 ◦ w = 0.
(25)
Theorem 1. Let R ∈ F[s]^{k×l} be any matrix, L ∈ F[s]^{h×k} a universal left annihilator of R, H_s, H the transfer operators of the system R ◦ w = v from (15) and F^Γ its state space.

(i) For given v ∈ F[[z]]^k the system R ◦ w = v is solvable if and only if v satisfies L ◦ v = 0. This signifies that A = (F[[z]], ◦) is injective or satisfies the fundamental principle.

(ii) For v ∈ F[[z]]^k and arbitrary initial data x = (x_j(ν))_{(j,ν)∈Γ} ∈ F^Γ the canonical initial value or Cauchy problem

(26) R ◦ w = v, L ◦ v = 0 (integrability condition), w|_Γ = x or w_j(ν) = x_j(ν) for all (j,ν) ∈ Γ (initial condition)

has a unique solution w ∈ F[[z]]^l. In other terms: the map

{(w, v) ∈ A^{l+k} = A^l × A^k; R ◦ w = v} → F^Γ × ker(L◦ : A^k → A^h), (w, v) ↦ (w|_Γ, v),

is an isomorphism. The inverse of this map is the isomorphism

(27) (x, v) ↦ (w, v), w = H_s x + Hv, i.e.

w_j(ν) = Σ_{(i,µ)∈Γ} H_s((j,ν),(i,µ)) x_i(µ) + Σ_{(i,µ)∈[k]×N^r} H((j,ν),(i,µ)) v_i(µ),

where H_s x is the solution for zero input v = 0 and Hv is the solution for zero initial state x = 0. Since H_s and H are obtained constructively by the division algorithm, which is implemented in various computer algebra systems, equation (27) is a fast and convenient method to compute the solution of the Cauchy problem.
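The division step behind H_s and H is the multivariate reduction ξ = ξ_norm + H_ξ R of Eqs. (14) and (15). A minimal sketch of this single step with SymPy's `reduced`; the concrete polynomials (case l = 1, k = 2) are illustrative choices, not taken from the text:

```python
from sympy import symbols, reduced

s1, s2 = symbols('s1 s2')

# Divide xi = s1^3 by the rows g1, g2 of a sample matrix R,
# using the graded lexicographic order with s1 > s2.
g1 = s1**2 + s2
g2 = s1*s2 + s1
xi = s1**3

# reduced returns quotients H and the normal form r such that
# xi = H[0]*g1 + H[1]*g2 + r, with no term of r divisible by a leading term.
H, r = reduced(xi, [g1, g2], s1, s2, order='grlex')

assert (H[0]*g1 + H[1]*g2 + r - xi).expand() == 0
print(H, r)   # the quotients play the role of the transfer-operator row H_xi
```

The normal form r is the analogue of ξ_norm, and the quotient vector H is the analogue of H_ξ in (15).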
(iii) (F[[z]], ◦) is a large injective cogenerator; in particular every finitely generated (f.g.) F[s]-module M admits an F[s]-linear embedding M → F[[z]]^k for some k ∈ N. In P. Fuhrmann's [4] language: every f.g. module has a power series model.

(iv) Duality: The large injective cogenerator property induces the categorical duality M ↔ B between f.g. modules M and behaviors B given by

(28) M = F[s]^{1×l}/F[s]^{1×k}R ≅ Hom_E(B, (F[[z]], ◦)), B := {w ∈ F[[z]]^l; R ◦ w = 0} ≅ Hom_{F[s]}(M, (F[[z]], ◦)),

where E := End_{F[s]}((F[[z]], ◦)) is the ring of all F-linear endomorphisms of F[[z]] which commute with the shift operators. Without reference to any special representation a behavior B is defined as a f.g. E-submodule of some A^l, l ∈ N. Behaviors were introduced into one-dimensional systems theory by J.C. Willems [19].

Proof. We prove (i), (ii) and the cogenerator property of F[[z]], which is surprisingly simple. The more difficult embedding M → F[[z]]^k, k ∈ N, and the duality were shown in [10, Theorems 2.54 and 2.56].

(i) Elementary vector space theory shows that the dual space functor Hom_F(−, F) is exact. With the identifications from Eqs. (18) and (19) we see that the assumed exactness of the sequence

F[s]^{1×h} −(◦L)→ F[s]^{1×k} −(◦R)→ F[s]^{1×l}

implies that of the sequence

A^l −(R◦)→ A^k −(L◦)→ A^h, i.e. R ◦ A^l = im(R◦) = ker(L◦).

But this signifies that L ◦ v = 0 is necessary and sufficient for the existence of w with R ◦ w = v.

(ii) Application of Hom_F(−, F) to the exact sequence

F[s]^{1×k} −(◦R)→ F[s]^{1×l} −(can)→ M = F[s]^{1×l}/F[s]^{1×k}R → 0

induces the exact sequence

0 → M* −(can*)→ A^l −(R◦)→ A^k, hence B := ker(R◦) ≅ M*, w ↔ ϕ, w_j(ν) = ϕ(s^ν δ_j) for (j,ν) ∈ [l] × N^r.

Together with (16) we obtain the isomorphisms

B ≅ M* ≅ F^Γ, w ↔ ϕ ↔ x, w_j(ν) = ϕ(s^ν δ_j) = x_j(ν) for (j,ν) ∈ Γ, hence

B = {w ∈ A^l; R ◦ w = 0} ≅ F^Γ, w ↦ x = w|_Γ.
This signifies that every homogeneous Cauchy problem R ◦ w = 0, w|_Γ = x, has a unique solution. If now w¹ is any solution of R ◦ w¹ = v, L ◦ v = 0, according to (i), and if w² is the unique solution of R ◦ w² = 0, w²|_Γ = x − w¹|_Γ, then w := w¹ + w² is the unique solution of the Cauchy problem (26). Eq. (15) implies

w_j(ν) = (s^ν δ_j ◦ w)(0) = ((s^ν δ_j)_norm ◦ w)(0) + (H_{(j,ν)} ◦ R ◦ w)(0).

With R ◦ w = v and Eq. (15) we finally obtain the explicit solution formula

w_j(ν) = Σ_{(i,µ)∈Γ} H_s((j,ν),(i,µ)) (s^µ δ_i ◦ w)(0) + Σ_{(i,µ)∈[k]×N^r} H((j,ν),(i,µ)) (s^µ δ_i ◦ v)(0)
= Σ_{(i,µ)∈Γ} H_s((j,ν),(i,µ)) x_i(µ) + Σ_{(i,µ)∈[k]×N^r} H((j,ν),(i,µ)) v_i(µ)

since (s^µ δ_i ◦ w)(0) = w_i(µ) = x_i(µ) for (i,µ) ∈ Γ and (s^µ δ_i ◦ v)(0) = v_i(µ) for all (i,µ).

If the module M = F[s]^{1×l}/F[s]^{1×k}R is not zero, then F[s]^{1×k}R ⫋ F[s]^{1×l} implies deg(F[s]^{1×k}R) ⫋ [l] × N^r and Γ ≠ ∅, and hence D(M) ≅ M* ≅ F^Γ ≠ 0. This is the cogenerator property of (F[[z]], ◦). □
Remark 2. The situation is that of the preceding theorem. An F[s]-linear embedding f : M → F[[z]]^k induces the F[s]-isomorphism M ≅ (f(M), ◦). This shows that the shift action ◦ is the universal scalar multiplication or F[s]-module structure for finitely generated modules. P. Fuhrmann [4] calls (f(M), ◦) a power series or shift model of M. Fuhrmann's polynomial, rational and finite-dimensional models in [4, Def. 5.1.1, p. 112, Th. 6.3.1] have multidimensional counterparts, but are much too special for r ≥ 2. Even F-finite-dimensional F[s]-modules, corresponding to strongly autonomous behaviors, cannot be classified (a problem of wild representation type), and no structure theorems like the Smith form and its consequences in dimension one exist for r ≥ 2.

Algorithm 3. The data are those of the preceding theorem and of (10) and (11). The solutions w_j(ν) of the Cauchy problem can also be computed by transfinite induction or recursive programming on (j,ν) ∈ [l] × N^r. This algorithm was implemented in [9]. High storage requirements were a complicating factor.
1. case: (j,ν) ∈ Γ ⇒ w_j(ν) = x_j(ν).

2. case: (j,ν) ∈ deg(U) = ∪_{g∈G} (deg(g) + N^r), i.e.

(j,ν) = deg(g) + µ = (j(g), d(g) + µ), j = j(g), ν = d(g) + µ,

for some g = s^{d(g)} δ_{j(g)} + Σ_{(i,λ)<deg(g)} g_i(λ) s^λ δ_i ∈ G.

The algebra C⟨z⟩ of locally convergent power series consists of all a ∈ C[[z]] such that ∃C > 0, ∃R_1 > 0, …, ∃R_r > 0 such that for all µ ∈ N^r: |a(µ)| ≤ C R^µ.
The algebra C⟨z⟩ contains the algebra

(54) O(C^r) := {b ∈ C⟨z⟩; b is everywhere convergent} = {b ∈ C⟨z⟩; ∀R_1 > 0, …, ∀R_r > 0 ∃C > 0 ∀µ ∈ N^r: |b(µ)| ≤ C R^{−µ}}

of all entire or everywhere holomorphic functions on C^r. Finally its subalgebra of all entire holomorphic functions of exponential type is defined as

(55) O(C^r; exp) := {a ∈ O(C^r); ∃C > 0, R_1 > 0, …, R_r > 0 such that |a(z)| ≤ C exp(Σ_{ρ=1}^r R_ρ |z_ρ|) for all z}.
The Borel isomorphism (52) induces the C[s]-linear Borel isomorphism

(56) (C⟨z⟩, ◦) ≅ (O(C^r; exp), •), a = (a(µ))_{µ∈N^r} = Σ_µ a(µ) z^µ ↦ â := Σ_µ (a(µ)/µ!) z^µ.
For all the following theorems we choose a term order on [l] × N^r as in [16, Ch. VII, §104] or [17, Ch. IX, §106]. Let W ∈ R^{l×s} and Z ∈ R^{r×s}_{≥0} be matrices such that the map

ω : [l] × N^r → R^{1×s}, (j,ν) ↦ W_{j−} + νZ,

is injective. Riquier calls the numbers

W_{jk}, 1 ≤ j ≤ l, resp. Z_{ik}, 1 ≤ i ≤ r, for k = 1, …, s,

the k-th cote (mark after [17]) of the function w_j resp. the variable z_i. Via ω the lexicographic order on R^{1×s} induces the order on [l] × N^r by

(57) (i,µ) < (j,ν) ⇔ W_{i−} + µZ < W_{j−} + νZ.

It is easily seen that this is a term order, and that all standard term orders are of this type. For Theorem 15 below we additionally choose

(58) W_{j1} ∈ N for j = 1, …, l and Z_{i1} := 1 for i = 1, …, r, hence ω(j,ν)_1 = W_{j1} + ν_1 + ⋯ + ν_r ∈ N.
This definition implies that ([l] × N^r, ≤) is order isomorphic to N, i.e. standard inductions can be used on ([l] × N^r, ≤). This is used in the proof of Theorem 15, as it was in the proof of his existence theorem by C. Riquier.

Theorem 12 (The discrete Cauchy problem for (C⟨z⟩, ◦)). The data are those from Theorem 11 for the field F := C.

(i) Let w be the unique solution of the discrete Cauchy problem

R ◦ w = v, w|_Γ = x, L ◦ v = 0.

If v and x are convergent, i.e. v ∈ C⟨z⟩^k and x ∈ F^Γ ∩ C⟨z⟩^l, then so is w, i.e. w ∈ C⟨z⟩^l.

(ii) (C⟨z⟩, ◦) is a large injective cogenerator.

The lengthy proof of the preceding theorem relies on the recursion formula (30) and a lemma which is a variant of [17, §107]. The preceding theorem and the Borel isomorphism (56) imply the next one in the same fashion as Theorem 11 followed from Theorem 1.

Theorem 13 (The continuous Cauchy problem for (O(C^r; exp), •)). The data are those from Theorem 11 for the field F := C.

(i) Let ŵ be the unique solution of the continuous Cauchy problem

R(∂)ŵ = v̂, L(∂)v̂ = 0, (∂^ν ŵ_j)(0) = x_j(ν) for all (j,ν) ∈ Γ.
If v̂ and x̂ := Σ_{(j,ν)∈Γ} (x_j(ν)/ν!) z^ν δ_j are entire of exponential type, then so is the solution ŵ. This is applicable, for instance, if the support of x is finite, i.e. if only finitely many nonzero Taylor coefficients (∂^ν ŵ_j)(0) = x_j(ν) are prescribed by the initial data.

(ii) (O(C^r; exp), •) is a large injective cogenerator.

Remark 14. That (O(C^r; exp), •) satisfies the fundamental principle or is injective was already shown by L. Ehrenpreis [2, Ch. V, Example 2 on page 138] by means of a functional analytic method. Our proof is completely different, constructive and, we believe, simpler. P.S. Pedersen [13] conjectured the holomorphy of the solution for homogeneous equations with initial data of finite support. That (O(C^r; exp), •) is a large cogenerator was shown in [10] by a difficult proof.

Theorem 15 (The continuous Cauchy problem for (C⟨z⟩, •)). The preceding theorem holds for (C⟨z⟩, •) instead of (O(C^r; exp), •).

9. THE WORK OF C. RIQUIER [16]
We comment on the work of Riquier [16] from 1910 (also exposed in [8,18] and [17]) in retrospect, i.e. from the point of view of our present knowledge. Riquier considers even nonlinear systems of partial differential equations for (locally) convergent power series, i.e. for the function space (C⟨z⟩, •). However, we discuss only his formulation and solution of the Cauchy problem for linear systems with constant coefficients

(59) R(∂)ŵ = v̂, R ∈ C[s]^{k×l}, ŵ ∈ C⟨z⟩^l, v̂ ∈ C⟨z⟩^k,

which, in the present chapter, were treated in Theorem 15. The Cauchy problem involves three steps: (i) the choice of the initial condition, (ii) the solution of (59) by formal power series and (iii) the proof that the formal solution is indeed convergent. The proof of the convergence in Theorem 15 is a variant of the one given by Riquier for his existence theorem in [16, Ch. VII, §114] (see also [8, Ch. III], [18] and [17, Ch. IX, §§109–117]), but is more precise in our opinion since it uses more lemmas and formulas instead of words whose meaning is not always clear (at least to us). It is also simpler because we treat the linear case only. The corresponding proof of Theorem 12 is different. The discrete Cauchy problem was not treated by Riquier and his successors. The items (i) and (ii) concern the formal solutions only, for which the partial differential and partial difference equations are equivalent (see Theorem 11).
Therefore we transfer Riquier's theory to the discrete case and consider the difference system

(60) R ◦ w = v, R ∈ F[s]^{k×l}, w ∈ F[[z]]^l, v ∈ F[[z]]^k, F[[z]] = F^{N^r},

for any field F as in Theorem 1. According to Riquier we choose any term order (58), for instance the graded lexicographic one, and define

(61) U := F[s]^{1×k}R, N := deg(U), Γ := [l] × N^r \ N, Ñ := ∪_{i=1}^k (deg(R_{i−}) + N^r), Γ̃ := [l] × N^r \ Ñ, hence Ñ ⊆ N, Γ ⊆ Γ̃ and F^Γ ⊆ F^{Γ̃}.
The indices (j,ν) in Ñ, resp. Γ̃, are called principal, resp. parametric [16, §90, p. 169], and the restriction w|_{Γ̃} is called the initial determination of w. Riquier [16, p. IX] attributes this terminology to Méray (1880). His space F^{Γ̃} of initial determinations contains our state space F^Γ, and his initial value problem is defined as

(62) R ◦ w = v, w|_{Γ̃} = x̃ ∈ F^{Γ̃},

where v and x̃ are given and a solution w is sought. Let L be any universal left annihilator of R. According to (25) the solvability of R ◦ w = v implies L ◦ v = 0. If this is the case and if w is the unique solution of

R ◦ w = v, L ◦ v = 0, w|_Γ = x̃|_Γ

according to Theorem 1, then the Cauchy problem (62) has a solution if and only if w|(Γ̃ \ Γ) = x̃|(Γ̃ \ Γ). Hence (62) is uniquely solvable if and only if

L ◦ v = 0 and w|(Γ̃ \ Γ) = x̃|(Γ̃ \ Γ).

Corollary and Definition 16. The system R ◦ w = v is called completely integrable [16, p. 195] if the map

{w ∈ F[[z]]^l; R ◦ w = v} → F^{Γ̃}, w ↦ w|_{Γ̃},

is an isomorphism. This is the case if and only if L ◦ v = 0 and Γ = Γ̃ or, equivalently, if L ◦ v = 0 and if the rows of R are a Gröbner basis of U = F[s]^{1×k}R.

Riquier's main existence theorem [16, §115] says that passive orthonomic systems are completely integrable; for these, in particular, the preceding corollary holds. We are now going to define the notion of orthonomicity. For (j,ν) ∈ Ñ = ∪_{i=1}^k (deg(R_{i−}) + N^r) choose a decomposition

(63) (j,ν) = deg(R_{i−}) + µ and define ũ_{(j,ν)} := s^µ R_{i−} ∈ U = F[s]^{1×k}R with deg(ũ_{(j,ν)}) = (j,ν).
Since [l] × N^r is well-ordered, simple linear algebra yields the constructive decomposition

(64) F[s]^{1×l} = F^{(Γ̃)} ⊕ Ũ, F^{(Γ̃)} := ⊕_{(j,ν)∈Γ̃} F s^ν δ_j, Ũ := ⊕_{(j,ν)∈Ñ} F ũ_{(j,ν)},

in particular ξ = ξ_I + ξ_{II}, ξ_{II} = H̃_ξ R, H̃_ξ ∈ F[s]^{1×k}, and s^ν δ_j = (s^ν δ_j)_I + H̃_{(j,ν)} R, H̃_{(j,ν)} ∈ F[s]^{1×k}.

Definition 17. The system R ◦ w = v is called orthonomic [16, §104], [18, p. 303], [17, p. 144] if the degrees (j(i), d(i)) := deg(R_{i−}), i = 1, …, k, are pairwise distinct and if

R_{i−} ≡ lt(R_{i−}) := s^{d(i)} δ_{j(i)} mod F^{(Γ̃)} for i = 1, …, k,

where lt(g) denotes the leading term of g. The first condition is not contained in Riquier's original definition. Recall that if G is a Gröbner basis of U, the conditions deg(g) ∈ min_cw deg(U) and g ≡ lt(g) mod F^{(Γ)} characterize G as the unique reduced Gröbner basis of U.

Corollary 18. An orthonomic system R ◦ w = v is completely integrable if and only if L ◦ v = 0 and if the reduced Gröbner basis of F[s]^{1×k}R is contained among the rows of R.

The complete integrability of an orthonomic system R ◦ w = 0 is characterized in [16, §112] by a finite algorithm which we have not yet studied in detail. It is interesting to note that the algorithm makes essential use of the so-called cardinal indices (dérivées cardinales), which are the analogues of Buchberger's S-polynomials. Janet even characterizes the complete integrability of arbitrary systems R ◦ w = 0 by a finite algorithm [8, Ch. II, §8]. By Corollary 16 these algorithms can be used to test the Gröbner basis property of the rows of R. The finiteness of the algorithms is proven by means of Dickson's lemma in the same fashion as the finiteness of all algorithms in Gröbner basis theory. Moreover Janet [8, Ch. II, §12] describes a (finite) algorithm by which any system R ◦ w = 0 can be reduced to completely integrable form. This can be used to construct a Gröbner basis of F[s]^{1×k}R.

Assume finally that the system R ◦ w = v is orthonomic and consider, for (j,ν) ∈ Ñ, the decompositions from (64)

s^ν δ_j = (s^ν δ_j)_I + g_{(j,ν)} ∈ F^{(Γ̃)} ⊕ Ũ, (s^ν δ_j)_I = Σ_{(i,µ)∈Γ̃} H̃_s((j,ν),(i,µ)) s^µ δ_i,

g_{(j,ν)} := s^ν δ_j − Σ_{(i,µ)∈Γ̃} H̃_s((j,ν),(i,µ)) s^µ δ_i = H̃_{(j,ν)} R.

For i = 1, …, k we choose ũ_{(j(i),d(i))} := R_{i−} and obtain
s^{d(i)} δ_{j(i)} = (s^{d(i)} δ_{j(i)} − R_{i−}) + R_{i−} ∈ F^{(Γ̃)} ⊕ Ũ, ũ_{(j(i),d(i))} = R_{i−} ∈ Ũ and g_{(j(i),d(i))} = R_{i−}.

The equation R ◦ w = v implies

(65) g_{(j,ν)} ◦ w = H̃_{(j,ν)} ◦ v =: v_{(j,ν)} for all (j,ν) ∈ Ñ, especially

g_{(j(i),d(i))} ◦ w = R_{i−} ◦ w = v_i = v_{(j(i),d(i))} for i = 1, …, k, and

w_j(ν) = Σ_{(i,µ)∈Γ̃} H̃_s((j,ν),(i,µ)) x̃_i(µ) + v_{(j,ν)}(0) for (j,ν) ∈ Ñ.

The latter explicit equation is not contained in [16] and is proven as in Theorem 1, for Γ̃ instead of Γ. The systems R ◦ w = v and (65) have the same solutions. To compute the principal values w_j(ν), (j,ν) ∈ Ñ, from the parametric initial values w_i(µ) = x̃_i(µ), (i,µ) ∈ Γ̃, was an essential idea in Riquier's work and also in the proof of Theorem 1. This idea is due to Méray (1880) (see [16, Introduction]). Riquier and his successors thus made fundamental contributions to the initial value problem of partial differential equations and to those questions which today are treated in the framework of Gröbner basis theory. Janet's work is also used by V. Gerdt [5].

Example 19. Consider, for the graded lexicographic order on N^2 and l = 1, the simple homogeneous system

h_1 ◦ w = 0, h_2 ◦ w = 0, h_1 := s_1^2 + s_2, h_2 := s_1 s_2 + s_1,

hence Ñ = {(2,0), (1,1)} + N^2 and Γ̃ = ({0} × N) ∪ {(1,0)}.

Obviously the system is orthonomic. The reduced Gröbner basis of U = C[s]h_1 + C[s]h_2 is G = {h_1, h_2, h_3} with h_3 = s_2^2 + s_2, hence

deg(U) = {(2,0), (1,1), (0,2)} + N^2 and Γ = {(0,0), (1,0), (0,1)}.
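The Gröbner basis of Example 19 and the recursion of Algorithm 3 can be sketched in a few lines; the initial values x below are arbitrary sample data, and all variable names are ours:

```python
from functools import lru_cache
from sympy import symbols, groebner

s1, s2 = symbols('s1 s2')
h1, h2 = s1**2 + s2, s1*s2 + s1

# Reduced Groebner basis of U = C[s]h1 + C[s]h2 for graded lex, s1 > s2.
G = groebner([h1, h2], s1, s2, order='grlex')
print(G.exprs)   # should contain h3 = s2**2 + s2 as well

# Each g = s^{deg g} + (lower terms) yields the recursion
# w(nu) = -sum_lambda g_lambda * w(nu - deg g + lambda) for nu >= deg g.
basis = [((2, 0), {(0, 1): 1}),   # s1^2 + s2
         ((1, 1), {(1, 0): 1}),   # s1*s2 + s1
         ((0, 2), {(0, 1): 1})]   # s2^2 + s2
x = {(0, 0): 1, (1, 0): 2, (0, 1): 3}   # sample initial data on Gamma

@lru_cache(maxsize=None)
def w(n1, n2):
    if (n1, n2) in x:                  # 1. case: nu in Gamma
        return x[(n1, n2)]
    for (d1, d2), tail in basis:       # 2. case: nu in deg(U)
        if n1 >= d1 and n2 >= d2:
            return -sum(c * w(n1 - d1 + l1, n2 - d2 + l2)
                        for (l1, l2), c in tail.items())

print(w(2, 0), w(1, 1), w(0, 2))
```

Because G is a Gröbner basis, the value w(ν) is independent of which matching basis element the loop happens to pick first.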
Hence Γ is finite and the initial data which can be freely prescribed have the form x = x(0,0) + x(1,0)z_1 + x(0,1)z_2. The initial determinations according to Riquier have the form x̃ = z_1 f^{1,0} + f^{0,0}(z_2), where f^{1,0} ∈ C and f^{0,0} ∈ C[[z_2]]. The system is not completely integrable in Riquier's sense.

Riquier's nonlinear orthonomic systems have the form

∂^{d(i)} w_{j(i)} = f_i(…, ∂^ν w_j, …), i = 1, …, k,

where

(j,ν) < (j(i), d(i)) and (j,ν) ∉ Ñ := ∪_{i=1}^k ((j(i), d(i)) + N^r) for all arguments ∂^ν w_j in f_i
and where the f_i(…, z_{(j,ν)}, …) are convergent power series in the variables z_{(j,ν)}. In particular, these systems are "solved for the highest derivatives". The simple differential equation z^2 w′ + w = 0, with the solution w = exp(1/z) but no nonzero locally convergent solution, shows that, in contrast to the constant coefficient case, this latter condition is essential for obtaining convergent solutions. The solutions of arbitrary linear systems of partial differential equations with variable coefficients are only hyperfunctions in general (compare [3] for the case of ordinary differential equations).

10. THE DISCRETE CAUCHY PROBLEM FOR N^{r_1} × Z^{r_2}
In the last section of this chapter we extend Theorems 1 and 12 to the monoids N^{r_1} × Z^{r_2} instead of N^r. Without loss of generality and for simplicity of exposition we describe the results for the group M := Z^r. According to the papers [21] and [9] the discrete Cauchy problem over M := Z^r is reduced to one over the monoid

(66) M̃ := N^r × N^r = N^{2r}

to which Theorems 1 and 12 are directly applicable. The associated rings of operators are the monoid rings of the monoids M and M̃, viz.

(67) D := F[M] := F[s, s^{−1}] := F[s_1, …, s_r, s_1^{−1}, …, s_r^{−1}], the algebra of Laurent polynomials, and D̃ := F[M̃] = F[s, t] := F[s_1, …, s_r, t_1, …, t_r].

The signal space for D = F[s, s^{−1}] is the D-module

(68) A := F^M = Hom(M, F) ∋ a, a(µ) = a(s^µ), µ ∈ Z^r,

with the scalar multiplication

(f ◦ a)(g) = a(fg), (s^ν ◦ a)(µ) = a(µ + ν), f, g ∈ D, µ, ν ∈ M = Z^r.

The map

τ : M̃ → M, (µ, ν) ↦ µ − ν,

is a surjective monoid homomorphism. For ν ∈ M = Z^r we define

(69) ν_+ := (max(ν_i, 0))_{1≤i≤r} ∈ N^r and ν_− := (−ν)_+, hence ν = ν_+ − ν_− and |ν| := (|ν_1|, …, |ν_r|) = ν_+ + ν_−.

The map

σ : M → M̃, µ ↦ (µ_+, µ_−),

is a (non-homomorphic) section of τ, i.e. τσ = id_M. These maps induce inverse bijections
(70) im(σ) ⇄ M via τ and σ, where im(σ) = {(µ, ν) ∈ M̃; µ_i · ν_i = 0, 1 ≤ i ≤ r}.
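The decomposition (69) and the inverse bijections (70) are elementary to state in code; a minimal sketch in plain Python (the function names are ours):

```python
def sigma(nu):
    """Split nu in Z^r into (nu_plus, nu_minus) in N^r x N^r, Eq. (69)."""
    return (tuple(max(n, 0) for n in nu),
            tuple(max(-n, 0) for n in nu))

def tau(mu, nu):
    """Project (mu, nu) in N^r x N^r back to mu - nu in Z^r."""
    return tuple(m - n for m, n in zip(mu, nu))

nu = (3, -2, 0)
plus, minus = sigma(nu)
assert tau(plus, minus) == nu                        # tau o sigma = id_M
assert all(p * m == 0 for p, m in zip(plus, minus))  # sigma lands in im(sigma)
```

The componentwise condition µ_i · ν_i = 0 in the last assertion is exactly the description of im(σ) in (70).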
The maps σ and τ are extended to the maps

(71) [l] × M̃ ⇄ [l] × M, τ(j, (µ,ν)) := (j, τ(µ,ν)), σ(j, µ) := (j, σ(µ)),

with the same names, and again we have τσ = id_{[l]×M}. Let ϕ be the surjective algebra homomorphism

(72) ϕ : D̃ → D, s_i ↦ s_i, t_i ↦ s_i^{−1}, (s,t)^{(µ,ν)} = s^µ t^ν ↦ s^{µ−ν} = s^{τ(µ,ν)},

with the kernel

ker(ϕ) = Σ_{i=1}^r D̃(s_i t_i − 1).

The homomorphism ϕ is extended componentwise to matrices and vectors by

(73) ϕ : D̃^{k×l} → D^{k×l}, R̃ ↦ ϕ(R̃) := (ϕ(R̃_{i,j}))_{i,j}.

We choose a term order on [l] × M̃ and then, on [l] × M, the well-order induced by the injection σ, i.e.

(74) (i, µ) < (j, ν) :⇔ (i, µ_+, µ_−) < (j, ν_+, ν_−).

As in Eqs. (8) and (9) the well-order on [l] × M induces the degrees

(75) deg(ξ) ∈ [l] × M for 0 ≠ ξ ∈ D^{1×l} = ⊕_{(j,ν)∈[l]×M} F s^ν δ_j, deg(U) := {deg(ξ); 0 ≠ ξ ∈ U} for a subspace U ⊆ D^{1×l}, and Γ := [l] × M \ deg(U).

Finally the lattice M = Z^r admits the conic decomposition

(76) Z^r = M = ∪_{T⊆[r]} M_T, M_T := N^T ⊕ (−N)^{T′}, [r] = {1, …, r} = T ⊎ T′, M_T = {ν ∈ M; ν_i ≥ 0 for i ∈ T, ν_i ≤ 0 for i ∈ T′},

into the 2^r r-dimensional quadrants M_T.

Theorem 20. With the notations introduced above let U be a D-submodule of D^{1×l}, Ũ := ϕ^{−1}(U) ⊆ D̃^{1×l}, Γ̃ := [l] × M̃ \ deg(Ũ) and Γ := [l] × M \ deg(U). Then:

(1) D^{1×l} = F^{(Γ)} ⊕ U, ξ = ξ_norm + (ξ − ξ_norm), F^{(Γ)} := ⊕_{(j,ν)∈Γ} F s^ν δ_j.

(2) The order from (74) is a generalized term order with respect to the conic decomposition (76) according to [12], i.e.

(i, µ) ≤ (j, ν), ν, λ ∈ M_T ⇒ (i, µ + λ) ≤ (j, ν + λ), hence deg(s^λ ξ) = λ + deg(ξ) if 0 ≠ ξ ∈ D^{1×l} and λ, deg(ξ) ∈ M_T.

In general, the equation deg(s^λ ξ) = λ + deg(ξ) does not hold.

(3) σ(Γ) = Γ̃ and τ(Γ̃) = Γ.

(4) If G̃ ⊂ D̃^{1×l} is the reduced Gröbner basis of Ũ, then

G := {ϕ(g̃); g̃ ∈ G̃, deg(g̃) ∈ im(σ)}

is a Gröbner basis of U with respect to the generalized term order (74) according to [12], i.e.

deg(U) = ∪_{g∈G} {deg(s^µ g); µ ∈ M = Z^r}.

The first assertion of the theorem holds for any subspace U of D^{1×l} = ⊕_{(j,ν)∈[l]×M} F s^ν δ_j since [l] × M is well-ordered. The second is a reinterpretation of [21, proof of Theorem 4] and [9, (37)]. Propositions (3) and (4) are new and proven in [11]. In [21] the initial set was defined as Γ := τ(Γ̃) without reference to deg(U). The parts (2) and (4) of the preceding theorem reduce the Gröbner basis algorithms from [12] for the Laurent polynomial ring F[s, s^{−1}] to the standard and widely implemented algorithms for the polynomial algebra F[s, t]. The direct sum decomposition

D^{1×l} = F^{(Γ)} ⊕ U, ξ = ξ_norm + (ξ − ξ_norm),
of the preceding theorem can be explicitly computed as in Eqs. (14) and (15). For this purpose we assume that U is given as

(77) U = D^{1×k}R, R ∈ D^{k×l},

and choose R̃ ∈ D̃^{k×l} with ϕ(R̃) = R. This is easy since ϕ((s,t)^{σ(ν)}) = s^ν. From Eq. (72) we infer

(78) Ũ = ϕ^{−1}(U) = D̃^{1×(k+lr)} R̂ with R̂ := (R̃_I; R̃_{II}) ∈ D̃^{(k+lr)×l}, R̃_I := R̃, R̃_{II} := ((s_i t_i − 1)δ_j, 1 ≤ j ≤ l, 1 ≤ i ≤ r).
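For r = 1 and k = l = 1 the lift to F[s,t] in (77)/(78) and the polynomial Gröbner basis computation announced after Theorem 20 can be sketched with SymPy; the sample generator s − 2 + s^{−1} and the helper name `lift` are illustrative choices:

```python
from sympy import symbols, groebner, simplify

s, t = symbols('s t')

def lift(coeffs):
    """Lift a Laurent polynomial {nu: c} over F[s, 1/s] to F[s, t]
    via s^nu -> s^{nu_+} t^{nu_-}, so that phi (t -> 1/s) recovers it."""
    return sum(c * s**max(nu, 0) * t**max(-nu, 0)
               for nu, c in coeffs.items())

# Sample generator s - 2 + s^{-1} of an ideal U in F[s, s^{-1}].
R_tilde = lift({1: 1, 0: -2, -1: 1})          # = s - 2 + t

# phi(R_tilde) gives back the Laurent generator.
assert simplify(R_tilde.subs(t, 1/s) - (s - 2 + 1/s)) == 0

# U_tilde = phi^{-1}(U) is generated by R_tilde and s*t - 1, as in (78);
# its Groebner basis lives in the ordinary polynomial ring F[s, t].
G = groebner([R_tilde, s*t - 1], s, t, order='lex')
print(G.exprs)
```

This is the reduction promised by Theorem 20(4): all computation happens in F[s,t], where standard Gröbner implementations apply.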
Theorem 21 (Transfer operators for Z^r). The data are those from the preceding theorem and from Eqs. (77) and (78). For ξ ∈ D^{1×l} choose ξ̃ ∈ D̃^{1×l} with ϕ(ξ̃) = ξ and let

ξ̃ = ξ̃_norm + H̃_{ξ̃} R̂ ∈ D̃^{1×l} = F^{(Γ̃)} ⊕ Ũ, H̃_{ξ̃} = (H̃_{ξ̃,I}, H̃_{ξ̃,II}) ∈ D̃^{1×(k+lr)},

be the direct decomposition for [l] × M̃ = [l] × N^{2r} according to (14). Then

(79) ξ = ξ_norm + H_ξ R ∈ D^{1×l} = F^{(Γ)} ⊕ U, where ξ_norm := ϕ(ξ̃_norm) ∈ F^{(Γ)} and H_ξ := ϕ(H̃_{ξ̃,I}) ∈ D^{1×k}.

In particular, for ξ = s^ν δ_j, (j,ν) ∈ [l] × M, and ξ̃ := (s,t)^{σ(ν)} δ_j = s^{ν_+} t^{ν_−} δ_j, the representations

(80) ξ̃_norm = Σ_{(i,µ̃)∈Γ̃} H̃_s((j, σ(ν)), (i, µ̃)) (s,t)^{µ̃} δ_i and H̃_{ξ̃,I} = Σ_{(i,µ̃)∈[k]×M̃} H̃((j, σ(ν)), (i, µ̃)) (s,t)^{µ̃} δ_i

from (15) furnish

(81) (s^ν δ_j)_norm = Σ_{(i,µ)∈Γ} H_s((j,ν),(i,µ)) s^µ δ_i with H_s((j,ν),(i,µ)) := H̃_s((j, σ(ν)), (i, σ(µ)))

and

H_{(j,ν)} := H_{s^ν δ_j} = Σ_{(i,µ)∈[k]×M} H((j,ν),(i,µ)) s^µ δ_i with H((j,ν),(i,µ)) := Σ_{µ̃, τ(µ̃)=µ} H̃((j, σ(ν)), (i, µ̃)).

Summing up we obtain the analogue of (15) for (j,ν) ∈ [l] × M:

(82) s^ν δ_j = (s^ν δ_j)_norm + H_{(j,ν)} R = Σ_{(i,µ)∈Γ} H_s((j,ν),(i,µ)) s^µ δ_i + H_{(j,ν)} R, H_{(j,ν)} = Σ_{(i,µ)∈[k]×M} H((j,ν),(i,µ)) s^µ δ_i.

The row-finite infinite matrices H_s, resp. H, are again called the 0-input, resp. the 0-state, transfer operator of the difference system

R ◦ w = v, R ∈ D^{k×l}, w ∈ A^l, v ∈ A^k.

By the given constructions these matrices can be computed by means of the Gröbner basis algorithm for the polynomial algebra F[s, t], which is implemented in most computer algebra systems.

Proof. The equations for ξ are obtained from those for ξ̃ by application of ϕ. Recall that ϕ((s,t)^{µ̃}) = s^{τ(µ̃)}. From
ϕ(R̃_I) = ϕ(R̃) = R and ϕ(R̃_{II}) = 0, and

ξ̃ = ξ̃_norm + H̃_{ξ̃} R̂ = ξ̃_norm + (H̃_{ξ̃,I}, H̃_{ξ̃,II})(R̃_I; R̃_{II}) = ξ̃_norm + H̃_{ξ̃,I} R̃_I + H̃_{ξ̃,II} R̃_{II},

we infer

ξ = ϕ(ξ̃) = ϕ(ξ̃_norm) + ϕ(H̃_{ξ̃,I}) ϕ(R̃_I) = ξ_norm + H_ξ R.

Application of ϕ to the first equation of (80) implies

ξ_norm = Σ_{(i,µ̃)∈Γ̃} H̃_s((j, σ(ν)), (i, µ̃)) s^{τ(µ̃)} δ_i.

Since the maps τ and σ between Γ̃ and Γ are inverses of each other, this is exactly the first equation of (81); in particular ξ_norm ∈ F^{(Γ)}. Application of ϕ to the second equation of (80) yields

H_{(j,ν)} = H_ξ = ϕ(H̃_{ξ̃,I}) = Σ_{(i,µ̃)∈[k]×M̃} H̃((j, σ(ν)), (i, µ̃)) s^{τ(µ̃)} δ_i = Σ_{i=1}^k Σ_{µ∈M} (Σ_{µ̃∈M̃, τ(µ̃)=µ} H̃((j, σ(ν)), (i, µ̃))) s^µ δ_i = Σ_{(i,µ)∈[k]×M} H((j,ν),(i,µ)) s^µ δ_i,

i.e. the second equation of (81). □
For R ∈ D^{k×l} = F[s, s^{−1}]^{k×l} there is a ν ∈ N^r such that s^ν R ∈ F[s]^{k×l}. Let L ∈ F[s]^{h×k} be a universal left annihilator of s^ν R. It is then a universal left annihilator of R too, i.e.

(83) F[s]^{1×h} L = {η ∈ F[s]^{1×k}; ηR = 0} and F[s, s^{−1}]^{1×h} L = {η ∈ F[s, s^{−1}]^{1×k}; ηR = 0}.

Theorem 22 (The discrete Cauchy problem for Z^r). For the data of the preceding theorem and of (83) the propositions of Theorem 1 hold. The non-recursive computation of the transfer operators H_s and H and of the solutions of the Cauchy problem is the new feature of this theorem compared to [21] and [9].
Modulo the preceding theorem the proof is the same as that of Theorem 1. Convergent signals in A are introduced in the following manner.

Definition 23. For µ ∈ Z^r let |µ| := (|µ_1|, …, |µ_r|). A signal b ∈ A = C^M is convergent if and only if there are C ∈ R_{>0} and ρ ∈ R^r_{>0} such that |b(µ)| ≤ C ρ^{|µ|} for all µ ∈ M. The D-submodule of convergent signals in A is denoted by C.

Example 24. Consider the simplest case r = 1 and M = Z. A sequence a = (a(µ))_{µ∈Z} is convergent if there are C > 0 and ρ > 0 such that |a(µ)| ≤ C ρ^{|µ|} for all integers µ or, equivalently, if both power series

a_+ := Σ_{µ=0}^∞ a(µ) z^µ and a_− := Σ_{µ=0}^∞ a(−µ) z^µ

are convergent (near z = 0). The convergence of the second series also signifies that the series Σ_{µ=−∞}^0 a(µ) z^µ converges near z = ∞. Convergence does not, however, signify that the Laurent series ā := Σ_{µ=−∞}^∞ a(µ) z^µ converges in U \ {0} for some open neighbourhood U of zero, so that ā would be a holomorphic function with zero as an isolated singularity.

Theorem 25. The situation is that of Theorems 20, 21 and 22. On [l] × M̃, resp. [l] × M, we consider the well-orders from (57), resp. from (74).
(i) Let w ∈ A^l be the unique solution of the Cauchy problem

R ◦ w = v, w|_Γ = x, L ◦ v = 0,

for given initial data x ∈ C^Γ and v ∈ A^k. If v and x are convergent, i.e. v ∈ C^k and x ∈ C^l ∩ F^Γ, then so is w, i.e. w ∈ C^l.

(ii) The module (C, ◦) is a large injective cogenerator.

The proof proceeds by reduction of the propositions to those of Theorem 12.

REFERENCES

[1] Cox D., Little J., O'Shea D. – Ideals, Varieties and Algorithms, Springer-Verlag, Berlin, 1996.
[2] Ehrenpreis L. – Fourier Analysis in Several Complex Variables, Wiley-Interscience, New York, 1970.
[3] Fröhler S., Oberst U. – Continuous time-varying linear systems, Systems Control Lett. 35 (1998) 97–110.
[4] Fuhrmann P. – A Polynomial Approach to Linear Algebra, Springer-Verlag, Berlin, 1996.
[5] Gerdt V.P. – Involutive methods applied to algebraic and differential systems, in: Hanzon B., Hazewinkel M. (Eds.), Constructive Algebra and Systems Theory, Royal Netherlands Academy of Arts and Sciences, 2006, this volume.
[6] Gregor J. – Convolutional solutions of partial difference equations, Math. Control, Signals, Systems 2 (1991) 205–215.
[7] Hörmander L. – An Introduction to Complex Analysis in Several Variables, Van Nostrand, Princeton, 1966.
[8] Janet M. – Sur les systèmes d'équations aux dérivées partielles, J. Math. Pures Appl. 3 (1920) 65–151.
[9] Kleon S., Oberst U. – Transfer operators and state spaces for discrete multidimensional linear systems, Acta Appl. Math. 57 (1999) 1–82.
[10] Oberst U. – Multidimensional constant linear systems, Acta Appl. Math. 20 (1990) 1–175.
[11] Oberst U., Pauer F. – The constructive solution of linear systems of partial difference and differential equations with constant coefficients, Multidimensional Systems and Signal Processing 12 (2001) 253–308.
[12] Pauer F., Unterkircher A. – Gröbner bases for ideals in monomial algebras and their application to systems of difference equations, Appl. Algebra Engrg. Comm. Comput. 9 (1999) 271–291.
[13] Pedersen P.S. – Basis for power series solutions to linear, constant coefficient partial differential equations, Adv. Math. 141 (1999) 155–166.
[14] Pillai H., Wood J., Rogers E. – Constructive multidimensional systems theory with applications, in: Hanzon B., Hazewinkel M. (Eds.), Constructive Algebra and Systems Theory, Royal Netherlands Academy of Arts and Sciences, 2006, this volume.
[15] Pommaret J.F. – Partial Differential Equations and Group Theory, Kluwer Academic Publishers, Dordrecht, 1994.
[16] Riquier C. – Les systèmes d'équations aux dérivées partielles, Gauthier-Villars, Paris, 1910.
[17] Ritt J.F. – Differential Equations from the Algebraic Standpoint, Amer. Math. Soc., New York, 1932.
[18] Thomas J.M. – Riquier's existence theorem, Ann. of Math. 30 (1929) 285–310.
[19] Willems J.C. – From time series to linear systems. Parts I and II, Automatica 22 (1986) 561–580 and 675–694.
[20] Zampieri S. – A solution of the Cauchy problem for multidimensional discrete linear shift-invariant systems, Linear Algebra Appl. 202 (1994) 143–162.
[21] Zerz E., Oberst U. – The canonical Cauchy problem for linear systems of partial difference equations with constant coefficients over the complete r-dimensional integral lattice Z^r, Acta Appl. Math. 31 (1993) 249–273.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Regular singularities and stable lattices
A.H.M. Levelt Radboud University, Nijmegen, The Netherlands
ABSTRACT. Let V be a C{x}[1/x]-vector space of finite dimension, ∇ a regular singular connection on V and Λ a C{x}-lattice in V. In [1] and [2] E. Corel proves, among other things, the existence of a largest ∇_{x d/dx}-stable sublattice L of Λ. He also describes algorithms for computing L in special cases. The present chapter gives a general algorithm. For the convenience of the reader E. Corel's results and proofs are summarized.*
1. INTRODUCTION

We start by recalling some classical results on ordinary linear differential equations with complex analytic coefficients. Throughout this chapter – unless otherwise stated – O will denote the ring C{x} of convergent power series in x with complex coefficients, whereas K is the field of fractions C{x}[1/x] of O. For an equation of the kind

(1) y^{(n)} + a_{n−1} y^{(n−1)} + ⋯ + a_0 y = 0

(n ∈ N, y^{(k)} denoting the k-th derivative of the "unknown" analytic function y with respect to the argument x), a well-known theorem by A. Cauchy states that, in case all a_i belong to O, all solutions in a neighborhood of x = 0 belong to O and form an n-dimensional vector space over C.

* In 2004 appeared E. Corel's paper On Fuchs' relation for linear differential systems, Compositio Math. 140 (2004) 1367–1398. Among many other interesting results it contains an alternative algorithm for computing L.
For a_i ∈ K not all belonging to O, the situation is much more complicated. (1) is said to have a singularity at x = 0. Here are some examples:

• √x is a solution of y″ + ¼ x^{−2} y = 0.
• log(x) is a solution of y″ + x^{−1} y′ = 0.
• exp(1/x) is a solution of y′ + x^{−2} y = 0.

A (multivalued) solution y of (1) in a neighborhood of x = 0 has moderate growth at 0 if the following condition is satisfied: for any open sector S at 0 there exist C ∈ R, N ∈ Z such that

|y| ≤ C|x|^N for x ∈ S and small |x|.

(1) is said to have a regular singularity at 0 if all solutions have moderate growth at 0. The notions of moderate growth at a, a ∈ C or a = ∞, are defined in an evident way.

Example 1. x(1 − x)y″ + (c − (a + b + 1)x)y′ − ab y = 0, where a, b, c ∈ C, has regular singularities at 0, 1, ∞.

Example 2. x² y′ + y = 0. Solution y = exp(1/x). Growth not moderate: irregular singularity at 0.

Theorem 1 (L. Fuchs, 1866). Equation (1) has a regular singularity at 0 iff

v(a_i) ≥ −n + i for all i ∈ {0, 1, …, n − 1}.

(For f ∈ K, v(f) denotes the order of f at 0.)

Assume that (1) has a regular singularity at 0. Define b_i = x^{n−i} a_i. Then b_i ∈ O by Fuchs's theorem. The equation in ε

(2) ε(ε − 1) ⋯ (ε − n + 1) + b_{n−1}(0) ε(ε − 1) ⋯ (ε − n + 2) + ⋯ + b_2(0) ε(ε − 1) + b_1(0) ε + b_0(0) = 0

is called the indicial equation. If (2) has solutions ε_1, ε_2, …, ε_n not differing by integers, then there exists a fundamental system of solutions of (1) of the form

y_1 = x^{ε_1} u_1, …, y_n = x^{ε_n} u_n,

where u_1, …, u_n ∈ O and u_i(0) ≠ 0 for all i. (In general there is a more complicated fundamental system involving log terms.) The solutions of (2) are called the exponents at 0 of (1).
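For n = 2 the indicial equation (2) reads ε(ε − 1) + b_1(0)ε + b_0(0) = 0. A small SymPy check for the hypergeometric equation of Example 1 (the variable names are ours):

```python
from sympy import symbols, limit, roots

x, eps, a, b, c = symbols('x epsilon a b c')

# Coefficients a1, a0 of y'' + a1*y' + a0*y = 0 for the hypergeometric equation.
a1 = (c - (a + b + 1)*x) / (x*(1 - x))
a0 = -a*b / (x*(1 - x))

b1 = limit(x * a1, x, 0)      # b1(0)
b0 = limit(x**2 * a0, x, 0)   # b0(0)

indicial = eps*(eps - 1) + b1*eps + b0
print(roots(indicial, eps))   # the exponents at 0
```

The computed exponents 0 and 1 − c agree with the list in Example 3 below.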
Theorem 2 (Fuchs's relation).

(3) Σ_{s∈P¹} (Σ_{i=1}^n ε_i^s − ½ n(n − 1)) = −n(n − 1),

where ε_1^s, …, ε_n^s are the exponents at x = s. Note that the exponents at s are 0, 1, …, n − 1 when (1) is regular (i.e. not singular) at s, so that only finitely many terms of the sum are nonzero.

Example 3. The hypergeometric differential equation has regular singularities at 0, 1, ∞ and nowhere else. One easily computes the exponents and finds:

exponents at 0: 0, 1 − c
exponents at 1: 0, c − a − b
exponents at ∞: a, b

And Fuchs's relation comes down to

(ε_1^0 + ε_2^0 − 1) + (ε_1^1 + ε_2^1 − 1) + (ε_1^∞ + ε_2^∞ − 1) = (−c) + (c − a − b − 1) + (a + b − 1) = −2.

2. FUCHS'S RELATION FOR FUCHSIAN SYSTEMS
Consider the homogeneous linear first-order system

(4)  y′ = Ay,

where

A = ( A_{1,1} ··· A_{1,n} ; . . . ; A_{n,1} ··· A_{n,n} ),  A_{i,j} ∈ K,

and y = (y_1, . . . , y_n)^T. Here again moderate growth and regular singularity can be defined in an obvious way. One has

v(A) ≥ −1  ⇒  regular singularity at 0.
(Here v(A) means inf{v(A_{i,j}) | 1 ≤ i, j ≤ n}.) The converse is false. Counterexample:

( y_1′ ; y_2′ ) = ( 0  x^{−2} ; 0  0 ) ( y_1 ; y_2 )

has the following fundamental system of solutions:

( 1 ; 0 ),  ( −x^{−1} ; 1 ).

Can Fuchs's relation (3) be generalized to systems on P¹, i.e. (4) where all A_{i,j} are in C(x)? The answer is 'yes' for Fuchsian systems, i.e. systems with v_s(A) ≥ −1 for all s ∈ P¹. For this one needs exponents for Fuchsian systems. In the local situation (A_{i,j} ∈ K, v(A_{i,j}) ≥ −1) one may define the exponents as the eigenvalues of (xA)_{x=0} or, what comes down to the same, the eigenvalues of the residue of A dx at x = 0. When these exponents ε_1, . . . , ε_n do not differ by integers an easy theorem states that there exists a fundamental system of solutions U x^ε, where U ∈ Gl_n(O) and x^ε is the diagonal matrix with diagonal x^{ε_1}, . . . , x^{ε_n}.

Theorem 3. Let (4) be a Fuchsian system. Then

(5)  Σ_{s∈P¹} Σ_{i=1}^n ε_i^s = 0.

The proof is based on the residue theorem on P¹ in the form

Σ_{s∈P¹} res_s Tr(A) dx = 0.
In his thesis [1,2] Eduardo Corel has studied Fuchs's relations for systems on P¹ with regular singularities, but not necessarily of Fuchsian type. The exponents he needed were taken from [4] (Levelt's Ph.D. thesis), and he called them Levelt's exponents. He found a nice new interpretation of these exponents in purely algebraic terms – we shall call them Corel's exponents – and proved Fuchsian inequalities for systems on P¹ with regular singularities (Corel's Theorem 1). These inequalities reduce to (5) in the Fuchsian case. We shall present an overview of Corel's results, including sketches of proofs. For this Corel's exponents are the convenient tool. As a second main result Corel proves that the exponents in the sense of Levelt, resp. Corel, coincide (Corel's Theorem 2). This question will not be touched upon in the present chapter. Next we introduce the convenient language of connections.

3. CONNECTIONS

In this section K denotes C{x}[1/x] or C(x), V a K-vector space of finite dimension n. A connection ∇ on V is a K-linear map τ → ∇_τ, where τ is a C-derivation on K
and ∇_τ : V → V a C-linear map satisfying the Leibniz rule, i.e.

∇_τ(a v) = τ(a) v + a ∇_τ(v)  for all a ∈ K, v ∈ V.

For such ∇ and any K-basis e = (e_1, . . . , e_n) of V one has uniquely determined A_{i,j} ∈ K such that

∇_τ(e_i) = − Σ_{j=1}^n A_{j,i} e_j.

The matrix (A_{j,i}) is called the matrix of ∇_τ with respect to e; notation: M(∇_τ, e). For y ∈ V with coordinates y_1, . . . , y_n w.r.t. e the map ∇_τ in coordinates is (in evident matrix notation):

(y_1, . . . , y_n)^T  →  (τ(y_1), . . . , τ(y_n))^T − A (y_1, . . . , y_n)^T,  A = (A_{i,j}).

One has (with τ = d/dx)

y solution of y′ = A y  ⟺  ∇_τ(y) = 0.

Let f = (f_1, . . . , f_n) = eT, T ∈ Gl_n(K), be another basis of V. Then

M(∇_τ, f) = T^{−1} M(∇_τ, e) T − T^{−1} τ(T).
For K = C{x}[1/x] we say that ∇ has a regular singularity at 0 when y′ = A y has a regular singularity, where A = M(∇_{d/dx}, e) and e is some basis of V. Note that this notion is independent of the choice of e. An O-lattice in V is a free O-submodule of rank n; equivalently, an O-submodule of finite type generating V as a K-vector space.

Theorem 4. Equivalent conditions:
(i) ∇ has a regular singularity at 0.
(ii) There exists a ∇_{xd/dx}-stable lattice in V.
(iii) On some K-basis of V the matrix of ∇_{d/dx} has pole order at most 1.

The only hard point in this theorem is the implication (i) ⇒ (ii). A proof is given in [3].

Final remark. The above notion of lattice also makes sense with O replaced by C[x] and K by C(x). In that case regular singularities at points s ∈ P¹ can be defined in an obvious way.
4. E. C O R E L ’ S
RESULTS
In the subsequent Theorems 1, 5, θ = xd/dx , V is a K vector space, dimC (V ) = n, ∇ a connection on V with a regular singularity at 0, a lattice in V and e an O basis of . is called ∇θ stable, or shortly stable, when ∇θ () ⊂ . Obviously, the sum of two stable sublattices of a given lattice is again a stable sublattice. Since on the other hand every lattice in V contains stable sublattices (use Theorem 4) this leads immediately to: Proposition 1. There exists a largest ∇θ stable sublattice L of (named Levelt’s lattice by Corel). ¯ = L /xL . ∇θ induces a Clinear transformation δ in the Cvector space We shall call the eigenvalues of δ Corel’s exponents w.r.t. . Following Corel we call δ the residue of ∇ w.r.t. L ; notation resL (∇). A justification: resL (∇) = res0 (A dx), where A = M(∇, f), f being a O basis of L .
Theorem 5. The following inequalities hold:

(6)  r ≤ [Λ : Λ_L] ≤ (1/2) n(n − 1) r,

where r is the Poincaré rank of ∇ with respect to Λ, i.e.

r = −v(M(∇_θ, e)) if v(M(∇_θ, e)) ≤ 0, and r = 0 otherwise.

([Λ : Λ_L] is the index of Λ_L in Λ and can be defined as dim_C(Λ/Λ_L).) A proof of the theorem can be based upon

Proposition 2. Let M be a ∇_θ-stable sublattice of Λ, let k_1 ≤ k_2 ≤ ··· ≤ k_n be the elementary divisors of M in Λ and let r be the Poincaré rank of ∇ w.r.t. Λ. Finally, let m be maximal with the property

k_1 ≤ r,  k_2 ≤ k_1 + r,  . . . ,  k_m ≤ k_{m−1} + r.

Assume that m < n. Then there exists a ∇_θ-stable sublattice N of Λ strictly containing M.

Sketch of proof. There exist bases a of Λ, b of M such that b_i = x^{k_i} a_i for all i and 0 ≤ k_1 ≤ ··· ≤ k_n. Define

c = (b_1, . . . , b_m, x^{−1} b_{m+1}, . . . , x^{−1} b_n).

Define N as the lattice generated by c. Then N has the desired properties, as the reader may verify. □
As an immediate consequence of the latter proposition we have, in the case M = Λ_L,

k_i ≤ k_{i−1} + r  for all i, 1 ≤ i ≤ n,

and obviously k_1 = 0. It follows that k_i ≤ (i − 1)r for all i, whence

[Λ : Λ_L] = k_1 + ··· + k_n ≤ (1/2) n(n − 1) r.

On the other hand r ≤ [Λ : Λ_L] for obvious reasons. So we have proved (6).

Proposition 3 (Theorem 1 of Corel). Let A be an n × n matrix with entries in C(x) and let y′ = A y have regular singularities everywhere on P¹. Then

(7)  −(1/2) n(n − 1) h(A) ≤ Σ_{s∈P¹} Σ_{i=1}^n ε_i^s ≤ −h(A),

where ε_1^s, . . . , ε_n^s are Corel's exponents at s ∈ P¹,

h(A) = Σ_{s∈P¹} r_s,  r_s = sup{0, −v_s(A dx) − 1},

and r_s is the Poincaré rank at s ∈ P¹. This proposition follows easily (this is left to the reader) from the local version:

Proposition 4. Let ∇ have a regular singularity at 0. Let ε_1, . . . , ε_n be Corel's exponents, r the Poincaré rank w.r.t. Λ and A the matrix of ∇_{d/dx} w.r.t. a basis of Λ. Then

(8)  −(1/2) n(n − 1) r ≤ Σ_{i=1}^n ε_i − res_0 Tr(A dx) ≤ −r.

Proof. There exist bases a of Λ, resp. b of Λ_L, such that b = aT, where T is the diagonal matrix with entries x^{k_1}, . . . , x^{k_n}, k_1, . . . , k_n being the elementary divisors of Λ/Λ_L. Define A = M(∇_{d/dx}, a), B = M(∇_θ, b). Then xA = M(∇_θ, a) and

B = T^{−1} x A T − T^{−1} θ(T).

Hence

Tr(B) = Tr(xA) − Tr(T^{−1} θ(T)) = Tr(xA) − (k_1 + ··· + k_n).

Substituting x = 0 in the latter identity we find

ε_1 + ··· + ε_n = Tr(B_{x=0}) = res_0 Tr(A dx) − (k_1 + ··· + k_n) = res_0 Tr(A dx) − [Λ : Λ_L],

to which we apply Theorem 5. This leads to (8). □
5. COMPUTING LEVELT'S LATTICES

In this section we are going to compute the largest ∇_θ-stable sublattice Λ_L of a lattice Λ in a K-vector space V with regular singular connection ∇. Again θ is the derivation x d/dx of K. The first step is the construction of a ∇_θ-stable lattice in V containing Λ. This has been explained in [3]:

Λ̃ = Λ + ∇_θ(Λ) + ··· + ∇_θ^{n−1}(Λ)

is such a stable lattice. An algorithm can be found in [5]. For sufficiently large i ∈ N one has x^i Λ̃ ⊂ Λ. Let i_0 be the smallest i with this property. Define M = x^{i_0} Λ̃. Then M is a stable sublattice of Λ and x^{−1}M ⊄ Λ. Note that x^{−1}M is ∇_θ-stable too. Hence ∇_θ induces a C-linear map δ in the n-dimensional C-vector space M̄ = x^{−1}M/M. Let φ : x^{−1}M → M̄ be the canonical map. Now we are going to look at sublattices N of x^{−1}M containing M. φ maps N onto a C-subspace φ(N) of M̄. The situation is depicted in the following diagram:

(9)  M ⊂ N ⊂ x^{−1}M  (with ∇_θ acting on x^{−1}M),  φ(N) ⊂ M̄  (with δ acting on M̄),

φ being the canonical projection. Important and easy fact: N → φ(N) is a bijection of the sublattices of x^{−1}M containing M onto the C-subspaces of M̄, respecting inclusions. Moreover, ∇_θ-stable N correspond to δ-stable subspaces of M̄.

Let us look now at Λ ∩ x^{−1}M.

Claim. If M is the largest ∇_θ-stable sublattice of Λ ∩ x^{−1}M, then M = Λ_L.

Proof. Let P be a stable sublattice of Λ. We must prove P ⊂ M. Replacing P by P + M one may assume that P contains M. Take the smallest k ∈ N such that x^k P ⊂ M. It is sufficient to prove k = 0. Suppose, on the contrary, that k ≥ 1. Then x^{k−1} P ⊂ x^{−1}M. Also x^{k−1} P ⊂ Λ. Hence x^{k−1} P ⊂ Λ ∩ x^{−1}M. Since x^{k−1} P is also stable, it must be contained in M, contradicting the minimality of k. Our claim has been proved. □

Next write F = φ(Λ ∩ x^{−1}M). F is a C-subspace of M̄. As we shall see (cf. Proposition 5 below) there exists a largest δ-stable subspace F_L of F. By the above fact we know then that M_1 = φ^{−1}(F_L) is the largest ∇_θ-stable sublattice of Λ ∩ x^{−1}M.
The new situation is depicted in the diagram:

(10)  M ⊂ M_1 = φ^{−1}(F_L) ⊂ Λ ∩ x^{−1}M ⊂ x^{−1}M,  F_L ⊂ F ⊂ M̄ = x^{−1}M/M,

where φ maps the first row onto the second and δ acts on M̄. Now there are two possibilities:

(a) M_1 = M. Then M is the largest stable sublattice of Λ ∩ x^{−1}M and, by the above claim, also the largest stable sublattice Λ_L of Λ.

(b) M_1 ≠ M. Then M_1 is a stable sublattice of Λ strictly containing M. In this case we repeat the construction leading from M to M_1. The result is a stable module M_2 which is either the largest stable submodule of Λ or a stable submodule of Λ strictly containing M_1.

It will be clear now that after a finite number of steps one finds the largest stable submodule Λ_L of Λ.

Proposition 5. Let E be a C-vector space of finite dimension n, δ a linear transformation of E and F ⊂ E a subspace. Define

F_L = ⋂_{i=0}^∞ (δ^i)^{−1}(F).

Then F_L is the largest δ-stable subspace of F. The elementary proof is left as an exercise. Note that F_L = ⋂_{i=0}^n (δ^i)^{−1}(F).

Algorithm LSSL for computing Λ_L.
Input: V, ∇ with regular singularity at 0, lattice Λ ⊂ V.
Output: Λ_L.
Step 1. P := Λ + ∇_θ(Λ) + ··· + ∇_θ^{n−1}(Λ).
Step 2. Compute the smallest k ∈ Z such that x^k P ⊂ Λ. Put M := x^k P. Compute δ induced by ∇_θ in M̄ = x^{−1}M/M.
Step 3.
  do
    F := φ(Λ ∩ x^{−1}M)
    F_L := lsss(M̄, δ, F)
    N := φ^{−1}(F_L)
    if N = M then break else M := N
  while true

In this algorithm lsss(M̄, δ, F) computes the largest δ-stable subspace of F. It is an easy programming exercise in a computer algebra system such as MAPLE.
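The subroutine lsss can be sketched in ordinary linear algebra (our own illustration, not the author's MAPLE code; all function names are ours). Encode the subspace F ⊂ M̄ as the kernel of a constraint matrix C and the map δ as a matrix D; by Proposition 5, F_L = ⋂_{i=0}^{n} ker(C D^i), i.e., the kernel of the stacked matrix [C; CD; …; CD^n]:

```python
from fractions import Fraction

def mat_mul(A, B):
    """Exact matrix product of A (p x n) and B (n x n)."""
    return [[sum(Fraction(A[i][k]) * Fraction(B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def lsss_kernel_dim(D, C):
    """Dimension of the largest D-stable subspace contained in ker(C),
    computed as dim ker of the stacked matrix [C; CD; ...; CD^n]."""
    n = len(D)
    rows = [list(r) for r in C]
    Ci = [list(r) for r in C]
    for _ in range(n):                 # F_L = intersection of ker(C D^i), i = 0..n
        Ci = mat_mul(Ci, D)
        rows.extend(Ci)
    return n - rank(rows)
```

For example, with D = [[0, 0], [1, 0]] (so δe_1 = e_2) and F = ker(0, 1) = span{e_1}, the only stable subspace inside F is {0}; with D = [[0, 1], [0, 0]] (δe_1 = 0) the whole of F is already stable.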
A complete MAPLE implementation of the algorithm LSSL, based on [5], is available from the author ([email protected]).

Example 4 (taken from [1], matrix A∗(4) of Example 4 at page 73). The system of differential equations y′ = Ay, where A is the 3 × 3 matrix with entries in C(x) given there (its entries have poles at x = 0 and x = 1 only), has regular singularities at 0, 1, ∞. The Poincaré rank is 0 at ∞. This leads to an immediate computation of the exponents at ∞: 0, 1, −1. The Poincaré rank at 0 is 1. Nevertheless Corel succeeds in computing the exponents at 1 by applying several tricks. Result: 0, 0, −1. For the exponents ε_1^0, ε_2^0, ε_3^0 at x = 0 he only finds a partial result: the sum ε_1^0 + ε_2^0 + ε_3^0 satisfies the bounds −4 ≤ ε_1^0 + ε_2^0 + ε_3^0 ≤ 0 because of Theorem 3, and −2 ≤ ε_i^0 ≤ 2. However, algorithm LSSL immediately computes 0, 0, −2 as the exponents at 0.

REFERENCES
[1] Corel E. – Exposants, réseaux de Levelt et relations de Fuchs pour les systèmes différentiels réguliers, thèse doctorat de troisième cycle, Université Louis Pasteur, Strasbourg, 28 juin 1999.
[2] Corel E. – Relations de Fuchs pour les systèmes différentiels réguliers, Bull. Soc. Math. France 129 (2001) 189–210.
[3] Gérard R., Levelt A.H.M. – Invariants mesurant l'irrégularité en un point singulier d'un système d'équations différentielles linéaires, Ann. Inst. Fourier (Grenoble) 23 (1973) 157–195.
[4] Levelt A.H.M. – Hypergeometric functions, II, Nederl. Akad. Wetensch. Proc. Ser. A 64 (1961) 373–385.
[5] Levelt A.H.M. – Stabilizing differential operators, in: Singer M. (Ed.), Differential Equations and Computer Algebra, Academic Press, 1991, pp. 181–228.
[6] Levelt A.H.M. – Calcul des réseaux de Levelt, Bull. Soc. Math. France 129 (2001) 211–213.
Constructive Algebra and Systems Theory
B. Hanzon and M. Hazewinkel (Editors)
Royal Netherlands Academy of Arts and Sciences, 2006

Involutive methods applied to algebraic and differential systems

Vladimir P. Gerdt
Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna, Russia
Email: [email protected] (V.P. Gerdt)

1. INTRODUCTION

Among the properties of systems of analytical partial differential equations (PDEs) which may be investigated without their explicit integration are compatibility and the formulation of an initial-value problem providing existence and uniqueness of the analytical solution. The classical Cauchy–Kowalevskaya theorem establishes a class of quasilinear PDEs which admit posing such an initial-value problem. The main obstacle in investigating other classes of PDE systems of some given order q is the existence of integrability conditions, that is, relations for derivatives of order q which are differential but not pure algebraic consequences of the equations in the system. An involutive system of PDEs has all the integrability conditions incorporated in it. This means that prolongations of the system do not reveal integrability conditions. Extension of a system by its integrability conditions is called completion. The quasilinear systems of Cauchy–Kowalevskaya type form a particular family of involutive systems.

In this chapter we present some recent achievements in the development of constructive methods for completion to involution of polynomial and differential systems. From the completion point of view linear homogeneous PDE systems with constant coefficients can be associated with pure polynomial systems [9,10,4,6], and polynomial involutive systems [5] give a good alternative to the reduced Gröbner bases in commutative algebra [1]. The general involutive approach is based on the concept of involutive monomial division [5], which is defined for a finite monomial set. Every particular division provides for each monomial in the set a self-consistent separation of the variables into multiplicative and nonmultiplicative. An involutive basis is constructed by combining nonmultiplicative prolongations with multiplicative reductions.
In differential algebra, in addition to their role as canonical bases of differential ideals, linear and quasilinear (orthonomic) involutive systems allow one to pose an initial-value problem providing uniqueness of its solution and to determine the structure of the arbitrariness in the general analytical solution [6]. Involutive bases of polynomial ideals [5] and linear differential ideals [6] are Gröbner bases of special form. Though involutive bases are generally redundant as Gröbner bases, their use makes the structural information of polynomial and differential ideals more accessible. Janet bases may be cited as typical representatives of involutive bases and have been used in algebraic and Lie symmetry analysis of differential equations. A minimal Janet basis is also a Pommaret basis whenever the latter is finite [8].

In the given chapter we briefly describe the concept of involutivity for systems of ordinary differential equations (Section 2), the notion of involutive polynomial bases (Section 3) and outline some applications (Section 4).

2. INVOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS
The concept of involution was invented by E. Cartan 100 years ago [2] for systems of partial differential equations (PDEs) and led to the development of powerful geometric and algebraic methods for their analysis (see historical remarks in [10,6] and references therein). Though involutive methods reveal their full power just for PDEs, their application to ordinary differential equations and finite-dimensional dynamical systems is also very fruitful. The definitions below are taken from the formal theory of PDEs [10] and restricted to ODEs.

Consider a system of k ODEs of order q with m unknown functions y_1(x), . . . , y_m(x) of the most general form

(1)  R_q :  Φ^j(x, y_α, y_α^{(1)}, . . . , y_α^{(q)}) = 0  (1 ≤ j ≤ k)

with y_α^{(i)} ≡ d^i y_α / dx^i. A prolongation of system (1) is the system of order q + 1

R_{q+1} :  Φ^j(x, y_α, y_α^{(1)}, . . . , y_α^{(q)}) = 0,  D Φ^j(x, y_α, y_α^{(1)}, . . . , y_α^{(q)}) = 0  (1 ≤ j ≤ k),

where D is the total derivation operator

D = ∂/∂x + Σ_{α=1}^m Σ_{i≥0} y_α^{(i+1)} ∂/∂y_α^{(i)}  (y_α^{(0)} ≡ y_α).

Analogously, the r-th prolongation R_{q+r} of R_q is defined as the enlargement of system (1) with all the total derivatives of its equations up to order r inclusive. An integrability condition of R_q is an equation of order q which is a differential but not pure algebraic consequence of R_q. In other words, the integrability conditions of R_q, if they exist, are obtained by algebraic manipulations (by
elimination of the derivatives of order greater than q) over the prolonged systems. As an example consider the following second-order system in two unknowns x(t), y(t):

ẋ + y² = 0,  ẍ + xy + tx = 0.

Here the integrability condition

2yẏ − xy − tx = 0

is obtained from the first prolongation by elimination of ẍ: differentiating the first equation gives ẍ + 2yẏ = 0, and subtracting the second equation yields the condition. A system (1) is called involutive if it has no integrability conditions. Thus, involutive systems have all integrability conditions incorporated in them. Enlargement of a system with its integrability conditions is completion. Any system can be completed to involution in finitely many steps by sequential prolongations and eliminations [10–12].

3. INVOLUTIVE POLYNOMIAL BASES
The basic algorithmic ideas go back to M. Janet [9], who invented the constructive approach to the study of PDEs in terms of the corresponding monomial sets, based on the following association between derivatives and monomials:

(2)  ∂^{μ_1+···+μ_n} u_α / (∂x_1^{μ_1} ··· ∂x_n^{μ_n})  ⟺  x_1^{μ_1} ··· x_n^{μ_n}.

The monomials associated with the different dependent variables u_α are to be considered as belonging to different monomial sets. The association (2) allows one to reduce the involutivity analysis of linear homogeneous systems of PDEs to that of pure algebraic systems [9,10,4,6]. Having in mind this fact, consider below the involutivity of algebraic systems.

Let R = K[x_1, . . . , x_n] be a ring of multivariate polynomials over a zero characteristic coefficient field K. Then a finite set F = {f_1, . . . , f_m} ⊂ R of polynomials in R is a basis of the ideal

⟨F⟩ = ⟨f_1, . . . , f_m⟩ = { Σ_{i=1}^m h_i f_i | h_i ∈ R }.

In the involutive approach to commutative (polynomial) algebra [5], which is a mapping of the involutivity analysis of linear PDEs [6,10], for every polynomial in a finite set F the set of variables x_1, . . . , x_n is separated into disjoint subsets of multiplicative and nonmultiplicative variables. To be self-consistent such a separation must satisfy some axioms [5], and every appropriate separation generates an involutive monomial division in the following sense. Fix a linear admissible monomial order ≻ such that

m ≠ 1 ⇒ m ≻ 1,  m_1 ≻ m_2 ⟺ m_1 m ≻ m_2 m
holds for any monomials (power products of the variables with integer exponents) m, m_1, m_2. Then for every polynomial f in F one can select its leading monomial lm(f) (with respect to ≻). All leading monomials in F form a finite monomial set U. If u ∈ U divides a monomial w such that all the variables which occur in w/u are multiplicative for u, then u is called an involutive divisor of w. We shall denote by L an involutive division, which specifies a set of multiplicative (resp. nonmultiplicative) variables for every monomial u in any given finite monomial set U, and write u |_L w if u is an (L-)involutive divisor of w. In the latter case we shall also write w = u × v where, by the above definition, the monomial v = w/u contains only multiplicative variables.

In papers [5,7,3] a number of particular involutive divisions was introduced and studied. As an example we present here one of them, called after M. Janet, who was one of the founders of the involutive approach to PDEs and who devised the related separation of variables [9]. Given a finite set U of monomials in {x_1, . . . , x_n} and a monomial u = x_1^{d_1} ··· x_n^{d_n} ∈ U, a variable x_i (i > 1) is Janet multiplicative for u if its degree d_i in u is maximal among all the monomials in U having the same degrees in the variables x_1, . . . , x_{i−1}. As for x_1, it is Janet multiplicative for u if d_1 takes the maximal value among the degrees in x_1 of the monomials in U. If a variable is not Janet multiplicative for u in U it is considered as Janet nonmultiplicative.

Consider, for example, the monomial set U = {u = x_1 x_2, v = x_2 x_3², w = x_3³}. This gives the following Janet multiplicative and nonmultiplicative variables for the monomials in U:

Monomial     Multiplicative     Nonmultiplicative
x_1 x_2      x_1, x_2, x_3      —
x_2 x_3²     x_2, x_3           x_1
x_3³         x_3                x_1, x_2
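The Janet separation of variables just described is easy to implement. The following sketch (our own code; the function name is not from the chapter) computes the Janet multiplicative variables for a set of monomials given as exponent tuples, and reproduces the table above:

```python
def janet_multiplicative(U):
    """For each monomial u in U (an exponent tuple), return the list of
    1-based indices of its Janet multiplicative variables.

    x_1 is multiplicative for u iff u's degree in x_1 is maximal in U;
    x_i (i > 1) is multiplicative iff u's degree in x_i is maximal among
    the monomials of U with the same degrees in x_1, ..., x_{i-1}."""
    n = len(U[0])
    result = {}
    for u in U:
        mult = []
        for i in range(n):
            # monomials of U agreeing with u in the first i exponents
            group = [v for v in U if v[:i] == u[:i]]
            if u[i] == max(v[i] for v in group):
                mult.append(i + 1)
        result[u] = mult
    return result

# U = {u = x1*x2, v = x2*x3^2, w = x3^3} as exponent tuples
U = [(1, 1, 0), (0, 1, 2), (0, 0, 3)]
table = janet_multiplicative(U)
```

Running it gives multiplicative variables {x_1, x_2, x_3} for x_1x_2, {x_2, x_3} for x_2x_3², and {x_3} for x_3³, in agreement with the table.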
Given a finite polynomial set F, a Noetherian [5] involutive division L (for instance, Janet division) and an admissible monomial order ≻, one can algorithmically construct [5] a minimal L-involutive basis or L-basis G ⊂ R of the ideal ⟨F⟩ = ⟨G⟩ such that for any polynomial f in the ideal there is a polynomial g in G satisfying lm(g) |_L lm(f), and every polynomial g in G does not contain monomials having involutive divisors among the leading monomials of the other polynomials in G. If F = {f_1, . . . , f_m} ⊂ R is a polynomial set, L is an involutive division and ≻ is an admissible monomial order, then any polynomial p in R can be rewritten (reduced) modulo the ideal ⟨F⟩ as

p = h − Σ_{i=1}^m Σ_j a_{ij} f_i × u_{ij},
where the a_{ij} are elements (coefficients) of the base field K, the u_{ij} are L-multiplicative monomials for lm(f_i) such that lm(f_i) u_{ij} ≼ lm(p) for all i, j, and there are no monomials occurring in h which have L-involutive divisors among {lm(f_1), . . . , lm(f_m)}. In this case h is said to be in the L-normal form modulo F and written as h = NF_L(p, F). If G is an L-basis, then NF_L(p, G) is uniquely defined¹ for any polynomial p. In this case NF_L(p, G) = 0 if and only if p belongs to the ideal ⟨G⟩ generated by G. Moreover, if the ideal is radical (i.e. an element vanishes at the common roots of all the polynomials in G if and only if it belongs to the ideal), then the condition NF_L(p, G) = 0 is necessary and sufficient for the vanishing of p on those common roots. It is important to emphasize that any involutive basis is a Gröbner basis, generally redundant, and can be used in the same manner as the reduced Gröbner basis [1].

4. SOME APPLICATIONS

Completion of differential equations to involution is the most universal algorithmic method for their algebraic analysis [9,10,6,11] and can be applied in particular for the following purposes.

• Checking the compatibility of PDE systems. In the case of inconsistency of the system there is an integrability condition of the form 1 = 0 which is revealed in the course of completion.
• Analysis of the arbitrariness in the general analytic solution of analytic systems of PDEs.
• Elimination of a subset of dependent variables, that is, obtaining the differential consequences of a given system, if they exist, which do not contain the dependent variables specified.
• Posing of an initial value problem for a system of analytic PDEs providing existence and uniqueness of locally holomorphic solutions.
• Lie symmetry analysis of DEs. Completion to involution of the determining equations for Lie symmetry generators is the most general algorithmic method of their integration.
• Computation of "hidden" constraints for constrained dynamical systems and their numerical indices.
• Preanalysis for numerical integration. Revealing the integrability conditions and their explicit involvement in the design of numerical schemes makes the numerical solution more relevant to the algebraic structure of the initial system.

¹ For other properties of involutive bases, proofs and illustrating examples see [5].
ACKNOWLEDGEMENT

This work was partially supported by the RFBR grants 01-01-00708, 00-15-96691 and by grant INTAS 99-1222.

REFERENCES
[1] Becker T., Weispfenning V., Kredel H. – Gröbner Bases. A Computational Approach to Commutative Algebra, Graduate Texts in Math., vol. 141, Springer-Verlag, New York, 1993.
[2] Cartan E. – Sur certaines expressions différentielles et le problème de Pfaff, Ann. École Normale, 3e série 16 (1899) 239–332; Sur l'intégration des systèmes d'équations aux différentielles totales, ibid. 18 (1901) 241–311.
[3] Chen Y.F., Gao X.S. – Involutive directions and new involutive divisions, Comput. Math. Appl. 41 (2001) 945–956.
[4] Gerdt V.P. – Gröbner bases and involutive methods for algebraic and differential equations, Math. Comput. Modelling 25 (8/9) (1997) 75–90.
[5] Gerdt V.P., Blinkov Yu.A. – Involutive bases of polynomial ideals, Math. Comput. Simulation 45 (1998) 519–542; Minimal involutive bases, ibid., 543–560.
[6] Gerdt V.P. – Completion of linear differential systems to involution, in: Computer Algebra in Scientific Computing / CASC'99, Springer-Verlag, Berlin, 1999, pp. 115–137.
[7] Gerdt V.P. – Involutive division technique: Some generalizations and optimizations, Zap. Nauchn. Sem. St.-Petersburg. Otdel. Mat. Inst. Steklov. (POMI) 258 (1999) 185–206.
[8] Gerdt V.P. – On the relation between Pommaret and Janet bases, in: Ganzha V.G., Mayr E.W., Vorozhtsov E.V. (Eds.), Computer Algebra in Scientific Computing, Springer-Verlag, Berlin, 2000, pp. 167–181.
[9] Janet M. – Leçons sur les systèmes d'équations aux dérivées partielles, Cahiers Scientifiques, vol. IV, Gauthier-Villars, Paris, 1929.
[10] Pommaret J.F. – Partial Differential Equations and Group Theory. New Perspectives for Applications, Kluwer, Dordrecht, 1994.
[11] Seiler W.M. – Indices and solvability for general systems of differential equations, in: Computer Algebra in Scientific Computing / CASC'99, Springer-Verlag, Berlin, 1999, pp. 365–385.
[12] Seiler W.M., Tucker R.W. – Involution and constrained dynamics, J. Phys. A 28 (1995) 4431–4451.
Constructive Algebra and Systems Theory
B. Hanzon and M. Hazewinkel (Editors)
Royal Netherlands Academy of Arts and Sciences, 2006

The Cartan covering and complete integrability of the KdV–mKdV system

P.H.M. Kersten (a) and I.S. Krasil'shchik (b)
(a) University of Twente, Faculty of Mathematical Sciences, P.O. Box 217, 7500 AE Enschede, The Netherlands
(b) Independent University of Moscow, B. Vlasevsky 11, 119002 Moscow, Russia
Email: [email protected] (P.H.M. Kersten)

ABSTRACT. The coupled KdV–mKdV system arises as the classical part of one of the supersymmetric extensions of the KdV equation. For this system, we prove its complete integrability, i.e., existence of a recursion operator and of infinite series of symmetries. After giving a short introduction into the theory of symmetries, coverings, and the notion of Cartan covering, the recursion operator will be constructed as a symmetry in the Cartan covering of the KdV–mKdV system.

Key words and phrases: Coupled KdV–mKdV system, Cartan covering, Complete integrability, Recursion operators, Symmetries, Conservation laws, Coverings, Deformations, Superdifferential equations

INTRODUCTION

There are several supersymmetric extensions of the classical Korteweg–de Vries equation (KdV) [6,9,10]. One of them is of the form (the so-called N = 2, A = 1 extension [2])

u_t = −u_3 + 6uu_1 − 3φφ_2 − 3ψψ_2 − 3ww_3 − 3w_1w_2 + 3u_1w² + 6uww_1 + 6ψφ_1w − 6φψ_1w − 6φψw_1,
φ_t = −φ_3 + 3φu_1 + 3φ_1u − 3ψ_2w − 3ψ_1w_1 + 3φ_1w² + 6φww_1,
ψ_t = −ψ_3 + 3ψu_1 + 3ψ_1u + 3φ_2w + 3φ_1w_1 + 3ψ_1w² + 6ψww_1,
w_t = −w_3 + 3w²w_1 + 3uw_1 + 3u_1w,

where u and w are classical (even) dependent variables while φ and ψ are odd ones (here and below the numerical subscript at an unknown variable denotes its
derivative over x of the corresponding order). Being completely integrable itself, this system gives rise to an interesting system of even equations
(1)  u_t = −u_3 + 6uu_1 − 3ww_3 − 3w_1w_2 + 3u_1w² + 6uww_1,
     w_t = −w_3 + 3w²w_1 + 3uw_1 + 3u_1w,

which can be considered as a sort of coupling between the KdV (with respect to u) and the modified KdV (with respect to w) equations. In fact, setting w = 0, we obtain

u_t = −u_3 + 6uu_1,

while for u = 0 we have

w_t = −w_3 + 3w²w_1.
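The KdV reduction can be checked pointwise by a quick numerical experiment (our own sketch; the one-soliton formula u = −2κ² sech²(κ(x − 4κ²t)) is the standard solitary wave of u_t = −u_3 + 6uu_1 and is not taken from this chapter):

```python
import math

def u(x, t, k=1.0):
    """Standard one-soliton of u_t = -u_xxx + 6 u u_x (an assumption here,
    not stated in the text): u = -2 k^2 sech^2(k (x - 4 k^2 t))."""
    return -2.0 * k * k / math.cosh(k * (x - 4.0 * k * k * t)) ** 2

def kdv_residual(x, t, h=1e-2):
    """Finite-difference residual of u_t - 6 u u_x + u_xxx at (x, t);
    should be close to zero (up to O(h^2) truncation error)."""
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    ux = (u(x + h, t) - u(x - h, t)) / (2 * h)
    uxxx = (u(x + 2 * h, t) - 2 * u(x + h, t)
            + 2 * u(x - h, t) - u(x - 2 * h, t)) / (2 * h ** 3)
    return ut - 6.0 * u(x, t) * ux + uxxx
```

At sample points the residual is small while the individual terms are of order one, confirming that the reduced equation is satisfied nontrivially.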
The above indicates why we call (1) the KdV–mKdV system. In what follows, we prove complete integrability, cf. [1], of system (1) by establishing the existence of infinite series of symmetries and/or conservation laws. Toward this end we construct a recursion operator using the techniques of deformation theory introduced in [5] and extensively described and exemplified in [6]. In practical situations the construction of a deformation of the equation structure boils down to the construction of symmetries in an augmented setting of the equation or system at hand.

In Section 1 of this lecture we shall set out the nonlocal setting for differential equations and describe the notion of nonlocal symmetries in this setting. Section 2 deals with a particular type of nonlocality, the Cartan covering of an equation. Section 3 combines the two previous types of coverings, and it is this covering where the recursion operator for symmetries is obtained as a symmetry. In these first three sections the classical KdV equation acts as the main example. Finally, in Section 4 we shall present symmetries, conservation laws, nonlocalities and the recursion operator for symmetries for the coupled KdV–mKdV system (1).

1. NONLOCAL SETTING FOR DIFFERENTIAL EQUATIONS
As standard example, to illustrate the notions, we take the KdV equation

(2)  u_t = uu_x + u_xxx.

We consider Y ⊂ J∞(x, t; u), the infinite prolongation of (2), cf. [7,8], where coordinates in the infinite jet bundle J∞(x, t; u) are given by (x, t, u, u_x, u_t, . . .) and Y is formally described as the submanifold of J∞(x, t; u) defined by
(3)  u_t = uu_x + u_xxx,
     u_xt = uu_xx + u_x² + u_xxxx,
     ⋮
As internal coordinates on Y one chooses (x, t, u, u_x, u_xx, . . .), while u_t, u_xt, . . . are obtained from (3). The Cartan distribution on Y is given by the total partial derivative vector fields

(4)  D̄_x = ∂_x + Σ_{n≥0} u_{n+1} ∂_{u_n},
     D̄_t = ∂_t + Σ_{n≥0} u_{nt} ∂_{u_n},
where u_1 = u_x, u_2 = u_xx, . . .; u_1t = u_xt, u_2t = u_xxt, . . . . Classically the notion of a generalized or higher symmetry of a differential equation F = 0 is defined as a vertical vector field

(5)  V = f ∂_u + D̄_x(f) ∂_{u_1} + D̄_x²(f) ∂_{u_2} + ···,

where f ∈ C∞(Y) is such that

(6)  ℓ_F(f) = 0,

where in (6) ℓ_F is the universal linearisation operator [11,7], which in the case of the KdV equation (3) reads

(7)  D̄_t(f) − u D̄_x(f) − u_1 · f − (D̄_x)³(f) = 0.
Let now W ⊂ R^m with coordinates (w_1, . . . , w_m). The Cartan distribution on Y ⊗ W is given by

(8)  D_x = D̄_x + Σ_{j=1}^m X^j ∂/∂w_j,
     D_t = D̄_t + Σ_{j=1}^m T^j ∂/∂w_j,

where X^j, T^j ∈ C∞(Y ⊗ W) are such that

(9)  [D_x, D_t] = 0,

which yields the so-called covering condition

D̄_x(T) − D̄_t(X) + [X, T] = 0,
where in (9) [·,·] is the Lie bracket for the vector fields X = Σ_{j=1}^m X^j ∂_{w_j}, T = Σ_{j=1}^m T^j ∂_{w_j} defined on W. A nonlocal symmetry is a vertical vector field on Y ⊗ W, i.e., of the form (5), which satisfies (for f ∈ C∞(Y ⊗ W))

(10)  ℓ̄_F(f) = 0,

which for KdV results in

(11)  D_t(f) − u D_x(f) − u_1 f − (D_x)³(f) = 0.

Formally this is just what is called the shadow of the symmetry, i.e., not bothering about the ∂_{w_j}, j = 1, . . . , m, components. The construction of the associated ∂_{w_j}, j = 1, . . . , m, components is called the reconstruction problem [4]. For reasons of simplicity, we omit this reconstruction problem, i.e., reconstructing the vector field from its shadow. The classical Lenard recursion operator R for the KdV equation,
1 2 R = Dx2 + u + u1 Dx−1 3 3
which is just such, that
(13)
f0 = u1 , Rf0 = f1 = uu1 + u3 , 5 10 5 Rf1 = f2 = u5 + u3 u + u2 u1 + u1 u2 , 3 3 6
i.e., it creates the (x, t)-independent hierarchy of higher symmetries; it also acts on the vertical symmetry \bar f_{−1} (Galilei boost):

(14)  \bar f_{−1} = (1 + t u_1)/3,
      R \bar f_{−1} = \bar f_0 = 2u + x u_1 + 3t (u_3 + u u_1),
      \bar f_1 = R \bar f_0 = 3t f_2 + x f_1 + 4 u_2 + \frac{4}{3} u^2 + \frac{1}{3} u_1 D_x^{−1}(u).
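The computations in (13) can be reproduced mechanically; the following sketch (our own, with the nonlocal operator D_x^{−1} replaced by an explicitly supplied antiderivative) verifies R f_0 = f_1 and R f_1 = f_2.

```python
import sympy as sp

u = sp.symbols('u0:8')  # u[k] = k-th x-derivative of u

def Dx(f):
    return sp.expand(sum(sp.diff(f, u[k]) * u[k + 1] for k in range(7)))

def lenard(f, f_int):
    """Lenard operator (12); f_int is an explicit antiderivative, Dx(f_int) = f."""
    assert sp.expand(Dx(f_int) - f) == 0
    return sp.expand(Dx(Dx(f)) + sp.Rational(2, 3) * u[0] * f
                     + sp.Rational(1, 3) * u[1] * f_int)

f0 = u[1]
f1 = lenard(f0, u[0])                    # Dx^{-1}(u_1) = u
f2 = lenard(f1, u[0]**2 / 2 + u[2])      # Dx^{-1}(u u_1 + u_3) = u^2/2 + u_2

assert f1 == sp.expand(u[0] * u[1] + u[3])
assert f2 == sp.expand(u[5] + sp.Rational(5, 3) * u[3] * u[0]
                       + sp.Rational(10, 3) * u[2] * u[1]
                       + sp.Rational(5, 6) * u[1] * u[0]**2)
```

Supplying the antiderivative by hand mirrors the text: each application of R is local except for the single D_x^{−1} term, whose argument here happens to be an exact x-derivative.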
If we introduce the variable p (= w_1) through

(15)  p_x = u,   p_t = \frac{1}{2} u^2 + u_2,

i.e.,

D_t(u) = D_x\Bigl(\frac{1}{2} u^2 + u_2\Bigr),

then \bar f_1 is the shadow of a nonlocal symmetry in the one-dimensional covering of the KdV equation by p = w_1,

X^1 = u,   T^1 = \frac{1}{2} u^2 + u_2.
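That (15) indeed defines a covering can be seen from a one-line check: the compatibility p_{xt} = p_{tx} holds modulo the KdV equation, since D_x of the flux reproduces the right-hand side of (3). A minimal sympy sketch (ours):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

# Compatibility p_xt = p_tx of the covering (15) holds modulo the KdV equation:
# D_x(u**2/2 + u_xx) must reproduce the right-hand side u u_x + u_xxx of (3).
flux_x = sp.expand((u**2 / 2 + u.diff(x, 2)).diff(x))
assert sp.expand(flux_x - u * u.diff(x) - u.diff(x, 3)) == 0
```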
So, by its action the Lenard recursion operator creates nonlocal symmetries in a natural way. More applications of nonlocal symmetries can be found in, e.g., [6].

2. A SPECIAL TYPE OF COVERING: THE CARTAN COVERING

We discuss a special type of the nonlocal setting indicated in the previous section, the so-called Cartan covering. As mentioned before, we shall illustrate this by the KdV equation. Let Y ⊂ J^∞(x, t; u) be the infinite prolongation of the KdV equation (3). Contact one-forms on T J^∞(x, t; u) are given by
(16)  α_0 = du − u_1 dx − u_t dt,
      α_1 = du_1 − u_2 dx − u_{1t} dt,
      α_2 = du_2 − u_3 dx − u_{2t} dt.
From the total derivative operators of the previous section we have

(17)  \bar D_x(α_0) = α_1,   \bar D_x(α_1) = α_2,   ...,
      \bar D_t(α_0) = u_x α_0 + u α_1 + α_3 = α_t,
      \bar D_t(α_i) = \bar D_x^i(α_t).
We now define the Cartancovering of Y by Y ⊗ R∞ , where local coordinates are given (x, t, u, u1 , . . . , α0 , α1 , . . .) by x + DxC = D
(18)
∂ (αi+1 ) , ∂αi i
t + x )i αt ∂ . DtC = D (D ∂αi i
It is a straightforward check, and obvious, that

(19)  [D_x^C, D_t^C] = 0,

i.e., they form a Cartan distribution on Y ⊗ R^∞.

Note 1. Since the α_i (i = 0, ...) are contact forms, they constitute a Grassmann algebra (graded commutative algebra) Λ(α), where α_i ∧ α_j = −α_j ∧ α_i, i.e., xy = (−1)^{|x||y|} yx, where x, y are contact forms of degrees |x| and |y| respectively. So in effect we are dealing with a graded covering.
Note 2. Once we have introduced the Cartan covering by (18), we can forget about the specifics of the α_i (i = 0, ...) and just treat them as (odd) ordinary variables, together with their differentiation rules. One can discuss nonlocal symmetries in this type of covering just as in the previous section, the only difference being f ∈ C^∞(Y) ⊗ Λ(α).
In the next section we shall combine the constructions of the previous section and this one, in order to construct the recursion operator for symmetries.

3. THE RECURSION OPERATOR AS A SYMMETRY IN THE CARTAN COVERING
We shall discuss the recursion operator for symmetries of the KdV equation as a geometrical object, i.e., a symmetry in the Cartan covering. Our starting point is the four-dimensional covering of the KdV equation in Y ⊗ R^4, where

(20)  D_x = \bar D_x + u \partial_{w_1} + \frac{1}{2} u^2 \partial_{w_2} + (u^3 − 3 u_1^2) \partial_{w_3} + w_1 \partial_{w_4},
      D_t = \bar D_t + \Bigl(\frac{1}{2} u^2 + u_2\Bigr) \partial_{w_1} + \Bigl(\frac{1}{3} u^3 − \frac{1}{2} u_1^2 + u u_2\Bigr) \partial_{w_2}
            + \Bigl(\frac{3}{4} u^4 − 6 u_1 u_3 + 3 u^2 u_2 − 6 u u_1^2 + 3 u_2^2\Bigr) \partial_{w_3} + (u_1 + w_2) \partial_{w_4}.

D_x, D_t satisfy the covering condition (9); note that the coefficients of \partial_{w_i} (i = 1, 2, 3) in (20) are independent of the w_j (j = 1, 2, 3). These coefficients constitute conservation laws for the KdV equation. We have the following "formal" variables:
(21)  w_1 = \int u \, dx,   w_2 = \int \frac{1}{2} u^2 \, dx,   w_3 = \int (u^3 − 3 u_1^2) \, dx,   w_4 = \int w_1 \, dx,

where in (21) w_4 is of a higher nonlocality. We now build the Cartan covering of the previous section on the covering given by (20) by introduction of the contact forms α_0, α_1, α_2, ... of (16) and
(22)  α_{−1} = dw_1 − u \, dx − \Bigl(\frac{1}{2} u^2 + u_2\Bigr) dt,
      α_{−2} = dw_2 − \frac{1}{2} u^2 \, dx − \Bigl(\frac{1}{3} u^3 − \frac{1}{2} u_1^2 + u u_2\Bigr) dt,
and similarly for α_{−3}, α_{−4}. It is straightforward to prove the following relations:

(23)  D_x(α_{−1}) = α_0,   D_t(α_{−1}) = u α_0 + α_2,
      D_x(α_{−2}) = u α_0,   D_t(α_{−2}) = (u^2 + u_2) α_0 − u_1 α_1 + u α_2,
      D_x(α_{−3}) = 3 u^2 α_0 − 6 u_1 α_1,   ....
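The claim above that the coefficients of ∂_{w_1}, ∂_{w_2}, ∂_{w_3} in (20) constitute conservation laws, i.e., that the density–flux pairs (X, T) read off from (20)–(21) satisfy D_t(X) = D_x(T) modulo the KdV equation, can be verified symbolically (our own sketch):

```python
import sympy as sp

u = sp.symbols('u0:9')  # u[k] = k-th x-derivative of u

def Dx(f):
    return sp.expand(sum(sp.diff(f, u[k]) * u[k + 1] for k in range(8)))

def Dt(f):
    ut = [u[0] * u[1] + u[3]]          # KdV (3) and its x-derivatives
    for _ in range(5):
        ut.append(Dx(ut[-1]))
    return sp.expand(sum(sp.diff(f, u[k]) * ut[k] for k in range(6)))

# densities X and fluxes T of w1, w2, w3 in (20)-(21): D_t(X) = D_x(T)
pairs = [
    (u[0], u[0]**2 / 2 + u[2]),
    (u[0]**2 / 2, u[0]**3 / 3 - u[1]**2 / 2 + u[0] * u[2]),
    (u[0]**3 - 3 * u[1]**2,
     sp.Rational(3, 4) * u[0]**4 - 6 * u[1] * u[3] + 3 * u[0]**2 * u[2]
     - 6 * u[0] * u[1]**2 + 3 * u[2]**2),
]
for X, T in pairs:
    assert sp.expand(Dt(X) - Dx(T)) == 0
```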
We now construct symmetries in this Cartan covering of the KdV equation which are linear w.r.t. the α_i (i = −4, ..., 0, 1, ...). The symmetry condition for f ∈ C^∞(Y ⊗ R^4) ⊗ Λ^1(α) is just given by (7),

(24)  \bar\ell^{\,C}_F(f) = 0,

which for the KdV equation results in

D_t^C(f) − u D_x^C(f) − u_x f − (D_x^C)^3(f) = 0.

As solutions of these equations we obtained

f_0 = α_0,
f_1 = \frac{2}{3} u α_0 + α_2 + \frac{1}{3} u_1 α_{−1},
f_2 = \Bigl(\frac{4}{9} u^2 + \frac{4}{3} u_2\Bigr) α_0 + 2 u_1 α_1 + \frac{4}{3} u α_2 + α_4 + \frac{1}{3} (u u_1 + u_3) α_{−1} + \frac{1}{9} u_1 α_{−2}.

As we mentioned above, we are in effect working with form-valued vector fields f_0, f_1, f_2. For these objects one can define Frölicher–Nijenhuis and (by contraction) Richardson–Nijenhuis brackets [5,6]. Without going into details, for which the reader is referred to [5], we can construct the contraction of a (generalized) symmetry and a form-valued symmetry, e.g.,

(25)  R = \Bigl(\frac{2}{3} u α_0 + α_2 + \frac{1}{3} u_1 α_{−1}\Bigr) \frac{\partial}{\partial u} + ⋯,

the contraction being defined by

(26)  (\bar V_1 ⌟ R) = (\bar V_1 ⌟ R^u) \partial_u + D_x(\bar V_1 ⌟ R^u) \partial_{u_1} + ⋯.
Start now with

(27)  V_1 = u_1 \frac{\partial}{\partial u} + u_2 \frac{\partial}{\partial u_1} + ⋯,
whose prolongation in the setting Y ⊗ R^4 is

(28)  \bar V_1 = u_1 \frac{\partial}{\partial u} + u_2 \frac{\partial}{\partial u_1} + ⋯ + u \frac{\partial}{\partial w_1} + \frac{1}{2} u^2 \frac{\partial}{\partial w_2} + (u^3 − 3 u_1^2) \frac{\partial}{\partial w_3} + w_1 \frac{\partial}{\partial w_4},
then

(29)  (\bar V_1 ⌟ R) = \Bigl(\frac{2}{3} u u_1 + 1 \cdot u_3 + \frac{1}{3} u_1 \cdot u\Bigr) \frac{\partial}{\partial u} + ⋯ = (u_3 + u u_1) \frac{\partial}{\partial u} + ⋯ = V_3,

and similarly

(30)  (\bar V_3 ⌟ R) = \Bigl(u_5 + \frac{5}{3} u_3 u + \frac{10}{3} u_2 u_1 + \frac{5}{6} u^2 u_1\Bigr) \frac{\partial}{\partial u} + ⋯ = V_5.
The result given above means that the well-known Lenard recursion operator for symmetries of the KdV equation is represented as a symmetry, f_1, in the Cartan covering of this equation, and in effect is a geometrical object.

4. THE COUPLED KDV–MKDV SYSTEM

In this section we shall discuss the complete integrability of the KdV–mKdV system E, given in (1), i.e.,

(31)  u_t = −u_3 + 6 u u_1 − 3 w w_3 − 3 w_1 w_2 + 3 u_1 w^2 + 6 u w w_1,
      w_t = −w_3 + 3 w^2 w_1 + 3 u w_1 + 3 u_1 w.
In order to demonstrate the complete integrability of this system, we shall construct the recursion operator for symmetries of this coupled system, leading to infinite hierarchies of symmetries and, most probably, of conservation laws. Due to the very special form of the final results, it seems that the integrability of this system, which at first glance looks quite ordinary, has not been discussed before. In order to discuss complete integrability, we start with conservation laws in Subsection 4.1, leading to the necessary nonlocal variables. In Subsection 4.2 we discuss local and nonlocal symmetries of the system, while in Subsection 4.3 we construct the recursion operator, or deformation [5], by constructing a symmetry in the Cartan covering of Eq. (31).

4.1. Conservation laws and nonlocal variables

Here we shall construct conservation laws for (31) in order to arrive at an Abelian covering of the coupled KdV–mKdV system, as was done above for the KdV equation. So we construct X = X(x, t, u, ..., w, ...), T = T(x, t, u, ..., w, ...) such that
(32)  D_x(T) = D_t(X),
where Dx , Dt are defined as the total partial derivative operators on the infinite jetbundle associated to Eq. (31) and in a similar way we construct nonlocal conservation laws by the requirement (33)
D x (T ) = D t (X),
where D ∗ is defined as the prolongation of D∗ towards the covering of the equation by nonlocal variables arising from local conservation laws; moreover X , T are dependent on local variables x , t , u, . . . , w, . . . as well as the already determined nonlocal variables, denoted here by p∗ or p∗,∗ , which are associated to the conservation laws (X, T ) by the formal definition Dx (p∗ ) = (p∗ )x = X, Dt (p∗ ) = (p∗ )t = T .
Proceeding in this way, we obtained the following set of nonlocal variables (34)
p0,1 , p0,2 , p1 , p1,1 , p1,2 , p2,1 , p3 , p3,1 , p3,2 , p4,1 , p5 ,
where their defining equations are given by

(p_1)_x = u,
(p_1)_t = 3 u^2 + 3 u w^2 − u_2 − 3 w w_2,
(p_{0,1})_x = w,
(p_{0,1})_t = 3 u w + w^3 − w_2,
(p_{0,2})_x = p_1,
(p_{0,2})_t = −6 p_3 − u_1,
(p_{1,1})_x = \cos(2p_{0,1}) p_1 w + \sin(2p_{0,1}) w^2,
(p_{1,1})_t = \cos(2p_{0,1}) (3 p_1 u w + p_1 w^3 − p_1 w_2 + u w_1 − u_1 w − w^2 w_1) + \sin(2p_{0,1}) (4 u w^2 + w^4 − 2 w w_2 + w_1^2),
(p_{1,2})_x = \cos(2p_{0,1}) w^2 − \sin(2p_{0,1}) p_1 w,
(p_{1,2})_t = \cos(2p_{0,1}) (4 u w^2 + w^4 − 2 w w_2 + w_1^2) + \sin(2p_{0,1}) (−3 p_1 u w − p_1 w^3 + p_1 w_2 − u w_1 + u_1 w + w^2 w_1),
(p_{2,1})_x = \bigl(4 \cos(2p_{0,1}) p_{1,1} w^2 − 4 \sin(2p_{0,1}) p_1 p_{1,1} w + w (p_1^2 − 2u + w^2)\bigr)/2,
(p_{2,1})_t = \bigl(4 \cos(2p_{0,1}) p_{1,1} (4 u w^2 + w^4 − 2 w w_2 + w_1^2) + 4 \sin(2p_{0,1}) p_{1,1} (−3 p_1 u w − p_1 w^3 + p_1 w_2 − u w_1 + u_1 w + w^2 w_1) + 3 p_1^2 u w + p_1^2 w^3 − p_1^2 w_2 + 2 p_1 u w_1 − 2 p_1 u_1 w − 2 p_1 w^2 w_1 − 8 u^2 w − u w^3 + 2 u w_2 − 2 u_1 w_1 + 2 u_2 w + w^5 + 3 w^2 w_2\bigr)/2,
(p_3)_x = (−u^2 − u w^2 + w w_2)/2,
(p_3)_t = \bigl(−4 u^3 − 9 u^2 w^2 + 2 u u_2 − 3 u w^4 + 11 u w w_2 − u w_1^2 − u_1^2 + u_1 w w_1 + 4 u_2 w^2 + 6 w^3 w_2 + 3 w^2 w_1^2 − w w_4 + w_1 w_3 − w_2^2\bigr)/2,
(p_{3,1})_x = \bigl(\cos(2p_{0,1}) w (p_1^3 − 6 p_1 u + 39 p_1 w^2 − 24 p_{1,1} p_{1,2} w + 12 p_3 + 6 u_1) + 2 \sin(2p_{0,1}) w (12 p_1 p_{1,1} p_{1,2} + 18 p_1 w_1 + 2 w^3 + 3 w_2) + 6 p_{1,2} w (−p_1^2 + 2u − w^2)\bigr)/12,
(p_{3,2})_x = \bigl(2 \cos(2p_{0,1}) w (12 p_1 p_{1,1} p_{1,2} − 18 p_1 w_1 − 2 w^3 − 3 w_2) + \sin(2p_{0,1}) w (p_1^3 − 6 p_1 u + 39 p_1 w^2 + 24 p_{1,1} p_{1,2} w + 12 p_3 + 6 u_1) + 6 p_{1,1} w (−p_1^2 + 2u − w^2)\bigr)/12,
(p_{4,1})_x = \bigl(8 \cos(2p_{0,1}) w (p_1^3 p_{1,2} + 12 p_1 p_{1,1}^2 p_{1,2} − 6 p_1 p_{1,2} u + 3 p_1 p_{1,2} w^2 − 12 p_{1,1} p_{1,2}^2 w + 18 p_{1,1} u w − 4 p_{1,1} w^3 − 6 p_{1,1} w_2 + 12 p_{1,2} p_3 + 6 p_{1,2} u_1) + 8 \sin(2p_{0,1}) w (p_1^3 p_{1,1} + 12 p_1 p_{1,1} p_{1,2}^2 − 6 p_1 p_{1,1} u + 3 p_1 p_{1,1} w^2 + 12 p_{1,1}^2 p_{1,2} w + 12 p_{1,1} p_3 + 6 p_{1,1} u_1 − 18 p_{1,2} u w + 4 p_{1,2} w^3 + 6 p_{1,2} w_2) + w (−p_1^4 − 24 p_1^2 p_{1,1}^2 − 24 p_1^2 p_{1,2}^2 + 12 p_1^2 u − 6 p_1^2 w^2 − 48 p_1 p_3 − 24 p_1 u_1 + 48 p_{1,1}^2 u − 24 p_{1,1}^2 w^2 + 48 p_{1,2}^2 u − 24 p_{1,2}^2 w^2 − 60 u^2 + 44 u w^2 + 24 u_2 − 13 w^4 + 6 w w_2)\bigr)/48,
(p_5)_x = \bigl(12 u^3 + 24 u^2 w^2 − 6 u u_2 + 6 u w^4 − 30 u w w_2 − 3 u_2 w^2 − 8 w^3 w_2 + 6 w w_4\bigr)/6.
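The defining equations for p_1 and p_{0,1} can be checked directly: the t-derivative of each density, given by the right-hand sides of (31), must equal the total x-derivative of the corresponding flux. A short sympy sketch (our own encoding on truncated jets):

```python
import sympy as sp

# two-component jet variables: u[k] = k-th x-derivative of u, similarly w[k]
u = sp.symbols('u0:5')
w = sp.symbols('w0:5')

def Dx(f):
    return sp.expand(sum(sp.diff(f, u[k]) * u[k + 1]
                         + sp.diff(f, w[k]) * w[k + 1] for k in range(4)))

# right-hand sides of the coupled KdV-mKdV system (31)
ut = (-u[3] + 6 * u[0] * u[1] - 3 * w[0] * w[3] - 3 * w[1] * w[2]
      + 3 * u[1] * w[0]**2 + 6 * u[0] * w[0] * w[1])
wt = -w[3] + 3 * w[0]**2 * w[1] + 3 * u[0] * w[1] + 3 * u[1] * w[0]

# (p1): density u with flux 3u^2 + 3uw^2 - u_2 - 3ww_2;
# (p0,1): density w with flux 3uw + w^3 - w_2.
assert sp.expand(ut - Dx(3 * u[0]**2 + 3 * u[0] * w[0]**2 - u[2] - 3 * w[0] * w[2])) == 0
assert sp.expand(wt - Dx(3 * u[0] * w[0] + w[0]**3 - w[2])) == 0
```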
In the previous equations we skipped the explicit formulas for (p_{3,1})_t, (p_{3,2})_t, (p_{4,1})_t and (p_5)_t, because they are too massive, though quite important for the setting to be well defined and in order to avoid ambiguities; the reader is referred to [3] for these explicit formulas. It is quite a striking result that the functions cos(2p_{0,1}), sin(2p_{0,1}) appear in the presentation of the conservation laws and their associated nonlocal variables. We should note that p_1, p_{0,1}, p_3, p_5 arise from local conservation laws, and we shall call them nonlocalities of first order. In a similar way, p_{0,2}, p_{1,1}, p_{1,2} arise from nonlocal conservation laws, their x- and t-derivatives depending on the first-order nonlocalities; for this reason p_{0,2}, p_{1,1}, p_{1,2} are called nonlocalities of second order. Proceeding in this way, p_{2,1}, p_{3,1}, p_{3,2}, p_{4,1} constitute nonlocalities of third order.

4.2. Local and nonlocal symmetries

In this section we shall present results on the construction of local and nonlocal symmetries of system (31). In order to construct these symmetries, we consider the system of partial differential equations obtained by the infinite prolongation of (31), together with the covering by the nonlocal variables p_{0,1}, p_{0,2}, p_1, p_{1,1}, p_{1,2}, p_{2,1}, p_3, p_{3,1}, p_{3,2}, p_{4,1}, p_5.
So, in the augmented setting governed by (31), their total derivatives, and the equations given in Subsection 4.1, we construct symmetries Y = (Y^u, Y^w) which have to satisfy the symmetry condition

\bar\ell_E(Y) = 0.
From this condition we obtained the following symmetries

Y_{0,1}, Y_{1,1}, Y_{1,2}, Y_{1,3}, Y_{2,1}, Y_{3,1}, Y_{3,2}, Y_{3,3},

whose generating functions Y^u_{*,*}, Y^w_{*,*} are given by

Y^u_{0,1} = 3t (6 u u_1 + 6 u w w_1 + 3 u_1 w^2 − u_3 − 3 w w_3 − 3 w_1 w_2) + x u_1 + 2u,
Y^w_{0,1} = 3t (3 u w_1 + 3 u_1 w + 3 w^2 w_1 − w_3) + x w_1 + w,
Y^u_{1,1} = u_1,
Y^w_{1,1} = w_1,
Y^u_{1,2} = \cos(2p_{0,1})(2 u w − w_2) + \sin(2p_{0,1})(u_1 + 2 w w_1),
Y^w_{1,2} = −\cos(2p_{0,1}) u − \sin(2p_{0,1}) w_1,
Y^u_{1,3} = \cos(2p_{0,1})(u_1 + 2 w w_1) + \sin(2p_{0,1})(−2 u w + w_2),
Y^w_{1,3} = −\cos(2p_{0,1}) w_1 + \sin(2p_{0,1}) u,
Y^u_{2,1} = \bigl(2 \cos(2p_{0,1})(p_{1,1} u_1 + 2 p_{1,1} w w_1 − 2 p_{1,2} u w + p_{1,2} w_2) + 2 \sin(2p_{0,1})(−2 p_{1,1} u w + p_{1,1} w_2 − p_{1,2} u_1 − 2 p_{1,2} w w_1) + 2 p_1 u w − p_1 w_2 + 2 u w_1 + 3 u_1 w + 2 w^2 w_1 − w_3\bigr)/2,
Y^w_{2,1} = \bigl(2 \cos(2p_{0,1})(−p_{1,1} w_1 + p_{1,2} u) + 2 \sin(2p_{0,1})(p_{1,1} u + p_{1,2} w_1) − p_1 u + u_1 + w w_1\bigr)/2,
Y^u_{3,1} = (6 u u_1 + 6 u w w_1 + 3 u_1 w^2 − u_3 − 3 w w_3 − 3 w_1 w_2)/3,
Y^w_{3,1} = (3 u w_1 + 3 u_1 w + 3 w^2 w_1 − w_3)/3,
Y^u_{3,2} = \bigl(\cos(2p_{0,1})(−2 p_1^2 u w + p_1^2 w_2 − 4 p_1 u w_1 − 6 p_1 u_1 w − 4 p_1 w^2 w_1 + 2 p_1 w_3 + 8 p_{1,1} p_{1,2} u_1 + 16 p_{1,1} p_{1,2} w w_1 − 8 p_{1,2}^2 u w + 4 p_{1,2}^2 w_2 − 4 p_{2,1} u_1 − 8 p_{2,1} w w_1 + 10 u^2 w + 6 u w^3 − 8 u w_2 − 14 u_1 w_1 − 8 u_2 w − 11 w^2 w_2 − 14 w w_1^2 + 2 w_4) + 2 \sin(2p_{0,1})(−8 p_{1,1} p_{1,2} u w + 4 p_{1,1} p_{1,2} w_2 − 2 p_{1,2}^2 u_1 − 4 p_{1,2}^2 w w_1 + 4 p_{2,1} u w − 2 p_{2,1} w_2 + 6 u u_1 + 10 u w w_1 + 3 u_1 w^2 − u_3 + 2 w^3 w_1 − 3 w w_3 − 5 w_1 w_2) + 4 p_{1,2}(2 p_1 u w − p_1 w_2 + 2 u w_1 + 3 u_1 w + 2 w^2 w_1 − w_3)\bigr)/8,
Y^w_{3,2} = \bigl(\cos(2p_{0,1})(p_1^2 u − 2 p_1 u_1 − 2 p_1 w w_1 − 8 p_{1,1} p_{1,2} w_1 + 4 p_{1,2}^2 u + 4 p_{2,1} w_1 − 4 u^2 − 3 u w^2 + 2 u_2 + 4 w w_2 + 2 w_1^2) + 2 \sin(2p_{0,1})(4 p_{1,1} p_{1,2} u + 2 p_{1,2}^2 w_1 − 2 p_{2,1} u − 3 u w_1 − 3 u_1 w − 3 w^2 w_1 + w_3) + 4 p_{1,2}(−p_1 u + u_1 + w w_1)\bigr)/8,
Y^u_{3,3} = \bigl(2 \cos(2p_{0,1})(2 p_{1,1}^2 u_1 + 4 p_{1,1}^2 w w_1 − 4 p_{2,1} u w + 2 p_{2,1} w_2 − 6 u u_1 − 10 u w w_1 − 3 u_1 w^2 + u_3 − 2 w^3 w_1 + 3 w w_3 + 5 w_1 w_2) + \sin(2p_{0,1})(−2 p_1^2 u w + p_1^2 w_2 − 4 p_1 u w_1 − 6 p_1 u_1 w − 4 p_1 w^2 w_1 + 2 p_1 w_3 − 8 p_{1,1}^2 u w + 4 p_{1,1}^2 w_2 − 4 p_{2,1} u_1 − 8 p_{2,1} w w_1 + 10 u^2 w + 6 u w^3 − 8 u w_2 − 14 u_1 w_1 − 8 u_2 w − 11 w^2 w_2 − 14 w w_1^2 + 2 w_4) + 4 p_{1,1}(2 p_1 u w − p_1 w_2 + 2 u w_1 + 3 u_1 w + 2 w^2 w_1 − w_3)\bigr)/8,
Y^w_{3,3} = \bigl(2 \cos(2p_{0,1})(−2 p_{1,1}^2 w_1 + 2 p_{2,1} u + 3 u w_1 + 3 u_1 w + 3 w^2 w_1 − w_3) + \sin(2p_{0,1})(p_1^2 u − 2 p_1 u_1 − 2 p_1 w w_1 + 4 p_{1,1}^2 u + 4 p_{2,1} w_1 − 4 u^2 − 3 u w^2 + 2 u_2 + 4 w w_2 + 2 w_1^2) + 4 p_{1,1}(−p_1 u + u_1 + w w_1)\bigr)/8.
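For the local symmetry Y_{1,1} = (u_1, w_1), the symmetry condition \bar\ell_E(Y) = 0 can be verified mechanically; the sketch below (our own encoding, not from the paper) implements the universal linearization of (31) on truncated jets:

```python
import sympy as sp

u = sp.symbols('u0:9')
w = sp.symbols('w0:9')

def Dx(f):
    return sp.expand(sum(sp.diff(f, u[k]) * u[k + 1]
                         + sp.diff(f, w[k]) * w[k + 1] for k in range(8)))

# right-hand sides of (31)
F = (-u[3] + 6 * u[0] * u[1] - 3 * w[0] * w[3] - 3 * w[1] * w[2]
     + 3 * u[1] * w[0]**2 + 6 * u[0] * w[0] * w[1])
G = -w[3] + 3 * w[0]**2 * w[1] + 3 * u[0] * w[1] + 3 * u[1] * w[0]

def Dt(f):
    ut, wt = [F], [G]
    for _ in range(4):
        ut.append(Dx(ut[-1]))
        wt.append(Dx(wt[-1]))
    return sp.expand(sum(sp.diff(f, u[k]) * ut[k]
                         + sp.diff(f, w[k]) * wt[k] for k in range(5)))

def ell(H, Yu, Yw):
    """Universal linearization of H applied to the pair (Yu, Yw)."""
    DYu, DYw = [Yu], [Yw]
    for _ in range(4):
        DYu.append(Dx(DYu[-1]))
        DYw.append(Dx(DYw[-1]))
    return sp.expand(sum(sp.diff(H, u[k]) * DYu[k]
                         + sp.diff(H, w[k]) * DYw[k] for k in range(5)))

# Y_{1,1} = (u_1, w_1) satisfies the symmetry condition for (31)
Yu, Yw = u[1], w[1]
assert sp.expand(Dt(Yu) - ell(F, Yu, Yw)) == 0
assert sp.expand(Dt(Yw) - ell(G, Yu, Yw)) == 0
```

The nonlocal symmetries Y_{1,2}, Y_{2,1}, ... would additionally require the prolongation of D_x, D_t to the p-variables; the local check above already illustrates the mechanism.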
4.3. Recursion operator

Here we present the recursion operator R for symmetries for this case, obtained as a higher symmetry in the Cartan covering of the system of Eqs. (1) augmented by the equations governing the nonlocal variables (34). As demonstrated there, this symmetry is a form-valued vector field (or a vector-field-valued one-form) and has to satisfy

(35)  \bar\ell^{\,C}_E(R) = 0.
In order to arrive at a nontrivial result, as was explained for the classical KdV equation (cf. (3), (25)), we have to introduce the nonlocal variables p_{0,1}, p_{0,2}, p_1, p_{1,1}, p_{1,2}, p_{2,1}, p_3, p_{3,1}, p_{3,2}, p_{4,1}, p_5
and their associated Cartan contact forms ωp0,1 , ωp0,2 , ωp1 , ωp1,1 , ωp1,2 , ωp2,1 , ωp3 , ωp3,1 , ωp3,2 , ωp4,1 , ωp5 .
The final result, which is dependent on the nonlocal Cartan forms ωp0,1 , ωp1 , ωp1,1 , ωp1,2 ,
is given by

(36)  R = R^u \frac{\partial}{\partial u} + R^w \frac{\partial}{\partial w} + ⋯,

where the components R^u, R^w are given by

(37)  R^u = ω_{u_2}(−1) + ω_u(4u + w^2) + ω_{w_2}(−2w) + ω_{w_1}(−w_1) + ω_w(3 u w − 2 w_2)
          + ω_{p_{1,2}}\bigl(−\cos(2p_{0,1})(u_1 + 2 w w_1) + \sin(2p_{0,1})(2 u w − w_2)\bigr)
          + ω_{p_{1,1}}\bigl(\cos(2p_{0,1})(−2 u w + w_2) − \sin(2p_{0,1})(u_1 + 2 w w_1)\bigr)
          + ω_{p_1}(2 u_1 + w w_1) + ω_{p_{0,1}}(2 p_1 u w − p_1 w_2 + 2 u w_1 + 3 u_1 w + 2 w^2 w_1 − w_3),

      R^w = ω_{w_2}(−1) + ω_w(2u + w^2) + ω_u(2w)
          + ω_{p_{1,2}}\bigl(\cos(2p_{0,1}) w_1 − \sin(2p_{0,1}) u\bigr)
          + ω_{p_{1,1}}\bigl(\cos(2p_{0,1}) u + \sin(2p_{0,1}) w_1\bigr)
          + ω_{p_1}(w_1) + ω_{p_{0,1}}(−p_1 u + u_1 + w w_1).
We shall now present this result in a more conventional form, using operator expressions built from D_x and D_x^{−1}. To do so, we first split (37) into a local part and nonlocal parts, consisting of the terms associated to ω_{u_2}, ω_u, ω_{w_2}, ω_{w_1}, ω_w and of those associated to ω_{p_{1,2}}, ω_{p_{1,1}}, ω_{p_1}, ω_{p_{0,1}}, respectively. The first part accounts for the D_x presentation, while the second one accounts for the D_x^{−1} part. Due to the action of the contraction φ ⌟ R, the local part is given by the following matrix operator:

( −D_x^2 + 4u + w^2     −2w D_x^2 − w_1 D_x + 3 u w − 2 w_2 )
( 2w                    −D_x^2 + 2u + w^2                   )

The nonlocal part splits into parts associated to ω_{p_1}, ω_{p_{0,1}} and to ω_{p_{1,2}}, ω_{p_{1,1}}, respectively. The first one is given as

( (2 u_1 + w w_1) D_x^{−1}    (2 p_1 u w − p_1 w_2 + 2 u w_1 + 3 u_1 w + 2 w^2 w_1 − w_3) D_x^{−1} )
( w_1 D_x^{−1}                (−p_1 u + u_1 + w w_1) D_x^{−1}                                      )

To deal with the last part, let us introduce the notation

A_1 = \cos(2p_{0,1})(−2 u w + w_2) − \sin(2p_{0,1})(u_1 + 2 w w_1),
A_2 = \cos(2p_{0,1}) u + \sin(2p_{0,1}) w_1,
B_1 = −\cos(2p_{0,1})(u_1 + 2 w w_1) + \sin(2p_{0,1})(2 u w − w_2),
B_2 = \cos(2p_{0,1}) w_1 − \sin(2p_{0,1}) u,

these being the coefficients at ω_{p_{1,1}} and ω_{p_{1,2}} in (37). In accordance with the presentations of (p_{1,1})_x and (p_{1,2})_x, i.e.,

(p_{1,1})_x = \cos(2p_{0,1}) p_1 w + \sin(2p_{0,1}) w^2,
(p_{1,2})_x = \cos(2p_{0,1}) w^2 − \sin(2p_{0,1}) p_1 w,

we introduce their partial derivatives with respect to p_{0,1}, p_1 and w:

α_1 = −2 p_1 w \sin(2p_{0,1}) + 2 w^2 \cos(2p_{0,1}),
α_2 = w \cos(2p_{0,1}),
α_3 = p_1 \cos(2p_{0,1}) + 2 w \sin(2p_{0,1}),
β_1 = −2 w^2 \sin(2p_{0,1}) − 2 p_1 w \cos(2p_{0,1}),
β_2 = −w \sin(2p_{0,1}),
β_3 = 2 w \cos(2p_{0,1}) − p_1 \sin(2p_{0,1}).

From this we arrive in a straightforward way at the last nonlocal part of the recursion operator, i.e.,

( A_1 D_x^{−1} α_2 D_x^{−1}    A_1 D_x^{−1} (α_1 D_x^{−1} + α_3) )     ( B_1 D_x^{−1} β_2 D_x^{−1}    B_1 D_x^{−1} (β_1 D_x^{−1} + β_3) )
( A_2 D_x^{−1} α_2 D_x^{−1}    A_2 D_x^{−1} (α_1 D_x^{−1} + α_3) )  +  ( B_2 D_x^{−1} β_2 D_x^{−1}    B_2 D_x^{−1} (β_1 D_x^{−1} + β_3) ).

So, in final form, we obtain the recursion operator as

R = ( −D_x^2 + 4u + w^2     −2w D_x^2 − w_1 D_x + 3 u w − 2 w_2 )
    ( 2w                    −D_x^2 + 2u + w^2                   )

  + ( (2 u_1 + w w_1) D_x^{−1}    (2 p_1 u w − p_1 w_2 + 2 u w_1 + 3 u_1 w + 2 w^2 w_1 − w_3) D_x^{−1} )
    ( w_1 D_x^{−1}                (−p_1 u + u_1 + w w_1) D_x^{−1}                                      )

  + ( A_1 D_x^{−1} α_2 D_x^{−1}    A_1 D_x^{−1} (α_1 D_x^{−1} + α_3) )
    ( A_2 D_x^{−1} α_2 D_x^{−1}    A_2 D_x^{−1} (α_1 D_x^{−1} + α_3) )

  + ( B_1 D_x^{−1} β_2 D_x^{−1}    B_1 D_x^{−1} (β_1 D_x^{−1} + β_3) )
    ( B_2 D_x^{−1} β_2 D_x^{−1}    B_2 D_x^{−1} (β_1 D_x^{−1} + β_3) ).
5. CONCLUSION

We gave an outline of the theory of symmetries of differential equations, leading to the construction of recursion operators for symmetries of such equations. The extension of this theory to the nonlocal setting of differential equations is essential for obtaining nontrivial results. The theory has been applied to the construction of the recursion operator for symmetries of a coupled KdV–mKdV system, leading to a highly nonlocal result for this system. Moreover, the appearance of non-polynomial nonlocal terms in all results, e.g., conservation laws, symmetries and the recursion operator, is striking and reveals an unknown and intriguing underlying structure of the equations. Work on the construction of Bäcklund transformations for this system is in progress.

REFERENCES

[1] Dodd R.K., Eilbeck J.C., Gibbons J.D., Morris H.C. – Solitons and Nonlinear Wave Equations, Academic Press, 1982.
[2] Kersten P.H.M. – Supersymmetries and recursion operators for N = 2 supersymmetric KdV-equation, RIMS Kokyuroku 1150 (2000) 153–161.
[3] Kersten P.H.M., Krasil'shchik I.S. – Complete integrability of the coupled KdV–mKdV system, in: Morimoto T., Sato H., Yamaguchi K. (Eds.), Lie Groups, Geometric Structures and Differential Equations – One Hundred Years after Sophus Lie, Adv. Stud. Pure Math., vol. 37, Math. Soc. Japan, 2002, pp. 151–171; arXiv:nlin.SI/0010041.
[4] Khor'kova N.G. – Conservation laws and nonlocal symmetries, Mat. Zametki 44 (1) (1988) 134–144, 157; translation in Math. Notes 44 (1–2) (1988) 562–568.
[5] Krasil'shchik I.S. – Some new cohomological invariants for nonlinear differential equations, Differential Geom. Appl. 2 (4) (1992) 307–350.
[6] Krasil'shchik I.S., Kersten P.H.M. – Symmetries and Recursion Operators for Classical and Supersymmetric Differential Equations, Kluwer Acad. Publ., Dordrecht, 2000.
[7] Krasil'shchik I.S., Lychagin V.V., Vinogradov A.M. – Geometry of Jet Spaces and Nonlinear Partial Differential Equations, Gordon and Breach, New York, 1986.
[8] Krasil'shchik I.S., Vinogradov A.M. – Nonlocal trends in the geometry of differential equations: symmetries, conservation laws, and Bäcklund transformations, Acta Appl. Math. 15 (1–2) (1989) 161–209.
[9] Krivonos S., Sorin A. – Extended N = 2 supersymmetric matrix (1, s)-KdV hierarchies, Phys. Lett. A 251 (1999) 109.
[10] Mathieu P. – Open problems for the super KdV equation, in: AARMS–CRM Workshop on Bäcklund and Darboux Transformations. The Geometry of Soliton Theory (June 4–9, 1999, Halifax, Nova Scotia).
[11] Vinogradov A.M. – Local symmetries and conservation laws, Acta Appl. Math. 3 (1984) 21–78.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Variations on the notion of controllability ✩
Michel Fliess LIX, Ecole polytechnique & Projet ALIEN, INRIA Futurs, 91128 Palaiseau, France Email :
[email protected]
1. INTRODUCTION

The reader unfamiliar with control theory will find in the lines that follow an explanation of our problem setting.

1.1. A flat crane

Fig. 1 shows a crane. The task is to transport the load m, avoiding obstacles if necessary, by acting on the displacement D of the trolley on the rail OX and on the length R of the cable. The task is difficult if one wants to move fast, because of the oscillations of m; hence many studies, theoretical and practical, in robotics and automatic control¹ (see [38] and its bibliography). Newton's law and the geometry of the model yield the equations of motion²:

(1)  m \ddot x = −T \sin θ,
     m \ddot z = −T \cos θ + mg,
     x = R \sin θ + D,
     z = R \cos θ,

where
– (x, z) (the coordinates of the load m), T (the tension of the cable) and θ (the angle of the cable with the vertical axis OZ) are the unknowns;

✩ On the authors' request the editors would like to stress that this contribution dates from 2001 and hence reflects the state of the art in 2001.
¹ "Automatique", a term popular among engineers, should be understood as a synonym of what mathematicians call control theory (control in American English). Recall that in American English one also says, or rather used to say, automatic control.
² Like mechanicians, control engineers use the fluxion notation, where \dot x = dx/dt, \ddot x = d^2x/dt^2, x^{(ν)} = d^ν x/dt^ν.
Figure 1. The crane.
– D and R are the controls, i.e., the variables on which one acts.

An immediate computation shows that sin θ, T, D and R are functions of (x, z) and of a finite number of their derivatives:

\sin θ = \frac{x − D}{R},   T = \frac{m R (g − \ddot z)}{z},   (\ddot z − g)(x − D) = \ddot x z,   (x − D)^2 + z^2 = R^2,

hence

D = x − \frac{\ddot x z}{\ddot z − g},   R^2 = z^2 + \Bigl(\frac{\ddot x z}{\ddot z − g}\Bigr)^2.

Prescribe a time law (x(t), z(t)) for the load m satisfying the objectives mentioned above, namely going around the obstacles without generating oscillations. The other system variables, in particular the controls D and R, are then obtained without integrating any differential equations, as functions of x, z and a finite number of their derivatives. The crane is a nonlinear system, called (differentially) flat, and the pair (x, z) is called a flat, or linearizing, output. Simulations are shown below.³ Figs. 2 and 3 demonstrate the absence of oscillations.⁴ The next two describe the regulation of the trolley

³ The chosen time law (x(t), z(t)) is a polynomial function of time [38]. For this example, as for all those discussed below, in finite dimension or not, the numerical computations never require discretizing the differential equations.
⁴ One "sees" in Fig. 2 that this is due to an acceleration at the beginning of the planned trajectory of m, followed by a deceleration at the end. Such a property would seem harder to obtain had one wanted to start by determining the controls D(t) and R(t) directly, as is common in the current literature.
Figure 2. Non-oscillation of the load.

Figure 3. Horizontal deviation (m) versus time (s).
and of the cable; Fig. 6 shows the corresponding forces, which are the "true" controls. This so-called open-loop control must, in order to attenuate disturbances of various origins, such as modeling imperfections, be complemented by a feedback, which is easy to determine because (1) is equivalent, in a sense to be made precise, to the linear system

(2)  x^{(α)} = u,   z^{(β)} = v,

where α, β ≥ 1, and where u and v are the new control variables.⁵ We refer to [38] for more details.⁶

⁵ The feedback in question stabilizes (2). It may, for instance, be a static state feedback allowing the poles of (2) to be placed in the complex plane.
⁶ In order to make cranes operated by non-specialist personnel safe, the US Navy has launched a vast research program. The flatness of the corresponding model makes it possible to envisage effective, possibly remote, assistance to the operation [68].
Figure 4. Low-level regulation of the trolley: speed (m/s) versus time (s). Measurement (–), reference (- -).

Figure 5. Low-level regulation of the cable: speed (m/s) versus time (s). Measurement (–), reference (- -).
Flatness is a property of an underdetermined system of ordinary differential equations, i.e., one with fewer equations than unknowns: there exist m variables y = (y_1, ..., y_m) such that:

1. Every system variable can be expressed as a differential function⁷ of y.
2. Every component of y can be expressed as a differential function of the system variables.
3. The components of y and their derivatives are functionally independent.

⁷ A differential function [98] of y is a function of the components of y and of their derivatives up to some finite order.
Figure 6. Corresponding forces (N) versus time (s): trolley force F (–) and drum force C/b (- -).
As already said, y is a flat, or linearizing, output. It serves to plan the trajectories of the system, i.e., to impose a desired behavior satisfying natural constraints, often given by inequalities, such as obstacle avoidance.⁸

1.2. Where is controllability?

The notion of controllability was invented in 1960 by Kalman (cf. [60]) for linear systems of the form

(3)  \frac{d}{dt} (x_1, ..., x_n)^T = A (x_1, ..., x_n)^T + B (u_1, ..., u_m)^T,

where
– u = (u_1, ..., u_m) and x = (x_1, ..., x_n) are, respectively, the control and the state,
– A ∈ R^{n×n} and B ∈ R^{n×m} are constant matrices.

The state x evolves in an R-vector space E of dimension n. One says that (3) is controllable if any two points of the state space can be joined, i.e., if and only if, given two points P_0, P_1 ∈ E and two instants t_0, t_1, t_0 < t_1, there exists a control u, defined on [t_0, t_1], such that x(t_ι) = P_ι, ι = 0, 1. Controllability is, together with observability, also due to Kalman, a key concept

⁸ This is, from the point of view of applications, the essence of our methodology, which simplifies the resolution of many practical questions. The generalization to infinite dimension, i.e., to delay systems and to partial differential equations, is carried out below.
for the understanding⁹ of structural and qualitative properties, such as stabilization. The extension of controllability to finite-dimensional nonlinear systems and to infinite dimension has generated, over nearly forty years, a considerable literature, which has by no means exhausted this rich and varied subject. Almost all authors have considered natural generalizations of (3). In the linear infinite-dimensional case, for example, the state belongs to a well-chosen function space. For finite-dimensional nonlinear systems one uses

(4)  \frac{dx}{dt} = F(x, u),

where the state x belongs to a finite-dimensional differentiable manifold, R^n for example, on which F is a vector field parametrized by the control u. Yet, as Rosenbrock [113] remarked very early in the finite-dimensional linear case, a convenient description of the phenomenon under study is not necessarily of type (3) or (4). Thus, Eqs. (1) of the crane are not in the form (4), and it is, in a certain sense, impossible to bring them to it [44]. Consider the linear dynamics with one-dimensional state and control

(5)  \dot x = \dot u.

It differs from (3) by the presence of the derivative \dot u. Since x(t) = u(t) + c, c ∈ R, for every pair of points P_0, P_1 ∈ R there exists a control joining them in a time interval [t_0, t_1]. One should nevertheless not believe that (5) is controllable,¹⁰ since x(t) − u(t) is a constant, c, not influenced by the future choice of the control. Controllability must therefore be given an intrinsic definition, independent of any particular representation. To a finite-dimensional linear system we associate a finitely generated R[d/dt]-module. The system is controllable if and only if this module is free.¹¹ With (3) one recovers the classical concept: the torsion submodule corresponds to the uncontrollable subspace in the Kalman decomposition (see, e.g., [4,59]). Let us add that (5) is not controllable, since the element x − u, which satisfies (d/dt)(x − u) = 0, is a torsion element. Let us sum up! A finite-dimensional linear system is controllable if and only if there exist m variables y = (y_1, ..., y_m) such that:

1. every system variable can be expressed as a finite linear combination of the components of y and of their derivatives;

⁹ Textbooks on linear control are nowadays so abundant that there can be no question of reviewing them. Let us mention, however, the American books by Kailath [59] and Sontag [138] and the French one by d'Andréa-Novel and Cohen de Lara [4]. They contain developments on the static state feedbacks mentioned above.
¹⁰ In Rosenbrock's so-called polynomial approach [113], (5) is not controllable either (see [4]).
¹¹ We shall show in 3.3, for robustness, and in 3.4, for the predictive control of an electric motor, how this formalism allows an easy solution of control questions arising in practice.
2. every component of y can be expressed as a finite linear combination of the system variables and of their derivatives;
3. the components of y and their derivatives are linearly independent.

The flat nonlinear systems of 1.1 appear as nonlinear analogues of controllable linear systems. A finite-dimensional linear system is therefore flat if and only if it is controllable: flatness is a generalization, different from the usual ones in the literature (see, e.g., [57,58,93,138]), of Kalman's linear controllability. This will be the guiding thread of our exposition. As for the mathematical tools, they must make use of a finite but a priori unknown number of derivatives. Two possible choices are:

1. Differential algebra and differential algebraic geometry, born above all from the work of Ritt [111] and Kolchin [64], appeared between the two World Wars as generalizations to differential equations of the concepts and tools of commutative algebra and algebraic geometry.¹²
2. The differential geometry of infinite jets, of more recent creation, developed in particular around Vinogradov (see [65,66] and [141,146]). Akin to É. Cartan's work on absolute equivalence [16], it has had a definite impact on such classical questions of mathematical physics as symmetries and conservation laws.

These two routes have developed independently, and only too few authors have insisted on their kinship [73,141,40,42]. Confronting them in our setting, control, is not easy. Nevertheless, the characterization of flatness, which, as we shall see briefly in 5.4, is an integrability problem, seems, in the present state of the art, better suited to differential geometry.
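For the finite-dimensional linear system (3), Kalman's controllability admits the classical rank test rank [B, AB, ..., A^{n−1}B] = n, equivalent to the freeness of the associated module. A small numerical sketch (our own, with hypothetical example matrices):

```python
import numpy as np

def is_controllable(A, B, tol=1e-9):
    """Kalman rank test for (3): controllable iff rank [B, AB, ..., A^{n-1}B] = n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks), tol=tol) == n

# double integrator: controllable
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[0.0], [1.0]])
assert is_controllable(A1, B1)

# a decoupled mode that the input never reaches: not controllable
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
B2 = np.array([[1.0], [0.0]])
assert not is_controllable(A2, B2)
```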
Empressonsnous d’ajouter que l’économie d’écriture de l’algèbre différentielle et les notions de dimensions qui y ont été développées, comme le degré de transcendance différentielle,13 sont excellentes pour aborder des thèmes comme l’inversion entréesortie et la représentation d’état [26,28], et divers types de synthèse par bouclage dynamique [20,22,124]. 1.3. Passage à la dimension infinie Commençons par la classe la plus facile en dimension infinie, celle des systèmes à retards. Le système à un seul retard, (6) 12
ẋ(t) = a x(t) + u(t − 1),  a ∈ R,
¹² A recent text by Buium and Cassidy [14] provides a lucid and luminous summary of the history of differential algebra and of its most recent developments.
¹³ See [40,42] for a generalization to the differential geometry of infinite jets, which has made it possible to elucidate certain points in nonholonomic mechanics and in gauge theory. This shortcoming of differential geometry is doubtless due to the fact that, with the notable exception of Gromov [53], underdetermined differential equations have been little studied.
274
M. Fliess
where u is the control and x the state, plays a definite role in applications (see, e.g., [99] for Smith predictors). One associates with it, as with the R[d/dt]-module of (3), a module Λ over the ring R[d/dt, σ] of polynomials in two indeterminates, where σ denotes the delay operator, σf(t) = f(t − 1). It follows from the Quillen–Suslin theorem, which settled Serre's conjecture, that Λ is not free, but torsion-free. One recovers a basis, that is, a free module, by introducing the advance operator σ⁻¹, i.e., by taking the localized module R[d/dt, σ, σ⁻¹] ⊗_{R[d/dt, σ]} Λ, which is free with basis x, since
(7) u(t) = ẋ(t + 1) − a x(t + 1).
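A quick numerical sanity check of the parametrization (7): by (6), any smooth planned trajectory t ↦ x(t) is reproduced exactly by the non-causal control u(t) = ẋ(t + 1) − a x(t + 1). The trajectory and the value of a below are illustrative, not taken from the text; a minimal sketch:

```python
# Numerical check of the flat parametrization (7) of the delay system (6):
# for any planned trajectory t -> x(t), the (non-causal) control
# u(t) = x'(t+1) - a*x(t+1) satisfies x'(t) = a*x(t) + u(t-1) identically.
a = 2.0  # illustrative value

def x(t):          # planned trajectory, chosen freely
    return t**3 + t

def x_dot(t):      # its derivative
    return 3 * t**2 + 1

def u(t):          # control from (7); note the use of the advance operator
    return x_dot(t + 1.0) - a * x(t + 1.0)

# verify that x'(t) = a*x(t) + u(t-1) along the planned trajectory
for t in [0.0, 0.5, 1.3, 2.7]:
    residual = x_dot(t) - a * x(t) - u(t - 1.0)
    assert abs(residual) < 1e-9
```

The residual vanishes identically: u(t − 1) is, by construction, exactly ẋ(t) − a x(t).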
One says, following [48], that (6) is σ-free controllable, a fundamental property for imposing a trajectory, as with the crane.¹⁴ This type of controllability will be at the heart of our concerns.¹⁵

As an example of a distributed-parameter system, that is, one governed by partial differential equations, let us examine the heat equation with a single space variable:

(8) (∂²/∂z² − ∂/∂t) w(z, t) = 0,  0 ≤ z ≤ 1, t ≥ 0.

The initial condition is w(z, 0) = 0. The boundary conditions are

∂w(0, t)/∂z = 0
and w(1, t) = u(t),
where u(t) is the control. The usual operational calculus (see [23]) transforms (8) into an ordinary differential equation in the independent variable z:

(9)
ŵ_zz − s ŵ = 0
with the two-point boundary conditions ŵ_z(0) = 0, ŵ(1) = û. The parameter s denotes, of course, derivation with respect to time; û and ŵ are, in today's dominant interpretation, Laplace transforms.¹⁶ Let us rewrite the solution of (9),

ŵ = (ch(z√s)/ch(√s)) û,
¹⁴ The non-causality of (7) is of no importance, since one wishes to follow a planned trajectory, that is, one prescribed in advance.
¹⁵ Note the influence of Grothendieck's modern algebraic geometry (cf. [54]) in such changes of base ring by tensor product, performed in order to obtain the desired property (see [48,86]).
¹⁶ û = ∫₀^{+∞} e^{−sτ} u(τ) dτ, ŵ(z) = ∫₀^{+∞} e^{−sτ} w(z, τ) dτ. We shall follow here the remarkable approach to operational calculus due to Mikusiński [84,85], which will spare us all analytic difficulties.
Variations sur la notion de contrôlabilité
275
in the form

(10) Q ŵ = P û
where P = ch(z√s), Q = ch(√s). For the same reasons as for (6), the corresponding C[P, Q]-module¹⁷ is torsion-free, but not free. The module localized over C[P, Q, (PQ)⁻¹] is free, with basis ζ̂ = ŵ(0):
(11) ŵ = ch(z√s) ζ̂,  û = ch(√s) ζ̂.
To compute a desired trajectory, one makes the action of ch(z√s) on ζ(t) explicit via the formula¹⁸

(12) Σ_{ν≥0} (z^{2ν}/(2ν)!) (d^ν ζ/dt^ν),

assuming ζ is a Gevrey function of class < 2, flat at t = 0.

With the vibrating string equation

(∂²/∂z² − ∂²/∂t²) w(z, t) = 0,

with zero initial conditions and the same boundary conditions, the operational calculus yields analogous solutions, except that e^{±x√s} must be replaced by e^{±xs}. One thus ends up with a delay system [89,92]. The so-called telegraph equation,

(∂²/∂z² − a ∂²/∂t² − b ∂/∂t − c) w(z, t) = 0,

where a > 0, b, c ≥ 0, requires Bessel functions (see [47,127]). Contrary to the parabolic case of the heat equation, trajectory planning for these hyperbolic equations uses the past and the future over a finite interval (Paley–Wiener theorem).

1.4. Some references

Flat nonlinear systems were discovered, as a control tool, by Jean Lévine, Philippe Martin, Pierre Rouchon and the author, and first given mathematical form through differential algebra [36,38]. Their presentation in the language of Vinogradov's diffieties, that is, in a differential geometry of

¹⁷ We do not use a C[∂/∂z, ∂/∂t]-module, that is, a D-module point of view, since taking the boundary control u into account seems more delicate there.
¹⁸ The infinite character of this series may be interpreted as tied to the infinite dimension, a comment which extends to delay systems (see, e.g., formula (7)) if one views the delay and advance operators through an infinite power-series expansion in the derivation. Recall that in finite dimension only a finite number of derivatives is used.
infinite jets, appears for the first time in [37] (see also [43]). Pomet [105], in a very similar way, and van Nieuwstadt, Rathinam and Murray [143], in the language of É. Cartan's exterior forms, have also proposed a differential-geometric definition of flatness. Long ago, Hilbert, in an article [55] isolated within his œuvre, had noticed, on an example, the possibility of a parametrization requiring no integration of differential equations,¹⁹ without defining it formally (see also Cartan [15]).

Like many other things, it was Kalman (cf. [61]) who introduced modules into control, for finite-dimensional linear systems. He did so for the state representation and arrived at a finitely generated torsion module over a principal ideal ring. The generalization to delay systems is due to Kamen [62] (see also [135]). In an entirely dissimilar manner, the equivalence between linear systems and modules, due to the author [29,30], takes all the variables into account, without necessarily distinguishing between them, as in Willems's behavioral point of view (see [104]). This approach, valid also with non-constant coefficients, has served to clarify various structural properties (see [11,12,31,33]). The case of delay systems is the work of Hugues Mounier and the author (see [48,86]), that of partial differential equations of Mounier, Rouchon, Joachim Rudolph and the author [50,92].²⁰ Other surveys, with different perspectives, can be found in [49,68,78,83,117,119,125].

1.5. Applications

In just a few years there have been a great number of concrete illustrations. Many have been successfully tested in the laboratory. Some are exploited industrially.²¹

1.5.1. Flat systems

Classification by topic gives an idea of the ubiquity of flatness:

– robotics [63,67], and nonholonomic vehicles [38,39,120];
– aeronautics [76,77];
– electric motors [74,81,82,132,147], and magnetic bearings [69,130];
– the automobile industry, whether for active suspensions [68], clutches [70], or windscreen wipers [7];
– hydraulics [6];
– chemical engineering, with various types of reactors [101,114,116,122,126,133];
– the food industry [5].
¹⁹ Hilbert explicitly writes integrallos.
²⁰ We refer to [94,103,107] for other points of view on the use of modules in linear control.
²¹ Confidentiality reasons do not allow the corresponding references to be included.
1.5.2. Delay systems

They have been applied to aerodynamics, to refining and to antennas [89,101,102]. Certain situations, such as that of high-speed networks [88], require variable delays.

1.5.3. Partial differential equations

The control of the vibrating string equation, which is connected, as we have seen, to delay systems, has been applied to various situations of flexible rods [89,92], which materialize in certain drilling problems [87]. The heat equation has been used for a chemical reactor [51] and a heat exchanger [127]. The computations [50] made to avoid the vibrations of a beam, modeled by the Euler–Bernoulli equation, were confirmed experimentally in [2,56]. The telegraph equation [47] has led to active signal restoration along a cable and to the control of a steam exchanger [127]. Other cases, notably in chemical or electrical engineering and on certain types of cables, sometimes with more general equations, can be found in [101,115].

2. ABSTRACT LINEAR SYSTEMS
2.1. Generalities

Let A be an integral domain, that is, a commutative ring without zero divisors, assumed Noetherian for simplicity. An A-linear system, or A-system, Λ is a finitely generated A-module. The category of A-systems is therefore that of finitely generated A-modules. An input, or control, is a finite subset u ⊂ Λ, possibly empty, such that the quotient module Λ/span_A(u) is torsion. The input u is said to be independent if the A-module span_A(u) is free, with basis u. An A-dynamics D is an A-system equipped with an input u. For an A-dynamics without input, i.e., u = ∅, D is torsion. An output is a finite subset y ⊂ Λ. An input-output A-system S is an A-dynamics equipped with an output.

Let B be a commutative Noetherian ring which is also an A-algebra. The system Λ_B = B ⊗_A Λ is a B-module, that is, a B-system, called the B-extension of Λ. Here, B will always be obtained by localization, i.e., B = S⁻¹A, where S is a multiplicatively stable subset of A.

2.2. Controllability

The A-system Λ is said to be B-torsion-free controllable (resp. B-projective controllable, B-free controllable) if the B-module Λ_B is torsion-free (resp. projective, free). It is well known that B-free (resp. B-projective) controllability implies B-projective (resp. B-torsion-free) controllability. Suppose Λ is B-free controllable. Any basis of the free B-module Λ_B is called a B-flat, or B-basic, output.
Remark 2.2.1. The input-output A-linear system S is said to be B-observable²² if S_B = span_B(u, y).

2.3. π-freeness

The following result [48], capital in infinite dimension, follows directly from [123, Proposition 2.12.17, p. 233].

Theorem 2.3.1. Let
– Λ be an A-linear system,
– B = S⁻¹A an A-algebra, where S is a multiplicatively stable subset of A,
such that Λ_B is B-free controllable. Then there exists an element π ∈ S such that Λ is A[π⁻¹]-free controllable.

One says that Λ is (controllable) π-free, and π is called a liberating element. Any basis of the free A[π⁻¹]-module A[π⁻¹] ⊗_A Λ is called a (flat) π-free output, or π-flat output.

3. FINITE-DIMENSIONAL LINEAR SYSTEMS
A linear system over the principal ideal ring R[d/dt] of differential polynomials of the form

Σ_finite a_α (d^α/dt^α),  a_α ∈ R, α ≥ 0,

is said to be finite-dimensional. Indeed, consider an R[d/dt]-linear dynamics D with input u. The torsion module D/span_{R[d/dt]}(u), which is finitely generated, is finite-dimensional as an R-vector space.

3.1. State representation

Let D be an R[d/dt]-dynamics. Set n = dim_R(D/span_{R[d/dt]}(u)). Choose in D a set η = (η₁, …, η_n) whose residue²³ η̄ = (η̄₁, …, η̄_n) in D/span_{R[d/dt]}(u) is a basis. Then

(d/dt)(η̄₁, …, η̄_n)ᵀ = F (η̄₁, …, η̄_n)ᵀ
²² Recent developments on observability and exact integral observers can be found in [46] (see also [32]). We refer to [75] for an application to an electric motor.
²³ That is, the canonical image in D/span_{R[d/dt]}(u).
where F ∈ R^{n×n}. Hence

(13) (d/dt)(η₁, …, η_n)ᵀ = F (η₁, …, η_n)ᵀ + Σ_{α=0}^{ν} G_α (d^α/dt^α)(u₁, …, u_m)ᵀ

where G_α ∈ R^{n×m}. One calls η a generalized state, and (13) a generalized state representation. Let η̃ = (η̃₁, …, η̃_n) be another generalized state. Since the residues of η and η̃ in D/span_{R[d/dt]}(u) are bases of it as a vector space,

(14) (η̃₁, …, η̃_n)ᵀ = P (η₁, …, η_n)ᵀ + Σ_finite Q_γ (d^γ/dt^γ)(u₁, …, u_m)ᵀ

where P ∈ R^{n×n}, det(P) ≠ 0, Q_γ ∈ R^{n×m}. Note that (14) depends, in general, on the input and finitely many of its derivatives.

Suppose in (13) that ν ≥ 1 and G_ν ≠ 0. Set, in accordance with (14),

(η₁, …, η_n)ᵀ = (η̄₁, …, η̄_n)ᵀ + G_ν (d^{ν−1}/dt^{ν−1})(u₁, …, u_m)ᵀ.

It follows that

(d/dt)(η̄₁, …, η̄_n)ᵀ = F (η̄₁, …, η̄_n)ᵀ + Σ_{α=0}^{ν−1} Ḡ_α (d^α/dt^α)(u₁, …, u_m)ᵀ.

The maximal order of derivation of u there is at most ν − 1. By induction one arrives at

(15) (d/dt)(x₁, …, x_n)ᵀ = A (x₁, …, x_n)ᵀ + B (u₁, …, u_m)ᵀ

where A ∈ R^{n×n}, B ∈ R^{n×m}. One calls (15) a Kalman state representation²⁴; x = (x₁, …, x_n) is a Kalman state. Two Kalman states x and x̃ = (x̃₁, …, x̃_n) are related by a transformation independent of the input:

(16) (x̃₁, …, x̃_n)ᵀ = P (x₁, …, x_n)ᵀ

where P ∈ R^{n×n}, det(P) ≠ 0. We have thus proved:
²⁴ It was Kalman who popularized these state representations (cf. [60,61]).
280
M. Fliess
Theorem 3.1.1. Every R[d/dt]-linear dynamics admits a Kalman state representation (15). Two Kalman states are related by (16).

Remark 3.1.1. Contrary to a vast recent literature on implicit linear systems, one can thus always reduce to a Kalman representation. This is due to the fact that the group generated by the state transformations (14) is larger than those admitted in that work on implicit systems. It is instructive to recall that state transformations depending on the control, neglected until now by theoreticians, had sometimes been used in practice long ago.

3.2. Controllability

Since the ring R[d/dt] is principal, R[d/dt]-torsion-free and R[d/dt]-free controllability coincide. We shall therefore say that an R[d/dt]-linear system is controllable if, and only if, it is R[d/dt]-free controllable. The dynamics (15) is controllable in the sense of Kalman (see [60] and [4,59,138]) if, and only if, the Kalman criterion

rk(B, AB, …, A^{n−1}B) = n

is satisfied. The proof of the following result was sketched in 1.2:

Theorem 3.2.1. The dynamics (13) is controllable in the sense of Kalman if, and only if, it is controllable.

3.3. Digression: robustness

An R[d/dt]-linear system Λ_pert is said to be perturbed²⁵ if a finite subset of perturbations ϖ = (ϖ₁, …, ϖ_q) has been distinguished in it. The quotient Λ = Λ_pert/span_{R[d/dt]}(ϖ) is the unperturbed system. In a perturbed dynamics D_pert, one assumes

span_{R[d/dt]}(u) ∩ span_{R[d/dt]}(ϖ) = {0},
which means that controls and perturbations do not interact. Then the restriction of the canonical morphism D_pert → D = D_pert/span_{R[d/dt]}(ϖ) to span_{R[d/dt]}(u) is an isomorphism: by a slight abuse of notation, we shall still write u for the image of u in D. Let us restrict ourselves to a single-input dynamics, i.e., m = 1, such that the unperturbed dynamics is controllable. Let z be a basis of D, which has rank 1. Then u = ωz, ω ∈ R[d/dt], ω ≠ 0, and therefore u = ω z_pert + ε_ϖ, where z_pert ∈
²⁵ Maintaining acceptable performance despite a certain ignorance of the model is a primordial chapter of control theory, called robustness. In the linear case, this ignorance can be translated mathematically by the presence of additive perturbations.
D_pert has image z ∈ D and ε_ϖ ∈ span_{R[d/dt]}(ϖ). One easily deduces [45] the perturbed state representation

ẋ₁^pert = x₂^pert,
…
ẋ_{n−1}^pert = x_n^pert,
ẋ_n^pert = Σ_{ν=1}^{n} a_ν x_ν^pert + bu + ε_ϖ,

a₁, …, a_n, b ∈ R, b ≠ 0, where the perturbation ε_ϖ ∈ span_{R[d/dt]}(ϖ) is matched, that is, satisfies what is called in the American literature Utkin's matching condition [142].²⁶ Under natural hypotheses on ε_ϖ, this condition makes it possible to attenuate the perturbation by an appropriate feedback, an important result made possible by our approach. This property, always satisfied in our setting, requires, from the older point of view, a change of state depending on the perturbations. Consider, indeed,

ẋ₁^pert = x₂^pert + ϖ₁,
ẋ₂^pert = u + ϖ₂.
Set x₁^pert = x̃₁^pert, x₂^pert = x̃₂^pert − ϖ₁. It follows that
dx̃₁^pert/dt = x̃₂^pert,
dx̃₂^pert/dt = u + ϖ₂ + ϖ̇₁.
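The change of state above is elementary but easy to get wrong with signs; a minimal numerical check (all signals below are illustrative placeholders) that x̃₁ = x₁^pert, x̃₂ = x₂^pert + ϖ₁ indeed yields dx̃₁/dt = x̃₂ and dx̃₂/dt = u + ϖ₂ + ϖ̇₁, so that both perturbations enter through the input channel:

```python
# Check of the change of state moving the perturbations into the input
# channel (Utkin's matching condition).  Signals are illustrative.
import math

w1 = math.sin                        # perturbation ϖ1
w1_dot = math.cos                    # its derivative
def w2(t): return 0.3 * t            # perturbation ϖ2
def u(t): return t**2                # an arbitrary control signal

def x1_dot(t, x2): return x2 + w1(t)           # original perturbed dynamics
def x2_dot(t, x2): return u(t) + w2(t)

for t in [0.0, 0.7, 2.1]:
    x2 = 1.5                          # arbitrary state value
    xt2 = x2 + w1(t)                  # transformed state x̃2 = x2 + ϖ1
    # dx̃1/dt = dx1/dt = x2 + ϖ1 = x̃2
    assert abs(x1_dot(t, x2) - xt2) < 1e-12
    # dx̃2/dt = dx2/dt + dϖ1/dt = u + ϖ2 + ϖ1'
    xt2_dot = x2_dot(t, x2) + w1_dot(t)
    assert abs(xt2_dot - (u(t) + w2(t) + w1_dot(t))) < 1e-12
```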
The generalization to the multivariable case (m ≥ 2) is carried out via the perturbed Brunovský form (cf. [45]).

3.4. Digression: predictive control of an electric motor

Predictive control, due above all to Richalet [110], has become widely popular in industry.²⁷ It is a form of "anticipating", or feedforward, control, which consists in imposing a "desired" behavior on the system. Our point of view makes it possible to solve several, if not most, of the open problems²⁸ concerning this type of control, namely stabilization and robustness,
²⁶ This condition was defined in connection with sliding-mode control, for which modules are also very useful (see [52] for a preliminary study).
²⁷ We refer to [45] for a more thorough bibliographical analysis.
²⁸ An open problem in applied mathematics does not have the same content as in pure mathematics. There are, in other words, no analogues of Fermat's last theorem, nor of the Riemann or Poincaré conjectures, that is, no precise statements to be proved. A quick comment is therefore in order. In the engineering sciences, an open problem is a more or less fuzzy magma, to which a mathematical translation may give substance. Among the infinitely many possible translations, some better reflect the intended goals and admit an elegant solution. The epistemological debate thus shifts to new ground, which such a survey does not allow us to dig into.
nonminimum phase, and the handling of constraints, while considerably simplifying trajectory generation. The following example, borrowed from [45], is a DC electric motor
Figure 7. DC motor.
governed by the equations

(17a) ẋ₁ = (R_i/(T_M k)) x₂ − L/J,
(17b) ẋ₂ = −(J/(T_M T_i k)) x₁ − (1/T_i) x₂ + (k_RD/(R_i T_i)) u,
where x₁ = ω is the angular velocity of the rotor shaft and x₂ = i the rotor current. The control u is the source voltage; L is the perturbation due to the load torque. The nominal parameter values are J = 2.7 kg m², k = 2.62 N m/A, k_RD = 500, T_M = 0.1829 s, T_i = 0.033 s, R_i = 0.465 Ω. The nominal specifications of the motor are: power 22 kW, voltage 440 V, current 52.6 A, angular velocity 157 rad/s. For short periods the motor tolerates an overload of approximately 1.6 to 15 times the nominal values of voltage and current.

Let us begin with the unperturbed case, i.e., L = 0. It is immediate to verify that (17) is then controllable. A flat output is z = x₁, which can be measured. It follows that

x₁ = z,  x₂ = (T_M k/R_i) ż,
u = (R_i J/(T_M k k_RD)) z + (T_M k/k_RD) ż + (T_i T_M k/k_RD) z̈.
Once the predicted time law z*(t) has been fixed, these equations determine (x₁*(t), x₂*(t), u*(t)). Let us transfer z* from rest, x₁(t₀) = 0 rad/s, to the value x₁(t_T) = 145 rad/s over the time interval T = t_T − t₀ = 0.5 s. Fig. 8 shows these predicted trajectories, computed by polynomial interpolation so as to ensure continuity at both ends.
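The steps above can be sketched in a few lines: check the Kalman criterion for (17) numerically, then generate (x₁*, x₂*, u*) from a rest-to-rest law z*(t). The quintic interpolation polynomial below is one convenient choice, not necessarily the one used for Fig. 8:

```python
# Sketch of flat trajectory planning for the DC motor (17); the quintic
# rest-to-rest law is an illustrative choice of interpolation polynomial.
import numpy as np

J, k, kRD, TM, Ti, Ri = 2.7, 2.62, 500.0, 0.1829, 0.033, 0.465

A = np.array([[0.0, Ri / (TM * k)],
              [-J / (TM * Ti * k), -1.0 / Ti]])
B = np.array([[0.0], [kRD / (Ri * Ti)]])
ctrb = np.hstack([B, A @ B])                 # Kalman matrix (B, AB)
assert np.linalg.matrix_rank(ctrb) == 2      # (17) is controllable

T, zT = 0.5, 145.0                           # transfer: 0 -> 145 rad/s in 0.5 s
def z(t):                                    # quintic: z', z'' vanish at 0 and T
    s = t / T
    return zT * (10 * s**3 - 15 * s**4 + 6 * s**5)
def z_d(t):
    s = t / T
    return zT * (30 * s**2 - 60 * s**3 + 30 * s**4) / T
def z_dd(t):
    s = t / T
    return zT * (60 * s - 180 * s**2 + 120 * s**3) / T**2

def x1(t): return z(t)                       # flat output z = x1
def x2(t): return TM * k / Ri * z_d(t)
def u(t):
    return (Ri * J / (TM * k * kRD) * z(t)
            + TM * k / kRD * z_d(t)
            + Ti * TM * k / kRD * z_dd(t))

# rest-to-rest boundary behaviour of the planned trajectory
assert abs(x1(0.0)) < 1e-9 and abs(x1(T) - zT) < 1e-9
assert abs(x2(0.0)) < 1e-9 and abs(x2(T)) < 1e-9
```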
Figure 8. Predicted trajectories of the motor.
Figure 9. Closed-loop response (–) and predicted trajectories (–·).
In a second step, the perturbation is attenuated by setting u = u* + u_ε, where u_ε(t) = α₁ e₁(t) + α₀ ∫₀^t e₁(τ) dτ, e₁(t) = z(t) − z*(t), α₀, α₁ ∈ R, is a PI, or proportional-integral, corrector.²⁹ Fig. 9 shows simulations.
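The effect of the PI correction can be illustrated on a scalar toy model ż = u + ε with a constant unknown perturbation ε; the gains and sign conventions below are chosen for stability and are illustrative, not those of the motor simulations:

```python
# Toy illustration of perturbation attenuation by feedforward + PI:
# plant z' = u + eps, reference z*(t) = 1; the tracking error e = z - z*
# is driven to zero despite the unknown constant eps.
def simulate(T=10.0, dt=1e-3, a1=4.0, a0=4.0, eps=1.0):
    z, integ, t = 0.0, 0.0, 0.0
    z_ref = lambda t: 1.0                     # desired trajectory
    z_ref_dot = lambda t: 0.0
    while t < T:
        e = z - z_ref(t)
        u = z_ref_dot(t) - a1 * e - a0 * integ    # feedforward + PI term
        z += (u + eps) * dt                       # Euler step on the plant
        integ += e * dt
        t += dt
    return z - z_ref(T)                           # final tracking error

assert abs(simulate()) < 1e-3                 # error attenuated despite eps
```

The integral term ends up absorbing ε, which is exactly the role of the PI corrector in the second step above.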
²⁹ Such a corrector and, more generally, PID, or proportional-integral-derivative, correctors are in widespread industrial use (cf. [3]).
Figure 10. Behavior with time rescaling: closed-loop response (–) and predicted trajectories (–·).
To avoid saturations, the constraint x₂max < 500 A must be satisfied. A time rescaling d/dt = σ d/dτ, σ > 0, gives

x₁ = z,  x₂ = (T_M k/R_i) σ (dz/dτ),
u = (R_i J/(T_M k k_RD)) z + (T_M k/k_RD) σ (dz/dτ) + (T_i T_M k/k_RD) σ² (d²z/dτ²).
The simulations of Fig. 10, with σ = 0.5, give a very satisfactory result.

Remark 3.4.1. An example of a real electric motor controlled by these techniques can be found in [74].
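The role of σ is transparent on the flat parametrization: x₂ is proportional to σ dz/dτ, so the peak current scales linearly with σ. A short sketch, reusing an illustrative quintic rest-to-rest law:

```python
# Effect of the time rescaling d/dt = sigma d/dtau on the current:
# x2 = (TM k / Ri) sigma dz/dtau, so the peak of |x2| scales with sigma.
TM, k, Ri, zT = 0.1829, 2.62, 0.465, 145.0

def peak_x2(sigma, n=2001):
    # max of sigma (TM k / Ri) z'(tau) over a quintic law on tau in [0, 1]
    def z_d(s):                       # derivative of zT*(10s^3 - 15s^4 + 6s^5)
        return zT * (30 * s**2 - 60 * s**3 + 30 * s**4)
    return max(abs(sigma * TM * k / Ri * z_d(i / (n - 1))) for i in range(n))

assert abs(peak_x2(0.5) - 0.5 * peak_x2(1.0)) < 1e-9   # linear in sigma
```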
4. INFINITE-DIMENSIONAL LINEAR SYSTEMS
4.1. Where is the infinite dimension?

Let T be a finitely generated torsion module over the polynomial ring R[ω₁, …, ω_ρ] in ρ ≥ 2 indeterminates. Let J be the ideal of R[ω₁, …, ω_ρ] annihilating T. Suppose there exists χ ∈ R[ω₁, …, ω_ρ], χ ∉ R, such that J ∩ R[χ] = {0}. There then exists at least one element τ ∈ T such that the R-vector space spanned by {χ^ν τ | ν ≥ 0} is infinite-dimensional. It follows that T is likewise an infinite-dimensional R-vector space. Take an R[ω₁, …, ω_ρ]-linear dynamics D. The above applies to the torsion module D/span_{R[ω₁,…,ω_ρ]}(u).
4.2. The Quillen–Suslin theorem and free controllability

Rather than invoking numerous concrete examples in such a survey, let us examine the R[ω₁, ω₂]-linear system

(18)
ω₁ w₁ = ω₂ w₂
which is frequently encountered (see systems (6) and (8)).

Proposition 4.2.1. The system (18) is not R[ω₁, ω₂]-free controllable, but it is R[ω₁, ω₂]-torsion-free controllable.

Proof. Rewrite (18) in the form

(ω₁, −ω₂) (w₁, w₂)ᵀ = 0.

The presentation matrix (ω₁, −ω₂) has generic rank 1, that is, rank 1 over the fraction field R(ω₁, ω₂). By the resolution of Serre's conjecture [131] by Quillen [108] and Suslin [139], (18) is not R[ω₁, ω₂]-free controllable, since this rank drops when ω₁ and ω₂ are replaced by 0. By [145], (18) is R[ω₁, ω₂]-torsion-free controllable, since the minors are coprime. □

Remark 4.2.1. Other applications of the Quillen–Suslin theorem can be found in [48].

4.3. Delay systems

A linear delay system is an R[d/dt, σ₁, …, σ_r]-linear system, where σ₁, …, σ_r are the delay operators:

(σ_ι f)(t) = f(t − h_ι),  ι = 1, …, r, h_ι > 0.
It is always permissible to reduce to the situation where the h_ι are incommensurable, that is, where the Q-vector space they span has dimension r. Then R[d/dt, σ₁, …, σ_r] is isomorphic to a polynomial ring in r + 1 indeterminates. Let Λ be an R[d/dt, σ₁, …, σ_r]-linear system, which we suppose R[d/dt, σ₁, …, σ_r]-torsion-free controllable. Let R(σ₁, …, σ_r) be the fraction field of R[σ₁, …, σ_r]. Introduce by localization the R(σ₁, …, σ_r)[d/dt]-linear system

R(σ₁, …, σ_r)[d/dt] ⊗_{R[d/dt, σ₁, …, σ_r]} Λ
which, being torsion-free, is free, since R(σ₁, …, σ_r)[d/dt] is a principal ideal ring. By Theorem 2.3.1, Λ is π-free, π ∈ R[σ₁, …, σ_r]. We saw in the introduction that (6) is σ-free.
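For a system such as (18), the rank computations underlying Proposition 4.2.1 are easy to reproduce with a computer algebra system; a sketch using sympy:

```python
# Symbolic check of the proof of Proposition 4.2.1: the presentation
# matrix of (18) has generic rank 1, the rank drops at omega1 = omega2 = 0
# (hence the module is not free, by Quillen-Suslin), and the 1x1 minors
# omega1, -omega2 are coprime (hence it is torsion-free).
import sympy as sp

w1, w2 = sp.symbols('omega1 omega2')
M = sp.Matrix([[w1, -w2]])            # presentation matrix of (18)

assert M.rank() == 1                                  # generic rank
assert M.subs({w1: 0, w2: 0}).rank() == 0             # rank drop at 0
assert sp.gcd(w1, -w2) == 1                           # coprime minors
```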
Figure 11. Teleoperated robot arm.
4.4. Example of a teleoperated robot arm

We consider a flexible robot, teleoperated from a distance. More precisely, take a simple model of the first mode of a one-arm flexible robot, actuated by a motor receiving its orders from a distant platform. Denote by

– τ the transmission time of the orders,
– q_r(t) the rigid displacement,
– q_e(t) the first mode of the elastic displacement,
– C(t) the motor torque actuating the arm.

The fundamental equation of dynamics reads:

[M_rr M_re; M_er M_ee] [q̈_r; q̈_e] + [0; K_e q_e] = [C; 0]

where the M_xy denote the equivalent masses and K_e the elastic stiffness. Moreover, the motor being teleoperated, the orders u(t) are transmitted from a distant module and arrive with a transmission delay τ:

C(t) = u(t − τ).

The system equations can then be rewritten:

M_rr q̈_r(t) + M_re q̈_e(t) = u(t − τ),
M_er q̈_r(t) + M_ee q̈_e(t) = −K_e q_e(t).
This system is σ-free, with σ-flat output

ω(t) = M_er q_r(t) + M_ee q_e(t),

where σ denotes the delay operator of amplitude τ. Indeed,

q_e(t) = −(1/K_e) ω̈(t),  q_r(t) = (1/M_er)(ω(t) − M_ee q_e(t)),
hence

q_r(t) = (1/M_er) ω(t) + (M_ee/(K_e M_er)) ω̈(t),
q_e(t) = −(1/K_e) ω̈(t),
u(t) = (M_rr/M_er) ω̈(t + τ) + (1/K_e)(M_rr M_ee/M_er − M_re) ω⁽⁴⁾(t + τ).
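These formulas can be verified numerically: pick an arbitrary smooth ω(t), recover q_e, q_r and u, and check both system equations by finite differences. All parameter values below are illustrative, not from the text:

```python
# Numerical check of the sigma-flat parametrization of the robot arm.
# Masses, stiffness and delay are illustrative placeholder values.
import math

Mrr, Mre, Mer, Mee, Ke, tau = 1.2, 0.3, 0.3, 0.8, 50.0, 0.4

def om(t, n=0):                       # omega(t) = sin t and its n-th derivative
    return math.sin(t + n * math.pi / 2)

def qe(t): return -om(t, 2) / Ke
def qr(t): return om(t) / Mer + Mee / (Ke * Mer) * om(t, 2)
def u(t):
    return (Mrr / Mer * om(t + tau, 2)
            + (Mrr * Mee / Mer - Mre) / Ke * om(t + tau, 4))

def d2(f, t, h=1e-4):                 # central second finite difference
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

for t in [0.3, 1.1, 2.5]:
    # Mer q̈r + Mee q̈e = -Ke qe
    assert abs(Mer * d2(qr, t) + Mee * d2(qe, t) + Ke * qe(t)) < 1e-5
    # Mrr q̈r + Mre q̈e = u(t - tau)
    assert abs(Mrr * d2(qr, t) + Mre * d2(qe, t) - u(t - tau)) < 1e-5
```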
For a desired rest-to-rest trajectory ω_d(t):

Figure 12. Desired trajectory ω_d(t).
one obtains an open-loop control law achieving exact tracking, of the form:
Figure 13. Open-loop control u_d(t).
4.5. Distributed-parameter systems

For lack of space, let us concentrate on the heat equation (8). In the version of operational calculus due to Mikusiński [84,85] (see also [144]), Q = ch √s belongs to the field of operators, and P = ch z√s is an operational function. One can show that P and Q are C-algebraically independent. It follows that (10), which is of the same nature as (18), is C[P, Q]-torsion-free controllable, but not C[P, Q]-free controllable. By (11), it is clear that (10) is PQ-free.³⁰ By [85],

ch z√s = Σ_{n≥0} (z^{2n}/(2n)!) s^n,

where the series on the right-hand side converges operationally. For (12) to be valid, it suffices that the function ζ : R → R satisfy the following conditions:

1. ζ is C^∞, with support contained in [0, +∞[; in particular, ζ is flat at t = 0, so ζ^{(n)}(0) = 0 for all n ≥ 0.
2. ζ is Gevrey of order³¹ < 2.

The C^∞ function

φ(t) = 0 if t ≤ 0,  φ(t) = e^{−1/t^d} if t > 0,

where d > 0, is flat at t = 0, with support in [0, +∞[. It is Gevrey of class 1 + 1/d (cf. [112]). Taking such a function for ζ is not satisfactory, since it amounts to obtaining a desired behavior only asymptotically, that is, for t → +∞. One remedies this by introducing, with Ramis [109], the Gevrey function of class 1 + 1/d

η_d(t) = 0 if t < 0,
η_d(t) = (1/C) ∫₀^t exp(−1/(τ^d (1 − τ)^d)) dτ if 0 < t < 1,
η_d(t) = 1 if t ≥ 1,

³⁰ Obviously, (10) is also P-free, or Q-free. The element π of Theorem 2.3.1 is not unique.
³¹ Recall that the function ζ is said to be Gevrey of class, or order, µ ≥ 1 if, and only if,
– it is C^∞,
– for every compact C ⊂ R and every integer n ≥ 0, |ζ^{(n)}(t)| ≤ c^{n+1}(n!)^µ, where c > 0 is a constant depending on ζ and C.
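A sketch of the resulting trajectory computation, assuming ζ = η_d: the normalization integral is evaluated by a simple quadrature, the derivatives in (12) by finite differences, and the series is truncated. Quadrature order, truncation order and step sizes are illustrative:

```python
# Sketch of the planned heat-equation trajectory via the truncated
# series (12) applied to the Gevrey function eta_d of Ramis (here d = 1).
import math

d = 1.0

def eta_raw(t, n=400):
    # midpoint-rule quadrature of the integral defining eta_d on (0, t)
    h = t / n
    return sum(math.exp(-1.0 / ((h * (i + 0.5)) ** d * (1 - h * (i + 0.5)) ** d))
               for i in range(n)) * h

C = eta_raw(1.0)                       # normalization constant

def eta(t):                            # eta_d: 0 for t<0, 1 for t>=1
    if t <= 0.0: return 0.0
    if t >= 1.0: return 1.0
    return eta_raw(t) / C

def deriv(f, t, n, h=5e-3):            # n-th derivative by central differences
    if n == 0: return f(t)
    return (deriv(f, t + h, n - 1, h) - deriv(f, t - h, n - 1, h)) / (2 * h)

def w(z, t, N=4):                      # truncation of the series (12)
    return sum(z ** (2 * nu) / math.factorial(2 * nu) * deriv(eta, t, nu)
               for nu in range(N))

# sanity checks on the planned solution
assert w(0.0, 0.5) == eta(0.5)         # zeta = w(0, .) is the flat output
assert abs(w(0.3, -0.5)) < 1e-12       # zero before the transfer starts
```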
Figure 14. The Gevrey function used, for d = 1.
where

C = ∫₀^1 exp(−1/(τ^d (1 − τ)^d)) dτ

is a normalization constant; the function is plotted in Fig. 14 for d = 1.

One will find in [50] the implementation of this method for the Euler–Bernoulli equation of flexible beams, which was successfully tested in [2,56].

4.6. Generalizations in infinite dimension

Beyond finite-dimensional linear systems, definitions of controllability have been numerous and varied. For delay systems, a comparison between most of the existing notions has been carried out thanks to modules [48,86]. It would be instructive to pursue it for partial differential equations. The control of partial differential equations with several space variables must, of course, be studied. For simple geometries, the classical integral transforms (cf. [134]) seem to be the natural continuation of what precedes. Some concrete examples of nonlinear partial differential equations, treated in the same spirit, can be found in [72,119].

5. NONLINEAR SYSTEMS

We shall treat the two formalisms mentioned in the introduction: differential algebra, introduced into control in 1985 by the author (see [26]), and the differential geometry of infinite jets.
5.1. Differential algebra

An (ordinary) differential ring is a commutative ring A equipped with a single³² derivation, written d/dt = ˙ , such that ȧ ∈ A for every a ∈ A, and such that, for all a, b ∈ A,

– (d/dt)(a + b) = ȧ + ḃ,
– (d/dt)(ab) = ȧb + aḃ.

A constant is an element c ∈ A such that ċ = 0. The constants of A form the subring of constants. A differential ideal of A is an ideal closed under the derivation. A morphism φ : A → B of differential rings is a ring morphism commuting with the derivation: (d/dt)φ = φ(d/dt).

Example 5.1.1. In the differential ring k{X₁, …, X_n} of polynomials over k in the differential indeterminates X₁, …, X_n, every element is a polynomial in the indeterminates {X_ι^{(ν_ι)} | ι = 1, …, n, ν_ι ≥ 0}.

An (ordinary) differential field is a differential ring which is a field. For simplicity we restrict ourselves to fields of characteristic zero: no concrete phenomena governed by differential equations with coefficients in a field of nonzero characteristic are yet known.³³ What follows is a transcription of the elementary properties of non-differential field extensions. A differential extension K/k consists of two differential fields k, K, k ⊆ K, such that the derivation of k is the restriction to k of the derivation of K.

Notation. We write k⟨S⟩ for the differential subfield of K generated by k and a subset S ⊆ K.

An element ξ ∈ K is differentially k-algebraic if, and only if, the family of derivatives {ξ^{(ν)} | ν ≥ 0} is k-algebraically dependent; ξ then satisfies an algebraic differential equation P(ξ, …, ξ^{(n)}) = 0, where P is a polynomial over k in n + 1 indeterminates. Otherwise, ξ is said to be differentially k-transcendental. The extension K/k is differentially algebraic if, and only if, every element of K is differentially k-algebraic; otherwise, it is said to be differentially transcendental. A set {ξ_i | i ∈ I} of elements of K is said to be differentially k-algebraically independent if, and only if, the set {ξ_i^{(ν_i)} | i ∈ I, ν_i ≥ 0} is algebraically k-independent.
Such a set is called a differential transcendence basis of K/k if, and only if, it is maximal with respect to inclusion. Two such bases have the same cardinality, which is the
³² A differential ring equipped with several derivations is said to be partial. Since we restrict ourselves to ordinary differential equations, we shall not need them.
³³ This is not true for difference equations!
differential transcendence degree of K/k, written deg tr diff(K/k). Recall that deg tr diff(K/k) = 0 if, and only if, K/k is differentially algebraic.

Let K/k be a finitely generated differential extension. Then the following two conditions are equivalent [64]:
– K/k is differentially algebraic;
– deg tr(K/k) < ∞, i.e., the (non-differential) transcendence degree of K/k is finite.

Let k be a differential field and K/k an algebraic extension in the usual, i.e., non-differential, sense. Then K possesses a canonical structure of differential field such that K/k is a differential extension.

5.2. Systems

A k-system is a finitely generated differential extension K/k.

Remark 5.2.1. In practice, a system is given by algebraic differential equations, that is, by a differential ideal I of k{X₁, …, X_n}. Suppose I prime³⁴; then K is the fraction field of k{X₁, …, X_n}/I. If I is not prime, one reduces to this case thanks to the differential generalization of the Lasker–Noether decomposition theorem, due to Raudenbush and Ritt (cf. [111,64]).

A k-dynamics is a k-system K/k equipped with an input u = (u₁, …, u_m) such that the extension K/k⟨u⟩ is differentially algebraic. The input is said to be independent if, and only if, u is a differential transcendence basis of K/k. An input-output k-system is a k-dynamics equipped with an output, that is, a finite subset y = (y₁, …, y_p) of K.

5.2.1. State representations

Let n be the transcendence degree of K/k⟨u⟩ and x = (x₁, …, x_n) a transcendence basis. It follows that [26,28]

(19)
F_i(ẋ_i, x, u₁, u̇₁, …, u₁^{(α_{1i})}, …, u_m, u̇_m, …, u_m^{(α_{mi})}) = 0,  i = 1, …, n,
where the F_i are polynomials over k. Two notable antinomies should be noted with respect to (4), that is, with respect to the usual nonlinear state representation:

1. The presence in (19) of derivatives of the input³⁵ is confirmed by the crane of the introduction [44].
34 The differential ideals met in practice are most often prime. This is the case, in particular, when one has a representation of type (4) with polynomial right-hand side (see [25]).
35 The elementary example ẋ = (u̇)² shows that in the nonlinear setting it is in general impossible to eliminate the derivatives of the input by a change of state depending on the input and its derivatives. We refer to [21] for a general result.
M. Fliess
2. The implicit form of (19) with respect to ẋi, confirmed by a nonlinear electrical circuit [35], is related to the impasse points of ordinary differential equations.

5.2.2. Equivalence and flatness
Two systems K1/k and K2/k are said to be equivalent if, and only if, the algebraic closures K̄1 and K̄2 of K1 and K2 are such that the differential extensions K̄1/k and K̄2/k are differentially isomorphic. For lack of space we shall not give the translation in terms of endogenous feedback, which is a particular type of dynamic feedback.36
By analogy with the non-differential situation, the extension K/k is called purely differentially transcendental if, and only if, there exists a differential transcendence basis b = (b1, …, bm) of K/k such that K = k⟨b⟩. A system is called (differentially) flat if, and only if, it is equivalent to a purely differentially transcendental extension k⟨b⟩/k. Then b is called a (differentially) flat, or linearizing, output of the flat system.

5.2.3. Flatness and controllable linear systems
Let Λ be an R[d/dt]-linear system, that is, a finitely generated R[d/dt]-module. The symmetric R-algebra Sym(Λ) carries a canonical structure of differential ring. Let L be the differential field of fractions of Sym(Λ), which has no zero divisors. It is easy to check that the extension L/R is purely differentially transcendental if, and only if, Λ is free. We have thus sketched the proof37 of

Theorem 5.2.1. A nonlinear system is flat if, and only if, it is equivalent to a controllable linear system.

From this one deduces

Corollary 5.2.1. An R[d/dt]-linear system is flat if, and only if, it is controllable.

5.2.4. Digression: nonlinear delay systems
A formalism for nonlinear delay systems (cf. [27]) is provided by difference-differential fields [18], which generalize both differential fields and difference fields [17].
Such a field K, assumed of characteristic zero, will here be equipped with a single derivation d/dt and a finite set {σ1, …, σr} of injections K → K, which represent the delays. A nonlinear delay system is then a finitely generated extension K/k. The concept of flatness extends to this setting (see [90,91,129], where concrete examples are presented).
36 An important class of endogenous, hence dynamic, feedbacks is provided by quasi-static feedbacks [20], which are also useful for flatness [22,124,128].
37 By an analogous proof, two R[d/dt]-linear systems are equivalent if, and only if, the corresponding modules are isomorphic.
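Corollary 5.2.1 has a familiar state-space counterpart: a linear system ẋ = Ax + Bu corresponds to a free R[d/dt]-module, hence is flat, exactly when the Kalman controllability matrix has full rank. A minimal numerical sketch (the matrices and the helper `kalman_rank` are illustrative choices, not taken from the text):

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks))

# Double integrator x1' = x2, x2' = u: controllable, hence flat,
# with flat (linearizing) output y = x1, since x1 = y, x2 = y', u = y''.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(kalman_rank(A, B))    # full rank 2: controllable, flat

# A non-controllable system: its second state is autonomous, which
# yields a torsion element of the module, so the module is not free.
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
B2 = np.array([[1.0], [0.0]])
print(kalman_rank(A2, B2))  # rank 1 < 2: not controllable, not flat
```

The rank test here is the classical Kalman criterion; its equivalence with freeness of the module, and hence with flatness, is the content of the corollary.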
Variations sur la notion de contrôlabilité
5.3. Differential geometry
Let I be a countable set, finite or not. Denote by ℝ^I the set of maps I → ℝ, equipped with the product topology, which is a Fréchet topology. For every open set V ⊂ ℝ^I, let C∞(V) be the set of functions V → ℝ depending on finitely many variables and C∞. An ℝ^I-manifold of class C∞ is defined, as in finite dimension, by an ℝ^I-atlas. The notions of C∞ functions, vector fields and differential forms on an open set are clear. If {xi | i ∈ I} are local coordinates, note that a vector field may correspond to the infinite expression Σ_{i∈I} ζi ∂/∂xi, whereas a differential form Σ ωi1…ip dxi1 ∧ ⋯ ∧ dxip is always a finite sum. The notion of a (local) C∞ morphism between an ℝ^I-manifold and an ℝ^I′-manifold, where I and I′ need not be equal, is clear, as is that of a (local) isomorphism. On the other hand, the failure of the implicit function theorem in these infinite-dimensional Fréchet spaces rules out the equivalence of the various usual characterizations of (local) submersions and immersions between finite-dimensional manifolds. Following [146], we choose the definition: a (local) C∞ submersion (resp. immersion) is a (local) C∞ morphism for which there exist local coordinates in which it is a projection (resp. an injection).
A diffiety M is an ℝ^I-manifold of class C∞ equipped with a Cartan distribution CTM, that is, a finite-dimensional involutive distribution. The dimension n of CTM is the Cartan dimension of M. A (local) section of CTM is a (local) Cartan field of M. The diffiety is called ordinary (resp. partial) if, and only if, n = 1 (resp. n > 1). A (local) C∞ morphism between diffieties is called a Lie–Bäcklund morphism38 if, and only if, it is compatible with the Cartan distributions. The notions of (local) Lie–Bäcklund submersions and immersions are clear. The category of differential equations is then that of diffieties equipped with Lie–Bäcklund morphisms. From now on we restrict ourselves to ordinary diffieties, that is, to ordinary differential equations.

Example 5.3.1. Consider the diffiety with global coordinates {t, yi^(νi) | i = 1, …, m; νi ≥ 0} and Cartan field

d/dt = ∂/∂t + Σ_{i=1}^{m} Σ_{νi≥0} yi^(νi+1) ∂/∂yi^(νi).

Written ℝ × ℝ^m_∞ and called the trivial diffiety, since it corresponds to the trivial equation 0 = 0, it plays a fundamental role.
38 This terminology is due to Ibragimov (see [1] and also [146]). It has been criticized by several authors (see, for instance, [98]). Vinogradov and his school speak of C-morphisms (see [65,66]), since these morphisms generalize the classical contact transformations. It should be added that, the theory of diffieties being still in its infancy, the terminology varies from author to author.
A diffiety M is said to be (locally) of finite type39 if, and only if, there exists a (local) Lie–Bäcklund submersion M → ℝ × ℝ^m_∞ whose fibres are finite-dimensional; m is the (local) differential dimension of M.

Example 5.3.2. Consider the nonlinear dynamics

(20)  ẋ = F(x, u),

where the state x = (x1, …, xn) and the control u = (u1, …, um) belong to open subsets of ℝ^n and ℝ^m; F = (F1, …, Fn) is an n-tuple of C∞ functions of its arguments. Associate with (20) the diffiety D with coordinates {t, x1, …, xn, ui^(νi) | i = 1, …, m; νi ≥ 0}. The Cartan distribution is spanned by the Cartan field

d/dt = ∂/∂t + Σ_{k=1}^{n} Fk ∂/∂xk + Σ_{i=1}^{m} Σ_{νi≥0} ui^(νi+1) ∂/∂ui^(νi).
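On functions depending on finitely many coordinates of D, the Cartan field acts as the total derivative with respect to t. A sympy sketch, assuming the double integrator ẋ1 = x2, ẋ2 = u as a toy choice of F and truncating the jet variables of u at a finite order, which suffices for any fixed function:

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
JET = 4                       # truncation order of the jets of u
u = sp.symbols('u0:5')        # u[k] stands for u^(k)

# Toy dynamics x1' = x2, x2' = u (a double integrator).
F = {x1: x2, x2: u[0]}

def cartan(expr):
    """Total derivative d/dt on the diffiety of x' = F(x, u)."""
    d = sp.diff(expr, t)
    for xk, Fk in F.items():
        d += Fk * sp.diff(expr, xk)            # Sum_k F_k d/dx_k
    for k in range(JET):
        d += u[k + 1] * sp.diff(expr, u[k])    # u^(k) -> u^(k+1)
    return sp.expand(d)

# Applied to the flat output y = x1, the field recovers the state chain:
assert cartan(x1) == x2
assert cartan(cartan(x1)) == u[0]
# A constant is a trivial first integral: dI/dt = 0.
assert cartan(sp.Integer(5)) == 0
```

The same operator, applied to a candidate function I of t, x and finitely many derivatives of u, decides whether I is a first integral, a notion used in 5.3.3 below.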
A (local) Lie–Bäcklund fibration is a triple σ = (X, B, π), where π : X → B is a (local) Lie–Bäcklund submersion between two diffieties. For every b ∈ B, π⁻¹(b) is a fibre. Let σ′ = (X′, B, π′) be another Lie–Bäcklund fibration with the same base B. A Lie–Bäcklund morphism α : σ → σ′ is a Lie–Bäcklund morphism α : X → X′ such that π = π′α. The notion of Lie–Bäcklund isomorphism is clear.

5.3.1. Systems
A system [40] is a (local) Lie–Bäcklund fibration σ = (S, ℝ, τ), where
– S is a diffiety of finite type with a given Cartan field ∂S,
– ℝ carries its canonical diffiety structure, with global coordinate t and Cartan field ∂/∂t,
– the Cartan fields ∂S and ∂/∂t are τ-related.
A (local) Lie–Bäcklund morphism (resp. immersion, submersion, isomorphism) φ : (S, ℝ, τ) → (S′, ℝ, τ′) between two systems is a (local) Lie–Bäcklund morphism (resp. immersion, submersion, isomorphism) between S and S′ such that
– τ = τ′φ,
– ∂S and ∂S′ are φ-related.
A dynamics is a (local) Lie–Bäcklund submersion δ : (S, ℝ, τ) → (U, ℝ, µ) such that the Cartan fields ∂S and ∂U are δ-related. In general, U is an open subset of the trivial diffiety ℝ × ℝ^m_∞: it plays the role of the control. By a slight abuse of notation, we replace ∂S and ∂U, which are total derivations with respect to t, by d/dt.
39 See [40] for an intrinsic definition.
5.3.2. Equivalence and flatness
Two systems (S, ℝ, τ) and (S′, ℝ, τ′) are said to be (locally) differentially equivalent if, and only if, they are (locally) Lie–Bäcklund isomorphic. They are said to be (locally) orbitally equivalent if, and only if, the diffieties S and S′ are (locally) Lie–Bäcklund isomorphic. The first definition preserves time,40 but not the second: it introduces a time change.41
The triple (ℝ × ℝ^m_∞, ℝ, pr), where ℝ × ℝ^m_∞ = {t, yi^(νi)} and pr is the projection {t, yi^(νi)} → t, is a trivial system. The system (S, ℝ, τ) is (locally) differentially flat if, and only if, it is (locally) differentially equivalent to a trivial system; it is (locally) orbitally flat if, and only if, it is (locally) orbitally equivalent to the trivial diffiety. The set y = (y1, …, ym) is a flat, or linearizing, output. One checks, as in 5.2.3, that a system is (orbitally) flat if, and only if, it is (orbitally) equivalent to a controllable linear system.

5.3.3. Digression: strong accessibility
In the usual geometric theory (see [57,58,93,138]), a privileged object of study is a control-affine dynamics: (21)
ẋ = f0(x) + Σ_{i=1}^{m} ui fi(x).
The state x belongs to a C∞ differentiable manifold of dimension n. Let L be the weak accessibility distribution, that is, the distribution spanned by the Lie algebra L generated by the vector fields f0, f1, …, fm. The strong accessibility distribution L0 is the distribution spanned by the Lie ideal L0 of L generated by f1, …, fm. The dynamics (21) locally satisfies the strong accessibility Lie condition [140] (see also [57,58,93,138]) if, and only if, L0 is locally of dimension n.
A local first integral, or constant of motion, on the diffiety M is a local real-valued C∞ function I on M such that dI/dt = 0. It is called trivial if, and only if, it is a constant.

Proposition 5.3.1. For (21), the following two properties are equivalent:
– (21) locally satisfies the strong accessibility Lie condition;
– every local first integral of the associated diffiety M is trivial.

Proof. Suppose that (21) does not locally satisfy the strong accessibility Lie condition. Then (21) locally decomposes as follows (see [57,93]):
40 Like the one given with differential fields.
41 This time change is very useful in practice (trajectory planning [38], stabilizing feedback [39]), since it makes it possible to "get around" certain singularities.
(22)  dx̄/dt = f̄0(x̄),
      dx̃/dt = f̃0(x̄, x̃) + Σ_{i=1}^{m} ui f̃i(x̄, x̃),
where x̄ = (x̄1, …, x̄κ) and x̃ = (x̃κ+1, …, x̃n). Applying the straightening-out theorem for vector fields to dx̄/dt = f̄0(x̄) locally yields κ nontrivial local first integrals.
Suppose now that there exists a first integral I(t, x, u, u̇, …, u^(α)) for which some i ∈ {1, …, m} satisfies ∂I/∂ui^(α) ≠ 0. Then dI/dt, which contains the term ui^(α+1) ∂I/∂ui^(α), is identically zero only if ∂I/∂ui^(α) is. By induction it follows that I depends only on t, x1, …, xn. The existence of such a first integral contradicts the strong accessibility Lie condition. □

There does not seem to exist a definition of strong accessibility valid for an arbitrary nonlinear system. A diffiety is said to be locally strongly accessible if, and only if, every local first integral on it is trivial.

Example 5.3.3. It is easy to show that every trivial system is locally strongly accessible [42]. It follows that every locally orbitally flat system is locally strongly accessible as well.

5.4. Some open problems in the nonlinear setting
Further details on some of the problems discussed below can be found in [41].

5.4.1. Characterization of flatness
No general criterion for flatness is known. It seems to be, above all, an integrability problem,42 for which differential geometry is better suited.43

Example 5.4.1. The dynamics

x1^(α1) = u1,
x2^(α2) = u2,
ẋ3 = u1 u2,
42 One may convince oneself of this by considering the tangent, or variational, linear system attached to the system (see [38] for an algebraic approach via Kähler differentials, and [42] for the analogue in the differential geometry of infinite jets). Under the hypothesis, natural in view of the foregoing, that the system is strongly accessible, the tangent linear system, which has variable coefficients, is controllable (cf. [19,136,137]). The corresponding module, defined over a noncommutative principal ideal ring, is free [29]. Flatness is equivalent to the existence of an integrable basis, that is, one whose components are differentials of functions.
43 Partial results already exist (see [79,80,143,100,106]).
with α1, α2 > 0, is flat. A flat output is

y1 = x3 + Σ_{ι=1}^{α1} (−1)^ι x1^(α1−ι) u2^(ι−1),
y2 = x2.

One must therefore use derivatives of the control of order min(α1, α2), and it is not known whether flat outputs involving lower orders of derivation exist. The difficulty lies in the fact that, for a given system, no a priori bound is known on the order of derivation needed to test its possible flatness. If such a bound were available, an integrability criterion (cf. [13]) would provide a satisfactory answer. May the author be allowed to suggest the development of a Spencer cohomology of infinite order, that is, one independent of the order of derivation.

5.4.2. A variant of Lüroth's theorem
An essential conjecture, which we spell out in both of our languages, is the following:
– Let K/k be a flat system. For every differential field L, k ⊂ L ⊂ K, L/k is also a flat system.44 This is a variant of Lüroth's theorem, generalized to differential fields by Ritt [111].45
– The image under a Lie–Bäcklund submersion of a trivial system (resp. a trivial diffiety) is a trivial system (resp. a trivial diffiety).
In control jargon, its truth would imply that every system linearizable by dynamic feedback (see [38,43]) is (differentially) flat, that is, linearizable by endogenous feedback. Another consequence would be a better exploitation of the natural symmetries of a system when testing for flatness and determining flat outputs.46

5.4.3. Classification
A (local) classification of systems up to the equivalence described in 5.2.2 and 5.3.2 is needed, in which the simplest class would be that of flat systems. Here is its meaning in the two languages employed:
1. The task is to describe finitely generated extensions K/k of ordinary differential fields up to an algebraic47 extension of K.48
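Returning to Example 5.4.1: for a fixed α1 the flat output can be checked symbolically. A sympy sketch with α1 = 2, in which the alternating sign (−1)^ι makes the sum telescope once ẋ3 is eliminated through ẋ3 = u1 u2 = x1^(α1) u2:

```python
import sympy as sp

t = sp.symbols('t')
x1 = sp.Function('x1')(t)
x3 = sp.Function('x3')(t)
u2 = sp.Function('u2')(t)

# Example 5.4.1 with alpha1 = 2: x1'' = u1 and x3' = u1*u2 = x1''*u2.
# Candidate flat output: y1 = x3 + sum_{i=1}^{2} (-1)^i x1^(2-i) u2^(i-1).
y1 = x3 + sum((-1)**i * x1.diff(t, 2 - i) * u2.diff(t, i - 1)
              for i in (1, 2))

dy1 = y1.diff(t).subs(x3.diff(t), x1.diff(t, 2) * u2)

# The telescoping leaves y1' = (-1)^2 x1 u2'', so x1 (and then u1 = x1'',
# x3 = y1 - ...) is recovered from y1, y2 = x2 and their derivatives.
print(sp.simplify(dy1 - x1 * u2.diff(t, 2)))   # 0
```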
44 An algebraic extension L/k is regarded as a trivial flat system.
45 In this vein, let us mention that Ollivier [96] used flatness to answer in the negative a question raised by Ritt [111] on the generalization of the differential Lüroth theorem. Let us also cite [97] for a negative answer to a problem of Ritt on a differential generalization of a problem of M. Noether.
46 See [121] for a particular case.
47 This is an ordinary, that is, non-differential, algebraic extension.
48 The defect [38] is a first step in this direction. Here is its definition. Among all differential transcendence bases of the differential extension K/k there exists one, b, such that the (non-differential) transcendence degree of K/k⟨b⟩ is minimal. This transcendence degree is the defect. It is zero if, and only if, the system is flat. Recall the high-frequency control used in [38] for certain systems of nonzero defect.

2. In contrast with the local geometry of finite-dimensional differentiable manifolds, that of diffieties is not trivial49 (see [40] for some details). Classifying nonlinear systems locally50 amounts to describing this local geometry up to a change of coordinates.51 Many synthesis properties would find a simple and natural interpretation there.52

5.4.4. Controllabilities
Determine, possibly as a consequence of the preceding classification, a hierarchy of controllabilities in which
– the weakest would be strong accessibility;
– the strongest would be flatness, differential or orbital.

5.4.5. Trajectory characterization of controllability
Let us recall very briefly Willems' trajectory characterization of linear controllability (cf. [104]), which likewise does not distinguish among the variables of the system: a finite-dimensional linear system with constant coefficients is controllable if, and only if, any past trajectory can be joined to any future trajectory. The link with the freeness of the corresponding R[d/dt]-module53 is given in [30]. One easily convinces oneself
– that systems satisfying the various usual criteria of nonlinear controllability do not satisfy such a property;
– that an (orbitally) flat system enjoys an analogous property away from any singularity.
Characterize the nonlinear systems that locally enjoy such a property.

49 A diffiety is not, in general, locally Lie–Bäcklund isomorphic to a trivial diffiety!
50 By the straightening-out theorem for vector fields, the local classification of finite-dimensional dynamical systems, that is, uncontrolled systems, is trivial away from singularities!
51 This point of view carries over, of course, to differential algebraic geometry, that is, to the differential algebraic variety (cf. [14]) corresponding to the nonlinear system.
52 In the differential-field formalism this is already the case for decoupling and disturbance rejection by quasi-static feedback (cf. [20]).
53 A trajectory [30] of an R[d/dt]-linear system is then a morphism of R[d/dt]-modules from the module corresponding to the system to C∞(t0, t1), −∞ ≤ t0 < t1 ≤ +∞.
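For a flat linear system, the trajectory characterization of 5.4.5 is constructive: joining a given past trajectory to a given future one amounts to interpolating the flat output by a polynomial matching enough derivatives at both ends. A numerical sketch, assuming the double integrator y'' = u with flat output y:

```python
import numpy as np

# Steer (y, y') from (0, 0) at s = 0 to (1, 0) at s = 1 for y'' = u,
# using a cubic y(s) = a0 + a1 s + a2 s^2 + a3 s^3.
M = np.array([[1.0, 0.0, 0.0, 0.0],   # y(0)
              [0.0, 1.0, 0.0, 0.0],   # y'(0)
              [1.0, 1.0, 1.0, 1.0],   # y(1)
              [0.0, 1.0, 2.0, 3.0]])  # y'(1)
a = np.linalg.solve(M, np.array([0.0, 0.0, 1.0, 0.0]))

y = np.polynomial.Polynomial(a)   # y(s) = 3 s^2 - 2 s^3
u = y.deriv(2)                    # open-loop control recovered from y

# Both endpoint conditions hold: past and future trajectories are joined.
print(y(0.0), y.deriv()(0.0), y(1.0), y.deriv()(1.0))
```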
5.4.6. Real differential algebra
A real differential algebra should be developed in order to take better account of the needs of control.

Example 5.4.2. The system54

(23)  ẋ1² + ẋ2² = ẋ3

is not flat over ℝ, by the ruled-manifold criterion [38,118]. It is flat over ℂ.55 Indeed, rewrite (23) in the form ẋ3 = ż1 ż2, where z1 = x1 + x2 √−1 and z2 = x1 − x2 √−1. A flat output is given by

y1 = x3 − ż1 z2,
y2 = z1,

since z2 = −ẏ1/ÿ2. An analogous computation would show that ẋ1² − ẋ2² = ẋ3 is flat over ℝ.

5.4.7. Computer algebra
The development of algorithms, for instance in computer algebra, would be of great help. The constructive methods of differential algebra have already had a definite impact on nonlinear control (see, for instance, [24,25,34,71,95]). The recent work of Boulier et al. [9,10] should prove of great importance here.

5.4.8. Genericity
Flatness, which is not a generic property (cf. [118]), is nevertheless very often encountered in practice. Is there not here (and this is a question of an epistemological nature) a contradiction with the preponderant role sometimes granted to genericity?

6. BY WAY OF CONCLUSION

Various parts of this material have been taught in control theory courses. This makes it possible, while insisting on applications, to shed light on certain chapters of algebra, analysis or geometry that are rarely covered in applied mathematics courses. Several experiments, in France and abroad, have amply demonstrated that the message can be conveyed to a non-mathematical audience, such as, at times, one of engineers, thanks in particular to computer lab sessions.

ACKNOWLEDGEMENTS
The author wishes to express his gratitude to Jean Lévine, Richard Marquez, Hugues Mounier and Pierre Rouchon for their help during the writing.
54 Example due to P. Rouchon (personal communication).
55 Following a certain tradition in electrical engineering, one writes √−1 instead of i.
This work was carried out under the auspices of the European Commission's TMR programme, contract ERBFMRXTCT970137.

REFERENCES
[1] Anderson R.L., Ibragimov N.H. – Lie–Bäcklund Transformations in Applications, SIAM, Philadelphia, 1979.
[2] Aoustin Y., Fliess M., Mounier H., Rouchon P., Rudolph J. – Theory and practice in the motion planning control of a flexible robot arm using Mikusiński's operators, in: Proc. 4th Sympos. Robotics Control (Nantes, 1997), pp. 287–293.
[3] Åström K.J., Hägglund T. – Automatic Tuning of PID Controllers, 2nd ed., Instrument Society of America, Research Triangle Park, NC, 1995.
[4] d'Andréa-Novel B., Cohen de Lara M. – Commande linéaire de systèmes dynamiques, Masson, Paris, 1993.
[5] Baron R., Boillereaux L., Lévine J. – Platitude et conduite non linéaire : Illustration en extrusion et en photobioréacteur, to appear.
[6] Bindel R., Nitsche R., Rothfuß R., Zeitz M. – Flachheitsbasierte Regelung eines hydraulischen Antriebs mit zwei Ventilen für einen Großmanipulator, Automatisierungstechnik 48 (2000) 124–131.
[7] Bitauld L., Fliess M., Lévine J. – A flatness based control synthesis of linear systems: An application to windshield wipers, in: Proc. 4th Europ. Control Conf. (Bruxelles, 1997).
[8] Boichet J., Delaleau E., Diep N., Lévine J. – Modelling and control of a 2 D.O.F. high-precision positioning system, in: Proc. 5th Europ. Control Conf. (Karlsruhe, 1999).
[9] Boulier F. – Triangulation de systèmes différentiels, to appear.
[10] Boulier F., Lazard D., Ollivier F., Petitot M. – Computing representations for radicals of finitely generated differential ideals, to appear.
[11] Bourlès H., Fliess M. – Poles and zeros of linear systems: An intrinsic approach, Internat. J. Control 68 (1997) 897–922.
[12] Bourlès H., Marinescu B. – Poles and zeros at infinity of linear time-varying systems, IEEE Trans. Automat. Control 44 (1999) 1981–1985.
[13] Bryant R.L., Chern S.S., Gardner R.B., Goldschmidt H.L., Griffiths P.A.
– Exterior Differential Systems, Springer, New York, 1991. [14] Buium A., Cassidy P.J. – Differential algebraic geometry and differential algebraic groups: From algebraic differential equations to diophantine geometry, in: Bass H., Buium A., Cassidy P.J. (Eds.), Selected Works of Ellis Kolchin, Amer. Math. Soc., Providence, RI, 1999, pp. 567– 636. [15] Cartan É. – Sur l’intégration de certains systèmes indéterminés d’équations différentielles, J. Reine Angew. Math. 45 (1915) 86–91. Œuvres complètes, t. III, pp. 1169–1174, GauthierVillars, Paris, 1953. [16] Cartan É. – Les problèmes d’équivalence, Séminaire de Mathématiques, 1937, pp. 113–136. Œuvres complètes, t. III, pp. 1311–1334, GauthierVillars, Paris, 1953. [17] Cohn R.M. – Difference Algebra, Interscience, New York, 1965. [18] Cohn R.M. – A difference–differential basis theorem, Canad. J. Math. 22 (1970) 1224–1237. [19] Coron J.M. – Linearized control systems and applications to smooth stabilization, SIAM J. Control Optim. 32 (1994) 358–386. [20] Delaleau E., Pereira da Silva P.S. – Filtrations in feedback systems: Part I – Systems and feedbacks, Part II – Input–output decoupling and disturbance decoupling, Forum Math. 10 (1998) 147–174, 259–276. [21] Delaleau E., Respondek W. – Lowering the orders of derivatives of control in generalized state space systems, J. Math. Systems Estim. Control 5 (1995) 1–27. [22] Delaleau E., Rudolph J. – Control of flat systems by quasistatic state feedbacks of generalized states, Internat. J. Control 71 (1998) 745–765. [23] Doetsch G. – Theorie und Anwendung der LaplaceTransformation, Springer, Berlin, 1937.
[24] Diop S. – Elimination in control theory, Math. Control Signals Systems 4 (1991) 17–32. [25] Diop S. – Differentialalgebraic decision methods and some applications to system theory, Theoret. Comput. Sci. 98 (1992) 137–161. [26] Fliess M. – Automatique et corps différentiels, Forum Math. 1 (1989) 227–238. [27] Fliess M. – Some remarks on nonlinear input–output systems with delays, in: Descusse J., Fliess M., Isidori A., Leborgne D. (Eds.), New Trends in Nonlinear Control Theory, in: Lecture Notes in Control and Inform. Sci., vol. 122, Springer, Berlin, 1989, pp. 172–181. [28] Fliess M. – Generalized controller canonical forms for linear and nonlinear dynamics, IEEE Trans. Automat. Control 35 (1990) 994–1001. [29] Fliess M. – Some basic structural properties of generalized linear systems, Systems Control Lett. 15 (1990) 391–396. [30] Fliess M. – A remark on Willems’ trajectory characterization of linear controllability, Systems Control Lett. 19 (1992) 43–45. [31] Fliess M. – Une interprétation algébrique de la transformation de Laplace et des matrices de transfert, Linear Algebra Appl. 203–204 (1994) 429–442. [32] Fliess M. – Sur des pensers nouveaux faisons des vers anciens, in: Actes Conf. Internat. Franc. Automat. (CIFA) (Lille, 2000). [33] Fliess M., Bourlès H. – Discussing some examples of linear system interconnections, Systems Control Lett. 27 (1996) 1–7. [34] Fliess M., Glad S.T. – An algebraic approach to linear and nonlinear control, in: Trentelman H., Willems J. (Eds.), Essays on Control: Perspectives in the Theory and its Applications, Birkhäuser, Boston, 1993, pp. 223–267. [35] Fliess M., Hasler M. – Questioning the classic state space description via circuit examples, in: Kashoek M.A., van Schuppen J.H., Ran A.C.M. (Eds.), Realization and Modelling in System Theory, Birkäuser, Bâle, 1990, pp. 1–12. [36] Fliess M., Lévine J., Martin P., Rouchon P. – Sur les systèmes non linéaires différentiellement plats, C. R. Acad. Sci. Paris I 315 (1992) 619–624. 
[37] Fliess M., Lévine J., Martin P., Rouchon P. – Linéarisation par bouclage dynamique et transformations de Lie–Bäcklund, C. R. Acad. Sci. Paris I 317 (1993) 981–986. [38] Fliess M., Lévine J., Martin P., Rouchon P. – Flatness and defect of nonlinear systems: Introductory theory and applications, Internat. J. Control 61 (1995) 1327–1361. [39] Fliess M., Lévine J., Martin P., Rouchon P. – Design of trajectory stabilizing feedback for driftless flat systems, in: Proc. 3rd European Control Conf. (Rome, 1995), pp. 1882–1887. [40] Fliess M., Lévine J., Martin P., Rouchon P. – Deux applications de la géométrie locale des diffiétés, Ann. Inst. H. Poincaré Phys. Théor. 66 (1997) 275–292. [41] Fliess M., Lévine J., Martin P., Rouchon P. – Some open questions related to flat nonlinear systems, in: Blondel V.D., Sontag E.D., Vidyasagar M., Willems J.C. (Eds.), Open Problems in Mathematical Systems and Control Theory, Springer, Londres, 1998, pp. 99–103. [42] Fliess M., Lévine J., Martin P., Rouchon P. – Nonlinear control and diffieties with an application to physics, in: Henneaux M., Krasil’shchuk J., Vinogradov A. (Eds.), Secondary Calculus and Cohomological Physics, in: Contemp. Math., vol. 219, Amer. Math. Soc., Providence, RI, 1998, pp. 81–91. [43] Fliess M., Lévine J., Martin P., Rouchon P. – A Lie–Bäcklund approach to equivalence and flatness of nonlinear systems, IEEE Trans. Automat. Control 44 (1999) 922–937. [44] Fliess M., Lévine J., Rouchon P. – A generalised state variable representation for a simplified crane description, Internat. J. Control 58 (1993) 277–283. [45] Fliess M., Marquez R. – Continuoustime linear predictive control and flatness: A moduletheoretic setting with examples, Internat. J. Control 73 (2000) 606–623. [46] Fliess M., Marquez R., Delaleau E. – State feedbacks without asymptotic observers and generalized PID regulators, in: Isidori A., LamnabhiLagarrigue F., Respondek W. (Eds.), Nonlinear Control in the Year 2000, Springer, Londres, 2000. 
[47] Fliess M., Martin P., Petit N., Rouchon P. – Commande de l’équation des télégraphistes et restauration active d’un signal, Traitem. Signal 15 (1998) 619–625.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Stability and stabilizability of delay–differential systems
Paolo Vettori a and Sandro Zampieri b a Departamento de Matemática, Universidade de Aveiro, Campus de Santiago, 3810194 Aveiro, Portugal b Dipartimento di Elettronica e Informatica, Università di Padova, via Gradenigo 6/a, 35131 Padova, Italy Emails:
[email protected] (P. Vettori),
[email protected] (S. Zampieri)
ABSTRACT. In this chapter, linear systems with noncommensurate delays are analyzed according to the behavioral approach. The concept of autonomous delay–differential behavior is first introduced, and the class of stable delay–differential behaviors is characterized. Then, stabilizable behaviors are defined as a generalization of controllable behaviors. Some results on this class of systems are proposed. Finally, a counterexample is presented showing that stabilizability cannot be characterized in terms of rank drops of the matrix describing the system, as happens for delay–differential behaviors with commensurate delays.
1. I N T R O D U C T I O N It is well known that the presence of delays makes the analysis of a dynamical system much more complicated. Indeed, more refined mathematical tools are needed for the investigation of the structural properties of a delay system. It is possible to distinguish two ways that have been classically used to deal with linear delay systems. The first is based on the extension of the state space approach to infinitedimensional systems, and makes use of the theory of semigroups. The second is based on the analysis of input/output systems with nonrational transfer functions, and makes use instead of the theory of Fourier/Laplace transforms. The behavioral approach can be considered as another way to deal with systems, in general, and with linear delay systems, in particular, which is based neither on the state space representation nor on the input/output representation of the system. This approach uses mathematical instruments that are closer to the ones used in the input/output approach, and its main objective is to develop an algebraic theory for this class of systems. This objective has been completely pursued in the case
of systems in which all the delays are rational multiples of a single delay, which are called delay–differential systems with commensurate delays. On the other hand, the situation for general delay–differential systems with noncommensurate delays is much more involved, and, though many contributions have been given recently [6,12], many problems are still open for this class of systems. The behavioral description of a delay–differential system consists of the following set of trajectories

B = { w ∈ E^q : Σ_{i,j} R_{i,j} (d^i/dt^i) w(t − τ_j) = 0 },

where the symbol E denotes the space of smooth functions, the sum is finite, the τ_j are delays, and R_{i,j} ∈ R^{p×q}. The set B is called the behavior of the system. In this setup there is no a priori distinction between input, output and state variables. Implicit neutral-type systems are particular cases of these systems. In this chapter we will first present some stability results concerning autonomous behaviors, that is, behaviors in which there are no inputs and in which the past of any trajectory completely determines its future. These results are in some way classical for input/state/output systems [7] and have already been extended to the behavioral approach [11]. We will propose a frequency domain method which is based on some properties of exponential polynomials and on the structure of their zeros. This technique allows us to deal with the case of noncommensurate delays too. In the second part of the chapter we will study the stabilizability of delay–differential behaviors. According to [14], a behavior is stabilizable if the past of any trajectory in the behavior can be steered to zero asymptotically. This definition is connected with the definition of controllability of a behavior, which requires instead that the past of any trajectory in the behavior can be steered to zero in finite time. We will compare this property with a PBH-type rank condition. It can be shown that under some hypotheses these conditions are equivalent, but that this does not hold in general. We present a counterexample proving this fact.

2. MATHEMATICAL PRELIMINARIES
In this section we will provide the mathematical background which is needed in this chapter. For a complete exposition see [10,12]. Let E be the set of smooth functions C^∞(R, R) endowed with the Fréchet topology of uniform convergence of the derivatives of any order on every compact set. Notice that E′, the topological dual of E, consists of the subspace of Schwartz distributions having compact support, and it can be endowed with the weak topology. Any compact support distribution f ∈ E′ admits a Laplace transform, which is a holomorphic function that will be denoted by f(s). The set of all holomorphic functions will be denoted by O, while the subset of O constituted by the Laplace transforms of all compact support distributions will be denoted by the symbol A. We shall usually identify isomorphic sets such as E′ and A.
A compact support distribution f ∈ E′, or equivalently a holomorphic function f(s) ∈ A, acts on E through convolution:

f(s) : E → E,   w ↦ f(s)w := f ∗ w.

Furthermore, this operator is linear and continuous. Important examples of such operators are the differential operator d/dt and the delay operator

σ_τ : E → E,   w ↦ σ_τ w,   (σ_τ w)(t) := w(t − τ).
These operators can be represented both by convolution with the derivative δ′ of the delta distribution or with its translation δ_τ and, equivalently, by the holomorphic functions s, e^{−sτ} ∈ A, respectively. Let τ_1, ..., τ_n be n noncommensurate positive real numbers and let σ_i := σ_{τ_i}. Consider now the ring R[z_0, z_1, ..., z_n, z_1^{−1}, ..., z_n^{−1}] of Laurent polynomials in n + 1 indeterminates. Any polynomial in this ring naturally induces an operator from E to E simply by substituting the indeterminate z_0 with the differential operator d/dt and the indeterminates z_1, ..., z_n with the delay operators σ_1, ..., σ_n. Using the notation above, the same operator can be obtained as an element of A by substituting the indeterminate z_0 with the complex variable s and the indeterminates z_1, ..., z_n with the exponential functions e^{−τ_1 s}, ..., e^{−τ_n s}. In this way any polynomial p(z_0, ..., z_n) ∈ R[z_0, z_1, ..., z_n, z_1^{−1}, ..., z_n^{−1}] corresponds to a holomorphic function p(s) ∈ R[s, e^{τ_1 s}, ..., e^{τ_n s}, e^{−τ_1 s}, ..., e^{−τ_n s}] which is the Laplace transform of a compact support distribution. We denote

R := R[s, e^{τ_1 s}, ..., e^{τ_n s}, e^{−τ_1 s}, ..., e^{−τ_n s}],

whose elements are called exponential polynomials and represent delay–differential operators, i.e., R ⊆ A. Consider now the following subset of holomorphic functions:

H := { h(s) ∈ O : h(s) = p(s)/q(s), p(s), q(s) ∈ R }.

As is shown in [12], the elements of H can also be characterized in the following way:

(1)   h(s) ∈ H ⇐⇒ h(s) = p(s)/q(s) ∈ O with p(s) ∈ R and q(s) ∈ R[s].
Moreover, H ⊆ A (see [6]) and thus the elements of H can be associated with compact support distributions. They are called distributed delay–differential operators [5,6]. A matrix R(s) ∈ H^{p×q} acts on E^q following the usual matrix multiplication rule,

R(s) : E^q → E^p,   v ↦ R(s)v.
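As a concrete illustration of this matrix action (the operator below is an example chosen here, not one taken from the text), R(s) = [s  e^{−s}] ∈ H^{1×2} sends a pair (w_1, w_2) to ẇ_1(t) + w_2(t − 1), and a trajectory lies in ker_E R(s) exactly when that combination vanishes:

```python
import math

# R(s) = [s, e^{-s}] acts as (w1, w2) |-> w1'(t) + w2(t - 1)
w2 = math.cos
w1 = lambda t: -math.sin(t - 1)    # chosen so that w1'(t) = -cos(t - 1)
dw1 = lambda t: -math.cos(t - 1)   # exact derivative of w1

# check that (w1, w2) is in the kernel of the operator at several times
for t in (-2.0, 0.0, 1.3, 4.0):
    assert abs(dw1(t) + w2(t - 1)) < 1e-12
```

The pair was built by integrating ẇ_1(t) = −w_2(t − 1), which is exactly one way of producing kernel trajectories of a scalar delay–differential operator.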
With the symbols im_E R(s) and ker_E R(s) we will denote the image and the kernel of the matrix operator R(s) over the set of functions E. On the other hand, R(s) induces an operator from A_p to A_q too (and, more generally, from O_p to O_q) by left matrix multiplication, which will be denoted by ◦R(s). We let O_p := O^{1×p}, adopting the same notation for its subspaces; hence

◦R(s) : A_p → A_q,   a(s) ↦ a(s)R(s).

There exists an important duality between the operators R(s) and ◦R(s). Let E ⊆ E^q and A ⊆ A_q. Their orthogonals are defined as

E^⊥ := { a(s) ∈ A_q : a(s)v = 0 ∀v ∈ E },
A^⊥ := { v ∈ E^q : a(s)v = 0 ∀a(s) ∈ A }.

The aforementioned duality can now be stated as follows [10, p. 364]. For every R(s) ∈ H^{p×q},

(2)   (ker_E R(s))^⊥ = cl(im_A ◦R(s))   and   (ker_A ◦R(s))^⊥ = cl(im_E R(s)),

where cl(·) denotes both the closure with respect to the topology in E and the closure with respect to the weak topology in E′ ≅ A, depending on the context. In the following we will use these well-known properties of orthogonals [10, p. 363]: if A is a subspace of E^q (or A_q), then it can be shown that A^⊥ = (cl A)^⊥ and A^{⊥⊥} = cl A. Furthermore, if B is another subspace, A ⊆ B ⇒ A^⊥ ⊇ B^⊥.

3. DELAY–DIFFERENTIAL BEHAVIORS
In our setup, a behavior is a subset of E^q for some q ∈ N. A delay–differential behavior is the set of solutions of a finite family of distributed delay–differential equations.

Definition 1. A delay–differential behavior B is a subspace of E^q which is the kernel of some matrix R(s) ∈ H^{p×q}, i.e., B = ker_E R(s).

Observe that any delay–differential behavior B = ker_E R(s) is shift-invariant, i.e., if w ∈ B then σ_τ w ∈ B for every τ ∈ R. Indeed, R(s)σ_τ w = R(s)e^{−sτ} w = e^{−sτ} R(s)w = 0. The concept of delay–differential behavior is very general and includes the very special case of classical state-space models of linear continuous- or discrete-time systems [9]. It also includes delay systems in state form and delay systems of neutral type. We now introduce some important notions which will first be defined for a generic behavior.
Definition 2. A behavior B is said to be autonomous if

w ∈ B and w(t) = 0 ∀t ≤ 0   ⇒   w = 0.

An autonomous behavior is said to be stable if, for any w ∈ B, w(t) converges asymptotically to zero as t → ∞.

In an autonomous behavior, the past completely determines the future. This is the case, for instance, of the solution set of a scalar delay–differential equation h(s)w = 0 for any nonzero h(s) ∈ H. Actually, the values of w in a suitable compact interval act as the initial conditions of a Cauchy problem, completely determining w (see Proposition 4). For controllable behaviors, on the contrary, there is complete freedom in connecting the past with the future. Actually, a behavior B ⊆ E^q is said to be controllable if, for any pair of trajectories w_1, w_2 ∈ B, there exist T > 0 and w ∈ B such that

w(t) = w_1(t) if t ≤ 0,   w(t) = w_2(t − T) if t ≥ T.

For example, im_E M(s), the image of M(s) ∈ H^{q×d}, is always a controllable behavior [12]. While any trajectory of a controllable behavior can be steered to zero in finite time, a behavior is called stabilizable if any trajectory can be steered to zero in infinite time, i.e., asymptotically.

Definition 3. A delay–differential behavior B ⊆ E^q is said to be stabilizable if, for any trajectory w ∈ B, there exists w′ ∈ B, asymptotically converging to zero, such that w(t) = w′(t) for all t ≤ 0.

The notion of controllable delay–differential behavior has been thoroughly analyzed in [12,13]. In this contribution we will give some results about autonomous, stable and stabilizable delay–differential behaviors.

4. STABLE BEHAVIORS
There are several ways to investigate stability. In the case of commensurate delay–differential systems, approaches based on operator-theoretic techniques have already been used within the behavioral framework [11]. In this chapter we will present an alternative way based on a classical result by Hille and Phillips, which provides a characterization of the invertible elements of a suitable convolution algebra. Some technical details can be found in Appendix B. In this section we want to characterize stability of B = ker_E R(s) in terms of the matrix R(s). As a preliminary result, since a stable behavior must be autonomous, we provide a useful characterization of autonomous systems.

Proposition 4. The behavior B = ker_E R(s) ⊆ E^q is autonomous if and only if R(s) ∈ H^{p×q} is full column rank, i.e., has rank q.
Proof. Notice first that the scalar case, i.e., q = 1, is a direct consequence of the Titchmarsh–Lions theorem: if the convolution of two distributions with support bounded on the left is zero, then at least one of them is zero. For every q × q minor h(s) of R(s) there exists a matrix X(s) such that

(3)   X(s)R(s) = I h(s),

where I is the identity matrix of suitable dimension. If R(s) is full column rank, then there exists a nonzero minor h(s) and, by Eq. (3), for every w ∈ B we have h(s)w = 0, i.e., every component of w satisfies a scalar equation. Hence the system is autonomous. If on the other hand we suppose that R(s) is not full column rank, then there is a nonzero column C(s) ∈ H^{q×1} such that R(s)C(s) = 0. Thus, im_E C(s) ≠ {0} is a controllable nontrivial subbehavior of B, which therefore cannot be autonomous. □

The following theorem provides a necessary and a sufficient condition for the stability of delay–differential behaviors.

Theorem 5. Consider an autonomous delay–differential behavior B = ker_E R(s), where R(s) ∈ H^{p×q} is full column rank.
(i) Let I be the ideal in O generated by the q × q minors of R(s). If there exists h(s) ∈ I ∩ H such that h(λ) ≠ 0 for all λ such that Re λ > α for some constant α < 0, then B is stable.
(ii) If B is stable, then rank R(λ) = q for any λ ∈ C such that Re λ ≥ 0.

Notice that for delay–differential behaviors with commensurate delays, condition (i) of the previous theorem is equivalent to the following:

(i′) If rank R(λ) = q for any λ ∈ C such that Re λ > α for some constant α < 0, then B is stable.

It is our conjecture that the same holds in the noncommensurate case. In order to prove Theorem 5 we need a lemma that will be proved in Appendix C.

Lemma 6. Let h(s) ∈ H be such that h(s) ≠ 0 for all s such that Re s > α for some real α ∈ R and let β > α. Then, for every w ∈ ker_E h(s), there exists a constant K > 0 such that |w(t)| ≤ K e^{βt} for every t > 0.

Proof of Theorem 5. (i) By definition of I, there exists a matrix X(s) ∈ O^{q×p} such that X(s)R(s) = I h(s). By [13, Thm. 3], this is equivalent to saying that B ⊆ ker_E h(s)I. Therefore, by Lemma 6, every component of w ∈ B tends exponentially to zero as t → ∞.

(i′) In the commensurate delays case, it has been proved [5] that H is a Bézout domain, i.e., every finitely generated ideal is principal.
As a consequence, we
can assume without loss of generality that h(s) is the generator of I. So, the rank of R(s) drops at λ if and only if h(λ) = 0.

(ii) Suppose that there exists a λ ∈ C, with Re λ ≥ 0, such that rank R(λ) < q. Then there exists a nonzero vector v ∈ C^q such that R(λ)v = 0. As a direct consequence, B is not stable, since the function w(t) = v e^{λt}, which does not tend to zero as t → ∞, belongs to B. Indeed, by the properties of exponential functions, R(s)w(t) = R(s)v e^{λt} = R(λ)v e^{λt} = 0.
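The instability mechanism in part (ii) — an exponential mode v e^{λt} sitting in the kernel wherever R(λ) drops rank — can be made concrete with the scalar operator R(s) = s − e^{−s}, i.e. the equation ẇ(t) = w(t − 1) (an illustrative example chosen here, not taken from the text). Its unique real root λ ≈ 0.567 lies in the right half-plane, so the behavior carries a growing exponential trajectory:

```python
import math

def R(s):
    # scalar delay-differential operator R(s) = s - e^{-s}, i.e. w'(t) = w(t-1)
    return s - math.exp(-s)

# bisection for the real root lam in (0, 1): R(0) = -1 < 0, R(1) = 1 - 1/e > 0
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if R(mid) < 0 else (lo, mid)
lam = (lo + hi) / 2            # ~ 0.5671, the so-called omega constant
assert abs(R(lam)) < 1e-12

# w(t) = e^{lam*t} satisfies w'(t) - w(t-1) = e^{lam*t} * R(lam) = 0 with Re(lam) > 0
w = lambda t: math.exp(lam * t)
for t in (0.0, 1.0, 3.0):
    dw = lam * w(t)            # exact derivative of e^{lam*t}
    assert abs(dw - w(t - 1)) < 1e-9 * w(t)
```

The same computation, run at any point where a full-size minor of R vanishes, produces the non-decaying kernel trajectory used in the proof.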
□

5. STABILIZABLE BEHAVIORS
A classical result in the behavioral approach is that a differential behavior (without delays) B = ker_E R(s), with R(s) ∈ R[s]^{p×q}, is stabilizable if and only if rank R(λ) is constant for any λ ∈ C such that Re λ ≥ 0. An analogous result has been proved for delay–differential behaviors with commensurate delays [11] but, as we will show in the next section, this result cannot be extended to delay–differential behaviors with noncommensurate delays. However, the following statement holds true.

Theorem 7. Consider a delay–differential behavior B = ker_E R(s), where R(s) ∈ H^{p×q} has rank r. Let I be the ideal in H generated by all the r × r minors of R(s). If there exists h(s) ∈ I such that h(λ) ≠ 0 for all λ such that Re λ > α for some constant α < 0, then B is stabilizable.

The proof of this theorem is a direct consequence of the following more general result.

Proposition 8. Consider a delay–differential behavior B = ker_E R(s), where R(s) ∈ H^{p×q} has rank r. Let I be the ideal in H generated by all the r × r minors of R(s). Then, for every h(s) ∈ I there exist behaviors B_c and B_a such that

B = B_c + B_a,
where B_c is controllable, B_a is autonomous, and h(s)w_a = 0 for every w_a ∈ B_a.

Proof. In [2, Thm. 8] it is proved that if 1 ∈ I, there exists X(s) ∈ H^{q×p} such that R(s)X(s)R(s) = R(s). However, by slightly changing the proof of that statement, it is easy to show that for every h(s) ∈ I there exists a matrix X(s) ∈ H^{q×p} such that R(s)X(s)R(s) = R(s)h(s). Let M(s) := I h(s) − X(s)R(s) and call B_c := im_E M(s) and B_a := B ∩ ker_E M(s). We want to show that B = B_c + B_a. The inclusion '⊇' follows from the fact that B_c ⊆ B, which holds true since R(s)M(s) = R(s)h(s) − R(s)X(s)R(s) = 0.
To prove the opposite inclusion, take any w ∈ B. By surjectivity of h(s) (see [12]) there is a v ∈ E^q such that w = h(s)v. Let w_c := M(s)v and w_a := X(s)R(s)v. First notice that

w_c + w_a = (M(s) + X(s)R(s))v = h(s)v = w.

We want to show that w_c ∈ B_c and w_a ∈ B_a. The first fact is true by definition of B_c. As for the second one, note that

M(s)X(s)R(s) = h(s)X(s)R(s) − X(s)R(s)X(s)R(s) = h(s)X(s)R(s) − X(s)h(s)R(s) = 0

and therefore M(s)w_a = M(s)X(s)R(s)v = 0. In a similar way,

R(s)w_a = R(s)X(s)R(s)v = R(s)h(s)v = R(s)w = 0,

thus showing that w_a ∈ B_a. Finally, B_c is controllable since it is defined by an image representation. On the other hand, B_a is autonomous since every w_a ∈ B_a satisfies a scalar equation. Actually, R(s)w_a = 0 and so

0 = M(s)w_a = h(s)w_a − X(s)R(s)w_a = h(s)w_a.
□

6. A NONSTABILIZABLE DELAY–DIFFERENTIAL BEHAVIOR
In this section we present an example of a delay–differential behavior B = ker_E R(s) which is spectrally controllable, i.e., the rank of R(λ) is constant for every λ ∈ C, but is not stabilizable. We shall prove it by constructing a trajectory w ∈ B that cannot be steered to zero, not even asymptotically. Let R(s) := [a(s) b(s)] ∈ H^{1×2}, where

(4)   a(s) := (1 − e^{−τ(s−1)}) / (s − 1)   and   b(s) := 1 − e^{−(s−1)} = 1 − e·e^{−s}.
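As a quick numerical sanity check of this construction (an illustration added here, not part of the original text; it uses τ = √2 as a stand-in irrational delay rather than the Liouville number fixed below), one can verify that b(s) vanishes at the points s = 1 + i2πm that play a central role later, while a(s) does not, so the two entries share no zero on that line:

```python
import cmath, math

TAU = math.sqrt(2)  # an irrational delay, stand-in for the Liouville number used in the text

def a(s):
    # a(s) = (1 - exp(-TAU*(s-1))) / (s-1), with the removable singularity at s = 1
    return TAU if s == 1 else (1 - cmath.exp(-TAU * (s - 1))) / (s - 1)

def b(s):
    # b(s) = 1 - exp(-(s-1)), zero exactly at s = 1 + i*2*pi*m
    return 1 - cmath.exp(-(s - 1))

for m in range(1, 6):
    s = 1 + 2j * math.pi * m
    assert abs(b(s)) < 1e-9     # b vanishes on the vertical line Re s = 1
    assert abs(a(s)) > 1e-3     # a does not: no common zero there
```

With an irrational τ the zero sets 1 + i2πn/τ of a and 1 + i2πm of b never meet, which is the spectral controllability claimed in the text.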
To have spectral controllability, a(s) and b(s) cannot have common zeros, and this happens only if τ is an irrational number. In our example we assume that τ is a positive Liouville number, i.e., a transcendental number satisfying the following condition [8, p. 91]: for every positive integer K ∈ N there exist infinitely many pairs (n, d) ∈ N^2 such that |dτ − n| ≤ d^{−K}. This specific property of the delay τ is essential for the proof.

We will start with some preparation. By the Liouville property it is possible to find a strictly increasing sequence d_k ∈ N \ {0} such that

(5)   ∀k ∈ N \ {0}, ∃n_k ∈ N: |d_k τ − n_k| ≤ d_k^{1−k}.

The monotonicity of d_k allows us to define another monotonic sequence:

c_m := { 1         if m = 0;
         k         if m = d_k;
         c_{m−1}   if m ≠ d_k ∀k ∈ N. }
Note that also lim_{m→∞} c_m = ∞ and, moreover,

(6)   c_{d_k} = k.
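To make the Liouville inequality (5) concrete, the sketch below (an illustration added here; the classical Liouville constant Σ_j 10^{−j!} stands in for the τ of the text, and its truncations supply the denominators d_k = 10^{k!}) checks (5) in exact rational arithmetic:

```python
import math
from fractions import Fraction

def liouville_truncation(k):
    # partial sum of the Liouville constant tau = sum_{j>=1} 10^(-j!)
    return sum(Fraction(1, 10 ** math.factorial(j)) for j in range(1, k + 1))

K = 3
tau = liouville_truncation(K + 2)   # rational proxy for the full sum
for k in range(1, K + 1):
    d = 10 ** math.factorial(k)     # candidate d_k = 10^(k!)
    n = round(d * tau)              # nearest integer n_k
    # inequality (5): |d_k*tau - n_k| <= d_k^(1-k)
    assert abs(d * tau - n) <= Fraction(1, d ** (k - 1))
```

Exact `Fraction` arithmetic is used deliberately: the quantities d_k^{1−k} shrink far below floating-point resolution already for k = 3.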
It is known that every sufficiently regular periodic function can be written as a Fourier series [4, p. 46]. In this case, suppose that x has period equal to 1. Then,

x(t) = Σ_{m∈Z} x_m e^{i2πmt},   where x_m := ∫_0^1 x(t) e^{−i2πmt} dt.

If we let X(s) := ∫_0^1 x(t) e^{−st} dt then we also have

(7)   x_m = X(i2πm).
Moreover [4, p. 42], any series x(t) = Σ_{m∈Z} x_m e^{i2πmt} with x_m ∈ C defines a function x ∈ E if and only if lim_{m→∞} m^n |x_m| = 0 for all n ∈ N. Let us define a real-valued function x by choosing the coefficients x_m = x̄_{−m} (complex conjugate) in such a way that

(8)   |x_m| = m^{−c_m}   ∀m ∈ N.

Then m^n |x_m| = m^n m^{−c_m} = m^{n−c_m} → 0, since c_m is increasing and thus the exponent is negative for sufficiently big m. As a consequence, x ∈ E. Consider now the function w ∈ E^2 having the following structure:

(9)   w := [0  ξ]′,   where ξ(t) := e^t x(t).
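Since x has period 1, the function ξ in (9) satisfies ξ(t) = e ξ(t − 1), i.e. b(s)ξ = 0. A minimal numerical check of this identity (with a generic smooth 1-periodic function standing in for the special series built above) is:

```python
import math

def x(t):
    # any smooth function of period 1 (a stand-in for the special Fourier series)
    return 2.0 + math.sin(2 * math.pi * t) + 0.3 * math.cos(4 * math.pi * t)

def xi(t):
    # xi(t) = e^t * x(t)
    return math.exp(t) * x(t)

# b(s) acts as 1 - e*sigma_1: (b(s)xi)(t) = xi(t) - e*xi(t-1) = 0 by 1-periodicity of x
for t in (-1.0, -0.25, 0.0, 0.7, 2.5):
    assert abs(xi(t) - math.e * xi(t - 1)) < 1e-9 * max(1.0, abs(xi(t)))
```

The identity holds for every 1-periodic x, which is why only the second component of w is constrained by R(s) = [a(s) b(s)].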
The condition w ∈ ker_E R(s) is equivalent to saying that ξ ∈ ker_E b(s), which is true since, by periodicity of x,

b(s)ξ(t) = (1 − e·e^{−s}) e^t x(t) = e^t x(t) − e·e^{t−1} x(t − 1) = 0.

We wish to show now that the behavior B is not stabilizable. We proceed by contradiction. Suppose that B is stabilizable, i.e., there exists a function w̃ ∈ B such that w̃(t) = w(t) for t ≤ 0 and w̃(t) → 0 as t → ∞. Then let v := H w̃, where H is the Heaviside step function, i.e., v(t) = 0 for t < 0 and v(t) = w̃(t) for t ≥ 0. Notice that v admits a Laplace transform V(s) which is holomorphic in Re s ≥ α for every α > 0. Indeed, its components v_l are bounded by constants v_l^m, for l = 1, 2, whence we get the following sufficient condition for the existence of V(s) (see [3, p. 636]):
∫_0^∞ |v_l(t)| e^{−αt} dt ≤ v_l^m / α < ∞,   l = 1, 2.

The Laplace transform of w̃_1 is W̃_1(s) = V_1(s). We want to show that b(s)w̃_2 also admits a simple Laplace transform. First note that b(s)w̃_2(t) = 0 for t < 0. Then,
b(s)w̃_2(t) = v_2(t) − e ξ(t − 1) = v_2(t) − ξ(t) = b(s)v_2(t) − ξ(t) for 0 ≤ t ≤ 1. In the end, b(s)w̃_2(t) = b(s)v_2(t) for t > 1. Thus

∫_0^∞ b(s)w̃_2(t) e^{−st} dt = ∫_0^∞ b(s)v_2(t) e^{−st} dt − ∫_0^1 x(t) e^t e^{−st} dt = b(s)V_2(s) − X(s − 1).
This shows that by transforming the equation R(s)w̃ = 0, we obtain

(10)   a(s)V_1(s) + b(s)V_2(s) = X(s − 1).

We evaluate Eq. (10) at s = 1 + i2πd_k. First note that b(1 + i2πm) = 0. So, since from (8) and (6) it follows that |x_{d_k}| = d_k^{−c_{d_k}} = d_k^{−k}, by (7) we get

(11)   |a(1 + i2πd_k) V_1(1 + i2πd_k)| = |X(i2πd_k)| = d_k^{−k}.

By definition of a(s) and (5), we get

|a(1 + i2πd_k)| = |1 − e^{−i2πd_k τ}| / |i2πd_k| = |e^{−iπd_k τ}| · |e^{iπd_k τ} − e^{−iπd_k τ}| / (2πd_k) = |sin πd_k τ| / (πd_k) = |sin π(d_k τ − n_k)| / (πd_k) ≤ π|d_k τ − n_k| / (πd_k) ≤ d_k^{−k}.

Now, upon using (11) we obtain that 1 = d_k^k |a(1 + i2πd_k) V_1(1 + i2πd_k)| ≤ |V_1(1 + i2πd_k)|. But, by the Riemann–Lebesgue Lemma [3, p. 636], V_1(1 + iy) → 0 as y → ∞ and so

1 ≤ |V_1(1 + i2πd_k)| → 0   as k → ∞.
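The key estimate |a(1 + i2πd_k)| ≤ d_k^{−k} can be sampled numerically. The sketch below (illustrative only; the truncated Liouville constant stands in for τ, with d_1 = 10 and d_2 = 100) confirms the bound at the first two evaluation points:

```python
import cmath, math
from fractions import Fraction

# truncated Liouville constant ~ sum_{j>=1} 10^(-j!), a stand-in for tau
tau = float(sum(Fraction(1, 10 ** math.factorial(j)) for j in range(1, 6)))

def a(s):
    # a(s) = (1 - exp(-tau*(s-1))) / (s-1)
    return (1 - cmath.exp(-tau * (s - 1))) / (s - 1)

for k in (1, 2):
    d = 10 ** math.factorial(k)          # d_k = 10^(k!)
    val = abs(a(1 + 2j * math.pi * d))   # |a(1 + i*2*pi*d_k)|
    assert val <= d ** (-k)              # the estimate |a| <= d_k^(-k)
```

The smallness of |a| at exactly the frequencies where b vanishes is what forces |V_1(1 + i2πd_k)| ≥ 1 and so contradicts the Riemann–Lebesgue decay.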
This contradiction proves that the behavior B = ker_E R(s) is not stabilizable.

APPENDIX A. ZEROS OF EXPONENTIAL POLYNOMIALS
We recall here some definitions and results from [1, Section 12.8] which will be necessary to prove Lemma 6 in Appendix C. We give simpler proofs which are better suited to our purposes.

Definition A.1. Consider a delay–differential operator p(s) ∈ R and write it as

p(s) = Σ_{i=0}^N p_i s^{n_i} e^{−τ_i s},   where p_i ≠ 0, τ_i ∈ R, n_i ∈ N.
Stability and stabilizability of delay–differential systems (P. Vettori and S. Zampieri)
Figure A.1. Distribution diagram of (regular) exponential polynomials.
Let n̄ := maxᵢ {ni} and τ̄ := minᵢ {τi}. Then p(s) is regular if it contains the term p s^n̄ e^(−τ̄ s) for some p ≠ 0. Equivalently, if p(s) is written in the form

(A.1) p(s) = Σ_{τ∈T} pτ(s) e^(−τs), with pτ(s) ∈ R[s],

where T is a finite set of delays, then p(s) is regular if n̄ = deg p_τ̄(s) ≥ deg pτ(s) for all τ ∈ T. The operator h(s) ∈ H is regular if it can be written as h(s) = p(s)/q(s), where q(s) ∈ R[s] and p(s) is a regular delay–differential operator.

The so-called distribution diagram of p(s) is the smallest convex polygonal graph that contains the points (−τi, ni) ∈ R × N. Consider the segments L0, L1, . . . , LM which form its upper part, as shown in Fig. A.1, where we assume, without loss of generality, that τ̄ = 0. Denote by μr the slope of Lr and define the strips Vr := {s: |Re(s + μr log s)| ≤ c}, where c ∈ R. Then, by [1, Thm. 12.9], there exist constants ρ, c > 0 such that
– every strip is disjoint from the other strips for |s| ≥ ρ;
– every strip contains an infinite number of zeros of p(s);
– every zero λ of p(s) such that |λ| > ρ is contained in one of the strips Vr.

A direct consequence of these facts leads to the following important result.

Theorem A.2. If a delay–differential polynomial is not regular, then it has an infinite number of zeros with arbitrarily large real parts.

Indeed, by the convexity of the distribution diagram of p(s), μr < μ0 for every r ≥ 1 and, moreover, p(s) is regular if and only if μ0 ≤ 0. The shape of the strips Vr (see Fig. A.2), along with their properties, suffices to verify the statement.

Corollary A.3. Regularity of the operator h(s) ∈ H is a necessary condition for the stability of B = kerE h(s) (see also [7, p. 338]).
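Regularity is a purely combinatorial condition on the exponent pairs (ni, τi); the following minimal sketch tests it (the term-list representation is our own illustration, not notation from the paper):

```python
def is_regular(terms):
    """terms: list of (coefficient, derivative order n_i, delay tau_i)
    with nonzero coefficients. Regular means that the term with the
    highest derivative order also carries the smallest delay."""
    n_bar = max(n for _, n, _ in terms)
    tau_bar = min(tau for _, _, tau in terms)
    return any(n == n_bar and tau == tau_bar for _, n, tau in terms)

# p(s) = s^2 - e^{-s}: highest order n=2 occurs at delay 0 = tau_bar -> regular
print(is_regular([(1, 2, 0.0), (-1, 0, 1.0)]))   # True
# p(s) = 1 - s e^{-s}: highest order n=1 occurs at delay 1 > tau_bar=0 -> not regular
print(is_regular([(1, 0, 0.0), (-1, 1, 1.0)]))   # False
```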
Figure A.2. Zero distribution of exponential polynomials.
Proof. If kerE h(s) is stable then, by Theorem 5(ii), the zeros of h(s) can have only negative real part. By writing h(s) = p(s)/q(s), where p(s) ∈ R and q(s) ∈ R[s], as in (1), we conclude that the zeros of p(s) coincide with those of h(s) up to a finite set of zeros which are canceled in the division by q(s). It follows that p(s) may have only zeros with positive, but bounded, real part. So, by Theorem A.2, p(s), and thus h(s) too, are regular. □

In order to prove a sufficient condition for exponential stability of kerE p(s), we will make use of a lower bound on |p(s)|. To this end, consider the region of the complex plane

P_{β,ρ} := {s ∈ C: Re s ≥ β and |s| ≥ ρ}.

Theorem A.4. Let p(s) ∈ R be an exponential polynomial and let n̄ be defined as in Definition A.1. Suppose that there exists a constant α ∈ R such that Re λ ≥ α only for a finite number of zeros λ of p(s). Then there is a constant ρ ≥ 0 such that for all β > α

inf_{s∈P_{β,ρ}} |s^(−n̄) p(s)| ≠ 0.

Proof. Observe that, since p(s) cannot have zeros with arbitrarily large real part, by Theorem A.2 it must be regular. Therefore, μ0 ≤ 0 and n̄ = deg p_τ̄(s). Fix any β > α. When μ0 < 0 then, for some ρ ≥ 0, there is no intersection between any strip Vr and P_{β,ρ}. Therefore, by [1, Thm. 12.10(c)], |s^(−n̄) p(s)| is uniformly bounded away from zero on P_{β,ρ}. Hence, the statement is proved.
On the other hand, when μ0 = 0 the strip V0 and P_{β,ρ} may have nonempty intersection. Since −c < α, this happens when β < c. In this case, we can still apply [1, Thm. 12.10(c)] to P_{β,ρ} \ V0, where therefore |s^(−n̄) p(s)| > η for some η > 0. Then choose ρ such that P_{β,ρ} ∩ V0 does not contain zeros of p(s). Hence, we can employ [1, Thm. 12.10(a)] and obtain that, in this set, |e^(τs) s^(−n̄) p(s)| = e^(τ Re s) |s^(−n̄) p(s)| is uniformly bounded away from zero for some fixed τ ∈ T. However, since β ≤ Re s < c, it follows immediately that |s^(−n̄) p(s)| is bounded from below too. This concludes the proof. □

APPENDIX B. A CONVOLUTIONAL ALGEBRA

Consider the following subalgebra of the algebra of distributions with support contained in [0, +∞) (see [3, Section A.7.4]):

F := { f = fL + Σ_{τ∈T} fτ δτ : fL ∈ L¹loc[0, ∞) and fτ ∈ R, ∀τ ∈ T ⊆ [0, ∞) },

where T is a countable set and L¹loc[0, ∞) is the space of locally integrable functions with support in [0, ∞). Fix a real number β. For any f ∈ F define

‖f‖β := ∫₀^∞ |fL(t)| e^(−βt) dt + Σ_{τ∈T} |fτ| e^(−βτ),

which in general may be infinite. We denote by Fβ the subalgebra of F constituted by all the distributions f ∈ F with ‖f‖β < ∞. The functional ‖·‖β provides a norm for the convolution algebra Fβ. It is clear that each distribution f ∈ Fβ admits a Laplace transform

(B.1) f(s) = fL(s) + Σ_{τ∈T} fτ e^(−sτ),

where fL(s) is the Laplace transform of fL. Moreover, f(s) is holomorphic in {s ∈ C: Re s ≥ β}. We denote by Aβ the space of the Laplace transforms of all the distributions in Fβ (the two algebras are clearly isomorphic). Observe that if in (B.1) fL has compact support and the set T is finite, then f(s) ∈ Aβ for every β ∈ R.

The last ingredient we need in order to prove Lemma 6 is the following characterization of invertible elements of Aβ [3, Thm. A.7.49].

Theorem B.1. The function f(s) ∈ Aβ is invertible over Aβ if and only if

(B.2) inf_{Re s ≥ β} |f(s)| ≠ 0.
APPENDIX C. PROOF OF LEMMA 6
Using property (1) and (A.1), we can write the operator h(s) ∈ H in the following way:

(C.1) h(s) = p(s)/q(s) = Σ_{τ∈T} [pτ(s)/q(s)] e^(−τs),

where pτ(s), q(s) ∈ R[s] and T is a finite set of time delays. With no loss of generality, we assume that min{τ ∈ T} = 0. Since by hypothesis there is an upper bound on the real part of the zeros of h(s), by Theorem A.2 we know that h(s) is regular and so n̄ = deg p0(s) ≥ deg pτ(s) for every τ ∈ T. Let l := deg q(s) and construct the function

g(s) := (s − α)^(l−n̄) if n̄ ≤ l,
g(s) := 1 / [(s − λ1)(s − λ2) · · · (s − λ_{n̄−l})] if n̄ > l,

where the λi are zeros of h(s). In this way, from (1) it follows that

(C.2) f(s) := g(s)h(s) ∈ O ⇒ f(s) ∈ H.

By definition of g(s), the rational function f̃τ(s) := g(s)pτ(s)/q(s) is proper; hence there are constants fτ and strictly proper rational functions f̂τ(s) such that f̃τ(s) = f̂τ(s) + fτ. Thus,

f(s) = Σ_{τ∈T} f̃τ(s) e^(−τs) = Σ_{τ∈T} f̂τ(s) e^(−τs) + Σ_{τ∈T} fτ e^(−τs) = fL(s) + Σ_{τ∈T} fτ e^(−τs).

By elementary properties of Laplace transforms, fL ∈ L¹loc[0, ∞) and so f ∈ F, since it satisfies relation (B.1). Furthermore, f(s) ∈ H ⊆ A, i.e., f is a compact-support distribution. Therefore f(s) ∈ Aβ for every β ∈ R.

Now, let w ∈ kerE h(s) and write it as w = w₊ + w₋, where w₊, w₋ ∈ E, w₊ has support bounded on the left, say in [T, +∞) for some T ∈ R, and the support of w₋ is bounded on the right. It is clear that w(t) and w₊(t) coincide for sufficiently large positive t. Since 0 = h(s)w = h(s)w₊ + h(s)w₋, the support of v := h(s)w₊ = −h(s)w₋ is bounded both on the left and on the right, and we can ensure that it is contained in [0, +∞) by a suitable choice of T. So, for every β ∈ R we have v ∈ Fβ. Moreover, if β > α, then

(C.3) f(s)w₊ = g(s)h(s)w₊ = g(s)v ∈ Fβ.

Indeed, when n̄ ≤ l, g(s) is a differential operator and therefore, v being smooth, g(s)v is still a smooth function with compact support. Otherwise, if n̄ > l, g(s) ∈ Aβ since its poles, which are zeros of h(s), have real part less than or equal to α.
If we show that f(s) has an inverse φ(s) ∈ Aβ, by (C.3) we get that w₊ = φ(s)g(s)v ∈ Fβ. The proof is then concluded by noting that if we let V := max{|v(t)| e^(−βt)} and ψ(s) := φ(s)g(s) = ψL(s) + Σᵢ ψᵢ e^(−sτᵢ), decomposed as in Eq. (B.1), then

|w₊(t)| e^(−βt) = |∫₀^∞ ψL(τ) v(t − τ) dτ + Σ_{i=1}^∞ ψᵢ v(t − τᵢ)| e^(−βt)
≤ ∫₀^∞ |ψL(τ)| e^(−βτ) |v(t − τ)| e^(−β(t−τ)) dτ + Σ_{i=1}^∞ |ψᵢ| e^(−βτᵢ) |v(t − τᵢ)| e^(−β(t−τᵢ))
≤ ( ∫₀^∞ |ψL(τ)| e^(−βτ) dτ + Σ_{i=1}^∞ |ψᵢ| e^(−βτᵢ) ) V = V ‖ψ‖β < ∞.
To prove the invertibility of f(s), note that by definition (C.1) the sets of zeros of h(s) and of p(s) differ by a finite set of points, and thus p(s) satisfies the hypotheses of Theorem A.4. Moreover, using (C.2), we can write

f(s) = g(s)p(s)/q(s) = [g(s)s^n̄/q(s)] · s^(−n̄) p(s),  with g(s)s^n̄/q(s) proper but not strictly proper.

So, as |s| → ∞, f(s) is asymptotically equal to C s^(−n̄) p(s) for some nonzero C ∈ R and therefore, by Theorem A.4, |f(s)| ≥ η in P_{β,ρ} for some positive constants ρ, η ∈ R. Finally, since f(s) does not have zeros with real part greater than α, f(s) is clearly bounded from below in the compact region {s ∈ C: Re s ≥ β, |s| ≤ ρ} too. Condition (B.2) is thus satisfied and the proof is concluded. □

REFERENCES

[1] Bellman R., Cooke K.L. – Differential–Difference Equations, Academic Press, New York, 1963.
[2] Bhaskara Rao K.P.S. – On generalized inverses of matrices over integral domains, Linear Algebra Appl. 49 (1983) 179–189.
[3] Curtain R.F., Zwart H. – An Introduction to Infinite-Dimensional Linear Systems Theory, Texts in Appl. Math., vol. 21, Springer-Verlag, Berlin, 1995.
[4] Folland G.B. – Fourier Analysis and its Applications, Wadsworth & Brooks, Pacific Grove, CA, 1992.
[5] Glüsing-Lüerßen H. – A behavioral approach to delay–differential systems, SIAM J. Control Optim. 35 (2) (March 1997) 480–499.
[6] Habets L.C.G.J.M. – System equivalence for AR-systems over rings – with an application to delay–differential systems, Math. Control Signals Systems 12 (3) (1999) 219–244.
[7] Hale J. – Theory of Functional Differential Equations, Appl. Math. Sci., Springer-Verlag, New York, 1977.
[8] Niven I. – Irrational Numbers, Carus Math. Monogr., vol. 11, John Wiley and Sons, Inc., New York, 1956.
[9] Polderman J.W., Willems J.C. – Introduction to Mathematical Systems Theory: A Behavioral Approach, Texts in Appl. Math., vol. 26, Springer-Verlag, Berlin, 1997.
[10] Treves F. – Topological Vector Spaces, Distributions and Kernels, Pure Appl. Math., vol. 25, Academic Press, New York, 1967.
[11] Valcher M.E. – On the stability of delay–differential systems in the behavioral framework, IEEE Trans. Automat. Control 46 (10) (October 2001) 1634–1638.
[12] Vettori P., Zampieri S. – Controllability of systems described by convolutional or delay–differential equations, SIAM J. Control Optim. 39 (3) (2000) 728–756.
[13] Vettori P., Zampieri S. – Some results on systems described by convolutional equations, IEEE Trans. Automat. Control 46 (5) (2001) 793–797.
[14] Willems J.C. – On interconnections, control, and feedback, IEEE Trans. Automat. Control 42 (3) (March 1997) 326–339.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Using differential algebra to determine the structure of control systems
S.T. Glad Department of Electrical Engineering, Linköping University, SE581 83 Linköping, Sweden Email:
[email protected] (S.T. Glad)
1. INTRODUCTION

Differential algebra makes it possible to manipulate polynomial systems of differential equations in a systematic way. There are many possible applications. Below, the theory behind differential algebra is briefly sketched. Then some algorithms that reduce equations and inequalities to standard forms are given. Finally, two applications of differential algebra are presented: one is the computation of a nonlinear analogue of the so-called RGA for control systems; the other is index reduction of DAE systems.

2. DIFFERENTIAL ALGEBRA
Differential algebra goes back to the work by Ritt, [21], and Kolchin, [20]. It was introduced into control theory by Fliess, [12–15]. Its constructive side gives an analogue of Gaussian elimination for systems of polynomial differential equations. Such algorithms have been studied by, e.g., Diop, [11], Glad, [16,17] and Boulier et al., [4]. A basic tool is that of ranking, which is an ordering of the variables and their derivatives. In one variable there is only one choice: y < y˙ < y¨ < y (3) < y (4) < · · · .
In several variables there are different choices, e.g., u < u˙ < u¨ < · · · < y < y˙ < y¨ < · · · , u < y < u˙ < y˙ < u¨ < y¨ < · · · .
A ranking has to satisfy

y^(ν) < y^(ν+σ),  and  u^(μ) < y^(ν) ⇒ u^(μ+σ) < y^(ν+σ).

Below are some possible rankings for three variables:

(RR)  u < u̇ < · · · < y < ẏ < · · · < v < v̇ < · · · ,
(SR)  u < v < y < u̇ < v̇ < ẏ < · · · ,
      u < u̇ < · · · < v < y < v̇ < ẏ < · · · .
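Such rankings can be modeled as sort keys on (variable, derivative-order) pairs. In the small sketch below, (RR) is the first, elimination-style ranking and (SR) is the second, orderly ranking; the tuple encoding is our own illustration, not the paper's:

```python
# A derivative is a (variable, order) pair; a ranking is a sort key.
RR = lambda d: (['u', 'y', 'v'].index(d[0]), d[1])   # u-block < y-block < v-block
SR = lambda d: (d[1], ['u', 'v', 'y'].index(d[0]))   # orderly: by order, then u < v < y

def leader(derivatives, ranking):
    """Highest-ranking derivative occurring in a polynomial."""
    return max(derivatives, key=ranking)

# A = u*y''*v + 1 contains u, y'', v;  B = u''^2 + y^(5) contains u'', y^(5).
A = [('u', 0), ('y', 2), ('v', 0)]
B = [('u', 2), ('y', 5)]
print(leader(A, RR), leader(B, RR))   # ('v', 0) ('y', 5)
print(leader(A, SR), leader(B, SR))   # ('y', 2) ('y', 5)
```

These are exactly the leaders found for the example polynomials discussed next.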
The ranking of variables is extended to a ranking of polynomials. The highest-ranking derivative of a polynomial is called the leader. Polynomials are ranked as their leaders. When polynomials have the same leader they are ranked according to their degree (as polynomials in the leader). Consider for instance the ranking (RR) and the polynomials

A: u ÿ v + 1,  leader = v,
B: ü² + y^(5),  leader = y^(5).

Here B is ranked lower than A. On the other hand, with the ranking (SR) and the same polynomials we have

A: u ÿ v + 1,  leader = ÿ,
B: ü² + y^(5),  leader = y^(5),

with A ranked lower than B. As a third example, consider again the ranking (SR) and the polynomials

A: v² ẏ + ü²,  leader = ü,
B: v̇ + y + ü + ü⁵,  leader = ü.

In this case A is ranked lower than B, since its leader is raised to a lower power (2) than in B (5).

Suppose the differential polynomial A has leader y^(ν). The differential polynomial B is said to be
1. partially reduced with respect to A, if it contains no higher derivative of y than y^(ν);
2. reduced with respect to A, if it is partially reduced and is a polynomial of lower degree than A in y^(ν).

Take for instance the following polynomials under the ranking (RR):

A: u ÿ v + 1,  leader = v,
B: ü² + y^(5),  leader = y^(5).
In this case B is reduced with respect to A, but A is also reduced with respect to B. On the other hand, with the ranking (SR) and

A: u ÿ³ v + 1,  leader = ÿ,
B: ÿ² + y,  leader = ÿ,

B is reduced with respect to A but not vice versa.

A set of differential polynomials, all of which are reduced with respect to each other, is called an autoreduced set. For instance,

ÿ² − 4u² ẏ,  2u x2 − ÿ,  x1 − y

is autoreduced under the ranking u^(·) < y^(·) < x2^(·) < x1^(·), while

ẋ1 − x2²,  ẋ2 − u,  y − x1

is autoreduced under the ranking u < u̇ < · · · < x1 < x2 < y < ẋ1 < ẋ2 < ẏ < · · · .
The ranking concept can now be carried over to autoreduced sets

A = A1, . . . , Ar;  B = B1, . . . , Bs

that are assumed to be ordered: A1 < · · · < Ar, B1 < · · · < Bs. A and B are ranked according to the first pair Ai, Bi with rank Ai different from rank Bi. “Empty places” at the end are regarded as polynomials of arbitrarily high rank. Consequently, with

A: u̇² + u,  ÿ⁵ + u y,  v̇ + y + u,
B: u³u̇² − 1,  ẏ + y,  v̈ + u y,

the set B is lower than A. On the other hand, with

A: u̇ + u,
B: u³u̇ − 1,  ẏ + y,

the set B is lower than A, since the “missing” second polynomial of A is automatically ranked higher than the second polynomial of B.

Let Σ be a set of differential polynomials. If A is an autoreduced set in Σ, such that no lower autoreduced set can be formed in Σ, then A is called a characteristic set of Σ.
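The set comparison, including the “empty places ranked arbitrarily high” convention, can be sketched by representing each polynomial by a (leader rank, degree) pair; this encoding is an illustration of ours, not the paper's:

```python
import math

def set_lower(A, B):
    """Compare two ordered autoreduced sets, each a list of
    (leader_rank, degree) pairs sorted increasingly. Returns True if A
    ranks lower than B. Missing trailing entries count as arbitrarily
    high rank (math.inf)."""
    n = max(len(A), len(B))
    pad = (math.inf, math.inf)
    for a, b in zip(A + [pad] * (n - len(A)), B + [pad] * (n - len(B))):
        if a != b:
            return a < b
    return False  # equal ranks throughout

# Second example from the text: A = {u' + u}, B = {u^3 u' - 1, y' + y}.
# The first pair ties (same leader u', same degree 1); A's missing second
# element is ranked infinitely high, so B is lower than A.
A = [(1, 1)]
B = [(1, 1), (2, 1)]
print(set_lower(B, A))   # True
```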
The separant of a polynomial G with leader uG is

SG = ∂G/∂uG.

There is an algorithm for computing the partial remainder P of F with respect to the autoreduced set A:

∏_{Aj∈A} S_{Aj}^(σj) · F = P + [A],

where [A] denotes the differential ideal generated by A, i.e., all polynomials that can be generated by linear combinations and differentiations. P is partially reduced with respect to A and ranked no higher than F. The algorithm is based on the following observation. If the polynomial A has the leader y^(ν), then

Ȧ = SA y^(ν+1) + B1,

and neither SA nor B1 contains y^(ν+1). Differentiating once more gives

Ä = SA y^(ν+2) + B2,

etc.

In analogy with the separant, one can define the initial I of a polynomial as the coefficient of the highest power of the leader. For instance,

G = u ÿ³ + y² ÿ³ + u y ÿ² + ẏ² + u y

has leader ÿ and initial IG = u + y² under the ranking (RR). If F is partially reduced with respect to A, then using polynomial division we can get

∏_{Aj∈A} I_{Aj}^(νj) · F = R + [A],

with R reduced with respect to A. Combining the algorithm for the partial remainder with polynomial division results in a formula for the remainder R with respect to an autoreduced set A:

∏_{Aj∈A} S_{Aj}^(σj) I_{Aj}^(νj) · F = R + [A].

R is reduced with respect to A and ranked no higher than F. As an example, the remainder of F = v^(4) with respect to

A: ẏ² − 1,  v̈ − u y

under the ranking (RR) is the polynomial

remA(F) = 2 ü y ẏ + 4 u̇.
Having computed a nonzero remainder with respect to an autoreduced set, it is possible to form a lower autoreduced set. In this way characteristic sets can be computed. This is the basic idea in Ritt’s sketch of an algorithm for computing characteristic sets.

3. SOLVABILITY

An important question for systems described by differential polynomials is solvability. Consider to begin with a system of the form

(1) f1 = 0, . . . , fn = 0,  g1 ≠ 0, . . . , gm ≠ 0,

where the fi and gi are polynomials in the variables y1, . . . , yN and their derivatives. For this type of system there is an algorithm by Seidenberg [22] to decide solvability using successive elimination of variables. The algorithm does not, however, allow the restriction to real-valued solutions that one would want in most physical models. For polynomial systems of equations, inequations and inequalities without any derivatives, the theory of real algebra [19,3] gives algorithmic methods for deciding the existence of real solutions. A possible algorithm is cylindrical algebraic decomposition [1,2]. Here an algorithm in the spirit of Seidenberg [22] will be described. The basic algorithm is the following.

Algorithm FG. Input: An ordered pair (F, G) of sets

F = {f1, . . . , fn},  G = {g1, . . . , gm},

where the fi and gi are differential polynomials corresponding to equations and inequations respectively.
1. Compute a characteristic set A = {A1, . . . , Ap} of F.
2. If F \ A is nonempty, then go to 5.
3. If SA ∈ G and IA ∈ G for all A ∈ A, then Finished. Output: (F, G).
4. For some A ∈ A, set either p := SA or p := IA so that p ∉ G.
Split. Output: (F ∪ {p}, G), (F, G ∪ {p}).
5. Let fk be the highest unreduced (with respect to A) element of F. It is then possible to form an equation

p^ν fk = Q fj^(σ) + R,

where fj is an element of A, and either p = S_fj or p = I_fj. If p ∉ G then go to 8.
6. If R = 0, then F := F \ {fk} and go to 1.
7. F := F \ {fk} ∪ {R} and go to 1.
8. Split. Output: (F ∪ {p}, G), (F, G ∪ {p}).
Proposition 1. The algorithm FG will reach one of the points marked “Finished” or “Split” after a finite number of steps.

Proof. The only possible loop is via step 6 or step 7 back to step 1. This involves either the removal of a polynomial or its replacement by one that is reduced with respect to A or has its highest unreduced derivative removed. If R is reduced, then it is possible to construct a lower autoreduced set. An infinite loop would thus lead to an infinite sequence of autoreduced sets, each one lower than the preceding one. It is easy to see that this is impossible. □

Proposition 2. If the algorithm FG receives the pair (F, G) of equations and inequations and returns the two pairs (F1, G1), (F2, G2), then

(F, G) ⇔ (F1, G1) or (F2, G2)
in the sense that an element satisfies the equations F and the inequations G if and only if it satisfies either F1, G1 or F2, G2.

Proof. The set F is changed at either step 6 or step 7. If these steps are reached, then we have

p^ν fk = Q fj^(σ) + R,

with p belonging to the set of inequations G that have to be satisfied. The problems

f1 = 0, . . . , fk = 0, . . . , fn = 0;  g ≠ 0, g ∈ G,
f1 = 0, . . . , R = 0, . . . , fn = 0;  g ≠ 0, g ∈ G

are then equivalent. At the splittings at step 4 or step 8 the equivalence is obvious. □

If the algorithm FG splits the pair (F, G) into two pairs (F1, G1) and (F2, G2), then the algorithm can be applied again to each pair, and again after every further split. In this way a tree structure is generated where each node corresponds to a split generated by the algorithm. If the algorithm reaches “Finished” at step 3, then that branch of the tree is terminated.

Proposition 3. The tree generated by the algorithm FG in the manner described above is finite, and each branch terminates with a pair (F, G) such that F is an autoreduced set and each separant and initial of F belongs to G.

Proof. Consider a pair (F1, G1) generated at one node of the tree and a pair (F2, G2) generated at a lower node. Then either the lowest autoreduced set of F2 is strictly lower than that of F1, or else F1 = F2. In the latter case G2 has been obtained from G1 by adding one or more elements. Since only a finite number of elements can be added to G for a fixed F, an infinite number of nodes in the tree could only be generated by an infinite sequence of autoreduced sets, each one lower than the preceding one, an impossibility. The remainder of the proposition follows from the fact that a branch of the tree can only be terminated by algorithm FG reaching “Finished” at step 3. □

3.1. A reduced form

Now consider a system which also includes strict inequalities:

(2)
f1 = 0, . . . , fn = 0,
g1 ≠ 0, . . . , gm ≠ 0,
h1 < 0, . . . , hq < 0.
We assume that algorithm FG has been used on the equations and inequations, so that f1, . . . , fn form an autoreduced set whose separants and initials are among g1, . . . , gm.

Proposition 4. The system (2) is equivalent to a modified system

(3) f1 = 0, . . . , fn = 0,  g̃1 ≠ 0, . . . , g̃m ≠ 0,  h̃1 < 0, . . . , h̃q < 0,

where all the g̃i and h̃i are reduced with respect to f1, . . . , fn.

Proof. For each gk that is not reduced we can write

gk ∏ᵢ Sᵢ^(νᵢ) Iᵢ^(μᵢ) = g̃k + Σᵢⱼ Qᵢⱼ fᵢ^(j),

where g̃k is the remainder of gk with respect to the autoreduced set f1, . . . , fn, and the Sᵢ and Iᵢ are the separants and initials. Since these polynomials are among those which are specified to be nonzero, and the fᵢ are specified to be zero, it is clear that gk ≠ 0 is equivalent to g̃k ≠ 0. Also g̃k, being a remainder, is reduced with respect to f1, . . . , fn. For each hk one similarly has a relation

hk ∏ᵢ Sᵢ^(νᵢ) Iᵢ^(μᵢ) = ĥk + Σᵢⱼ Qᵢⱼ fᵢ^(j),

where the remainder ĥk is reduced. Multiplying by a suitable number of separants and initials, this can be written

hk ∏ᵢ Sᵢ^(ν̃ᵢ) Iᵢ^(μ̃ᵢ) = h̃k + Σᵢⱼ Q̃ᵢⱼ fᵢ^(j),

where the integers ν̃ᵢ and μ̃ᵢ are even and

h̃k = ĥk ∏ᵢ Sᵢ^(σᵢ) Iᵢ^(τᵢ)

is still reduced with respect to f1, . . . , fn. Since hk and h̃k differ only by a strictly positive factor when all inequations and equations are satisfied, it follows that hk < 0 and h̃k < 0 are equivalent. □

3.2. The solvability decision

From the previous sections it follows that, to determine the existence of a real solution to (2), one has to solve that problem for a number of systems
(4) f1 = 0, . . . , fn = 0,  g1 ≠ 0, . . . , gm ≠ 0,  h1 < 0, . . . , hq < 0,

where f1, . . . , fn is an autoreduced set whose separants and initials are among the gi, and where all the gi and hi are reduced with respect to f1, . . . , fn.

To analyze the problem, the original physical variables are replaced by variables z1, . . . , zn and u1, . . . , up (p = N − n) in such a way that the leader of fi is the derivative zi^(νi) for some νi. We introduce the notation

(5) Zi^ν = zi, żi, z̈i, . . . , zi^(ν−1), zi^(ν),
(6) U = u1, . . . , u1^(σ1), . . . , up^(σp),

where σi is the highest derivative of ui. The system (4) can then be written

(7)
f1(Z1^(ν1), Z2^(ν2−1), Z3^(ν3−1), . . . , Zn^(νn−1), U) = 0,
f2(Z1^(ν1), Z2^(ν2), Z3^(ν3−1), . . . , Zn^(νn−1), U) = 0,
. . .
fn(Z1^(ν1), Z2^(ν2), Z3^(ν3), . . . , Zn^(νn), U) = 0,
g1(Z1^(ν1), Z2^(ν2), Z3^(ν3), . . . , Zn^(νn), U) ≠ 0,
. . .
gm(Z1^(ν1), Z2^(ν2), Z3^(ν3), . . . , Zn^(νn), U) ≠ 0,
h1(Z1^(ν1), Z2^(ν2), Z3^(ν3), . . . , Zn^(νn), U) < 0,
. . .
hq(Z1^(ν1), Z2^(ν2), Z3^(ν3), . . . , Zn^(νn), U) < 0.
We now introduce the following sets of variables in two indices:

(8) Zi,ν = {zi,0, zi,1, . . . , zi,ν−1, zi,ν},
(9) V = {u1,0, . . . , u1,σ1, . . . , up,σp}.

Together with (7) we can then consider the purely algebraic system of equations, inequations and inequalities

(10)
f1(Z1,ν1, Z2,ν2−1, Z3,ν3−1, . . . , Zn,νn−1, V) = 0,
f2(Z1,ν1, Z2,ν2, Z3,ν3−1, . . . , Zn,νn−1, V) = 0,
. . .
fn(Z1,ν1, Z2,ν2, Z3,ν3, . . . , Zn,νn, V) = 0,
g1(Z1,ν1, Z2,ν2, Z3,ν3, . . . , Zn,νn, V) ≠ 0,
. . .
gm(Z1,ν1, Z2,ν2, Z3,ν3, . . . , Zn,νn, V) ≠ 0,
h1(Z1,ν1, Z2,ν2, Z3,ν3, . . . , Zn,νn, V) < 0,
. . .
hq(Z1,ν1, Z2,ν2, Z3,ν3, . . . , Zn,νn, V) < 0.
Whether this system has a real solution can be determined using, for instance, the methods of [1,2].

Proposition 5. Let

(11) Z°1,ν1, . . . , Z°n,νn, V°

solve (10). Then, locally around (11), the equations f1 = 0, . . . , fn = 0 of (10) are equivalent to a system

(12)
z1,ν1 = φ1(Z1,ν1−1, . . . , Zn,νn−1, V),
. . .
zn,νn = φn(Z1,ν1−1, . . . , Zn,νn−1, V).

Proof. The Jacobian ∂fi/∂zj,νj is a lower triangular matrix, from the structure of (10). Its diagonal elements are the separants, which are among the gi and thus nonzero at the point (11). The nonsingularity of the Jacobian yields (12) via the implicit function theorem. □

The main result can now be formulated.

Theorem 1. Let
(13) Z°1,ν1, . . . , Z°n,νn, V°

solve (10). Then, for any set of real analytic functions u1(t), . . . , up(t) with U(t0) = V°, there exists an ε > 0 such that, on the interval (t0 − ε, t0 + ε), there are real solutions z1(t), . . . , zn(t) satisfying (7) with

Zi^(νi−1)(t0) = Z°i,νi−1,  i = 1, . . . , n.

Proof. From Proposition 5 the equations of (7) are locally equivalent to

(14)
z1^(ν1) = φ1(Z1^(ν1−1), . . . , Zn^(νn−1), U),
. . .
zn^(νn) = φn(Z1^(ν1−1), . . . , Zn^(νn−1), U),

with the initial conditions

Zi^(νi−1)(t0) = Z°i,νi−1,  i = 1, . . . , n,

which can be converted to a state-space description using the standard state assignment

(15)
x1 = z1, x2 = ż1, . . . , xν1 = z1^(ν1−1),
xν1+1 = z2, xν1+2 = ż2, . . . , xν1+ν2 = z2^(ν2−1),
. . .
The existence of a local solution to (14) then follows from standard results on ordinary differential equations, see, e.g., [23]. Since the inequations and inequalities are satisfied at t0, they are satisfied on a small enough interval. □

3.3. A simple example

Are there any real solutions to the following system?

(16)
ẏ1² + ẏ2² − 1 = 0,
y1 y2 − 1 = 0,
ÿ1 + ÿ2 < 0.

Running algorithm FG for the ranking

y1 < ẏ1 < ÿ1 < · · · < y2 < ẏ2 < ÿ2 < · · ·

together with reduction gives the following system:

(17)
(y1⁴ + 1)ẏ1² − y1⁴ = 0,
y1 y2 − 1 = 0,
y1 ≠ 0,  y1⁴ + 1 ≠ 0,  ẏ1 ≠ 0,
2y1³(y1² + 1) < 0.
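The reduced system can be exercised numerically. The sketch below (ours, not from the paper) integrates the first equation of (17), solved for ẏ1 with the sign that keeps y1 negative, and verifies that the relations of (16) hold along the trajectory:

```python
import math

# Integrate the reduced ODE y1' = -sqrt(y1^4/(y1^4+1)) with a classical
# RK4 step, starting from the (negative) initial value y1(0) = -1.
f = lambda y: -math.sqrt(y**4 / (y**4 + 1.0))

def rk4_step(y, dt):
    k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
    return y + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

y1 = -1.0
for _ in range(1000):
    y1 = rk4_step(y1, 1e-3)

# Recover y2 = 1/y1 and check y1'^2 + y2'^2 = 1 with y2' = -y1'/y1^2,
# i.e. the first two relations of (16); y1 < 0 keeps the inequality.
d1 = f(y1)
d2 = -d1 / y1**2
print(y1 < 0, abs(d1*d1 + d2*d2 - 1.0) < 1e-9)   # True True
```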
The other systems created by splitting in algorithm FG trivially lack solutions. We see that any negative initial value of y1 will lead to a solution satisfying the relations in (17). We thus have to solve the initial value problem ẏ1 = ±√(y1⁴/(y1⁴ + 1)), y2 = 1/y1, with y1(0) < 0.

4. THE RGA

The relative gain array (RGA) has been widely used as a measure of the interaction between control loops in multivariable systems; see, e.g., [6,7,18]. For a linear system with square transfer matrix G(s), the relative gain array is defined as
(18) G^RGA = G ·∗ G^(−T),
where “·∗” denotes elementwise multiplication of the matrices, and −T denotes the transpose of the inverse. Often the matrices are evaluated at s = 0, so that the static gains are considered, but it is also possible to look at arbitrary frequencies s = iω. The element i, j of the RGA can be interpreted as the gain from input uj to output yi when the other uk are zero (“all other loops open”), divided by the corresponding gain when all other yk are zero (“all other loops have maximally tight control”).

4.1. A nonlinear static RGA

Consider a dynamic system with input u and output y, both being m-vectors. Assume that for each constant u there exists an equilibrium of the system and a corresponding constant y, so that we have
(19) y = H(u)

for some function H from Rᵐ to Rᵐ. We consider some reference point u⁰ and define y⁰ = H(u⁰). We also define

(20) φij(uj) = Hi(u1⁰, . . . , u⁰_{j−1}, uj, u⁰_{j+1}, . . . , um⁰),

where Hi denotes the i-th component of H. The function φij thus shows how yi depends on uj when all other inputs are kept at the nominal value u⁰. If H has an inverse it is possible to define

(21) ψji(yi) = (H⁻¹)j(y1⁰, . . . , y⁰_{i−1}, yi, y⁰_{i+1}, . . . , ym⁰),

which can be interpreted as the input uj which is needed to get the output yi, provided all other controls are chosen to keep yk, k ≠ i, equal to the nominal values yk⁰. It is now possible to define a nonlinear static RGA.

Definition 1. For a steady-state input–output relationship (19), with H invertible, the relative gain array is defined as
(22) Hij^RGA(yi) = φij(ψji(yi)).

The relative gain array is thus an m by m array of scalar functions. If, for each i, Hi depends only on ui (the perfectly decoupled situation) then

(23) Hij^RGA(yi) = yi for i = j,  and  Hij^RGA(yi) = yi⁰ for i ≠ j.
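Definition 1 can be exercised numerically. The sketch below (an illustration of ours) takes a linear map H(u) = Au with an arbitrary example matrix A and u⁰ = 0, for which the element H11^RGA reduces to the classical linear RGA element of (18) times y1:

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [1.0, 1.5]])          # arbitrary invertible example
Ainv = np.linalg.inv(A)
Lambda = A * Ainv.T                 # classical RGA, Eq. (18), at a fixed matrix

def phi(i, j, uj):
    """Eq. (20): y_i as a function of u_j, other inputs at nominal 0."""
    u = np.zeros(2); u[j] = uj
    return (A @ u)[i]

def psi(j, i, yi):
    """Eq. (21): u_j needed for output y_i, other outputs held at 0."""
    y = np.zeros(2); y[i] = yi
    return (Ainv @ y)[j]

y1 = 0.7
h_rga_11 = phi(0, 0, psi(0, 0, y1))            # Definition 1, element (1,1)
print(abs(h_rga_11 - Lambda[0, 0] * y1) < 1e-12)          # True
print(np.allclose(Lambda.sum(axis=0), 1.0),               # RGA rows and columns
      np.allclose(Lambda.sum(axis=1), 1.0))               # each sum to 1
```

The row/column-sum property checked at the end is a well-known feature of the linear RGA.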
The extent to which H^RGA differs from this is thus a measure of the extent of static coupling.

4.2. A nonlinear input–output description

To be able to extend the concepts above to dynamic systems, and to use differential algebra, we will assume that the system description is polynomial. It is also assumed that the system has an input u and an output y, both of which are
m-dimensional vectors. To simplify notation we assume that we are interested in the 1,1-element of the RGA and that the system description has the form

(24)
F1(y1, u1, . . . , um) = 0,
F2(y1, y2, u1, . . . , um) = 0,
. . .
Fm(y1, . . . , ym, u1, . . . , um) = 0.

Here we use the notation yi = yi, ẏi, . . . , yi^(n), ui = ui, u̇i, . . . , ui^(ñ) for some integers n, ñ. (These integers will be different in the different Fi.) In some cases such a description is obtained directly from the physical equations describing a system. In other situations one might start with a system in state-space form

(25) ẋ = f(x, u),  y = h(x).
From this description it is possible to calculate an input–output description according to [9,8]. If f and h are polynomials this calculation can be done explicitly using differential algebra tools; see, e.g., [10,16]. In fact, one possibility is to use the reduction algorithm described in Section 3. The triangular structure is then obtained naturally by using the following ranking of the variables:

ui^(j) < y1 < ẏ1 < · · · < y2 < ẏ2 < · · · < ym < ẏm < · · · .
4.3. Computing the RGA for nonlinear dynamic systems

As discussed above, the system is assumed to be given by the input–output relation

(26)
F1(y1, u1, . . . , um) = 0,
F2(y1, y2, u1, . . . , um) = 0,
. . .
Fm(y1, . . . , ym, u1, . . . , um) = 0,

where the Fi form an autoreduced set with respect to the proper ranking. When computing the RGA for a dynamic system, it is usually assumed that the non-manipulated signals are given values corresponding to an equilibrium. To simplify notation we assume that the system (26) has an equilibrium at the origin, so that

(27) Fi(0, . . . , 0) = 0,  i = 1, . . . , m.
To exclude degenerate cases we assume that the variable yi is actually present in each Fi and that each ui is present in some Fi.
To get a nonlinear analogue of element 1,1 of the RGA, we consider the equation

(28)  F1(y1, u1, 0, ..., 0) = 0,

showing the influence of u1 on the output y1 when all other controls are zero, and the equations

(29)  F1(y1, u1, ..., um) = 0,
(30)  Fk(y1, 0, ..., 0, u1, ..., um) = 0,   k = 2, ..., m,
giving implicitly the influence of u1 on the output y1 when all other yi are zero. In order to consider nonsingular solutions of (28), the separant of F1 with respect to the highest derivative y1^(n) is also introduced:

(31)  G1(y1, u1, 0, ..., 0) = ∂F1/∂y1^(n).
To get a more informative description of the RGA, we reduce the system of equations and inequations

(32)  F1(y1, u1, ..., um) = 0,
(33)  Fk(y1, 0, ..., 0, u1, ..., um) = 0,   k = 2, ..., m,
(34)  G1(y1, u1, 0, ..., 0) ≠ 0.
To do that the ranking

(35)  y1 < ẏ1 < ··· < u1 < u̇1 < ··· < um < u̇m < ···

is introduced, i.e. y1 and all its derivatives are ranked lowest, then u1 and all its derivatives are ranked, while all other inputs and their derivatives are ranked higher. The result of using the algorithm of Section 3 will be a collection of systems of the form

(36)  H1(y1, u1) = 0,
      H2(y1, u1, u2) = 0,
      ...
      Hm(y1, u1, u2, ..., um) = 0,
      S1(y1, u1) ≠ 0,
      ...
or possibly

(37)  H0(y1) = 0,
      ...
Using differential algebra to determine the structure of control systems
337
In the latter case we have a degenerate situation where the output y1 has to satisfy an algebraic relation independent of the input. Consider sets of equations and inequations of the form (36). Let νi be the highest order of the derivative of ui occurring in Hi. Choose an arbitrary (but consistent with the inequations) analytic function ȳ1(t). Choose a set of initial conditions

(38)  u1(0), ..., u1^(ν1−1)(0), ..., um(0), ..., um^(νm−1)(0)
consistent with the inequations. The nonvanishing of the separants in (36) makes it possible to solve locally for the highest derivatives of u1, ..., um. It is thus possible to generate a solution ū1 corresponding to ȳ1. Using this function as an input in

(39)  F1(y1, ū1, 0, ..., 0) = 0

it is possible to calculate a function y1 corresponding to ȳ1. The mapping from ȳ1 to y1 corresponds to the map (22) in the static nonlinear case and to the 1,1-element in the relative gain array in the linear case.

5. A SIMPLE EXAMPLE
As a very simple example consider the system

ÿ1 + u2 ẏ1 + y1 − u1 = 0,
ẏ2 − u2 − u1 = 0.

In this case we get ū1 as a solution of

ÿ1 + u2 ẏ1 + y1 − u1 = 0,
−u2 − u1 = 0,

giving

ū1 = (ÿ̄1 + ȳ1) / (1 + ẏ̄1),

to be substituted into (39), which in this case is ÿ1 + y1 = ū1.
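This elimination is easy to reproduce with a computer algebra system. The following sketch is our illustration (not the algorithm of Section 3); it uses sympy, writes ẏ1 and ÿ1 as dy1 and ddy1, and solves the pair of relations above for u1 and u2:

```python
import sympy as sp

y1, dy1, ddy1, u1, u2 = sp.symbols('y1 dy1 ddy1 u1 u2')

# The two relations with y2 (and hence dy2) set to zero.
eqs = [ddy1 + u2*dy1 + y1 - u1,   # first input-output relation
       -u2 - u1]                  # second relation with dy2 = 0
sol = sp.solve(eqs, [u1, u2], dict=True)[0]

u1_bar = sp.simplify(sol[u1])
print(u1_bar)   # (ddy1 + y1)/(dy1 + 1)
```

Substituting u2 = −u1 by hand gives the same expression, matching the displayed ū1.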
6. DAE SYSTEMS

In simulation and numerical analysis, DAE systems denote systems of differential equations and ordinary equations that are not trivially reducible to state space form. Usually it is assumed that only first-order derivatives are used, so a typical system would be described by

F(ẏ, y, t) = 0.
In many applications the equations and variables can be partitioned into so-called semi-explicit form:

ẋ = f(x, y, t),
0 = g(x, y, t).

If the Jacobian of g with respect to y is nonsingular, the system is said to have index one. It is known that these systems can be solved numerically in a straightforward manner. For a discussion of DAE systems and their numerical solution, see, e.g., [5].

6.1. Connection to differential algebra

Suppose that the algorithm of Section 3 is used to compute an autoreduced set (or in the general case a number of such sets) for the model

F(ż, z, t) = 0

under the ranking

z1 < z2 < ··· < zn < ż1 < ż2 < ··· < żn < ··· .
Then the result has the form

(40)  p1(z1, ..., z_{i1}) = 0,
      p2(z1, ..., z_{i2}) = 0,
      ...
      pm(z1, ..., z_{im}) = 0,
      p_{m+1}(z, ż_{j1}) = 0,
      p_{m+2}(z, ż_{j1}, ż_{j2}) = 0,
      ...
      pn(z, ż_{j1}, ..., ż_{j(n−m)}) = 0,

where the ik and jk are increasing sets of indices, together making up 1, 2, ..., n. Putting the z_{ik} into the vector y and the z_{jk} into the vector x gives

g(y, x) = 0,
f(y, x, ẋ) = 0,

where g contains p1, ..., pm and f contains p_{m+1}, ..., pn. This is close to the semi-explicit form. Moreover, since the algorithm of Section 3 gives an autoreduced set together with nonzero separants, the Jacobians g_y, f_ẋ
are triangular with nonzero diagonal elements. It follows from the implicit function theorem that locally the form is equivalent to the semi-explicit one. Since g_y is nonsingular, it has index = 1. It follows that in this case differential algebra can be used to perform so-called index reduction.

Example 1. Consider the equations describing a pendulum moving in a vertical plane. Assume that all mass is concentrated at the tip. Let z1 be the horizontal position, z2 the horizontal velocity, z3 the vertical position and z4 the vertical velocity of the tip of the pendulum. Assuming that the pendulum has length one and scaling to make the gravitational acceleration unity gives the following system of equations

ż1 = z2,
ż2 = −z1 z5,
ż3 = z4,
ż4 = −1 − z3 z5,
0 = z1² + z3² − 1,
where z5 is the internal force in the pendulum. This is a DAE system with index = 3 (meaning that three differentiations are needed to solve for all derivatives). Transforming to a description of the form (40) gives

z1² + z3² − 1 = 0,
z3 z4 + z2 z1 = 0,
−z3 z1² + z3 − z2² + (1 − z1²) z5 = 0,
ż1 − z2 = 0,
(1 − z1²) ż2 + z3 z1³ − z3 z1 + z1 z2² = 0,
z3 ≠ 0,
1 − z1² ≠ 0.
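The purely algebraic equations above arise by repeatedly differentiating the position constraint along the vector field. The following sympy sketch (our illustration of where the hidden constraints come from, not the autoreduction algorithm itself) reproduces the first hidden constraint and solves the second for z5:

```python
import sympy as sp

t, z5 = sp.symbols('t z5')
z1, z2, z3, z4 = [sp.Function(f'z{i}')(t) for i in range(1, 5)]

# The pendulum vector field, used to replace derivatives after differentiating.
rhs = {z1.diff(t): z2, z2.diff(t): -z1*z5,
       z3.diff(t): z4, z4.diff(t): -1 - z3*z5}

g = z1**2 + z3**2 - 1        # position constraint
g1 = g.diff(t).subs(rhs)     # first hidden constraint
g2 = g1.diff(t).subs(rhs)    # second differentiation exposes z5

z5_sol = sp.solve(sp.expand(g2), z5)[0]
print(sp.factor(g1))         # = 2*(z1*z2 + z3*z4)
print(z5_sol)                # z5 = (z2**2 + z4**2 - z3)/(z1**2 + z3**2)
```

After further eliminating z4 through the second constraint (and clearing the denominator 1 − z1²), this z5 expression yields the third displayed equation.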
Solving for ż1 and ż2 in the last two equations gives a semi-explicit DAE. Since the Jacobian of the first three equations with respect to z3, z4 and z5 is nonsingular, the system has index = 1.

ACKNOWLEDGEMENT

The work on differential algebra and its applications has been supported by the Swedish Research Council for Engineering Sciences (TFR).

REFERENCES

[1] Arnon D.S., Collins G.E., McCallum S. – Cylindrical algebraic decomposition I: The basic algorithm, SIAM J. Comput. 13 (1984) 865–877.
[2] Arnon D.S., Collins G.E., McCallum S. – Cylindrical algebraic decomposition II: An adjacency algorithm for the plane, SIAM J. Comput. 13 (1984) 878–889.
[3] Bochnak J., Coste M., Roy M.-F. – Géométrie algébrique réelle, Springer-Verlag, 1987.
[4] Boulier F., Lazard D., Ollivier F., Petitot M. – Representations for the radical of a finitely generated differential ideal, in: Proceedings of ISSAC '95, 1995, pp. 158–166.
[5] Brenan K.E., Campbell S.L., Petzold L.R. – Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, Elsevier, 1989.
[6] Bristol E.H. – On a new measure of interactions for multivariable process control, IEEE Trans. Automat. Control AC-11 (1966) 133–134.
[7] Campo P.J., Morari M. – Achievable closed-loop properties of systems under decentralized control: Conditions involving the steady-state gain, IEEE Trans. Automat. Control 39 (1994) 932–943.
[8] Conte G., Moog C.H., Perdon A.M. – Nonlinear Control Systems, Springer, 1999.
[9] Conte G., Moog C.H., Perdon A.M. – Un théorème sur la représentation entrée–sortie d'un système non linéaire, C. R. Acad. Sci. Paris, Sér. I 307 (1988) 363–366.
[10] Diop S. – A state elimination procedure for nonlinear systems, in: Descusse J., Fliess M., Isidori A., Leborgne D. (Eds.), New Trends in Nonlinear Control Theory, Lecture Notes in Control and Inform. Sci., vol. 122, Springer, 1989, pp. 190–198.
[11] Diop S. – Elimination in control theory, Math. Control Signals Systems 4 (1991) 17–32.
[12] Fliess M. – Quelques définitions de la théorie des systèmes à la lumière des corps différentiels, C. R. Acad. Sci. Paris, Sér. I 304 (1987) 91–93.
[13] Fliess M. – Automatique et corps différentiels, Forum Math. 1 (1989) 227–238.
[14] Fliess M. – Generalized controller canonical forms for linear and nonlinear dynamics, IEEE Trans. Automat. Control AC-35 (1990) 994–1001.
[15] Fliess M., Glad S.T. – An algebraic approach to linear and nonlinear control, in: Trentelman H.L., Willems J.C. (Eds.), Essays on Control: Perspectives in the Theory and its Applications, Birkhäuser, 1993, pp. 223–267.
[16] Glad S.T. – Nonlinear state space and input output descriptions using differential polynomials, in: Descusse J., Fliess M., Isidori A., Leborgne D. (Eds.), New Trends in Nonlinear Control Theory, Lecture Notes in Control and Inform. Sci., vol. 122, Springer, 1989, pp. 182–189.
[17] Glad S.T. – Solvability of differential algebraic equations and inequalities: An algorithm, in: European Control Conference, ECC '97 (Brussels, 1997).
[18] Hovd M., Skogestad S. – Simple frequency-dependent tools for control system analysis, structure selection, and design, Automatica 28 (1992) 989–996.
[19] Knebusch M., Scheiderer C. – Einführung in die reelle Algebra, Vieweg Studium, 1989.
[20] Kolchin E.R. – Differential Algebra and Algebraic Groups, Academic Press, New York, 1973.
[21] Ritt J.F. – Differential Algebra, Amer. Math. Soc., Providence, RI, 1950.
[22] Seidenberg A. – An elimination theory for differential algebra, in: Wolf F., Hodges J.L., Seidenberg A. (Eds.), University of California Publications in Mathematics: New Series, University of California Press, Berkeley and Los Angeles, 1956, pp. 31–66.
[23] Verhulst F. – Nonlinear Differential Equations and Dynamical Systems, Springer-Verlag, 1985.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
A note on computational algebra for discrete statistical models
Giovanni Pistone (a), Eva Riccomagno (b) and Henry P. Wynn (b)

(a) Dipartimento di Matematica, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
(b) Department of Statistics, University of Warwick, Coventry CV4 7AL, UK

ABSTRACT

Various different algebraic formulations of some discrete statistical models are given and shown to be equivalent. Use is made of computational algebraic geometry to switch between the different representations when conditions are imposed in a particular setting.
Key words and phrases: Commutative algebraic geometry, Discrete statistical models, Toric ideal

1. INTRODUCTION

This chapter covers similar ground to that of Chapter 6 in [4], namely symbolic or computational commutative algebra in statistics. The idea of a toric ideal, mentioned briefly there, is shown to be critical for certain discrete statistical models, particularly those with a log-linear representation. Various different algebraic formulations are given and shown to be equivalent. This leads naturally to the idea of an algebraic statistical model, where nontrivial algebraic restrictions are imposed. This is seen to cover very many typical models and modern complex factorizations arising, in particular, in Bayes networks.

2. TORIC IDEALS, THEORY

In this section we briefly introduce the relevant notions from algebraic geometry, in particular toric ideals. References are [2] and [1]. Consider a finite set of points D in R^d. Let R[x] = R[x1, ..., xd] be the ring of polynomials in the indeterminates x1, ..., xd with real coefficients. The ideal I
of all polynomials whose zeros include the points in D is uniquely associated to D. Consider a monomial ordering, that is, a total well-ordering on the power products x^α = x1^{α1} ··· xd^{αd}, α_i ∈ Z_{≥0} for i = 1, ..., d. Then there exists a finite set G of polynomials in I such that a polynomial f ∈ I can be decomposed as f = Σ_{g∈G} s_g g, where the s_g are polynomials. The set G is the reduced Gröbner basis of I with respect to the monomial ordering. Moreover a generic polynomial p ∈ R[x] can be decomposed into two parts, a polynomial in the ideal and a unique polynomial not in I:

p = Σ_{g∈G} s_g g + r,

where the remainder r is a linear combination of the power products that are not divisible by the largest power product of any g ∈ G with respect to the monomial ordering. This special set of power products is indicated by L: {x^α, α ∈ L}. Thus r(x) = Σ_{α∈L} c_α x^α, c_α ∈ R, where x^α is not divisible by LT(g) for g ∈ G, and LT(g) is the largest power product in g with respect to the monomial ordering. The above gives a representation of the set of functions over D as the quotient space of all polynomials modulo I. Note that p(x) = r(x) for all x ∈ D. Moreover L and D have the same number of elements.

Let R[ζ] = R[ζ0, ..., ζs] be a polynomial ring and for i = 1, ..., n consider the power products ζ0^{a_{i0}} ··· ζs^{a_{is}} with a_{ij} ∈ Z_{≥0} for i = 1, ..., n and j = 0, ..., s. Let R[p] = R[p1, ..., pn] be another polynomial ring. Consider the algebra homomorphism

φ : R[p1, ..., pn] → R[ζ0, ..., ζs],
p_i ↦ ζ0^{a_{i0}} ··· ζs^{a_{is}}   for all i = 1, ..., n.

The toric ideal associated with the power products ζ0^{a_{i0}} ··· ζs^{a_{is}}, a_{ij} ∈ Z_{≥0} for i = 1, ..., n and j = 0, ..., s, is defined as Ker(φ) = {f ∈ R[p] : φ(f) = 0}. Equivalently, consider the ideal J generated by the binomials p_i − ζ0^{a_{i0}} ··· ζs^{a_{is}} in the extended polynomial ring R[p1, ..., pn, ζ0, ..., ζs]. The elimination ideal I = J ∩ R[p] equals Ker(φ).

3. STATISTICAL MODELS
Statistical models for discrete or categorical data can be given an algebraic structure in which toric and linear ideals and varieties play a part. Thus let X = (X1, ..., Xd) be a vector of jointly distributed random variables on a product support S = S1 × ··· × Sd, where Sj = {0, 1, ..., nj − 1}, so that n = #S = ∏_{j=1}^d nj. Assume a probability mass function

p_x > 0 for x ∈ S   and   Σ_{x∈S} p_x = 1.
We consider two algebraic representations based respectively on interpolation of p_x and of q_x = log p_x on S. These interpolators are unique in the sense that the structure of S gives the unique set of monomials {x^α, α ∈ S} and L = S in the special case of product support. This is because S is of product form and then {x^α, α ∈ S} is the same for all monomial orderings. Let the interpolators be

p_x = p(x) = Σ_{α∈S} θ_α x^α,   x ∈ S,
q_x = log p_x = q(x) = Σ_{α∈S} ψ_α x^α,   x ∈ S.

From the second we obtain a fully saturated exponential model

p_x = exp q(x) = exp( Σ_{α∈S} ψ_α x^α ),   x ∈ S,

and the {ψ_α} are free except for the single condition stemming from Σ_{x∈S} p_x = 1. A third formulation is given by defining

t_α = exp ψ_α,   α ∈ S,

so that

p_x = ∏_{α∈S} t_α^{x^α},   x ∈ S.
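As a small numerical sketch (our example, not from the text), the three parametrizations can be computed on the smallest product support S = {0, 1}, where the design matrix of monomials 1, x is 2 × 2:

```python
import numpy as np

# Smallest product support: d = 1, S = {0, 1}; monomials are 1 and x.
p = np.array([0.3, 0.7])            # p_x > 0, summing to 1

Z = np.array([[1, 0], [1, 1]])      # design matrix (x^alpha), x in S, alpha in S
theta = np.linalg.solve(Z, p)       # theta_0 = p_0, theta_1 = p_1 - p_0
psi = np.linalg.solve(Z, np.log(p)) # interpolator of q_x = log p_x
t = np.exp(psi)                     # t_0 = p_0, t_1 = p_1/p_0

# Each representation reconstructs the same p_x.
for x in (0, 1):
    assert np.isclose(theta[0] + theta[1] * x, p[x])
    assert np.isclose(t[0] * t[1] ** x, p[x])   # power product in the t's
print(theta, t)                     # theta = [0.3, 0.4], t = [0.3, 7/3]
```

Here the −1 degree of freedom corresponds to the constraint p_0 + p_1 = 1.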
Since S is an integer grid every x^α is an integer and the p_x are power products in the t_α. But power products lie on toric ideals. We shall return to this in the next section. We now have three equivalent polynomial representations p_x, θ_α, t_α, with a one-to-one mapping between each, and each having n − 1 = ∏_{j=1}^d nj − 1 degrees of freedom, the −1 arising from Σ_{x∈S} p_x = 1. Following [4] we define an algebraic statistical model as any algebraic conditions imposed on the p_x, θ_α or t_α, that is, conditions arising out of algebraic equations. The algebraic complexity arises from the fact that seemingly simple conditions imposed in one setting, say the p_x, may give more or less complex conditions in the other settings, say the θ_α or t_α. It is at this point that computational commutative algebra becomes extremely useful, indeed essential, for a full development in the more difficult cases. The natural environment for probability is {p_x}, but we typically require marginal distributions. If we partition x = (x^(1), x^(2)) then independence of the random variables X^(1) and X^(2) becomes

p_x = p_{x^(1)} p_{x^(2)},
where

p_{x^(1)} = Σ_{x^(2)} p_x,   p_{x^(2)} = Σ_{x^(1)} p_x.
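For a 2 × 2 table the independence model is the toric model p_ij = a_i b_j, and the elimination construction of Section 2 recovers the familiar binomial p11 p22 − p12 p21. A sympy sketch (our example; the parameter names a_i, b_j are ours):

```python
import sympy as sp

a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')       # "zeta" parameters
p11, p12, p21, p22 = sp.symbols('p11 p12 p21 p22')

# Ideal J generated by the binomials p_ij - a_i*b_j.
J = [p11 - a1*b1, p12 - a1*b2, p21 - a2*b1, p22 - a2*b2]

# Lex order with the parameters ranked first eliminates them.
G = sp.groebner(J, a1, a2, b1, b2, p11, p12, p21, p22, order='lex')

# The basis elements free of the parameters generate J ∩ R[p] = Ker(phi).
elim = [g for g in G.exprs if not g.free_symbols & {a1, a2, b1, b2}]
print(elim)   # [p11*p22 - p12*p21]  (up to sign)
```

The single generator is exactly the independence condition written as a binomial in the cell probabilities.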
Similarly conditional independence with a triple partition x = (x^(1), x^(2), x^(3)), written X^(1) ⊥⊥ X^(2) | X^(3) (X^(1) and X^(2) conditionally independent given X^(3)), becomes

p_x = p_{x^(1), x^(3)} p_{x^(2), x^(3)} / p_{x^(3)}.

Even in these simple cases we see a tension between the linear restrictions of marginality and the product restrictions of independence and conditional independence. We now look at some general classes of models.

(i) Log-linear models arising from linear restrictions on the ψ_α. In a number of important cases independence and conditional independence assumptions can be transferred to conditions on the ψ_α. Thus taking logarithms we see that independence gives

q_x = q_{x^(1)} + q_{x^(2)}.

Interpolation yields, with simplified notation,

q(x) = q_1(x_1) + q_2(x_2),
showing that ψ_α = 0 for any "interaction term" between x^(1) and x^(2). Similarly conditional independence becomes

q(x) = q_13(x_1, x_3) + q_23(x_2, x_3) − q_3(x_3),

showing that no interactions between x_1 and x_2 are allowed. Note that to establish these results we are appealing to the uniqueness of interpolation on both sides of the equations. A pleasing and useful class of models in which the conditional independence structure can be captured by conditions on the ψ_α are trees. Consider a six-dimensional binary random vector X = (X_1, ..., X_6) and a tree where each node holds one of the random variables except for the root, to which we simply attach the symbol 1. The conditional independence restrictions imposed by the tree are

X_1 ⊥⊥ X_2,   X_3 ⊥⊥ X_4 | X_1,   X_5 ⊥⊥ X_6 | X_2,

and the support is {0,1}^6. Using the chain rule for conditional probability over a tree, the joint density p(x) > 0 for x ∈ {0,1}^6 is written as

p(x) = p_{3|1}(x_3 | x_1) p_{4|1}(x_4 | x_1) p_{5|2}(x_5 | x_2) p_{6|2}(x_6 | x_2) p_1(x_1) p_2(x_2).
The interpolator of a generic joint density for X involves all the multilinear monomials

1, x_1, ..., x_6, x_1 x_2, x_1 x_3, ..., x_1 x_2 x_3 x_4 x_5 x_6.

Thus the submodel obtained by imposing the tree structure is based only on the following monomials:

1, x_1, x_2, x_3, x_4, x_5, x_6, x_1 x_3, x_1 x_4, x_2 x_5, x_2 x_6.
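The monomial count can be verified symbolically. In this sketch (ours; the helper multilinear() and the generic coefficients c0, c1, ... are hypothetical), each log-factor of the tree density is interpolated by a generic multilinear polynomial and the surviving monomials of q(x) are collected:

```python
import sympy as sp

x1, x2, x3, x4, x5, x6 = xs = sp.symbols('x1:7')
_c = sp.numbered_symbols('c')   # fresh generic coefficients

def multilinear(*vs):
    """Generic multilinear interpolator in the variables vs."""
    terms = [sp.Integer(1)]
    for v in vs:
        terms += [t * v for t in terms]
    return sum(next(_c) * t for t in terms)

# q(x) = log p(x) for the tree factorization displayed above.
q = (multilinear(x3, x1) + multilinear(x4, x1)
     + multilinear(x5, x2) + multilinear(x6, x2)
     + multilinear(x1) + multilinear(x2))

monoms = set(sp.Poly(sp.expand(q), *xs).monoms())
print(len(monoms))   # 11
```

The 11 exponent tuples correspond exactly to the monomials listed above.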
An important fact about models obtained from restrictions on the ψ_α is that they have a toric representation. We shall develop this in the next section. We must emphasize, however, that not all useful models are obtained from linear restrictions on the ψ_α.

(ii) More complex factorizations. Conditional independence and its generalizations in Bayes networks (see, e.g., [3]) do not cover all interesting factorizations. Here is a simple example:
p(x_1, x_2, x_3) = p_12(x_1, x_2) p_23(x_2, x_3) p_13(x_1, x_3) / ( p_1(x_1) p_2(x_2) p_3(x_3) ).
(iii) Linear restrictions on the p_x. These fall normally into two classes.

(a) Marginality restrictions additional to simple expression of margins. For example one might have

p_12(x_1, x_2) = p_12(x_2, x_1),

expressing a diagonal symmetry. Let X be a two-dimensional vector of binary random variables on {0, 1} and impose p_12(x_1, x_2) = p_12(x_2, x_1) on the θ. This is equivalent to imposing equal marginals. Then

p_00 = t_00 = 1 − 2θ_10 + θ_11,
p_10 = t_00 t_10 = 1 − θ_11,
p_01 = t_00 t_01 = 1 − θ_11,
p_11 = t_00 t_10 t_01 t_11 = 1 + 2θ_10 + θ_11,

where for example t_11^{x_1 x_2} = exp(ψ_11 x_1 x_2) for (x_1, x_2) ∈ {0, 1}², and p(x_1, x_2) = θ_00 + θ_10 x_1 + θ_01 x_2 + θ_11 x_1 x_2 for (x_1, x_2) ∈ {±1}², where orthogonality can be exploited. The equivalent condition on the t becomes

t_10 − t_01 = 0,
2 t_00 t_10 − 4 + t_00 + t_00 t_10² t_11 = 0.
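These two identities can be checked mechanically. A small sympy sketch (ours), assuming as in the display above that the p's are scaled to sum to 4 with θ_00 absorbed:

```python
import sympy as sp

th10, th11 = sp.symbols('theta10 theta11')

# Cell probabilities in the theta parametrization (scaled to sum to 4).
p00, p10, p01, p11 = (1 - 2*th10 + th11, 1 - th11,
                      1 - th11, 1 + 2*th10 + th11)

# Invert the power-product parametrization: p00 = t00, p10 = t00*t10, ...
t00 = p00
t10 = p10 / p00
t01 = p01 / p00
t11 = p11 * p00 / (p10 * p01)

assert sp.simplify(t10 - t01) == 0
assert sp.simplify(2*t00*t10 - 4 + t00 + t00*t10**2*t11) == 0
```

The second identity is just the normalization p_00 + p_10 + p_01 + p_11 = 4 rewritten in the t's, using t_10 = t_01.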
(b) Support restrictions. One of the features of the full algebraic treatment is that arbitrary discrete support can be handled because of the Gröbner basis construction for interpolators. Structural zeros in contingency tables provide a source of examples of nonstandard supports. But setting p_x = 0 for the points x in some subset of S is a linear restriction (although one would not treat it as a linear hypothesis to be tested).
Linear restrictions on the p_x transfer to linear restrictions on the θ_α under

[p_x] = Z [θ_α],

where [·] refers to the full vector of terms and Z = (x^α)_{x∈S, α∈S} is the full n × n "design" matrix. However such linear restrictions do not in general translate to toric conditions on the p_x. Thus except for the class (i) above, most statistical models as induced on the p_x are expressible as intersections between a toric and a linear variety.

4. A LOG-LINEAR APPROACH TO TORIC IDEALS
Let us return to the formulation of models in Section 3(i), that is, linear restrictions on the ψ_α. Suppose that they take the form

Z_2 [ψ] = 0,

where Z_2 is a matrix with integer entries and [ψ] is the vector with entries ψ_α, α ∈ S. We may then write the model as

[p] = exp( Z [ψ] ),   Z_2 [ψ] = 0,

or equivalently

log[p] = Z [ψ],   Z_2 [ψ] = 0.

Then since for the saturated model Z is nonsingular, we have

[ψ] = Z^{−1} log[p],   Z_2 [ψ] = Z_2 Z^{−1} log[p] = 0.
Then since both Z_2 and Z^{−1} are integer, so is Z_2 Z^{−1}. Inspecting the entries in Z_2 Z^{−1} log[p] we can divide any row of Z_2 Z^{−1} into positive and negative integer terms:

Σ_α a_α log p_α = Σ_β b_β log p_β

with a_α, b_β > 0, α ≠ β. This exponentiates to

∏_α p_α^{a_α} = ∏_β p_β^{b_β}.
The set (intersection) of all such identities leads to a toric ideal for the p. We may write the model directly as

[p] = exp( Z_1 [ψ_1] )
or

log[p] = Z_1 [ψ_1],

where Span(Z_1 : Z_2) = Span(Z) = R^n. If Z_2 is orthogonal to Z_1 then we may write this as

Z_2^t log[p] = 0,

where Z_2^t is the transpose of Z_2. This generates a toric ideal in the same way.

REFERENCES

[1] Bigatti A., Robbiano L. – Toric ideals, Mat. Contemp., submitted for publication.
[2] Cox D., Little J., O'Shea D. – Ideals, Varieties, and Algorithms, 2nd ed., Springer-Verlag, New York, 1996.
[3] Lauritzen S.L. – Graphical Models, Oxford University Press, Oxford, 1996.
[4] Pistone G., Riccomagno E., Wynn H.P. – Algebraic Statistics, Chapman and Hall/CRC, Boca Raton, 2000.
Constructive Algebra and Systems Theory B. Hanzon and M. Hazewinkel (Editors) Royal Netherlands Academy of Arts and Sciences, 2006
Reduction and categories of nonlinear control systems ✩
Vladimir I. Elkin

Russian Academy of Sciences, Computing Center, ul. Vavilova 40, 119991 Moscow, Russia
Email: [email protected] (V.I. Elkin)
The solution of problems for multidimensional nonlinear control systems encounters serious difficulties, which are both mathematical and technical in nature. Therefore it is imperative to develop methods of reduction of nonlinear systems to a simpler form, for example, decomposition into systems of lesser dimension. Approaches to reduction are diverse, in particular, techniques based on approximation methods. In this work, we elaborate the most natural and obvious (in our opinion) approach, which is essentially inherent in any theory of mathematical entities, for instance, in the theory of linear spaces, theory of groups, etc. Reduction in our interpretation is based on assigning to the initial object an isomorphic object, a quotient object, and a subobject. In the theory of groups, for instance, reduction consists in reducing to an isomorphic group, quotient group, and subgroup. Strictly speaking, the exposition of any mathematical theory essentially begins with the introduction of these reduced objects and determination of their basic properties in relation to the initial object. Therefore, we can say that the theory of reduction of nonlinear control systems discussed in this work outlines the bases of the general theory of such systems. A formal definition of reduced objects that is suitable for any mathematical theory can be given within the framework of the theory of categories [2] or the Bourbaki theory of structures [1]. At the descriptive level every category (for example, the category of linear spaces or category of groups) is a class of objects, in which every object A is a set M with the same sort of predefined structure. This structure can be interpreted as a collection of a particular type of relationships between the elements of the set M . Along with objects, a category also contains morphisms that implement the object–object interrelations. 
If two objects A and B are defined on the sets M and N, respectively, then a morphism ψ of the object A into the object B is a mapping ψ : M → N that preserves the structure of a given kind ✩
This work was sponsored by the Russian Foundation for Basic Research, Project No. 050100940.
(i.e., preserves the corresponding relationships between the elements of the sets). For example, in the category of linear spaces, morphisms are linear mappings, while in the category of groups they are homomorphisms. For nonlinear control systems, it is possible to construct a category along the following lines. Objects in this category, which is denoted by NS, are smooth nonlinear control systems of the form

(1)  ẏ = f(y, u),
where the states y ∈ M, a manifold of dimension n, and the controls u ∈ U, a subset of R^r. Morphisms are defined as follows. Along with some control system (1), let us consider a control system

(2)  ẋ = g(x, v),
where the states x ∈ N, a manifold of dimension m, and the controls v ∈ V, a subset of R^s. A morphism of control system (1) to control system (2) is a mapping from M into N that carries the solutions (trajectories) of system (1) into the solutions of system (2). (Let us recall that a solution, or trajectory, of system (1) is defined to be a sufficiently smooth function y(t) for which there is an admissible control u(t) such that the functions y(t) and u(t) satisfy (1).) An isomorphism in any category is a morphism that is a one-to-one mapping whose inverse is also a morphism. If for objects A and B there exists an isomorphism of A into B, then A and B are said to be isomorphic. Isomorphic objects have identical properties within the framework of the category. For example, in the category of linear spaces, isomorphisms are linear isomorphisms. Systems (1) and (2) (where m = n) are isomorphic or, alternatively, equivalent if there is a diffeomorphism ψ : M → N such that the mappings ψ and ψ^{−1} are morphisms. The class of all systems (1) breaks up into disjoint subsets (equivalence classes) of equivalent systems. Vital properties of control systems such as controllability, stability, and optimality of solutions are preserved in changing over to an equivalent system (2). Therefore, it is natural to solve any control problem for a simpler equivalent system and then carry the results thus obtained over to the initial system through the corresponding isomorphism. For example, a complicated nonlinear system (1) can be equivalent to a linear system

(3)  ẋ = Ax + Bv,
where A, B are constant matrices. In this case, nonlinearity is an inessential trait, which vanishes in an equivalent system. Consequently the main problem here is the classification problem, which consists in describing the equivalence classes, i.e., describing systems up to equivalence. This problem includes, for example, the following problems: finding criteria for equivalence of two systems; constructing isomorphisms carrying systems into each other; and constructing canonical forms, i.e., representatives of equivalence classes
(in as simple a form as possible). In general the classification problem is very complicated, but even isolated results concerning special cases may be of great help. The concept of a subobject results from our desire to construct correctly the restriction of an object A defined on a set M to a subset N of M. It is generally not possible to restrict an object A to an arbitrary subset. An object B defined on a subset N of M is called a subobject if the canonical injection from N to M is a morphism. For example, in the category of linear spaces, subobjects are linear subspaces. The need for restricting a control system (1), i.e., the need for changing over to a subsystem (2) on a subset N of M, is dictated by practical considerations when the elements of the set M are forced to obey certain constraints (initial conditions, boundary conditions, etc.). In such cases, it is natural to restrict a system (1) to some subset N of M for which these constraints are satisfied. The subsystem (2) on N defines the part of the solutions of the initial system (1) that lie within N and, in particular, satisfy the given constraints. Therefore, the solution of a control problem formulated for a system (1) can be reduced to the solution of an analogous problem for its subsystem (2), with phase space of diminished dimension. While simplification in restriction is achieved via passage to a subset N of M, in factorization it results from contraction of the set M, i.e., passage to the quotient set M/R by some equivalence relation R. In this passage, points belonging to one equivalence class are glued to the same point of the quotient set M/R. An object B defined on the quotient set M/R is called the quotient object of the object A on the set M if the canonical projection from M to M/R is a morphism. For example, in the category of linear spaces, quotient objects are quotient spaces.
The significance of factorization for the reduction of control systems (1) lies primarily in the decomposition of the initial system it generates. More exactly, if a system (1) has a quotient system (2) on some quotient set N = M/R, then system (1) is (in general locally) equivalent to the system

(4)  ẋ = g(x, v),
(5)  ż = h(x, z, v),
which by its structure suggests that its solution x(t), z(t) can be determined as follows. First we solve the quotient system (4) (for some control v(t)) and then, substituting x(t) into (5), we obtain z(t). This is the underlying principle of decomposition algorithms designed for solving control problems. Let us note that many properties of a control system (observability, autonomy, etc.) are determined by its factor systems of a special type. Thus, for every system there exists an environment of reduced control systems. The concepts of an isomorphic object, a quotient object, and a subobject may be applied for reducing an object jointly and in any order. The solution of a given control problem is often based on the construction of a suitable sequence of reduced systems. For example, the first system of this sequence may be a subsystem of the initial system, the second system may be a quotient system of this subsystem, and
so on. Such algorithms can be found in [3]. Research is under way to elaborate general universal methods of splitting into factor objects and subobjects, suitable for the purpose of reducing mathematical objects of arbitrary nature [7,4]. A pivotal topic in reduction is the construction of a mathematical apparatus for determining reduced objects. This apparatus includes those concepts that are invariant under morphisms. Such concepts are purely differential geometric in nature. We now demonstrate this fact for the category SAS, which is a subcategory of the category NS. Objects in SAS are nonlinear systems of the type

(6)  ẏ = f(y) u,   y ∈ M, u ∈ R^r,
where f is an n × r matrix whose columns f_α, α = 1, ..., r, are smooth vector fields, rank f(y) = const, and M is a domain in R^n. Morphisms in SAS are defined just as in NS (i.e., SAS is a full subcategory of NS). Certain differential geometric objects are associated with each control system (6) that aid in studying the reduction of the system (see [3] for details). First we can introduce the distribution

F : y ∈ M → F(y) = span{ f_α(y), α = 1, ..., r } ⊂ T_y M,

where T_y M is the tangent space of M at the point y. Then we can introduce the dual codistribution F^⊥:

F^⊥ : y ∈ M → F^⊥(y) = { ω ∈ T*_y M : ω(ξ) = 0 ∀ξ ∈ F(y) },
where T*_y M is the cotangent space of M at the point y. Note that the codistribution F^⊥ is generated by a system of Pfaffian equations

(7)  ω^j = Σ_{i=1}^n ω^j_i(y) dy^i = 0,   j = 1, ..., q = n − rank f,

i.e., F^⊥(y) = span{ ω^j(y), j = 1, ..., q }. This Pfaffian system can be found from expressions (6) by eliminating the variables u and then multiplying by dt. The distribution F, the codistribution F^⊥ and the Pfaffian system (7) are called the associated distribution, the associated codistribution and the associated Pfaffian system of control system (6). Note that the associated Pfaffian system is not uniquely defined. More exactly, any Pfaffian system which can be derived from (7) through a linear nondegenerate transformation (with coefficients smoothly dependent on y) is an associated Pfaffian system of control system (6). The structure of the distribution F and the codistribution F^⊥ is preserved under the action of morphisms of the category SAS. Therefore concepts from the differential geometric theory of distributions and codistributions [5,6] are rather useful for studying the reduction of control systems (6). Consider one of these concepts, namely, the concept of the characteristic codistribution. Let us construct
Reduction and categories of nonlinear control systems
353
the so-called characteristic Pfaffian system for the Pfaffian system (7). This system consists of Eqs. (7) and the equations (8)

Σ_{i=1}^{n} ω^k_{i[j} ω^1_{j1} ⋯ ω^q_{jq]} dy^i = 0,    k = 1, …, q,  1 ≤ j < j1 < ⋯ < jq ≤ n.

Here

ω^k_{ij} = ∂ω^k_j/∂y^i − ∂ω^k_i/∂y^j,

and the square brackets denote that the indexes within them are alternated, i.e., all (q + 1)! permutations of the indexes j, j1, …, jq are made in the product ω^k_{ij} ω^1_{j1} ⋯ ω^q_{jq}, and the expressions thus found are summed, with the sign changed for odd permutations. This operation is expressed as the determinant

ω^k_{i[j} ω^1_{j1} ⋯ ω^q_{jq]} =
| ω^k_{ij}   ω^k_{ij1}   ⋯   ω^k_{ijq} |
| ω^1_{j}    ω^1_{j1}    ⋯   ω^1_{jq}  |
|    ⋮          ⋮        ⋱      ⋮      |
| ω^q_{j}    ω^q_{j1}    ⋯   ω^q_{jq}  |

Eqs. (7), (8) generate a codistribution which is called the characteristic codistribution CF⊥ of the codistribution F⊥. The quantity dim CF⊥(y) is called the class of the codistribution F⊥ and of the Pfaffian system (7) at the point y ∈ M. We suppose that the class is constant. Clearly, the class is equal to the maximum number of linearly independent equations (at each point y) in the characteristic system (7), (8). Note that control systems of the type (6) are equivalent if and only if their associated Pfaffian systems are equivalent (this assertion and those cited below are local). Recall that Pfaffian systems are called equivalent if one can be derived from the other through a diffeomorphism (in other words, a coordinate substitution) and a linear nondegenerate transformation (with coefficients smoothly dependent on the coordinates). It is known that if the class of the Pfaffian system (7) is equal to m, then there is a Pfaffian system of the type (9)
θ^j = Σ_{k=1}^{m} θ^j_k(x^1, …, x^m) dx^k = 0,    j = 1, …, q,

which is equivalent to the Pfaffian system (7). Moreover, m is the minimal number of variables on which a Pfaffian system equivalent to (7) may depend [8]. It is not difficult to construct a control system of the type

(10)  ẋ = g(x)v,    x ∈ P ⊂ R^m,

(11)  ż = w,    z ∈ L ⊂ R^{n−m},
354
V.I. Elkin
for which the Pfaffian system (9) is the associated Pfaffian system (the columns g_β of the matrix g are defined by the condition Σ_{k=1}^{m} θ^j_k(x) g^k_β(x) = 0). Thus, if the class of the associated codistribution of system (6) is less than n, then we have decomposition (4), (5), but of the special type (10), (11). The representation of (6) in the equivalent form (10), (11) is interesting in that it is actually a decomposition of the system into a nontrivial part (10) and a trivial part (11), which are independent systems (the quotient systems of the system (6)). The class characterizes the degree of nontriviality of the system (6). Put differently, the characteristic codistribution CF⊥ separates the maximum possible trivial part from the system. Accordingly, any control problem related to the system (6) admits decomposition into a nontrivial and a trivial problem, which in practice permits one to reduce the dimensionality of the original problem. An essential concept in classification studies is that of an invariant, i.e., a quantity that does not vary in passing to an equivalent object. The class is a very important invariant of Pfaffian systems and, respectively, of control systems (6). Consider, for example, systems (6) for which rank f(y) = n − 1. The associated Pfaffian system of every such control system contains only one equation (12)
Σ_{i=1}^{n} ω_i(y) dy^i = 0.
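For a concrete system one can compute the coefficients of (12) and determine its class symbolically. The following is a minimal sympy sketch with a hypothetical pair of vector fields of our own choosing (n = 3, rank f = 2, not an example from the text); in R^3 the condition ω ∧ dω ≠ 0 reduces to ω · curl ω ≠ 0 and detects the maximal class 3, i.e., a contact-type equation in three variables:

```python
import sympy as sp

# Hypothetical system (6) in R^3 with rank f = 2 (our own example, not
# from the text): the columns of f are
#   f1 = d/dy1 + y2*d/dy3,   f2 = d/dy2.
y1, y2, y3 = sp.symbols('y1 y2 y3')
f1 = sp.Matrix([1, 0, y2])
f2 = sp.Matrix([0, 1, 0])

# The coefficients omega_i of the associated Pfaffian equation (12) must
# annihilate both columns; in R^3 this is just the cross product.
omega = f1.cross(f2)         # components (-y2, 0, 1), i.e. dy3 - y2*dy1

assert omega.dot(f1) == 0 and omega.dot(f2) == 0

# In R^3 the 3-form omega ^ d(omega) equals (omega . curl omega) dy1^dy2^dy3.
curl = sp.Matrix([
    sp.diff(omega[2], y2) - sp.diff(omega[1], y3),
    sp.diff(omega[0], y3) - sp.diff(omega[2], y1),
    sp.diff(omega[1], y1) - sp.diff(omega[0], y2),
])
print(sp.simplify(omega.dot(curl)))  # 1: nonzero, so the class is 3
```

A nonzero result here means no smaller set of variables suffices, so (12) is a contact equation in all three variables; a zero result would indicate class 1, the integrable case dx^n = 0.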
It is known that the Pfaffian equation (12) is equivalent to one of the following equations:

(13)  dx^n = 0,

(14)  dx^n − x^1 dx^2 − ⋯ − x^{2k−1} dx^{2k} = 0,

where k = 1, …, (n − 2)/2 for even n ≥ 4 and k = 1, …, (n − 1)/2 for odd n ≥ 3. The number of variables in each of Eqs. (13) and (14) is equal to the class of the Pfaffian equation (12). By (13), (14), a control system (6) for which rank f(y) = n − 1 is equivalent to one of the following systems:

(15)  ẋ^i = v^i,  i = 1, …, n − 1,    ẋ^n = 0,

(16)  ẋ^i = v^i,  i = 1, …, n − 1,    ẋ^n = x^1 v^2 + ⋯ + x^{2k−1} v^{2k},

where k = 1, …, (n − 2)/2 for even n ≥ 4 and k = 1, …, (n − 1)/2 for odd n ≥ 3. Note that the systems (15) and (16) have the form of the decomposition (10), (11). In particular, in (16) the equations with indexes i = 2k + 1, …, n − 1 form the system (11), and the other equations form the system (10). Consider one more differential geometric concept which is important for the reduction of control systems (6). It is the concept of the integral manifold of a
distribution. The integral manifolds of a distribution D (and of the corresponding dual codistribution D⊥ and Pfaffian system) are manifolds N for which

T Ny ⊂ D(y)    ∀y ∈ N,

where T Ny is the tangent space of N at the point y. Their existence is the essence of the so-called Pfaff problem, which is usually formulated in terms of Pfaffian systems. As demonstrated by E. Cartan, the existence of integral manifolds (for analytic distributions, i.e., distributions generated by analytic vector fields) can be established through algebraic operations, and the manifolds themselves can be found by solving certain differential equations [8]. If a manifold N ⊂ M is an integral manifold of the associated distribution of the control system (6), then any sufficiently smooth curve y(t) ∈ N is a trajectory of the system (6). (Note that in this case there is a subsystem of the system (6) on N of the type ẋ = v, where x are coordinates on N.) This property of an integral manifold has a great bearing on control problems. Consider again a system (6) for which rank f(y) = n − 1 and suppose that the class of the associated codistribution is equal to n = 2k + 1, k > 0. For convenience, write the corresponding canonical form (16) in the following way:

(17)  ẋ^j = v^j,    ṗ^i = w^i,    ż = p^1 v^1 + ⋯ + p^k v^k,    i, j = 1, …, k.
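A quick numerical sketch (our own illustration; the integration scheme and the particular controls are our choices, not from the text) shows why the nontrivial part is genuinely nontrivial: for k = 1, writing ż = p¹ẋ¹ = p·v as required by the base Pfaffian equation below, a closed loop in the (x, p)-plane returns x and p to their starting values yet shifts z by the signed area enclosed by the loop:

```python
import numpy as np

# Forward-Euler sketch of (17) for k = 1:  x' = v, p' = w, z' = p*v.
# Controls are chosen (our own choice) so that (x, p) traces the unit
# circle once; z then drifts by the loop integral of p dx, i.e. minus
# the area of the unit disk.
dt = 1e-4
t = np.arange(0.0, 2.0 * np.pi, dt)
v = -np.sin(t)            # realizes x(t) = cos t
w = np.cos(t)             # realizes p(t) = sin t

x, p, z = 1.0, 0.0, 0.0   # start on the unit circle in the (x, p)-plane
for vi, wi in zip(v, w):
    z += p * vi * dt      # z' = p*v
    x += vi * dt          # x' = v
    p += wi * dt          # p' = w

print(x, p, z)            # x, p near (1, 0); z close to -pi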
The base Pfaffian system of the system (17) consists of the single equation (18)

dz − p^1 dx^1 − ⋯ − p^k dx^k = 0.
The existence of integral manifolds for Eq. (18) is a resolved problem in differential geometry [8]. Namely, it is known that the largest possible dimension of an integral manifold is k. Such manifolds exist and are called Legendre manifolds. Every integral manifold belongs to some Legendre manifold. What is important here is that a Legendre manifold is described purely algebraically. More exactly, for any partition of the index set {1, …, k} into disjoint subsets I and J, and for any function S(x^i, p^j) of the k variables x^i, i ∈ I, and p^j, j ∈ J, the formulas (19)

p^i = ∂S/∂x^i,  i ∈ I,
x^j = −∂S/∂p^j,  j ∈ J,
z = S − Σ_{j∈J} p^j ∂S/∂p^j

define an integral manifold of Eq. (18). Moreover, any integral manifold of dimension k can be represented in the form (19).

Example. Let us apply the stated results to a standard control problem, namely, the terminal control problem, which is defined as follows. Given the system (6), where rank f(y) = n − 1, and two points y0, y1 ∈ M, find a solution y(t), t ∈ [t0, t1],
for which y(t0) = y0 and y(t1) = y1. The concepts of equivalence, factorization, and restriction are effectively applied in solving this problem. Using these concepts, we can reduce the terminal control problem for the initial system to an analogous problem for one or several systems of a simpler type, in particular, of reduced dimension. The terminal control problem may be solved as follows. First, the system (6) should be reduced to the corresponding equivalent canonical form (15) or (16) by some isomorphism. For the system (15) the terminal control problem is trivial. In the case of the system (16) we can reduce the formulated problem to similar problems for two independent quotient systems. One of these systems is a trivial system of the form (11), for which the terminal control problem is trivial. Consider the terminal control problem for the second quotient system. Let us write this system in the form (17), where the initial point is (x0, p0, z0) and the terminal point is (x1, p1, z1). We can solve the terminal control problem with the help of the concept of restriction of control systems, more exactly, the concept of an integral manifold. As already noted, there are k-dimensional integral manifolds, called Legendre manifolds and described by the formulas (19). It can be shown that a Legendre manifold N passes through any two points (x0, p0, z0) and (x1, p1, z1), except for pairs of points for which x0 = x1, p0 = p1, and z0 ≠ z1. The corresponding functions S can be found by elementary algebraic operations [3]. Taking any (sufficiently smooth) curve x(t), p(t), z(t) on the manifold N joining the points (x0, p0, z0) and (x1, p1, z1), we obtain a solution of the terminal control problem. If x0 = x1, p0 = p1, and z0 ≠ z1, then there are no Legendre manifolds passing through the points (x0, p0, z0) and (x1, p1, z1).
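The purely algebraic description (19) used here is easy to verify symbolically. The sketch below (our own illustration in sympy) takes k = 2 with the partition I = {1}, J = {2} and an arbitrary generating function S(x^1, p^2), and checks that the pullback of dz − p^1 dx^1 − p^2 dx^2 onto the surface (19) vanishes identically, confirming that (19) defines an integral manifold of (18):

```python
import sympy as sp

# Symbolic check (our own illustration) of the formulas (19) for k = 2,
# with partition I = {1}, J = {2}.  The surface is parametrized by
# u = x^1 (i in I) and s = p^2 (j in J); S is an arbitrary function.
u, s = sp.symbols('u s')
S = sp.Function('S')(u, s)

x1 = u
p2 = s
p1 = sp.diff(S, u)           # p^i =  dS/dx^i  for i in I
x2 = -sp.diff(S, s)          # x^j = -dS/dp^j  for j in J
z  = S - s * sp.diff(S, s)   # z = S - sum_{j in J} p^j dS/dp^j

# Pull the contact form dz - p^1 dx^1 - p^2 dx^2 back onto the surface:
# the coefficients of du and of ds must both vanish identically.
coeff_du = sp.simplify(sp.diff(z, u) - p1*sp.diff(x1, u) - p2*sp.diff(x2, u))
coeff_ds = sp.simplify(sp.diff(z, s) - p1*sp.diff(x1, s) - p2*sp.diff(x2, s))
print(coeff_du, coeff_ds)    # 0 0
```

Since S is left as an undetermined function, the cancellation holds for every generating function, which is exactly the assertion that every surface of the form (19) is a Legendre manifold.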
To solve the terminal control problem in this case, we can, for example, move the point (x0, p0, z0) to any point (x2, p2, z2) which does not lie on the line x = x0, p = p0, and then, using a Legendre manifold passing through the points (x2, p2, z2) and (x1, p1, z1), reach the point (x1, p1, z1).

REFERENCES

[1] Bourbaki N. – Théorie des ensembles, Hermann, Paris, 1960.
[2] Bucur I., Deleanu A. – Introduction to the Theory of Categories and Functors, John Wiley & Sons, New York, 1968.
[3] Elkin V.I. – Reduction of Nonlinear Control Systems: A Differential Geometric Approach, Kluwer Academic Publishers, Dordrecht, 1999.
[4] Elkin V.I., Pavlovskii Yu.N. – Decomposition of models of control processes, J. Math. Sci. 88 (5) (1998) 723–761.
[5] Godbillon C. – Géométrie différentielle et mécanique analytique, Hermann, Paris, 1969.
[6] Griffiths P.A. – Exterior Differential Systems and the Calculus of Variations, Birkhäuser, Boston, 1983.
[7] Pavlovskii Yu.N., Smirnova T.G. – Decomposition in Mathematical Modeling, Fazis, Moscow, 1998 [in Russian].
[8] Schouten J.A., van der Kulk W. – Pfaff's Problem and Its Generalizations, Clarendon Press, Oxford, 1949.