Pseudo-Orbit Data Assimilation. Part I: The Perfect Model Scenario

JOURNAL OF THE ATMOSPHERIC SCIENCES, VOLUME 71, FEBRUARY 2014

HAILIANG DU AND LEONARD A. SMITH

Centre for the Analysis of Time Series, London School of Economics and Political Science, London, United Kingdom

(Manuscript received 26 January 2013, in final form 21 October 2013)

ABSTRACT

State estimation lies at the heart of many meteorological tasks. Pseudo-orbit-based data assimilation provides an attractive alternative approach to data assimilation in nonlinear systems such as weather forecasting models. In the perfect model scenario, noisy observations prevent a precise estimate of the current state. In this setting, ensemble Kalman filter approaches are hampered by their foundational assumptions of dynamical linearity, while variational approaches may fail in practice owing to local minima in their cost function. The pseudo-orbit data assimilation approach improves state estimation by enhancing the balance between the information derived from the dynamic equations and that derived from the observations. The potential use of this approach for numerical weather prediction is explored in the perfect model scenario within two deterministic chaotic systems: the two-dimensional Ikeda map and the 18-dimensional Lorenz96 flow. Empirical results demonstrate improved performance over that of the two most common traditional approaches of data assimilation (ensemble Kalman filter and four-dimensional variational assimilation).

1. Introduction

The quality of forecasts from dynamical nonlinear models depends both on the model and on the quality of the initial conditions. Even under the idealized conditions of a perfect model of a deterministic chaotic nonlinear system and with infinite past observations, uncertainty in the observations can make identification of the exact state impossible (Berliner 1991; Lalley 2000; Judd and Smith 2001). Such limitations make a single "best guess" prediction a suboptimal approach to state estimation, one that would be frustrated even in ideal cases. Alternatively, an ensemble of initial conditions can better reflect the inescapable uncertainty in the observations by capturing the sensitivity of each particular forecast. A major application of state estimation is in the production of analyses of the state of the atmosphere in order to initialize a numerical weather prediction (NWP) model (e.g., Lorenc 1986; Keppenne and Rienecker 2002; Kalnay 2003; Anderson et al. 2005; Houtekamer et al. 2005). This paper is concerned with the identification of the current state of a nonlinear chaotic system given a sequence of observations in the perfect model scenario (PMS).

Corresponding author address: Hailiang Du, Centre for the Analysis of Time Series, London School of Economics and Political Science, Houghton Street, London WC2A 2AE, United Kingdom. E-mail: [email protected]

The use of imperfect models is discussed elsewhere (Du and Smith 2014, hereafter Part II). In PMS, there are states that are consistent with the model's long-term dynamics and states that are not; in dissipative systems the consistent states are often said to "lie on the model's attractor." Intuitively, it makes sense in state estimation to distinguish those states that are not only consistent with the observations but also consistent with the model's long-term dynamics. The problem of state estimation in PMS is addressed by applying the pseudo-orbit data assimilation (PDA) approach (Judd and Smith 2001; Judd et al. 2008; Stemler and Judd 2009) to locate a reference trajectory (a model trajectory consistent with the observation sequence; Gilmour 1998; Smith et al. 2010) and by constructing an initial-condition ensemble that uses the model dynamics to sample the local state space. The PDA approach is shown to be more efficient¹ and robust in finding a reference trajectory than four-dimensional variational assimilation (4DVAR). The differences between PDA and 4DVAR are discussed. Ensemble Kalman filter (EnKF) approaches provide an alternative to 4DVAR.

¹ In high-dimensional space, sampling on or near the relevant low-dimensional manifold would be much more efficient than sampling a sphere in the entire space.

DOI: 10.1175/JAS-D-13-032.1 © 2014 American Meteorological Society





Illustrated here both on a low-dimensional map and on a higher-dimensional flow, PDA is demonstrated to outperform one of the many EnKF approaches, namely the ensemble adjustment Kalman filter (Anderson 2001, 2003). It is suggested that this is a general result.

The data assimilation problem of interest is defined and alternative approaches are reviewed in section 2. A full description of the methodology of the PDA approach is presented in section 3. In section 4, the 4DVAR approach and the PDA approach are contrasted using the two-dimensional Ikeda map and the 18-dimensional Lorenz96 flow. Comparisons between the EnKF approach and the PDA approach for the same two systems are made in section 5. Section 6 provides a brief summary and conclusions.

2. Problem description

The problem of state estimation is addressed within the perfect model scenario, focusing only on nonlinear deterministic dynamical systems.² Let $\tilde{x}_t \in \mathbb{R}^{\tilde{m}}$ be the state of a deterministic dynamical system at time $t \in \mathbb{Z}$.³ The evolution of the system is given by $F(\tilde{x}_t, \tilde{a}): \mathbb{R}^m \to \mathbb{R}^m$ with $\tilde{x}_{t+1} = F(\tilde{x}_t, \tilde{a})$, where F denotes the system dynamics that evolve the state forward in time in the system state space $\mathbb{R}^m$ and the system's parameters are contained in the vector $\tilde{a} \in \mathbb{R}^l$. In PMS, assume the mathematical structure of F is known while uncertainty in the values of $\tilde{a}$ may remain. Note that PMS does not require knowing the parameters of the system. Robust approaches have been proposed to identify the parameter values (e.g., Schittkowski 1994; McSharry and Smith 1999; Pisarenko and Sornette 2004; Tarantola 2004; Smith et al. 2010; Du and Smith 2012). This paper, however, considers only the strong case where both the model class (i.e., the mathematical structure of F) and the model parameter values $\tilde{a}$ are identical to those of the system. In PMS, the system state and the model state share the same state space and are thus "subtractable" (Smith 2006). Define the observation at time t to be $s_t = h(\tilde{x}_t) + \eta_t$, where $h(\cdot)$ is the observation operator that projects the true system state $\tilde{x}$ into observation space. For simplicity, $h(\cdot)$ is taken to be the identity operator below. Full observations are made; that is, observations are available for all state variables (all components of x) at every observation time.⁴
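To make this setup concrete, the following minimal sketch (Python with NumPy) generates a short truth trajectory and full noisy observations for the two-dimensional Ikeda map used later in the paper; the parameter value u = 0.9, the noise level, and the function names are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def ikeda(x, u=0.9):
    """One step of the two-dimensional Ikeda map (u = 0.9 is an assumed,
    commonly used chaotic parameter value; the paper's setting may differ)."""
    theta = 0.4 - 6.0 / (1.0 + x[0]**2 + x[1]**2)
    return np.array([1.0 + u * (x[0] * np.cos(theta) - x[1] * np.sin(theta)),
                     u * (x[0] * np.sin(theta) + x[1] * np.cos(theta))])

def make_observations(F, x0, n, noise_std, rng):
    """Evolve the system state with F and return full noisy observations
    s_t = x_t + eta_t, i.e., h is the identity operator."""
    truth = np.empty((n, len(x0)))
    truth[0] = x0
    for t in range(1, n):
        truth[t] = F(truth[t - 1])
    obs = truth + rng.normal(0.0, noise_std, size=truth.shape)
    return truth, obs

rng = np.random.default_rng(0)
truth, obs = make_observations(ikeda, np.array([0.5, 0.5]), n=64,
                               noise_std=0.05, rng=rng)
```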

² For linear systems, see Wiener (1949), Kalman (1960), and references therein.

³ In the perfect model scenario, the true state $\tilde{x}$ is in the same state space as the model state x and $\tilde{m} = m$. Motivation for the use of tildes in this context can be found in Smith (2002).


The $\eta_t \in \mathbb{R}^m$ represent observational noise (or "measurement error"). The statistical characteristics of the observational noise (i.e., the noise model) for $\eta_t$ are known exactly. The problem of state estimation in PMS consists of forming an ensemble estimate of the current state $\tilde{x}_0$ given (i) the history of observations $s_t$, for $t = -n+1, \ldots, 0$; (ii) a perfect model class with perfect parameter values; and (iii) knowledge of the observational noise model.

The various approaches that have been developed to address this problem divide into two major categories: sequential approaches and variational approaches. Sequential approaches can be built on the foundation of Bayesian theory, which conceptually generates a posterior distribution of the state variables by updating the prior distribution with new observations (Cohn 1997; Anderson and Anderson 1999). Unfortunately, application of the complete approach is computationally prohibitive in state estimation (Hamill 2006). An approximation to the Bayesian approach, the Kalman filter (Kalman 1960), is optimal only for linear models and Gaussian observational noise. To better address the state estimation problem in nonlinear cases, the extended Kalman filter (Jazwinski 1970; Gelb 1974; Ghil and Malanotte-Rizzoli 1991; Bouttier 1994) uses tangent linear dynamics to estimate the error covariances while assuming linear growth and normality of errors. The extended Kalman filter performs poorly where nonlinearity is pronounced; more accurate state estimates are available from ensemble Kalman filter approaches (Evensen 1994; Houtekamer and Mitchell 1998; Burgers et al. 1998; Lermusiaux and Robinson 1999; Anderson 2001; Bishop et al. 2001; Hamill et al. 2001), which have been developed using Monte Carlo⁵ techniques. While these filters admit non-Gaussian probability density functions, only the mean and covariance are updated in these sequential approaches; information beyond the second moment is discarded.

⁴ As shown elsewhere (Judd et al. 2008; Du 2009; Smith et al. 2010), various generalizations to partial observations can be made. The approach could be applied in operational weather forecasting following Judd et al. (2008), using the available 3DVAR analysis. The general case of partial observations will be considered elsewhere. In short, one could take a two-pass approach to PDA: first use background information (e.g., the climatological distribution) for the unobserved state variables, with the observations frozen, to obtain initial estimates of the unobserved state variables, and then apply full PDA as discussed below with those estimates and the original observed state variables. While interesting, discussion of this case is omitted here. Note there is some loss of generality in assuming full observations.

⁵ As stressed by a reviewer, not all ensemble Kalman filter approaches need be considered Monte Carlo approaches.




Stressing the impact of these linear assumptions, Lawson and Hansen (2004) demonstrated that inaccuracies in state estimation are to be expected if the dynamics are nonlinear, owing to higher moments of the distribution of the state variables. Another well-known sequential approach, the particle filter (Metropolis and Ulam 1944), is fully nonlinear in both the model evolution and analysis steps. It suffers, however, from the so-called curse of dimensionality, which prevents the particles from staying "close" to the observations in a high-dimensional system (Snyder et al. 2008).

Variational approaches [e.g., four-dimensional variational assimilation (Dimet and Talagrand 1986; Lorenc 1986; Talagrand and Courtier 1987; Courtier et al. 1994)], based upon maximum likelihood estimation, are shooting techniques that seek a set of initial conditions corresponding to a system trajectory that is consistent with a sequence of system observations; this is a strong constraint (Courtier et al. 1994; Bennett et al. 1996). In PMS, variational approaches are both computationally expensive and known to suffer from local minima owing to chaotic likelihoods (the likelihood function of the initial conditions is extremely jagged for chaotic systems). This was pointed out by Berliner (1991); see also Miller et al. (1994) and Pires et al. (1996).

The approach presented in this paper applies PDA to locate a reference trajectory and constructs an initial-condition ensemble by sampling the local state space. Khare and Smith (2011) applied a similar approach with a target of indistinguishable states (Q density) to form an initial-condition ensemble. More details about PDA and indistinguishable states can be found in Du (2009). State estimation outside PMS is discussed in Part II.
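For later contrast with the sequence-space cost function of section 3, the strong-constraint variational cost function referred to above can be written, in one common form using this paper's notation (a sketch that omits any background term and assumes an observational noise covariance R; the paper's exact formulation may differ), as

$J(x_{-n+1}) = \frac{1}{2}\sum_{t=-n+1}^{0} \bigl[F^{\,t+n-1}(x_{-n+1}) - s_t\bigr]^{\mathrm T} \mathbf{R}^{-1} \bigl[F^{\,t+n-1}(x_{-n+1}) - s_t\bigr],$

where $F^{k}$ denotes k applications of the model map. The minimization is over the single initial condition $x_{-n+1}$, whereas PDA (section 3) minimizes a mismatch over the entire sequence of states in the window.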

3. Pseudo-orbit data assimilation in sequence space

The analytic intractability of the relevant probability distributions and the dimension of the model state space suggest the adoption of a (Monte Carlo) ensemble scheme (Lorenz 1965; see also Smith 1996 and Leutbecher and Palmer 2008) to account for the uncertainty of the observations in state estimation. The ensemble approach is by far the most common means of quantifying uncertainty in operational weather forecasting (e.g., Toth and Kalnay 1993; Leutbecher and Palmer 2008). An algorithm may generate an ensemble directly, as with the particle filter and ensemble Kalman filters, or an ensemble may be generated from perturbations of a reference trajectory. The approach presented in this paper belongs in the second category. Of course, the quality of the state estimates will vary strongly with the quality of the reference trajectory (or trajectories).

The pseudo-orbit data assimilation approach (Judd and Smith 2001; Ridout and Judd 2002; Judd and Smith 2004; Stemler and Judd 2009) provides a reference model trajectory given a sequence of observations. A brief introduction to the PDA approach is given in the following paragraphs.

Let the dimension of the model state space be m and the number of observation times in the window be n. The sequence space is an $m \times n$ dimensional space in which a single point can be thought of as a particular series of n states $u_t$, $t = -n+1, \ldots, 0$. Here each $u_t$ is an m-dimensional vector. Some points in the sequence space are trajectories of the model and some are not. Define a pseudo orbit, $U \equiv \{u_{-n+1}, \ldots, u_{-1}, u_0\}$, to be a point in the $m \times n$ dimensional sequence space for which $u_{t+1} \neq F(u_t)$ for any⁶ component of U. This implies that U corresponds to a sequence of model states that is not a trajectory of the model. Define the mismatch as an $m \times (n-1)$ dimensional vector $(e_{-n+1}, \ldots, e_{-1})$, where the component of the mismatch at time t is $e_t = |F(u_t) - u_{t+1}|$, $t = -n+1, \ldots, -1$. By construction, model trajectories have a mismatch of zero. A gradient descent (GD) algorithm (details in the following paragraphs) can be used to minimize the sum of the squared mismatch errors. Define the mismatch cost function to be

$C(U) = \sum_{t=-n+1}^{-1} e_t^2$. (1)
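The mismatch and the cost function in Eq. (1) translate directly into code. A minimal sketch follows, assuming the pseudo orbit U is stored row-wise as an (n, m) array and F maps one m-dimensional state forward; the array layout and function names are illustrative, not from the paper.

```python
import numpy as np

def mismatch(U, F):
    """Mismatch components F(u_t) - u_{t+1}, t = -n+1, ..., -1,
    for a pseudo orbit U stored row-wise as an (n, m) array."""
    return np.array([F(U[t]) - U[t + 1] for t in range(len(U) - 1)])

def mismatch_cost(U, F):
    """Mismatch cost function C(U) of Eq. (1): sum of squared mismatches."""
    e = mismatch(U, F)
    return np.sum(e**2)
```

By construction, mismatch_cost returns zero (up to round-off) exactly when U is a model trajectory, while the observation-based pseudo orbit generally gives a nonzero value.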

The pseudo-orbit data assimilation approach minimizes the mismatch cost function for U in the $m \times n$ dimensional sequence space. The minimum of the mismatch cost function can be obtained by solving the ordinary differential equation

$dU/d\tau = -\nabla C(U)$, (2)

where $\tau$ denotes algorithmic time. An important advantage of PDA is that the minimization is done in the sequence space: information from across the assimilation window is used simultaneously. Let the elements of U corresponding to the model state at a given time be called a component of the pseudo orbit. PDA optimizes all components simultaneously. The observations themselves, projected into the model state space, define a pseudo orbit; call this the observation-based pseudo orbit, $S \equiv \{s_{-n+1}, \ldots, s_{-1}, s_0\}$. With probability 1, it will not be a trajectory.

⁶ Technically, the inequality need hold only for one pair of consecutive components in the sequence-space vector. Alternatively, one could define pseudo orbits so as to include trajectories; in this paper, this is not done.



In practice, the minimization is initialized with the observation-based pseudo orbit; that is, $^0U = S$, where the presuperscript 0 on U denotes the initial stage of the GD. After every iteration of the GD minimization, U will be updated. Recall that the pseudo orbit is a point in the sequence space; it is updated in the sense that, under the GD algorithm, it moves toward the manifold of trajectories.

All points on the trajectory manifold have zero mismatch (each is a trajectory), and only points on the trajectory manifold have zero mismatch. To iterate the algorithm, one needs to differentiate the mismatch cost function, given by

$\frac{\partial C(U)}{\partial u_t} = \begin{cases} 2\,\mathbf{DF}(u_t)^{\mathrm T}\,[F(u_t) - u_{t+1}], & t = -n+1, \\ 2\,\mathbf{DF}(u_t)^{\mathrm T}\,[F(u_t) - u_{t+1}] - 2\,[F(u_{t-1}) - u_t], & -n+1 < t < 0, \\ -2\,[F(u_{-1}) - u_0], & t = 0, \end{cases}$

where $\mathbf{DF}(u_t)$ denotes the Jacobian of F evaluated at $u_t$.
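A compact sketch of the gradient descent stage of PDA follows: Eq. (2) is discretized with a fixed Euler step in algorithmic time and started from the observation-based pseudo orbit $^0U = S$. For simplicity the gradient is approximated by finite differences rather than by the analytic expression above, and the step size, iteration count, and function names are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def grad_cost(U, F, eps=1e-6):
    """Finite-difference approximation to the gradient of the mismatch
    cost C(U) with respect to every component of the pseudo orbit U."""
    def cost(V):
        e = np.array([F(V[t]) - V[t + 1] for t in range(len(V) - 1)])
        return np.sum(e**2)
    g = np.zeros_like(U)
    base = cost(U)
    for idx in np.ndindex(*U.shape):
        V = U.copy()
        V[idx] += eps              # perturb one component of one state
        g[idx] = (cost(V) - base) / eps
    return g

def pda_descent(S, F, step=0.05, n_iter=2000):
    """Euler discretization of dU/dtau = -grad C(U) [Eq. (2)], initialized
    with the observation-based pseudo orbit 0U = S (rows are states s_t)."""
    U = np.array(S, dtype=float)
    for _ in range(n_iter):
        U -= step * grad_cost(U, F)
    return U
    # e.g., U_ref = pda_descent(obs, ikeda) with obs from the earlier sketch
```

Running the descent longer drives the mismatch cost toward zero, moving U toward (though, in finite algorithmic time, not exactly onto) the trajectory manifold.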