KPSS test for functional time series

Piotr Kokoszka∗ and Gabriel Young

Colorado State University

Abstract

Econometric and financial data often take the form of a collection of curves observed consecutively over time. Examples include intraday price curves, term structure curves, and intraday volatility curves. Such curves can be viewed as a time series of functions. A fundamental issue that must be addressed, before an attempt is made to statistically model or predict such a series, is whether it contains a random walk component. This paper extends the KPSS test to the setting of functional time series. We develop the form of the test statistic and propose two testing procedures: Monte Carlo and asymptotic. The limit distributions are derived, and the procedures are algorithmically described and illustrated by an application to yield curves and a simulation study.

JEL Classification: C12, C32.

Keywords: Functional data, Integrated time series, Random walk, Trend stationarity, Yield curve.

1 Introduction

Many econometric and financial data sets take the form of a time series of curves, or functions. The best known and most extensively studied data of this form are yield curves. Even though they are observed at discrete maturities, in financial theory they are viewed as continuous functions, one function per month or per day. The yield curves can thus be viewed as a time series of curves, a functional time series. Other examples include intraday price, volatility or volume curves. Intraday price curves are smooth; volatility and volume curves are noisy and must be smoothed before they can be effectively treated as curves.

∗Address for correspondence: Piotr Kokoszka, Department of Statistics, Colorado State University, Fort Collins, CO 80522, USA. E-mail: [email protected]

As with scalar and vector valued time series, it is important to describe the random structure of a functional time series. A fundamental question, which has received a great deal of attention in econometric research, is whether the time series has a random walk, or unit root, component. The present paper addresses this issue in the context of functional time series by extending the KPSS test of Kwiatkowski et al. (1992) and developing the required asymptotic theory.

The work of Kwiatkowski et al. (1992) was motivated by the fact that unit root tests developed by Dickey and Fuller (1979), Dickey and Fuller (1981), and Said and Dickey (1984) indicated that most aggregate economic series had a unit root. In these tests, the null hypothesis is that the series has a unit root. Since such tests have low power in samples of sizes occurring in many applications, Kwiatkowski et al. (1992) proposed that trend stationarity should be considered as the null hypothesis, and the unit root should be the alternative. Rejection of the null of trend stationarity could then be viewed as convincing evidence in favor of a unit root.

It was soon realized that the KPSS test of Kwiatkowski et al. (1992) has a much broader utility. For example, Lee and Schmidt (1996) and Giraitis et al. (2003) used it to detect long memory, with short memory as the null hypothesis; de Jong et al. (1997) developed a robust version of the KPSS test. The work of Lo (1991) is crucial because he observed that under temporal dependence, to obtain parameter free limit null distributions, statistics similar to the KPSS statistic must be normalized by the long run variance rather than by the sample variance. Likewise, there have been dozens of contributions that enhance unit root testing, see e.g. Cavaliere and Xu (2014) and Chambers et al. (2014), and references therein.
In the functional setting, the null hypothesis of trend stationarity is stated as follows:

(1.1)    H_0 : \quad X_n(t) = \mu(t) + n\xi(t) + \eta_n(t).
Note that the time index is n; it refers to day, month, quarter or year. The index t refers to "time" for each function. For example, for intraday price curves, t is time within a trading day, measured in minutes or at an even finer resolution. For yield curves, t does not correspond to physical time but to time until expiration, the maturity horizon of a bond. The functions μ and ξ correspond, respectively, to the intercept and slope. The errors η_n are also functions which model random departures of the observed functions X_n from a deterministic model. Under the alternative, the model contains a random walk component:

(1.2)    H_A : \quad X_n(t) = \mu(t) + n\xi(t) + \sum_{i=1}^{n} u_i(t) + \eta_n(t),

where u_1, u_2, \ldots are mean zero identically distributed random functions.
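The contrast between (1.1) and (1.2) is easy to simulate. The sketch below (Python with NumPy) generates one sample from each model; the mean, slope and error curves are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J = 200, 50                        # N curves, each on a grid of J points in [0, 1]
t = np.linspace(0.0, 1.0, J)

mu = np.sin(np.pi * t)                # intercept function mu(t) (illustrative choice)
xi = 0.01 * t                         # slope function xi(t) (illustrative choice)
n = np.arange(1, N + 1)[:, None]

eta = 0.3 * rng.standard_normal((N, J))   # iid error curves eta_n (simplest case)

# Null (1.1): trend-stationary functional time series
X_null = mu + n * xi + eta

# Alternative (1.2): add the functional random walk sum_{i<=n} u_i(t)
u = 0.3 * rng.standard_normal((N, J))
X_alt = mu + n * xi + np.cumsum(u, axis=0) + eta
```

Under (1.2) the curves wander increasingly far from the deterministic trend as n grows, which is the behavior the test is designed to detect.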

Our approach to testing exploits the ideas of functional data analysis (FDA), mostly those related to functional principal component expansions; the monographs of Bosq (2000), Ramsay and Silverman (2002) and Horváth and Kokoszka (2012) explain them in detail. Section 2 contains the prerequisites needed to understand our theory. Application of FDA methodology in an econometric context is not new. Among others, Kargin and Onatski (2008) studied prediction of yield curves, Müller et al. (2011) considered functional modeling of volatility, and Kokoszka et al. (2014) used a regression type model to explain the shapes of price curves. A contribution most closely related to the present work is Horváth et al. (2014a), who developed a test of level stationarity. Incorporating a possible trend changes the structure of the functional residuals and leads to different limit distributions. It also requires the asymptotic analysis of the long run variance of these residuals.

The remainder of the paper is organized as follows. Section 2 introduces the requisite background in FDA and states and explains the assumptions. Test statistics and their asymptotic distributions are described in Section 3. Section 4 contains algorithmic descriptions of the test procedures, their application to yield curves, and a small simulation study. Proofs of the theorems stated in Section 3 are developed in Section 5.

2 Definitions and assumptions

All random functions and deterministic functional parameters μ and ξ are assumed to be elements of the Hilbert space L^2 = L^2([0,1]) with the inner product \langle f, g \rangle = \int_0^1 f(t) g(t)\,dt. This means that the domain of all functional observations, e.g. of the daily price or yield curves, has been normalized to be the unit interval. If the limits of integration are omitted, integration is over the interval [0,1]. All random functions are assumed to be square integrable, i.e. E\|\eta_n\|^2 < \infty, E\|u_n\|^2 < \infty. Further background on random elements of L^2 is given in Chapter 2 of Horváth and Kokoszka (2012); a more extensive theoretical treatment is presented in Bosq (2000).

The partial sum process of the curves X_1(t), X_2(t), \ldots, X_N(t) is defined by

(2.1)    S_N(x,t) = \frac{1}{\sqrt{N}} \sum_{n=1}^{\lfloor Nx \rfloor} X_n(t), \quad 0 \le t, x \le 1,

and the partial sum process of the unobservable errors by

(2.2)    V_N(x,t) = \frac{1}{\sqrt{N}} \sum_{n=1}^{\lfloor Nx \rfloor} \eta_n(t), \quad 0 \le t, x \le 1.
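On a discrete grid, the partial sum processes (2.1) and (2.2) are just normalized cumulative sums. A minimal sketch (storing each curve as a row on a common grid is an assumption of the example):

```python
import numpy as np

def partial_sum_process(X):
    """Return the matrix S with rows S[k-1, :] = S_N(k/N, t) = N^{-1/2} sum_{n<=k} X_n(t),
    k = 1, ..., N, where X holds N curves on a common grid (shape N x J).
    This realizes (2.1); applied to error curves it realizes (2.2)."""
    N = X.shape[0]
    return np.cumsum(X, axis=0) / np.sqrt(N)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
S = partial_sum_process(X)            # S[k-1] approximates S_N(k/N, .)
```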

Kwiatkowski et al. (1992) assumed that the errors η_n are iid, but subsequent research extended their work to errors which form a stationary time series, see e.g. Giraitis et al. (2003) and references therein. In the case of scalar observations, temporal dependence can be quantified in many ways, e.g. via structural, mixing or cumulant conditions, and a large number of asymptotic results established under such assumptions can be used. For functional time series, the corresponding results are much fewer and fall into two categories: 1) those derived assuming a linear, ARMA type, structure, Bosq (2000); 2) those assuming a nonlinear moving average representation (Bernoulli shifts) with the decay of dependence specified by a moment condition. We will use a general nonparametric specification of the dependence structure which falls into the second category. It is quantified by the following assumption.

Assumption 2.1 The errors η_j are Bernoulli shifts, i.e. η_j = g(ε_j, ε_{j-1}, \ldots) for some measurable function g : S^∞ → L^2 and iid elements ε_j, −∞ < j < ∞, with values in a measurable space S. The functions (t, ω) \mapsto η_j(t, ω) are product measurable, Eη_0 = 0 in L^2, and E\|\eta_0\|^{2+\delta} < \infty for some 0 < δ < 1. The sequence {η_n}_{n=−∞}^{∞} can be approximated by ℓ-dependent sequences {η_{n,ℓ}}_{n=−∞}^{∞} in the sense that

(2.3)    \sum_{\ell=1}^{\infty} \left( E\|\eta_n - \eta_{n,\ell}\|^{2+\delta} \right)^{1/\kappa} < \infty \quad \text{for some } \kappa > 2 + \delta,

where η_{n,ℓ} is defined by η_{n,ℓ} = g(ε_n, ε_{n-1}, \ldots, ε_{n-ℓ+1}, ε^∗_{n-ℓ}, ε^∗_{n-ℓ-1}, \ldots), and the ε^∗_k are independent copies of ε_0, independent of {ε_i, −∞ < i < ∞}.

Assumption 2.1 has been shown to hold for all known models for temporally dependent functions, assuming the parameters of these models satisfy nonrestrictive conditions, see Hörmann and Kokoszka (2010, 2012), or Chapter 16 of Horváth and Kokoszka (2012). Its gist is that the dependence of the function g on the iid innovations ε_j far in the past decays so fast that these innovations can be replaced by their independent copies. Such a replacement is asymptotically negligible in the sense quantified by (2.3). For scalar time series, conditions similar in spirit were used by Pötscher and Prucha (1997), Wu (2005), Shao and Wu (2007) and Berkes et al. (2011), to name just a few references.

In this paper, Assumption 2.1 is needed to ensure that the partial sums (2.2) can be approximated by a two-parameter Gaussian process. In particular, (2.3) is not used directly; it is a condition used by Berkes et al. (2013) to prove Theorem 5.2. To establish the results of Section 3, one can, in fact, replace Assumption 2.1 by the conclusions of Theorem 5.2 and the existence of an estimator ĉ(t,s) such that

(2.4)    \iint \left[ \hat c(t,s) - c(t,s) \right]^2 dt\,ds \stackrel{P}{\to} 0, \quad \text{as } N \to \infty,

with the kernel c defined by (2.5). Assumption 2.1 is a general weak dependence condition under which these conclusions hold. To establish the consistency of the tests under H_A (1.2), exactly the same assumptions must be imposed on the random walk errors u_i. The error sequences η_n and u_i thus need not be iid, but merely stationary and weakly dependent. In summary, the conditions stated in Assumption 2.1 provide a nonrestrictive, nonparametric quantification of weak dependence of a sequence of functions which implies all technical properties needed in the proofs.

We now define the bivariate functions appearing in (2.4). The long-run covariance function of the errors η_n is defined as

(2.5)    c(t,s) = E\eta_0(t)\eta_0(s) + \sum_{i=1}^{\infty} \left( E\eta_0(t)\eta_i(s) + E\eta_0(s)\eta_i(t) \right).

The series defining the function c(t,s) converges in L^2([0,1] \times [0,1]), see Horváth et al. (2013). The function c(t,s) is positive definite. Therefore there exist eigenvalues λ_1 ≥ λ_2 ≥ \cdots ≥ 0, and orthonormal eigenfunctions φ_i(t), 0 ≤ t ≤ 1, satisfying

(2.6)    \lambda_i \phi_i(t) = \int c(t,s) \phi_i(s)\,ds, \quad 1 \le i < \infty.

To ensure that the φ_i corresponding to the d largest eigenvalues are uniquely defined (up to a sign), we assume that

(2.7)    \lambda_1 > \lambda_2 > \cdots > \lambda_d > \lambda_{d+1} > 0.

The eigenvalues λ_i play a crucial role in our tests. They are estimated by the sample, or empirical, eigenvalues defined by

(2.8)    \hat\lambda_i \hat\phi_i(t) = \int \hat c(t,s) \hat\phi_i(s)\,ds, \quad 1 \le i \le N,

where ĉ(·,·) is an estimator of (2.5). We use a kernel estimator similar to that introduced by Horváth et al. (2013), but with suitably defined residuals in place of the centered observations X_n. To define model residuals, consider the least squares estimators of the functional parameters ξ(t) and μ(t) in model (1.1):

(2.9)    \hat\xi(t) = \frac{1}{s_N} \sum_{n=1}^{N} \left( n - \frac{N+1}{2} \right) X_n(t)

with

(2.10)    s_N = \sum_{n=1}^{N} \left( n - \frac{N+1}{2} \right)^2

and

(2.11)    \hat\mu(t) = \bar X(t) - \hat\xi(t)\,\frac{N+1}{2}.

The functional residuals are therefore

(2.12)    e_n(t) = \left( X_n(t) - \bar X(t) \right) - \hat\xi(t) \left( n - \frac{N+1}{2} \right), \quad 1 \le n \le N.
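The estimators (2.9)-(2.11) and the residuals (2.12) amount to a pointwise-in-t least squares fit against the centered time index. A sketch, assuming the curves are stored row-wise on a common grid:

```python
import numpy as np

def detrend(X):
    """Least squares estimators (2.9)-(2.11) and residuals (2.12) for model (1.1).
    X has shape (N, J): N curves evaluated on a common grid of J points."""
    N = X.shape[0]
    w = np.arange(1, N + 1) - (N + 1) / 2.0   # centered time index n - (N+1)/2
    s_N = np.sum(w ** 2)                      # (2.10)
    xi_hat = (w @ X) / s_N                    # (2.9), computed pointwise in t
    mu_hat = X.mean(axis=0) - xi_hat * (N + 1) / 2.0   # (2.11)
    e = X - X.mean(axis=0) - np.outer(w, xi_hat)       # (2.12)
    return mu_hat, xi_hat, e
```

On a noiseless linear trend X_n(t) = μ(t) + nξ(t) the residuals vanish and both functional parameters are recovered exactly, which is a convenient sanity check.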

Defining their empirical autocovariances by

(2.13)    \hat\gamma_i(t,s) = \frac{1}{N} \sum_{j=i+1}^{N} e_j(t) e_{j-i}(s), \quad 0 \le i \le N-1,

leads to the kernel estimator

(2.14)    \hat c(t,s) = \hat\gamma_0(t,s) + \sum_{i=1}^{N-1} K\!\left( \frac{i}{h} \right) \left( \hat\gamma_i(t,s) + \hat\gamma_i(s,t) \right).

The following assumption is imposed on the kernel function K and the bandwidth h.

Assumption 2.2 The function K is continuous, bounded, K(0) = 1 and K(u) = 0 if |u| > c, for some c > 0. The smoothing bandwidth h = h(N) satisfies

(2.15)    h(N) \to \infty, \quad \frac{h(N)}{N} \to 0, \quad \text{as } N \to \infty.

The assumption that K vanishes outside a compact interval is not crucial to establish (2.4). It is a simplifying condition which could be replaced by a sufficiently fast decay condition, at the cost of technical complications in the proof of (2.4).

Recall that if {W(x), 0 ≤ x ≤ 1} is a standard Brownian motion (Wiener process), then the Brownian bridge is defined by B(x) = W(x) − xW(1), 0 ≤ x ≤ 1. The second-level Brownian bridge is defined by

(2.16)    V(x) = W(x) + \left( 2x - 3x^2 \right) W(1) + \left( -6x + 6x^2 \right) \int_0^1 W(y)\,dy, \quad 0 \le x \le 1.

Both the Brownian bridge and the second-level Brownian bridge are special cases of the generalized Brownian bridge introduced by MacNeill (1978), who studied the asymptotic behavior of partial sums of polynomial regression residuals. Process (2.16) appears as the null limit of the KPSS statistic of Kwiatkowski et al. (1992). We will see in Section 3 that for functional data the limit involves an infinite sequence of independent and identically distributed second-level Brownian bridges V_1(x), V_2(x), \ldots.
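The process (2.16) is easy to simulate from a random-walk approximation of W; a sketch (the grid size is an arbitrary choice of the example):

```python
import numpy as np

def second_level_bridge(n_grid, rng):
    """One path of the second-level Brownian bridge (2.16) on the grid x = k/n_grid,
    built from a random-walk approximation of the Wiener process W."""
    x = np.arange(1, n_grid + 1) / n_grid
    W = np.cumsum(rng.standard_normal(n_grid)) / np.sqrt(n_grid)
    int_W = np.mean(W)                               # approximates int_0^1 W(y) dy
    return W + (2 * x - 3 * x ** 2) * W[-1] + (-6 * x + 6 * x ** 2) * int_W

V = second_level_bridge(1000, np.random.default_rng(3))
```

Like the ordinary bridge, V is tied down at the right end: the coefficients in (2.16) force V(1) = 0 exactly.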

3 Test statistics and their limit distributions

Horváth et al. (2014a) developed tests of level-stationarity of a functional time series, i.e. of the null hypothesis X_n(t) = μ(t) + η_n(t), using the partial sum process

U_N(x,t) = \frac{1}{\sqrt N} \sum_{n=1}^{\lfloor Nx \rfloor} \left( X_n(t) - \bar X(t) \right) = S_N(x,t) - \frac{\lfloor Nx \rfloor}{N} S_N(1,t),

where S_N(x,t) is defined in (2.1). The process U_N(x,t) has the form of a functional Brownian bridge. Their main statistic

T_N = \iint U_N^2(x,t)\,dt\,dx = \int \|U_N(x,\cdot)\|^2 dx, \quad 0 \le t, x \le 1,

is asymptotically distributed, under the null, as \sum_{i=1}^{\infty} \lambda_i \int B_i^2(x)\,dx, where λ_1, λ_2, \ldots are the eigenvalues of the long-run covariance function of the observations X_n, and B_1, B_2, \ldots are iid Brownian bridges. In the case of trend stationarity, a different distribution arises; the B_i must be replaced by second-level Brownian bridges, and the λ_i are defined differently. The remainder of this section explains the details.

The test statistic for the trend-stationary case is based on the partial sum process of the residuals (2.12). Observe that

\frac{1}{\sqrt N} \sum_{n=1}^{\lfloor Nx \rfloor} e_n(t)
= \frac{1}{\sqrt N} \sum_{n=1}^{\lfloor Nx \rfloor} \left\{ \left( X_n(t) - \bar X(t) \right) - \hat\xi(t) \left( n - \frac{N+1}{2} \right) \right\}
= S_N(x,t) - \frac{\lfloor Nx \rfloor}{N} S_N(1,t) - \frac{\hat\xi(t)}{2\sqrt N}\,\lfloor Nx \rfloor \left( \lfloor Nx \rfloor - N \right)
= S_N(x,t) - \frac{\lfloor Nx \rfloor}{N} S_N(1,t) - N^{3/2} \hat\xi(t)\,\frac{1}{2}\,\frac{\lfloor Nx \rfloor}{N} \left( \frac{\lfloor Nx \rfloor}{N} - 1 \right).

A suitable test statistic is therefore given by

(3.1)    R_N = \iint Z_N^2(x,t)\,dt\,dx = \int \|Z_N(x,\cdot)\|^2 dx, \quad 0 \le t, x \le 1,

where

(3.2)    Z_N(x,t) = S_N(x,t) - \frac{\lfloor Nx \rfloor}{N} S_N(1,t) - N^{3/2} \hat\xi(t)\,\frac{1}{2}\,\frac{\lfloor Nx \rfloor}{N} \left( \frac{\lfloor Nx \rfloor}{N} - 1 \right),

and S_N(x,t) and ξ̂(t) are defined, respectively, in equations (2.1) and (2.9). The null limit distribution of test statistic (3.1) is given in the following theorem.
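Given curves on a grid, Z_N and R_N in (3.1)-(3.2) reduce to cumulative sums and averages. A sketch, in which the double integral is approximated by a mean over both grids (an assumption of the discretization):

```python
import numpy as np

def statistic_R_N(X):
    """Test statistic (3.1) computed from Z_N in (3.2) on the grid x = k/N.
    X holds N curves on a common grid of J points (shape N x J)."""
    N = X.shape[0]
    w = np.arange(1, N + 1) - (N + 1) / 2.0
    xi_hat = (w @ X) / np.sum(w ** 2)                 # slope estimator (2.9)
    S = np.cumsum(X, axis=0) / np.sqrt(N)             # S_N(k/N, t), see (2.1)
    frac = (np.arange(1, N + 1) / N)[:, None]         # floor(Nx)/N at x = k/N
    Z = S - frac * S[-1] - N ** 1.5 * xi_hat * 0.5 * frac * (frac - 1.0)
    return np.mean(Z ** 2)                            # approximates the double integral
```

Equivalently, Z_N is the partial sum process of the residuals (2.12) divided by √N, which gives a quick consistency check on the implementation.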

Theorem 3.1 If Assumption 2.1 holds, then under null model (1.1),

R_N \stackrel{D}{\to} \sum_{i=1}^{\infty} \lambda_i \int V_i^2(x)\,dx,

where λ_1, λ_2, \ldots are the eigenvalues of the long-run covariance function (2.5), and V_1, V_2, \ldots are iid second-level Brownian bridges.

The proof of Theorem 3.1 is given in Section 5. We now explain the issues arising in the functional case by comparing our result to that obtained by Kwiatkowski et al. (1992). If all curves are constant functions (X_i(t) = X_i for t ∈ [0,1]), the statistic R_N given by (3.1) is the numerator of the KPSS test statistic of Kwiatkowski et al. (1992), which is given by

\mathrm{KPSS}_N = \frac{1}{N^2 \hat\sigma_N^2} \sum_{n=1}^{N} S_n^2 = \frac{R_N}{\hat\sigma_N^2},

where σ̂²_N is a consistent estimator of the long-run variance σ² of the residuals. In the scalar case, Theorem 3.1 reduces to R_N \stackrel{d}{\to} \sigma^2 \int_0^1 V^2(x)\,dx, where V(x) is a second-level Brownian bridge. If σ̂²_N is a consistent estimator of σ², the result of Kwiatkowski et al. (1992) is recovered, i.e. \mathrm{KPSS}_N \stackrel{d}{\to} \int_0^1 V^2(x)\,dx.

In the functional case, the eigenvalues λ_i can be viewed as long-run variances of the residual curves along the principal directions determined by the eigenfunctions of the kernel c(·,·) defined by (2.5). To obtain a test analogous to the scalar KPSS test, with a parameter free limit null distribution, we must construct a statistic which involves a division by consistent estimators of the λ_i. We use only the d largest eigenvalues in order not to increase the variability of the statistic caused by division by small empirical eigenvalues. A suitable statistic is

(3.3)    R_N^0 = \sum_{i=1}^{d} \frac{1}{\hat\lambda_i} \int_0^1 \langle Z_N(x,\cdot), \hat\phi_i \rangle^2 dx,

where the sample eigenvalues λ̂_i and eigenfunctions φ̂_i are defined by (2.8). Statistic (3.3) extends the statistic KPSS_N. Its limit distribution is given in the next theorem.

Theorem 3.2 If Assumptions 2.1, 2.2 and (2.7) hold, then under null model (1.1),

R_N^0 \stackrel{D}{\to} \sum_{i=1}^{d} \int_0^1 V_i^2(x)\,dx,

with the V_i, 1 ≤ i ≤ d, the same as in Theorem 3.1.
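The normalized statistic (3.3) projects Z_N(x, ·) on the leading estimated eigenfunctions. A sketch, assuming the eigenvalues lam and orthonormal eigenfunctions phi (rows on a uniform grid) have already been obtained from the long-run covariance estimator:

```python
import numpy as np

def statistic_R_N0(X, lam, phi):
    """Statistic (3.3): sum over i <= d of (1/lambda_i) * int <Z_N(x,.), phi_i>^2 dx.
    X: N x J curves; lam: d eigenvalues; phi: d x J eigenfunctions assumed
    orthonormal in L2[0,1] under the grid quadrature weight 1/J."""
    N, J = X.shape
    w = np.arange(1, N + 1) - (N + 1) / 2.0
    xi_hat = (w @ X) / np.sum(w ** 2)
    e = X - X.mean(axis=0) - np.outer(w, xi_hat)      # residuals (2.12)
    Z = np.cumsum(e, axis=0) / np.sqrt(N)             # Z_N(k/N, .), cf. (3.2)
    scores = Z @ phi.T / J                            # <Z_N(x,.), phi_i> by quadrature
    return float(np.sum(np.mean(scores ** 2, axis=0) / lam))
```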


Theorem 3.2 is proven in Section 5. Here we only note that the additional Assumption 2.2 is needed to ensure that (2.4) holds, which is known to imply \hat\lambda_i \stackrel{P}{\to} \lambda_i, 1 ≤ i ≤ d.

We conclude this section by discussing the consistency of the tests based on the above theorems. Theorem 3.3 implies that under H_A the statistic R_N of Theorem 3.1 increases like N^2. The critical values increase at a rate not greater than N. The test based on Theorem 3.1 is thus consistent. The exact asymptotic behavior under H_A of the normalized statistic R_N^0 appearing in Theorem 3.2 is more difficult to study due to the almost intractable asymptotics (under H_A) of the empirical eigenvalues and eigenfunctions of the kernel ĉ(·,·). The precise asymptotic behavior under H_A is not known even in the scalar case, i.e. for the statistic KPSS_N. We therefore focus on the asymptotic limit under H_A of the statistic R_N, whose derivation is already quite complex. This limit involves iid copies of the process

(3.4)    \Delta(x) = \int_0^x W(y)\,dy + (3x^2 - 4x) \int_0^1 W(y)\,dy + (-6x^2 + 6x) \int_0^1 y W(y)\,dy, \quad 0 \le x \le 1,

where W(·) is a standard Brownian motion.

Theorem 3.3 If the errors u_i satisfy Assumption 2.1, then under the alternative (1.2),

\frac{1}{N^2} R_N \stackrel{D}{\to} \sum_{i=1}^{\infty} \tau_i \int_0^1 \Delta_i^2(x)\,dx,

where R_N is the test statistic defined in (3.1) and Δ_1, Δ_2, \ldots are iid processes defined by (3.4). The weights τ_i are the eigenvalues of the long-run covariance kernel of the errors u_i, defined analogously to (2.5) by

(3.5)    c_u(t,s) = E[u_0(t)u_0(s)] + \sum_{l=1}^{\infty} E u_0(t) u_l(s) + \sum_{l=1}^{\infty} E u_0(s) u_l(t).

The proof of Theorem 3.3 is given in Section 5.

4 Implementation and application to yield curves

We begin this section with algorithmic descriptions of the tests based on Theorems 3.1 and 3.2.

Algorithm 4.1 [Monte Carlo test based on Theorem 3.1]

1. Estimate the null model (1.1) and compute the residuals defined in equation (2.12).

2. Select a kernel K and a bandwidth h in (2.14) and compute the eigenvalues λ̂_i and eigenfunctions φ̂_i, 1 ≤ i ≤ N, defined by (2.8).

Table 1: Critical values of the distribution of the variable R^0(d) given by (4.1).

          Size                            Size
 d     10%     5%      1%        d     10%     5%      1%
 1   0.1201  0.1494  0.2138      6   0.5347  0.5909  0.6960
 2   0.2111  0.2454  0.3253      7   0.6150  0.6687  0.7799
 3   0.2965  0.3401  0.4257      8   0.6892  0.7482  0.8574
 4   0.3789  0.4186  0.5149      9   0.7646  0.8252  0.9487
 5   0.4576  0.5068  0.6131     10   0.8416  0.9010  1.0326

3. Simulate a large number, say G = 10,000, of vectors [V_1, V_2, \ldots, V_N] consisting of independent second-level Brownian bridge processes V_i defined in (2.16). Find the 95th percentile, R_{critical}, of the G replications of

R_N^\star = \sum_{i=1}^{N} \hat\lambda_i \int_0^1 V_i^2(x)\,dx.

4. Compute the test statistic R_N defined in (3.1). If R_N ≥ R_{critical}, reject H_0 at the 5% significance level.

In most applications, the λ̂_i decay very quickly to zero, so if N is large, it can be replaced in Algorithm 4.1 by a smaller number, e.g. by d = 20, and the empirical distribution of the R_N^\star can be replaced by that of the R_d^\star. In Algorithm 4.1 the critical value must be obtained via Monte Carlo simulations for each data set. In Algorithm 4.2, tabulated critical values can be used. These depend on the number d of the functional principal components used to construct the statistic R_N^0. Typically d is a small, single digit, number. Table 1 lists selected critical values. They have been obtained by simulating G = 10,000 vectors [V_1, V_2, \ldots, V_d] and finding the percentiles of the G replications of

(4.1)    R^0(d) = \sum_{i=1}^{d} \int_0^1 V_i^2(x)\,dx.
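The tabulation of R^0(d) in (4.1) can be reproduced by Monte Carlo. A sketch with a smaller G than in the paper (G, the grid size and the seed are assumptions of the example):

```python
import numpy as np

def crit_value_R0(d, level=0.95, G=2000, n_grid=400, seed=6):
    """Monte Carlo critical value of R^0(d) = sum_{i<=d} int_0^1 V_i^2(x) dx, with
    the V_i simulated second-level Brownian bridges (2.16); cf. Table 1."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n_grid + 1) / n_grid
    reps = np.empty(G)
    for g in range(G):
        total = 0.0
        for _ in range(d):
            W = np.cumsum(rng.standard_normal(n_grid)) / np.sqrt(n_grid)
            V = W + (2 * x - 3 * x ** 2) * W[-1] + (-6 * x + 6 * x ** 2) * np.mean(W)
            total += np.mean(V ** 2)          # int_0^1 V_i^2(x) dx by quadrature
        reps[g] = total
    return float(np.quantile(reps, level))
```

With a large G the 5% value for d = 1 should settle near the 0.1494 entry of Table 1; the modest G used here gives a rougher approximation.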

Algorithm 4.2 [Asymptotic test based on Theorem 3.2]

1. Perform steps 1 and 2 of Algorithm 4.1.

2. Choose the smallest d such that \sum_{i \le d} \hat\lambda_i / \sum_{i \le N} \hat\lambda_i > 0.85.

3. Calculate the statistic R_N^0 given by (3.3) and reject H_0 if R_N^0 > R_{critical}^0, with the critical value given in Table 1.

In some applications, Step 2 of Algorithm 4.2 may be replaced by a selection of d based on a visual fit of the truncated principal component expansion

X_n^{(d)}(t) = \hat\mu(t) + \sum_{j=1}^{d} \langle X_n, \hat\phi_j \rangle \hat\phi_j(t)

to the observed curves X_n(t). In other applications, existing theory or experience may support certain choices of d. This is the case for the yield curves, which we use to illustrate the application of our tests. In both algorithms, testing at a fixed level can be replaced by the calculation of P-values based on the Monte Carlo distributions of R_N^\star or R^0(d).

An important step is the choice of the bandwidth h needed to estimate the long run covariance function. A great deal of research in this direction has been done for scalar and vector time series. For functional time series, the method proposed by Horváth et al. (2014b) often gives good results. It uses the flat top kernel

(4.2)    K(t) = \begin{cases} 1, & 0 \le |t| < 0.1, \\ 1.1 - |t|, & 0.1 \le |t| < 1.1, \\ 0, & |t| \ge 1.1, \end{cases}

advocated by Politis and Romano (1996, 1999) and Politis (2011), and a data driven selection of h. This method performs well if the series length N is larger than several hundred, longer than the series we consider. In our data example, a deterministic bandwidth h = N^{2/5} (combined with the flat top kernel) produced good size and power. The optimal selection of h is not a focus of this paper; this complex issue must be investigated in a separate work.

Intended merely as an illustration rather than an extensive empirical study, we apply both tests to yield curves. We consider a time series of daily United States Federal Reserve yield curves constructed from discrete rates at maturities of 1, 3, 6, 12, 24, 36, 60, 84, 120 and 360 months. Yield curves are discussed in many finance textbooks, see e.g. Chapter 10 of Campbell et al. (1997) or Diebold and Rudebusch (2013). Figure 1 shows five consecutive yield curves. Following the usual practice, each yield curve is treated as a single functional observation, and so the yield curves observed over a period of many days form a functional time series. Figure 2 shows the sample period we study, which covers 100 consecutive trading days. It shows a downward trend in interest rates, and we want to test if these curves also contain a random walk component. The tests were performed using d = 2. The first two principal components of ĉ explain over 95% of the variance and provide an excellent visual fit. Our selection thus uses three principal shapes to describe the yield curves, the mean function

Figure 1: Five consecutive yield curves (Monday, Sept 15th, through Friday, Sept 19th; rates plotted for each trading day).

and the first two principal components. It is in agreement with recent approaches to modeling the yield curve, cf. Hays et al. (2012) and Diebold and Rudebusch (2013), which are based on the three component Nelson-Siegel model.

We first apply both tests to the time series of yield curves shown in Figure 2, N = 100. The test based on the statistic R_N, Algorithm 4.1, yields a P-value of 0.0282, and the test based on R_N^0, Algorithm 4.2, of 0.0483, indicating the presence of a random walk in addition to a downward trend. Extending the sample by adding the next 150 business days, so that N = 250, yields the respective P-values 0.0005 and 0.0013. In all computations the bandwidth h = N^{2/5} has been used. Examination of different periods shows that trend stationarity does not hold if the period under observation is sufficiently long. This agrees with the empirical finding of Chen and Niu (2014), whose method of yield curve prediction, based on utilizing periods of approximate stationarity, performs better than predictions based on the whole sample; a random walk is not predictable. Even though our tests are motivated by the alternative of a random walk component, they reject any serious violation of trend stationarity. Broadly speaking, our analysis shows that daily yield curves can be treated as a trend stationary functional time series only over short periods of time, generally not longer than a calendar quarter.

We complement our data example with a small simulation study. There is a multitude of data generating processes that could be used. The following quantities could vary: the shapes of the mean and the principal component functions, the magnitude of the eigenvalues, the distribution of the scores and their dependence structure. In this paper, concerned chiefly with theory, we merely want to present a very limited simulation study that validates the

Figure 2: The plot shows N = 100 yield curves for each business day from July 25 to December 14, 2006.

conclusions of the data example. We therefore attempt to simulate curves whose shapes resemble those of the real data, and for which either the null or the alternative holds. The artificial data is therefore generated according to the following algorithm.

Algorithm 4.3 [Yield curve simulation under H_0]

1. Using the real yield curves, calculate the estimates ξ̂(t) and μ̂(t) defined, respectively, by (2.9) and (2.11). Then compute the residuals e_n(t) defined in (2.12).

2. Calculate the first two empirical principal components φ̂_1(t) and φ̂_2(t) using the empirical covariance function

(4.3)    \hat\gamma_0(t,s) = \frac{1}{N} \sum_{n=1}^{N} (e_n(t) - \bar e(t))(e_n(s) - \bar e(s)).

This step leads to the approximation

e_n(t) \approx a_{1,n} \hat\phi_1(t) + a_{2,n} \hat\phi_2(t), \quad n = 1, 2, \ldots, N,

where a_{1,n} and a_{2,n} are the first two functional scores. The functions φ̂_1(t) and φ̂_2(t) are treated as deterministic, while the scores a_{1,n} and a_{2,n} form random sequences indexed by n.

3. To simulate temporally independent residuals e_n, generate scores a'_{1,n} ~ N(0, σ²_{a_1}) and a'_{2,n} ~ N(0, σ²_{a_2}), independent in n, where σ²_{a_1} and σ²_{a_2} are the sample variances of the real scores, and set

e'_n(t) = a'_{1,n} \hat\phi_1(t) + a'_{2,n} \hat\phi_2(t), \quad n = 1, 2, \ldots, N.

To simulate dependent residual curves, generate scores a'_{1,n}, a'_{2,n} ~ AR(1), where each autoregressive process has parameter 0.5.

4. Using the estimated functional parameters μ̂(t), ξ̂(t) and the simulated residuals e'_n(t), construct the simulated data set

(4.4)    X'_n(t) = \hat\mu(t) + \hat\xi(t)\,n + e'_n(t), \quad n = 1, 2, \ldots, N.
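Steps 3-4 of Algorithm 4.3 can be sketched as follows; here the mean, slope, principal components and score standard deviations are stand-ins for the quantities that would be estimated from the real yield curves in steps 1-2.

```python
import numpy as np

def simulate_H0(mu_hat, xi_hat, phi, score_sd, N, ar=0.0, rng=None):
    """Steps 3-4 of Algorithm 4.3: draw score sequences (iid when ar = 0, AR(1)
    with parameter ar otherwise), rebuild residual curves from the two principal
    components phi (2 x J), and assemble the sample (4.4)."""
    rng = np.random.default_rng() if rng is None else rng
    shocks = rng.standard_normal((N, 2)) * score_sd
    scores = np.empty((N, 2))
    scores[0] = shocks[0]
    for n in range(1, N):
        scores[n] = ar * scores[n - 1] + shocks[n]
    e = scores @ phi                              # e'_n = a'_{1,n} phi_1 + a'_{2,n} phi_2
    n_idx = np.arange(1, N + 1)[:, None]
    return mu_hat + xi_hat * n_idx + e            # (4.4)

# Illustrative stand-ins for the estimated quantities (assumptions of the example)
t = np.linspace(0.0, 1.0, 10)
phi = np.vstack([np.sqrt(2) * np.sin(np.pi * t), np.sqrt(2) * np.sin(2 * np.pi * t)])
X_sim = simulate_H0(4.0 + t, -0.01 * (1 + t), phi, np.array([0.5, 0.2]),
                    N=100, ar=0.5, rng=np.random.default_rng(8))
```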

Table 2 shows empirical sizes based on 1000 replications of the data generating process described in Algorithm 4.3. We use two ways of estimating the eigenvalues and eigenfunctions. The first one uses the function γ̂_0 defined by (4.3) (in the scalar case this corresponds to using the usual sample variance rather than estimating the long-run variance). The second uses the estimated long-run covariance function (2.14) with the bandwidth h specified in Table 2. The covariance kernel γ̂_0(t,s) is appropriate for independent error curves. The bandwidth h = N^{1/3} is too small: not enough temporal dependence is absorbed. The bandwidth h = N^{2/5} gives fairly consistent empirical size, typically within one percent of the nominal size. The bandwidth h is not relevant when the kernel γ̂_0 is used; the different empirical sizes in those columns reflect random variability due to three different sets of 1000 replications being used. This indicates that with 1000 replications, a difference of one percent in empirical sizes is not significant.

Table 2: Empirical sizes for functional time series generated using Algorithm 4.3.

Test statistic:                    R_N                               R_N^0
DGP:                  iid normal  iid normal  AR(1)      iid normal  iid normal  AR(1)
Cov-kernel:           γ̂_0(t,s)    ĉ(t,s)      ĉ(t,s)     γ̂_0(t,s)    ĉ(t,s)      ĉ(t,s)

  N      h
  100  N^{1/3}            6.3         5.6       9.4          5.9        5.2        9.1
  100  N^{2/5}            5.6         4.4       6.6          5.8        3.6        6.5
  100  N^{1/2}            5.1         4.8       3.5          4.5        5.1        2.9
  250  N^{1/3}            5.0         4.3      10.2          5.8        5.2        9.4
  250  N^{2/5}            5.5         4.9       7.2          4.5        4.1        5.6
  250  N^{1/2}            5.5         5.9       4.3          4.8        3.4        3.5
 1000  N^{1/3}            4.8         4.2       7.0          5.9        5.6        7.1
 1000  N^{2/5}            6.1         6.3       6.3          6.0        5.1        5.7
 1000  N^{1/2}            5.8         4.9       4.6          5.6        4.7        3.9

To evaluate power, instead of (4.4), the data generating process is

(4.5)    X'_n(t) = \hat\mu(t) + \hat\xi(t)\,n + \sum_{i=1}^{n} u_i(t) + e'_n(t), \quad n = 1, 2, \ldots, N,

where the increments u_i are defined by

u_i(t) = a N_{i1} \sin(\pi t) + a N_{i2} \sin(2\pi t),

with standard normal N_{ij}, j = 1, 2, 1 ≤ i ≤ N, totally independent of each other. The scalar a quantifies the distance from H_0; a = 0 corresponds to H_0. For all empirical power simulations, we use a 5% size critical value and h = N^{2/5}. The empirical power reported in Table 3 increases as the sample size and the distance from H_0 increase. It is visibly higher for iid curves as compared to dependent curves.

Table 3: Empirical power based on the DGP (4.5) and h = N^{2/5}.

Test statistic:                    R_N                               R_N^0
DGP:                  iid normal  iid normal  AR(1)      iid normal  iid normal  AR(1)
Cov-kernel:           γ̂_0(t,s)    ĉ(t,s)      ĉ(t,s)     γ̂_0(t,s)    ĉ(t,s)      ĉ(t,s)

   a      N
 0.1    125             100.0       89.9      10.1        100.0       87.9       10.4
 0.1    250             100.0       97.0      27.7        100.0       96.0       21.9
 0.5    125             100.0       91.5      83.1        100.0       89.7       71.2
 0.5    250             100.0       97.3      96.4        100.0       97.4       92.4

5 Proofs of the results of Section 3

5.1 Preliminary results

For ease of reference, we state in this section two theorems which are used in the proofs of the results of Section 3. Theorem 5.1 is well-known, see Theorem 4.1 in Billingsley (1968). Theorem 5.2 was recently established in Berkes et al. (2013).

Theorem 5.1 Suppose Z_N, Y_N, Y are random variables taking values in a separable metric space with the distance function ρ. If Y_N \stackrel{D}{\to} Y and \rho(Z_N, Y_N) \stackrel{P}{\to} 0, then Z_N \stackrel{D}{\to} Y.


In our setting, we work in the metric space D([0,1], L^2), which is the space of right-continuous functions with left limits taking values in L^2([0,1]). A generic element of D([0,1], L^2) is z = {z(x,t), 0 ≤ x ≤ 1, 0 ≤ t ≤ 1}. For each fixed x, z(x,·) ∈ L^2, so \|z(x,\cdot)\|^2 = \int z^2(x,t)\,dt < \infty. The uniform distance between z_1, z_2 ∈ D([0,1], L^2) is

\rho(z_1, z_2) = \sup_{0 \le x \le 1} \|z_1(x,\cdot) - z_2(x,\cdot)\| = \sup_{0 \le x \le 1} \left\{ \int (z_1(x,t) - z_2(x,t))^2 dt \right\}^{1/2}.

In the following, we work with the space D([0,1], L^2) equipped with the uniform distance.

Theorem 5.2 If Assumption 2.1 holds, then \sum_{i=1}^{\infty} \lambda_i < \infty, and we can construct a sequence of Gaussian processes Γ_N(x,t) such that for every N

(5.1)    \{\Gamma_N(x,t), 0 \le x, t \le 1\} \stackrel{D}{=} \{\Gamma(x,t), 0 \le x, t \le 1\}, \quad \text{where } \Gamma(x,t) = \sum_{i=1}^{\infty} \lambda_i^{1/2} W_i(x) \phi_i(t),

and

(5.2)    \kappa_N = \sup_{0 \le x \le 1} \|V_N(x,\cdot) - \Gamma_N(x,\cdot)\| = o_p(1).

Recall that the W_i are independent standard Wiener processes, λ_i and φ_i are defined in (2.6), and V_N(x,t) is defined in (2.2).

5.2 Proof of Theorem 3.1

The proof of Theorem 3.1 is constructed from several lemmas decomposing the statistic R_N into a form suitable for the application of the results of Section 5.1, i.e. into leading and asymptotically negligible terms. Throughout this section, we assume that the null model (1.1) and Assumption 2.1 hold.

Lemma 5.1 For s_N defined in (2.10),

\frac{s_N}{N^3} \to \frac{1}{12}, \quad \text{as } N \to \infty.

Proof: Identify the left-hand side with a Riemann sum. □

Lemma 5.2 For the functional slope estimate ξ̂ defined by (2.9),

\hat\xi(t) - \xi(t) = \frac{1}{s_N} \sum_{n=1}^{N} \left( n - \frac{N+1}{2} \right) \eta_n(t).

Proof: Use the identities

\sum_{n=1}^{N} \left( n - \frac{N+1}{2} \right) = 0, \qquad \sum_{n=1}^{N} \left( n - \frac{N+1}{2} \right) n = \sum_{n=1}^{N} \left( n - \frac{N+1}{2} \right)^2 = s_N. \qquad □

Lemma 5.3 The following identity holds N N 3/2 X  N + 1 1 n − ηn (t) = −3 sN n=1 2 N sN

( ) N −1 N − 1 1 X k  VN (1, t) − ,t , VN 2N N k=1 N

where VN (x, t) is the partial sum process of the errors defined in (2.2). Proof: Notice that N N 3/2 X  N + 1 n− ηn (t) sN n=1 2 ( ) N N N + 1 1 X 1 1 1 X √ √ = −3 nηn (t) − ηn (t) . N sN N N n=1 2N N n=1

Using the relation N X n=1

nηn (t) = N

N X

ηn (t) −

k N −1 X X

ηn (t),

k=1 n=1

n=1

we have N N 3/2 X  N + 1 n− ηn (t) sN n=1 2 ( N N −1 X k  X 1 1 1  X √ = −3 N ηn (t) − ηn (t) N sN N N n=1 k=1 n=1 ) N N + 1 1 X √ − ηn (t) 2N N n=1 ( N N −1 k 1 X 1 1 X 1 X √ √ ηn (t) − ηn (t) = −3 N sN N k=1 N n=1 N n=1 ) N N + 1 1 X √ − ηn (t) 2N N n=1 ( ) N −1 N − 1 1 X k  1 VN (1, t) − VN ,t . = −3 N sN 2N N k=1 N

 17

Lemma 5.4 The process $Z_N(x,t)$ defined by (3.2) admits the decomposition
$$
Z_N(x,t) = V_N(x,t) - \frac{\lfloor Nx \rfloor}{N} V_N(1,t)
- \frac{1}{2} \cdot \frac{1}{N^{-3} s_N} \Big\{ \frac{N-1}{2N} V_N(1,t)
- \frac{1}{N} \sum_{k=1}^{N-1} V_N\Big(\frac{k}{N}, t\Big) \Big\}
\frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big).
$$
Proof: Notice that
$$
Z_N(x,t) = S_N(x,t) - \frac{\lfloor Nx \rfloor}{N} S_N(1,t)
- \frac{1}{2} N^{3/2} \hat\xi(t) \frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big),
$$
with $X_n(t) = \mu(t) + \xi(t) n + \eta_n(t)$ under (1.1). Substituting the partial sums of the $X_n$ and using
$$
\mu(t) \frac{\lfloor Nx \rfloor}{\sqrt N} - \frac{\lfloor Nx \rfloor}{N} \mu(t) \frac{N}{\sqrt N} = 0, \qquad
\frac{\xi(t)}{\sqrt N} \frac{\lfloor Nx \rfloor (\lfloor Nx \rfloor + 1)}{2}
- \frac{\lfloor Nx \rfloor}{N} \frac{\xi(t)}{\sqrt N} \frac{N(N+1)}{2}
= \frac{1}{2} N^{3/2} \xi(t) \frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big),
$$
we obtain
$$
Z_N(x,t) = \frac{1}{\sqrt N} \sum_{n=1}^{\lfloor Nx \rfloor} \eta_n(t)
- \frac{\lfloor Nx \rfloor}{N} \cdot \frac{1}{\sqrt N} \sum_{n=1}^{N} \eta_n(t)
- \frac{1}{2} N^{3/2} \big( \hat\xi(t) - \xi(t) \big)
\frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big).
$$
By Lemma 5.2,
$$
N^{3/2} \big( \hat\xi(t) - \xi(t) \big)
= \frac{N^{3/2}}{s_N} \sum_{n=1}^{N} \Big( n - \frac{N+1}{2} \Big) \eta_n(t),
$$
and the claim follows from Lemma 5.3. $\Box$

Lemma 5.5 The following convergence holds
$$
\int \Big\{ \frac{1}{N} \sum_{k=1}^{N} V_N\Big(\frac{k}{N}, t\Big) - \int_0^1 \Gamma_N(y,t)\, dy \Big\}^2 dt \stackrel{P}{\to} 0,
$$
where the $\Gamma_N$ are the Gaussian processes of Theorem 5.2.

Proof: Since $V_N(\cdot, t)$ is a step function with jumps at $y = k/N$,
$$
\frac{1}{N} \sum_{k=1}^{N} V_N\Big(\frac{k}{N}, t\Big) = \int_0^1 V_N(y,t)\, dy.
$$
We must thus show that
$$
\Big\| \int_0^1 V_N(y,\cdot)\, dy - \int_0^1 \Gamma_N(y,\cdot)\, dy \Big\| \stackrel{P}{\to} 0.
$$
Using the contractive property of integrals and relation (5.2), we have
$$
\Big\| \int_0^1 V_N(y,\cdot)\, dy - \int_0^1 \Gamma_N(y,\cdot)\, dy \Big\|
\le \int_0^1 \| V_N(y,\cdot) - \Gamma_N(y,\cdot) \|\, dy
\le \int_0^1 \sup_{0\le x\le 1} \| V_N(x,\cdot) - \Gamma_N(x,\cdot) \|\, dy
\le \kappa_N = o_P(1),
$$
which proves Lemma 5.5. $\Box$

Lemma 5.6 Consider the process $\Gamma(\cdot,\cdot)$ defined by (5.1) and set
$$
(5.3) \qquad \Gamma^0(x,t) = \Gamma(x,t) + \big( 2x - 3x^2 \big) \Gamma(1,t)
+ \big( -6x + 6x^2 \big) \int_0^1 \Gamma(y,t)\, dy.
$$
Then
$$
\int_0^1 \| \Gamma^0(x,\cdot) \|^2\, dx = \sum_{i=1}^{\infty} \lambda_i \int_0^1 V_i^2(x)\, dx.
$$
Proof: Expansion (5.1) implies that
$$
\Gamma^0(x,t) = \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, W_i(x) \phi_i(t)
+ \big( 2x - 3x^2 \big) \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, W_i(1) \phi_i(t)
+ \big( -6x + 6x^2 \big) \int_0^1 \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, W_i(y) \phi_i(t)\, dy
$$
$$
= \sum_{i=1}^{\infty} \sqrt{\lambda_i} \Big\{ W_i(x) + \big( 2x - 3x^2 \big) W_i(1)
+ \big( -6x + 6x^2 \big) \int_0^1 W_i(y)\, dy \Big\} \phi_i(t)
= \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, V_i(x) \phi_i(t),
$$
where $V_1, V_2, \ldots$ are iid second-level Brownian bridges defined in (2.16). By the orthonormality of the eigenfunctions $\phi_i$,
$$
\int_0^1 \| \Gamma^0(x,\cdot) \|^2\, dx = \iint \big( \Gamma^0(x,t) \big)^2\, dt\, dx
= \iint \Big( \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, V_i(x) \phi_i(t) \Big)^2 dx\, dt
= \sum_{i=1}^{\infty} \lambda_i \int_0^1 V_i^2(x)\, dx. \qquad \Box
$$
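The variables $V_i$ appearing in this limit are second-level Brownian bridges, and the null limit $\sum_i \lambda_i \int_0^1 V_i^2(x)\, dx$ has no closed-form distribution; its critical values can be approximated by Monte Carlo. A sketch follows (the grid size, the replication count, and the illustrative eigenvalues are our choices, not quantities from the paper):

```python
import numpy as np

def second_level_bridge(rng, n=1000):
    """One path of V(x) = W(x) + (2x - 3x^2) W(1) + (-6x + 6x^2) int_0^1 W(y) dy on a grid."""
    x = np.arange(1, n + 1) / n
    W = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), size=n))   # Wiener path
    return W + (2 * x - 3 * x**2) * W[-1] + (-6 * x + 6 * x**2) * W.mean()

rng = np.random.default_rng(1)
lam = np.array([0.5, 0.25, 0.125])      # illustrative eigenvalues (assumed, not from the paper)
reps = 2000
samples = np.empty(reps)
for r in range(reps):
    # Approximate sum_i lam_i * int_0^1 V_i(x)^2 dx by Riemann sums over independent bridges.
    samples[r] = sum(l * np.mean(second_level_bridge(rng)**2) for l in lam)

print(np.quantile(samples, 0.95))       # Monte Carlo 95% critical value
```

In practice the $\lambda_i$ would be replaced by the estimated eigenvalues $\hat\lambda_i$ of the long-run covariance operator.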

Lemma 5.7 For the processes $Z_N(\cdot,\cdot)$ and $\Gamma_N^0(\cdot,\cdot)$ defined, respectively, in (3.2) and (5.3),
$$
(5.4) \qquad \sup_{0\le x\le 1} \| Z_N(x,\cdot) - \Gamma_N^0(x,\cdot) \| \stackrel{P}{\to} 0.
$$
Proof: Using the decomposition given in Lemma 5.4, $Z_N(x,t)$ can be algebraically rearranged as
$$
Z_N(x,t) = V_N(x,t) + a_N(x) V_N(1,t) + b_N(x) \frac{1}{N} \sum_{k=1}^{N-1} V_N\Big(\frac{k}{N}, t\Big),
$$
where
$$
(5.5) \qquad a_N(x) = \frac{1}{2 N^{-3} s_N} \cdot \frac{N-1}{2N} \cdot \frac{\lfloor Nx \rfloor}{N} \Big( 1 - \frac{\lfloor Nx \rfloor}{N} \Big) - \frac{\lfloor Nx \rfloor}{N}
$$
and
$$
b_N(x) = \frac{1}{2 N^{-3} s_N} \cdot \frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big).
$$
Notice that
$$
\| Z_N(x,\cdot) - \Gamma_N^0(x,\cdot) \|
\le \| V_N(x,\cdot) - \Gamma_N(x,\cdot) \|
+ \big\| a_N(x) V_N(1,\cdot) - ( 2x - 3x^2 ) \Gamma_N(1,\cdot) \big\|
+ \Big\| b_N(x) \frac{1}{N} \sum_{k=1}^{N-1} V_N\Big(\frac{k}{N}, \cdot\Big)
- ( -6x + 6x^2 ) \int_0^1 \Gamma_N(y,\cdot)\, dy \Big\|.
$$
Thus, Lemma 5.7 will be proven once we have established the following relations:
$$
(5.6) \qquad \sup_{0\le x\le 1} \| V_N(x,\cdot) - \Gamma_N(x,\cdot) \| \stackrel{P}{\to} 0;
$$
$$
(5.7) \qquad \sup_{0\le x\le 1} \| a_N(x) V_N(1,\cdot) - ( 2x - 3x^2 ) \Gamma_N(1,\cdot) \| \stackrel{P}{\to} 0;
$$
$$
(5.8) \qquad \sup_{0\le x\le 1} \Big\| b_N(x) \frac{1}{N} \sum_{k=1}^{N-1} V_N\Big(\frac{k}{N}, \cdot\Big)
- ( -6x + 6x^2 ) \int_0^1 \Gamma_N(y,\cdot)\, dy \Big\| \stackrel{P}{\to} 0.
$$
Relation (5.6) is the conclusion of Theorem 5.2. The verification of relation (5.7) follows next. Since
$$
\| a_N(x) V_N(1,\cdot) - ( 2x - 3x^2 ) \Gamma_N(1,\cdot) \|
\le |a_N(x)|\, \| V_N(1,\cdot) - \Gamma_N(1,\cdot) \|
+ | a_N(x) - ( 2x - 3x^2 ) |\, \| \Gamma_N(1,\cdot) \|,
$$
relation (5.7) will hold once we have verified that
$$
(5.9) \qquad \sup_{N\ge 1} \Big\{ \sup_{0\le x\le 1} |a_N(x)| \Big\} < \infty,
$$
$$
(5.10) \qquad \| V_N(1,\cdot) - \Gamma_N(1,\cdot) \| \stackrel{P}{\to} 0,
$$
$$
(5.11) \qquad \sup_{0\le x\le 1} | a_N(x) - ( 2x - 3x^2 ) | \to 0,
$$
$$
(5.12) \qquad \| \Gamma_N(1,\cdot) \| = O_P(1).
$$
Relation (5.10) follows from Theorem 5.2. By (5.1),
$$
\| \Gamma_N(1,\cdot) \|^2 \stackrel{D}{=} \sum_{i=1}^{\infty} \lambda_i Z_i^2,
$$
where the $Z_i$ are independent standard normal and $\sum_{i=1}^{\infty} \lambda_i < \infty$. Thus relation (5.12) holds trivially because the distribution of the left-hand side does not depend on $N$. To show (5.9), set
$$
d_N = \frac{1}{2 N^{-3} s_N} \cdot \frac{N-1}{2N}.
$$
By Lemma 5.1, $d_N \to 3$, and
$$
a_N(x) = ( d_N - 1 ) \frac{\lfloor Nx \rfloor}{N} - d_N \Big( \frac{\lfloor Nx \rfloor}{N} \Big)^2.
$$
Since $\lfloor Nx \rfloor \le N$, $|a_N(x)| \le |d_N - 1| + d_N$, and (5.9) follows. To show relation (5.11), first notice that
$$
| a_N(x) - ( 2x - 3x^2 ) |
\le \Big| ( d_N - 1 ) \frac{\lfloor Nx \rfloor}{N} - 2x \Big|
+ \Big| d_N \Big( \frac{\lfloor Nx \rfloor}{N} \Big)^2 - 3x^2 \Big|.
$$
We must thus show that
$$
(5.13) \qquad \sup_{0\le x\le 1} \Big| ( d_N - 1 ) \frac{\lfloor Nx \rfloor}{N} - 2x \Big| \to 0
$$
and
$$
(5.14) \qquad \sup_{0\le x\le 1} \Big| d_N \Big( \frac{\lfloor Nx \rfloor}{N} \Big)^2 - 3x^2 \Big| \to 0.
$$
To show (5.13), notice that $| \lfloor Nx \rfloor / N - x | \le 1/N$, so for any $x \in [0,1]$,
$$
\Big| ( d_N - 1 ) \frac{\lfloor Nx \rfloor}{N} - 2x \Big|
\le | d_N - 3 |\, x + ( d_N - 1 ) \Big| \frac{\lfloor Nx \rfloor}{N} - x \Big|
\le | d_N - 3 | + \frac{d_N - 1}{N} \to 0,
$$
which proves relation (5.13). To show (5.14), notice that $| ( \lfloor Nx \rfloor / N )^2 - x^2 | \le 2 | \lfloor Nx \rfloor / N - x | \le 2/N$, so for any $x \in [0,1]$,
$$
\Big| d_N \Big( \frac{\lfloor Nx \rfloor}{N} \Big)^2 - 3x^2 \Big|
\le | d_N - 3 |\, x^2 + d_N \Big| \Big( \frac{\lfloor Nx \rfloor}{N} \Big)^2 - x^2 \Big|
\le | d_N - 3 | + \frac{2 d_N}{N} \to 0.
$$
This proves (5.14), and hence relation (5.11) holds true. Thus relations (5.9), (5.10), (5.11) and (5.12) hold true, which proves relation (5.7), i.e.
$$
\sup_{0\le x\le 1} \| a_N(x) V_N(1,\cdot) - ( 2x - 3x^2 ) \Gamma_N(1,\cdot) \| \stackrel{P}{\to} 0.
$$

Next is the verification of (5.8). As in the case of (5.7), it suffices to show that
$$
(5.15) \qquad \sup_{N\ge 1} \Big\{ \sup_{0\le x\le 1} |b_N(x)| \Big\} < \infty,
$$
$$
(5.16) \qquad \Big\| \frac{1}{N} \sum_{k=1}^{N-1} V_N\Big(\frac{k}{N}, \cdot\Big) - \int_0^1 \Gamma_N(y,\cdot)\, dy \Big\| \stackrel{P}{\to} 0,
$$
$$
(5.17) \qquad \sup_{0\le x\le 1} | b_N(x) - ( 6x^2 - 6x ) | \to 0,
$$
$$
(5.18) \qquad \Big\| \int_0^1 \Gamma_N(y,\cdot)\, dy \Big\| = O_P(1).
$$

The verification of (5.15) and (5.17) uses the same arguments as the verification of (5.9) and (5.11), so we do not present the details. Relation (5.16) coincides with Lemma 5.5, while relation (5.18) follows from Lemma 5.6. This completes the proof of Lemma 5.7. $\Box$

Using the above lemmas, we can now present a compact proof of Theorem 3.1.

Proof of Theorem 3.1: Recall that the test statistic is $R_N = \iint Z_N^2(x,t)\, dx\, dt$, where
$$
Z_N(x,t) = S_N(x,t) - \frac{\lfloor Nx \rfloor}{N} S_N(1,t)
- \frac{1}{2} N^{3/2} \hat\xi(t) \frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big),
$$
with $S_N(x,t)$ and $\hat\xi(t)$ respectively defined in equations (2.1) and (2.9). Recall also that
$$
\Gamma_N^0(x,t) = \Gamma_N(x,t) + \big( 2x - 3x^2 \big) \Gamma_N(1,t)
+ \big( -6x + 6x^2 \big) \int_0^1 \Gamma_N(y,t)\, dy
$$
and
$$
\Gamma^0(x,t) = \Gamma(x,t) + \big( 2x - 3x^2 \big) \Gamma(1,t)
+ \big( -6x + 6x^2 \big) \int_0^1 \Gamma(y,t)\, dy.
$$
From Lemma 5.7, we know that
$$
\rho\big( Z_N(x,\cdot), \Gamma_N^0(x,\cdot) \big)
= \sup_{0\le x\le 1} \| Z_N(x,\cdot) - \Gamma_N^0(x,\cdot) \| \stackrel{P}{\to} 0.
$$
By Theorem 5.2, $\Gamma_N^0(x,t) \stackrel{D}{=} \Gamma^0(x,t)$. Thus, Theorem 5.1 implies that
$$
Z_N(x,t) \stackrel{D}{\to} \Gamma^0(x,t).
$$
By Lemma 5.6,
$$
\iint \big( \Gamma^0(x,t) \big)^2\, dx\, dt \stackrel{d}{=} \sum_{i=1}^{\infty} \lambda_i \int_0^1 V_i^2(x)\, dx.
$$
Thus, by the continuous mapping theorem,
$$
R_N = \iint \big( Z_N(x,t) \big)^2\, dx\, dt \stackrel{D}{\to} \sum_{i=1}^{\infty} \lambda_i \int_0^1 V_i^2(x)\, dx,
$$
which proves the desired result. $\Box$

5.3 Proof of Theorem 3.2

The key fact needed in the proof is the consistency of the sample eigenvalues $\hat\lambda_i$ and eigenfunctions $\hat\phi_i$. The required result, stated in (5.19), follows fairly directly from (2.4). However, the verification that (2.4) holds for the kernel estimator (2.14) is not trivial. The required result can be stated as follows.

Theorem 5.3 Suppose Assumption 2.1 holds with $\delta = 0$ and $\kappa = 2$. If $H_0$ and Assumption 2.2 hold, then relation (2.4) holds.

Observe that assuming that relation (2.3) in Assumption 2.1 holds with $\delta = 0$ and $\kappa = 2$ weakens the universal assumption that it holds with some $\delta > 0$ and $\kappa > 2 + \delta$. We first present the proof of Theorem 3.2, which uses Theorem 5.3, and then turn to the rather technical proof of Theorem 5.3.

Proof of Theorem 3.2: If Assumptions 2.1, 2.2, condition (2.7) and $H_0$ hold, then
$$
(5.19) \qquad \max_{1\le i\le d} | \hat\lambda_i - \lambda_i | = o_P(1)
\quad \text{and} \quad
\max_{1\le i\le d} \| \hat\phi_i - \hat c_i \phi_i \| = o_P(1),
$$
where $\hat c_1, \hat c_2, \ldots, \hat c_d$ are unobservable random signs defined as $\hat c_i = \operatorname{sign}( \langle \hat\phi_i, \phi_i \rangle )$. Indeed, Theorem 5.3 states that relation (2.4) holds under $H_0$ and Assumptions 2.1 and 2.2. Relations (5.19) follow from (2.4) and Lemmas 2.2 and 2.3 of Horváth and Kokoszka (2012), which state that the differences of the eigenvalues and eigenfunctions are bounded by the Hilbert–Schmidt norm of the difference of the corresponding operators. Using (5.1), it is easy to see that for all $N$
$$
(5.20) \qquad \{ \langle \Gamma_N^0(x,\cdot), \phi_i \rangle,\ 0 \le x \le 1,\ 1 \le i \le d \}
\stackrel{D}{=} \{ \sqrt{\lambda_i}\, V_i(x),\ 0 \le x \le 1,\ 1 \le i \le d \}.
$$
We first show that
$$
(5.21) \qquad \sup_{0\le x\le 1} | \langle Z_N(x,\cdot), \hat\phi_i \rangle - \langle \Gamma_N^0(x,\cdot), \hat c_i \phi_i \rangle | \stackrel{P}{\to} 0.
$$
By the Cauchy–Schwarz inequality and Lemma 5.7, we know that
$$
\sup_{0\le x\le 1} | \langle Z_N(x,\cdot) - \Gamma_N^0(x,\cdot), \hat\phi_i \rangle |
\le \sup_{0\le x\le 1} \| Z_N(x,\cdot) - \Gamma_N^0(x,\cdot) \| = o_P(1).
$$
Again by the Cauchy–Schwarz inequality and (5.19), we have
$$
\sup_{0\le x\le 1} | \langle \Gamma_N^0(x,\cdot), \hat\phi_i - \hat c_i \phi_i \rangle |
\le \sup_{0\le x\le 1} \| \Gamma_N^0(x,\cdot) \|\, \| \hat\phi_i - \hat c_i \phi_i \| = o_P(1).
$$
Then, using the triangle inequality and the linearity of the inner product,
$$
\sup_{0\le x\le 1} | \langle Z_N(x,\cdot), \hat\phi_i \rangle - \langle \Gamma_N^0(x,\cdot), \hat c_i \phi_i \rangle |
\le \sup_{0\le x\le 1} | \langle Z_N(x,\cdot) - \Gamma_N^0(x,\cdot), \hat\phi_i \rangle |
+ \sup_{0\le x\le 1} | \langle \Gamma_N^0(x,\cdot), \hat\phi_i - \hat c_i \phi_i \rangle | = o_P(1),
$$
which proves relation (5.21). Thus, by Theorem 5.1, (5.19), (5.21), (5.20) and the continuous mapping theorem,
$$
R_N^0 = \sum_{i=1}^{d} \frac{1}{\hat\lambda_i} \int \langle Z_N(x,\cdot), \hat\phi_i \rangle^2\, dx
\stackrel{d}{\to} \sum_{i=1}^{d} \int_0^1 V_i^2(x)\, dx. \qquad \Box
$$
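For orientation, the statistic $R_N^0$ in the last display can be computed from discretely observed curves along the following lines (a sketch only; the function name is ours, and the estimation of $\hat\lambda_i, \hat\phi_i$ via the kernel estimator (2.14) is not shown — they are taken as inputs):

```python
import numpy as np

def kpss_fn_statistic(X, lam_hat, phi_hat):
    """
    Sketch of R_N^0 = sum_i (1/lam_i) int <Z_N(x,.), phi_i>^2 dx.
    X        : (N, nt) matrix of curves evaluated on nt grid points of [0, 1];
    lam_hat  : (d,) estimated eigenvalues of the long-run covariance;
    phi_hat  : (d, nt) estimated eigenfunctions on the same grid.
    """
    N, nt = X.shape
    n = np.arange(1, N + 1)
    k = n - (N + 1) / 2.0
    s_N = np.sum(k**2)
    xi_hat = (k @ X) / s_N                      # slope estimate (2.9) on the grid

    S = np.cumsum(X, axis=0) / np.sqrt(N)       # partial sums S_N(x, t) at x = n/N
    frac = n / N
    Z = S - np.outer(frac, S[-1]) \
        - 0.5 * N**1.5 * np.outer(frac * (frac - 1), xi_hat)   # Z_N from (3.2)

    scores = Z @ phi_hat.T / nt                 # <Z_N(x, .), phi_i>, Riemann sum over t
    return np.sum(np.mean(scores**2, axis=0) / lam_hat)        # Riemann sum over x

# Illustration on white-noise curves (a trend-stationary case), with d = 1 and a constant eigenfunction:
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
R0 = kpss_fn_statistic(X, np.array([1.0]), np.ones((1, 20)))
print(R0)
```

The value of `R0` would then be compared with Monte Carlo critical values of $\sum_{i=1}^d \int_0^1 V_i^2(x)\, dx$.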

Proof of Theorem 5.3: Recall the definitions of the kernels $c$ and $\hat c$ given, respectively, in (2.5) and (2.14). The claim will follow if we can show that
$$
(5.22) \qquad \iint \{ \hat\gamma_0(t,s) - E[\eta_0(t)\eta_0(s)] \}^2\, dt\, ds = o_P(1)
$$
and
$$
(5.23) \qquad \iint \Big\{ \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \hat\gamma_i(t,s)
- \sum_{i\ge 1} E[\eta_0(s)\eta_i(t)] \Big\}^2\, dt\, ds = o_P(1).
$$
These relations are established in a sequence of lemmas which split the argument by isolating the terms related to the estimation of the trend from those related to the autocovariances of the $\eta_i$. The latter terms were treated in Horváth et al. (2013), so the present proof focuses on the extra terms appearing in our context.

Lemma 5.8 Under model (1.1), the following relation holds
$$
\hat\gamma_0(t,s) = \frac{1}{N} \sum_{i=1}^{N} ( \eta_i(t) - \bar\eta(t) ) ( \eta_i(s) - \bar\eta(s) )
- \frac{1}{N s_N} \Big\{ \sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big) \eta_i(t) \Big\}
\Big\{ \sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big) \eta_i(s) \Big\}.
$$
Proof: Observe that $\sum_{i=1}^{N} ( i - \frac{N+1}{2} ) = 0$, and so
$$
\sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big) i = \sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big)^2 = s_N.
$$
Also recall that
$$
\hat\xi(t) - \xi(t) = \frac{1}{s_N} \sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big) ( \eta_i(t) - \bar\eta(t) )
= \frac{1}{s_N} \sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big) \eta_i(t).
$$
We can express the residuals $e_i(t)$ as
$$
e_i(t) = ( X_i(t) - \bar X(t) ) - \hat\xi(t) \Big( i - \frac{N+1}{2} \Big)
= \mu(t) + i \xi(t) + \eta_i(t) - \frac{1}{N} \sum_{j=1}^{N} ( \mu(t) + j \xi(t) + \eta_j(t) )
- \hat\xi(t) \Big( i - \frac{N+1}{2} \Big)
= ( \eta_i(t) - \bar\eta(t) ) - ( \hat\xi(t) - \xi(t) ) \Big( i - \frac{N+1}{2} \Big).
$$
Then, by the above relations,
$$
\hat\gamma_0(t,s) = \frac{1}{N} \sum_{i=1}^{N} e_i(t) e_i(s)
= \frac{1}{N} \sum_{i=1}^{N} ( \eta_i(t) - \bar\eta(t) ) ( \eta_i(s) - \bar\eta(s) )
- ( \hat\xi(s) - \xi(s) ) \frac{1}{N} \sum_{i=1}^{N} ( \eta_i(t) - \bar\eta(t) ) \Big( i - \frac{N+1}{2} \Big)
$$
$$
- ( \hat\xi(t) - \xi(t) ) \frac{1}{N} \sum_{i=1}^{N} ( \eta_i(s) - \bar\eta(s) ) \Big( i - \frac{N+1}{2} \Big)
+ ( \hat\xi(t) - \xi(t) ) ( \hat\xi(s) - \xi(s) ) \frac{1}{N} \sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big)^2.
$$
Each of the two middle terms equals
$$
- \frac{1}{N s_N} \Big\{ \sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big) \eta_i(t) \Big\}
\Big\{ \sum_{i=1}^{N} \Big( i - \frac{N+1}{2} \Big) \eta_i(s) \Big\},
$$
while the last term equals the same expression with a plus sign. The claim thus follows because the last two terms cancel. $\Box$

To lighten the notation, in the remainder of this section we set $k_i = i - \frac{N+1}{2}$.

Lemma 5.9 Relation (5.22) holds under the assumptions of Theorem 5.3.

Proof: We must show that $\| \hat\gamma_0 - \gamma_0 \| \stackrel{P}{\to} 0$, where $\gamma_0(t,s) = E[\eta_0(t)\eta_0(s)]$ and the norm is in $L^2([0,1] \times [0,1])$. We will use the decomposition of Lemma 5.8, i.e. $\hat\gamma_0(t,s) = \tilde\gamma_0(t,s) - v_N(t,s)$, where
$$
\tilde\gamma_0(t,s) = \frac{1}{N} \sum_{i=1}^{N} ( \eta_i(t) - \bar\eta(t) ) ( \eta_i(s) - \bar\eta(s) )
\quad \text{and} \quad
v_N(t,s) = \frac{1}{N s_N} \Big\{ \sum_{i=1}^{N} k_i \eta_i(t) \Big\} \Big\{ \sum_{i=1}^{N} k_i \eta_i(s) \Big\}.
$$
It will be enough to show that
$$
\| \tilde\gamma_0 - \gamma_0 \| \stackrel{P}{\to} 0 \quad \text{and} \quad \| v_N \| \stackrel{P}{\to} 0.
$$
The first convergence is the consistency of the sample covariance function, which was proven by Horváth et al. (2013). The remainder of the proof is devoted to the verification that $\| v_N \| \stackrel{P}{\to} 0$. Since
$$
\| v_N \|^2 = \frac{1}{N^2 s_N^2} \int \Big\{ \sum_{i=1}^{N} k_i \eta_i(t) \Big\}^2 dt
\int \Big\{ \sum_{i=1}^{N} k_i \eta_i(s) \Big\}^2 ds,
$$
we must show that
$$
(5.24) \qquad \| v_N \| = \frac{1}{N s_N} \Big\| \sum_{i=1}^{N} k_i \eta_i \Big\|^2 \stackrel{P}{\to} 0,
$$
where the norm in (5.24) is in $L^2([0,1])$. Using diagonal summation, we get
$$
\Big\| \sum_{i=1}^{N} k_i \eta_i \Big\|^2
= \sum_{i=1}^{N} \sum_{j=1}^{N} k_i k_j \langle \eta_i, \eta_j \rangle
= \sum_{i=1}^{N} k_i^2 \| \eta_i \|^2
+ 2 \sum_{\ell=1}^{N-1} \sum_{i=1}^{N-\ell} k_i k_{i+\ell} \langle \eta_i, \eta_{i+\ell} \rangle.
$$
Since $\sum_{i=1}^{N} k_i^2 = s_N$, we obtain
$$
(5.25) \qquad E \Big[ \sum_{i=1}^{N} k_i^2 \| \eta_i \|^2 \Big] = s_N E \| \eta_0 \|^2.
$$
We now turn to the expectation of the second term:
$$
E \Big[ \sum_{\ell=1}^{N-1} \sum_{i=1}^{N-\ell} k_i k_{i+\ell} \langle \eta_i, \eta_{i+\ell} \rangle \Big]
= \sum_{\ell=1}^{N-1} \sum_{i=1}^{N-\ell} k_i k_{i+\ell}\, E \langle \eta_i, \eta_{i+\ell} \rangle.
$$
By stationarity, $E \langle \eta_0, \eta_\ell \rangle = E \langle \eta_{n-\ell}, \eta_n \rangle$. By Assumption 2.1, $\eta_n$ can be approximated by $\eta_{n,\ell}$, which is independent of $\eta_{n-\ell}$. Therefore
$$
E \langle \eta_{n-\ell}, \eta_n \rangle
= E \langle \eta_{n-\ell}, \eta_{n,\ell} \rangle + E \langle \eta_{n-\ell}, \eta_n - \eta_{n,\ell} \rangle
= E \langle \eta_{n-\ell}, \eta_n - \eta_{n,\ell} \rangle,
$$
because $E \langle \eta_{n-\ell}, \eta_{n,\ell} \rangle = 0$. Observe that, using the Cauchy–Schwarz inequality twice,
$$
| E \langle \eta_{n-\ell}, \eta_n - \eta_{n,\ell} \rangle |
\le E \| \eta_{n-\ell} \|\, \| \eta_n - \eta_{n,\ell} \|
\le \big( E \| \eta_{n-\ell} \|^2 \big)^{1/2} \big( E \| \eta_n - \eta_{n,\ell} \|^2 \big)^{1/2}.
$$
It follows that
$$
\Big| E \Big[ \sum_{\ell=1}^{N-1} \sum_{i=1}^{N-\ell} k_i k_{i+\ell} \langle \eta_i, \eta_{i+\ell} \rangle \Big] \Big|
\le \big( E \| \eta_0 \|^2 \big)^{1/2} \sum_{\ell=1}^{N-1} \sum_{i=1}^{N-\ell} | k_i k_{i+\ell} |
\big( E \| \eta_n - \eta_{n,\ell} \|^2 \big)^{1/2}
\le C N^3 \sum_{\ell=1}^{\infty} \big( E \| \eta_n - \eta_{n,\ell} \|^2 \big)^{1/2}.
$$
Thus, assuming that
$$
\sum_{\ell=1}^{\infty} \big( E \| \eta_n - \eta_{n,\ell} \|^2 \big)^{1/2} < \infty,
$$
we obtain
$$
(5.26) \qquad \Big| E \Big[ \sum_{\ell=1}^{N-1} \sum_{i=1}^{N-\ell} k_i k_{i+\ell} \langle \eta_i, \eta_{i+\ell} \rangle \Big] \Big| \le O(N^3) = O(s_N).
$$
Combining (5.25) and (5.26), we see that
$$
(5.27) \qquad E \| v_N \| = \frac{1}{N s_N} E \Big\| \sum_{i=1}^{N} k_i \eta_i \Big\|^2
= \frac{1}{N s_N} O(s_N) = O(N^{-1}).
$$
This proves relation (5.24). $\Box$

Lemma 5.10 Relation (5.23) holds under the assumptions of Theorem 5.3.

Proof: Under $H_0$,
$$
\hat\gamma_i(t,s) = \frac{1}{N} \sum_{j=i+1}^{N} e_j(t) e_{j-i}(s)
= \frac{1}{N} \sum_{j=i+1}^{N} \big[ ( \eta_j(t) - \bar\eta(t) ) - ( \hat\xi(t) - \xi(t) ) k_j \big]
\big[ ( \eta_{j-i}(s) - \bar\eta(s) ) - ( \hat\xi(s) - \xi(s) ) k_{j-i} \big]
$$
$$
= \frac{1}{N} \sum_{j=i+1}^{N} ( \eta_j(t) - \bar\eta(t) ) ( \eta_{j-i}(s) - \bar\eta(s) )
- \frac{1}{N s_N} \sum_{l=1}^{N} k_l \eta_l(s) \sum_{j=i+1}^{N} k_{j-i} ( \eta_j(t) - \bar\eta(t) )
$$
$$
- \frac{1}{N s_N} \sum_{l=1}^{N} k_l \eta_l(t) \sum_{j=i+1}^{N} k_j ( \eta_{j-i}(s) - \bar\eta(s) )
+ \frac{1}{N s_N^2} \sum_{l=1}^{N} k_l \eta_l(s) \sum_{m=1}^{N} k_m \eta_m(t) \sum_{j=i+1}^{N} k_j k_{j-i}.
$$
Set
$$
\bar\gamma_i(t,s) = \frac{1}{N} \sum_{j=i+1}^{N} ( \eta_j(t) - \bar\eta(t) ) ( \eta_{j-i}(s) - \bar\eta(s) ).
$$
Then
$$
\sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \hat\gamma_i(t,s)
= \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \bar\gamma_i(t,s)
- \frac{1}{N s_N} \sum_{l=1}^{N} k_l \eta_l(s) \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_{j-i} ( \eta_j(t) - \bar\eta(t) )
$$
$$
- \frac{1}{N s_N} \sum_{l=1}^{N} k_l \eta_l(t) \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_j ( \eta_{j-i}(s) - \bar\eta(s) )
+ \frac{1}{N s_N^2} \sum_{l=1}^{N} k_l \eta_l(s) \sum_{m=1}^{N} k_m \eta_m(t) \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_j k_{j-i}.
$$
Thus, in order to prove Lemma 5.10, we must establish all of the following relations:
$$
(5.28) \qquad \iint \Big( \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \bar\gamma_i(t,s) - \sum_{i\ge 1} E[\eta_0(s)\eta_i(t)] \Big)^2 dt\, ds = o_P(1);
$$
$$
(5.29) \qquad \iint \Big( \frac{1}{N s_N} \sum_{l=1}^{N} k_l \eta_l(s) \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_{j-i} ( \eta_j(t) - \bar\eta(t) ) \Big)^2 ds\, dt = o_P(1);
$$
$$
(5.30) \qquad \iint \Big( \frac{1}{N s_N} \sum_{l=1}^{N} k_l \eta_l(t) \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_j ( \eta_{j-i}(s) - \bar\eta(s) ) \Big)^2 ds\, dt = o_P(1);
$$
$$
(5.31) \qquad \iint \Big( \frac{1}{N s_N^2} \sum_{l=1}^{N} k_l \eta_l(s) \sum_{m=1}^{N} k_m \eta_m(t) \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_j k_{j-i} \Big)^2 ds\, dt = o_P(1).
$$
Relation (5.28) has been established by Horváth et al. (2013), so it remains to deal with the remaining three relations, which are due to the estimation of the trend. Relations (5.29) and (5.30) follow by application of similar arguments, so we display only the verification of (5.29). Observe that the left-hand side of (5.29) is equal to
$$
(5.32) \qquad \Big( \frac{1}{N s_N} \Big\| \sum_{l=1}^{N} k_l \eta_l \Big\|^2 \Big)
\Big( \frac{1}{N s_N} \Big\| \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_{j-i} ( \eta_j - \bar\eta ) \Big\|^2 \Big).
$$
A bound for the expectation of the first factor is given in (5.27). In the second factor, the centering by $\bar\eta$ contributes asymptotically negligible terms, so this factor has the same order as
$$
F_N = \frac{1}{N s_N} \Big\| \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_{j-i}\, \eta_j \Big\|^2.
$$
Observe that
$$
E \Big\| \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_{j-i}\, \eta_j \Big\|^2
= \sum_{i=1}^{N-1} \sum_{i'=1}^{N-1} K\Big(\frac{i}{h}\Big) K\Big(\frac{i'}{h}\Big)
\sum_{l=1}^{N-i} \sum_{l'=1}^{N-i'} k_l k_{l'}\, E \langle \eta_{i+l}, \eta_{i'+l'} \rangle
= O(N^3) \Big( \sum_{i=1}^{N-1} \Big| K\Big(\frac{i}{h}\Big) \Big| \Big)^2.
$$
Since
$$
\sum_{i=1}^{N-1} \Big| K\Big(\frac{i}{h}\Big) \Big|
= h \cdot \frac{1}{h} \sum_{i=1}^{N-1} \Big| K\Big(\frac{i}{h}\Big) \Big|
= O\Big( h \int_0^1 | K(u) |\, du \Big),
$$
we see that $E F_N = O(N^{-4} N^3 h^2) = O(N^{-1} h^2)$. Thus, by (5.27), the expression (5.32) is of the order
$$
O_P(N^{-1})\, O_P(N^{-1} h^2) = O_P\Big( \Big( \frac{h}{N} \Big)^2 \Big) = o_P(1).
$$
We now turn to the verification of (5.31), whose left-hand side can be written as
$$
\Big( \int \Big\{ \frac{1}{s_N} \sum_{l=1}^{N} k_l \eta_l(t) \Big\}^2 dt \Big)^2
\Big\{ \frac{1}{N} \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_j k_{j-i} \Big\}^2
= \frac{N^2}{s_N^2} \| v_N \|^2
\Big\{ \frac{1}{N} \sum_{i=1}^{N-1} K\Big(\frac{i}{h}\Big) \sum_{j=i+1}^{N} k_j k_{j-i} \Big\}^2.
$$
Using (5.27) and $| \sum_{j=i+1}^{N} k_j k_{j-i} | = O(N^3)$, we see that the order of the above expression is
$$
O(N^{-4})\, O_P(N^{-2})\, \{ O(h N^2) \}^2 = O_P\Big( \Big( \frac{h}{N} \Big)^2 \Big) = o_P(1).
$$
This completes the proof of Lemma 5.10. $\Box$

5.4 Proof of Theorem 3.3

The proof of Theorem 3.3 is constructed from several lemmas.

Lemma 5.11 Under the alternative (1.2), for the functional slope estimate $\hat\xi$ defined by (2.9),
$$
N^{3/2} \big( \hat\xi(t) - \xi(t) \big)
= \frac{1}{N^{-3} s_N} \Big\{ \frac{N-1}{2N} V_N(1,t) - \frac{1}{N} \sum_{k=1}^{N-1} V_N\Big(\frac{k}{N}, t\Big) \Big\}
+ \frac{1}{N^{-3} s_N} \Big\{ \sum_{n=1}^{N} \frac{n}{N}\, Y_N\Big(\frac{n}{N}, t\Big)
- \frac{N+1}{2N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, t\Big) \Big\},
$$
where $V_N(x,t)$ is the partial sum process of the errors $\eta_n$ defined in (2.2) and $Y_N(x,t)$ is the partial sum process of the random walk errors $u_n$ defined by
$$
(5.33) \qquad Y_N(x,t) = \frac{1}{\sqrt N} \sum_{n=1}^{\lfloor Nx \rfloor} u_n(t).
$$
Proof: Recall that $k_n = n - (N+1)/2$, $\sum_{n=1}^{N} k_n = 0$ and $\sum_{n=1}^{N} n k_n = s_N$. Therefore, under the alternative (1.2),
$$
\hat\xi(t) - \xi(t) = \frac{1}{s_N} \sum_{n=1}^{N} k_n X_n(t) - \xi(t)
= \frac{1}{s_N} \sum_{n=1}^{N} k_n \Big( \mu(t) + n \xi(t) + \sum_{i=1}^{n} u_i(t) + \eta_n(t) \Big) - \xi(t)
$$
$$
= \frac{1}{s_N} \Big\{ \mu(t) \sum_{n=1}^{N} k_n + \xi(t) \sum_{n=1}^{N} n k_n \Big\} - \xi(t)
+ \frac{1}{s_N} \sum_{n=1}^{N} k_n \eta_n(t) + \frac{1}{s_N} \sum_{n=1}^{N} k_n \sum_{i=1}^{n} u_i(t)
= \frac{1}{s_N} \sum_{n=1}^{N} k_n \eta_n(t) + \frac{1}{s_N} \sum_{n=1}^{N} k_n \sum_{i=1}^{n} u_i(t).
$$
Using definition (5.33),
$$
\frac{N^{3/2}}{s_N} \sum_{n=1}^{N} k_n \sum_{i=1}^{n} u_i(t)
= \frac{N^{-3/2}}{N^{-3} s_N} \Big\{ \sum_{n=1}^{N} n \sum_{i=1}^{n} u_i(t)
- \frac{N+1}{2} \sum_{n=1}^{N} \sum_{i=1}^{n} u_i(t) \Big\}
= \frac{1}{N^{-3} s_N} \Big\{ \sum_{n=1}^{N} \frac{n}{N}\, Y_N\Big(\frac{n}{N}, t\Big)
- \frac{N+1}{2N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, t\Big) \Big\}.
$$
The claim follows from the above relations and Lemma 5.3. $\Box$

Lemma 5.12 Under the alternative, $Z_N(x,t)$ defined in (3.2) can be expressed as
$$
Z_N(x,t) = V_N(x,t) - \frac{\lfloor Nx \rfloor}{N} V_N(1,t)
- \frac{1}{2} \cdot \frac{1}{N^{-3} s_N} \Big\{ \frac{N-1}{2N} V_N(1,t)
- \frac{1}{N} \sum_{k=1}^{N-1} V_N\Big(\frac{k}{N}, t\Big) \Big\}
\frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big)
$$
$$
+ \sum_{n=1}^{\lfloor Nx \rfloor} Y_N\Big(\frac{n}{N}, t\Big)
- \frac{\lfloor Nx \rfloor}{N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, t\Big)
- \frac{1}{2} \cdot \frac{1}{N^{-3} s_N} \Big\{ \sum_{n=1}^{N} \frac{n}{N}\, Y_N\Big(\frac{n}{N}, t\Big)
- \frac{N+1}{2N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, t\Big) \Big\}
\frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big).
$$
Proof: Under $H_A$, the partial sum process $S_N(x,t)$ can be expressed as
$$
S_N(x,t) = \frac{1}{\sqrt N} \sum_{n=1}^{\lfloor Nx \rfloor} X_n(t)
= \frac{1}{\sqrt N} \sum_{n=1}^{\lfloor Nx \rfloor} \Big( \mu(t) + n \xi(t) + \sum_{i=1}^{n} u_i(t) + \eta_n(t) \Big)
= \frac{\mu(t) \lfloor Nx \rfloor}{\sqrt N}
+ \frac{\xi(t)}{\sqrt N} \frac{\lfloor Nx \rfloor ( \lfloor Nx \rfloor + 1 )}{2}
+ \sum_{n=1}^{\lfloor Nx \rfloor} Y_N\Big(\frac{n}{N}, t\Big) + V_N(x,t).
$$
Substituting this into the definition (3.2) of $Z_N$,
$$
Z_N(x,t) = S_N(x,t) - \frac{\lfloor Nx \rfloor}{N} S_N(1,t)
- \frac{1}{2} N^{3/2} \hat\xi(t) \frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big),
$$
the terms involving $\mu(t)$ cancel and the terms involving $\xi(t)$ combine with the trend correction exactly as in the proof of Lemma 5.4, leaving
$$
Z_N(x,t) = \sum_{n=1}^{\lfloor Nx \rfloor} Y_N\Big(\frac{n}{N}, t\Big)
- \frac{\lfloor Nx \rfloor}{N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, t\Big)
+ V_N(x,t) - \frac{\lfloor Nx \rfloor}{N} V_N(1,t)
- \frac{1}{2} N^{3/2} \big( \hat\xi(t) - \xi(t) \big) \frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big).
$$
Reexpressing $N^{3/2} ( \hat\xi(t) - \xi(t) )$ by Lemma 5.11, we get the desired expression. $\Box$

Since the $u_i$ satisfy Assumption 2.1, an analog of Theorem 5.2 holds, i.e. there exist Gaussian processes $\Lambda_N$ equal in distribution to
$$
(5.34) \qquad \Lambda(x,t) = \sum_{i=1}^{\infty} \tau_i^{1/2} W_i(x) \psi_i(t),
$$
where $\tau_i$, $\psi_i$ are, respectively, the eigenvalues and the eigenfunctions of the kernel (3.5). Moreover, for the partial sum process $Y_N$ defined by (5.33),
$$
(5.35) \qquad \ell_N = \sup_{0\le x\le 1} \| Y_N(x,\cdot) - \Lambda_N(x,\cdot) \| = o_P(1).
$$

Lemma 5.13 Under the alternative, the following convergence holds
$$
\sup_{0\le x\le 1} \| N^{-1} Z_N^A(x,\cdot) - \Delta_N^0(x,\cdot) \| \stackrel{P}{\to} 0,
$$
where the processes $Z_N^A(\cdot,\cdot)$ and $\Delta_N^0(\cdot,\cdot)$ are respectively defined by
$$
(5.36) \qquad Z_N^A(x,t) = \sum_{n=1}^{\lfloor Nx \rfloor} Y_N\Big(\frac{n}{N}, t\Big)
- \frac{\lfloor Nx \rfloor}{N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, t\Big)
- \frac{1}{2} \cdot \frac{1}{N^{-3} s_N} \Big\{ \sum_{n=1}^{N} \frac{n}{N}\, Y_N\Big(\frac{n}{N}, t\Big)
- \frac{N+1}{2N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, t\Big) \Big\}
\frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big)
$$
and
$$
(5.37) \qquad \Delta_N^0(x,t) = \int_0^x \Lambda_N(y,t)\, dy
+ ( 3x^2 - 4x ) \int_0^1 \Lambda_N(y,t)\, dy
+ ( -6x^2 + 6x ) \int_0^1 y\, \Lambda_N(y,t)\, dy.
$$
Proof: Notice that $N^{-1} Z_N^A(x,t)$ can be expressed as
$$
\frac{1}{N} Z_N^A(x,t) = \frac{1}{N} \sum_{n=1}^{\lfloor Nx \rfloor} Y_N\Big(\frac{n}{N}, t\Big)
+ f_N(x) \frac{1}{N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, t\Big)
+ g_N(x) \frac{1}{N} \sum_{n=1}^{N} \frac{n}{N}\, Y_N\Big(\frac{n}{N}, t\Big),
$$
where
$$
f_N(x) = \frac{1}{2 N^{-3} s_N} \cdot \frac{N+1}{2N} \cdot \frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big) - \frac{\lfloor Nx \rfloor}{N}
\quad \text{and} \quad
g_N(x) = \frac{1}{2 N^{-3} s_N} \cdot \frac{\lfloor Nx \rfloor}{N} \Big( 1 - \frac{\lfloor Nx \rfloor}{N} \Big).
$$
By the triangle inequality,
$$
\| N^{-1} Z_N^A(x,\cdot) - \Delta_N^0(x,\cdot) \|
\le \Big\| \frac{1}{N} \sum_{n=1}^{\lfloor Nx \rfloor} Y_N\Big(\frac{n}{N}, \cdot\Big) - \int_0^x \Lambda_N(y,\cdot)\, dy \Big\|
+ \Big\| f_N(x) \frac{1}{N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, \cdot\Big) - ( 3x^2 - 4x ) \int_0^1 \Lambda_N(y,\cdot)\, dy \Big\|
+ \Big\| g_N(x) \frac{1}{N} \sum_{n=1}^{N} \frac{n}{N}\, Y_N\Big(\frac{n}{N}, \cdot\Big) - ( -6x^2 + 6x ) \int_0^1 y\, \Lambda_N(y,\cdot)\, dy \Big\|.
$$
Thus, Lemma 5.13 will be proven once we have established the following relations:
$$
(5.38) \qquad \sup_{0\le x\le 1} \Big\| \frac{1}{N} \sum_{n=1}^{\lfloor Nx \rfloor} Y_N\Big(\frac{n}{N}, \cdot\Big) - \int_0^x \Lambda_N(y,\cdot)\, dy \Big\| \stackrel{P}{\to} 0;
$$
$$
(5.39) \qquad \sup_{0\le x\le 1} \Big\| f_N(x) \frac{1}{N} \sum_{n=1}^{N} Y_N\Big(\frac{n}{N}, \cdot\Big) - ( 3x^2 - 4x ) \int_0^1 \Lambda_N(y,\cdot)\, dy \Big\| \stackrel{P}{\to} 0;
$$
$$
(5.40) \qquad \sup_{0\le x\le 1} \Big\| g_N(x) \frac{1}{N} \sum_{n=1}^{N} \frac{n}{N}\, Y_N\Big(\frac{n}{N}, \cdot\Big) - ( -6x^2 + 6x ) \int_0^1 y\, \Lambda_N(y,\cdot)\, dy \Big\| \stackrel{P}{\to} 0.
$$
Relations (5.39) and (5.40) follow from arguments fully analogous to those used in the proof of Lemma 5.7. The verification of (5.38) is not difficult either. Observe that
$$
\frac{1}{N} \sum_{n=1}^{\lfloor Nx \rfloor} Y_N\Big(\frac{n}{N}, \cdot\Big) = \int_0^x Y_N(y,\cdot)\, dy - r_N(x),
\quad \text{where} \quad
r_N(x) = \int_{\lfloor Nx \rfloor / N}^{x} Y_N(y,\cdot)\, dy.
$$
The relation
$$
\sup_{0\le x\le 1} \Big\| \int_0^x Y_N(y,\cdot)\, dy - \int_0^x \Lambda_N(y,\cdot)\, dy \Big\| = o_P(1)
$$
follows from (5.35) and the contractive property of the integral, which also implies that
$$
\sup_{0\le x\le 1} \| r_N(x) \|
\le \sup_{0\le x\le 1} \int_{\lfloor Nx \rfloor / N}^{x} \| Y_N(y,\cdot) \|\, dy
\le \frac{1}{N} \sup_{0\le y\le 1} \| Y_N(y,\cdot) \| = O_P(N^{-1}).
$$
This completes the proof of Lemma 5.13. $\Box$

Lemma 5.14 Under the alternative, the following convergence holds
$$
\sup_{0\le x\le 1} \| N^{-1} Z_N(x,\cdot) - \Delta_N^0(x,\cdot) \| \stackrel{P}{\to} 0,
$$
where the process $Z_N(\cdot,\cdot)$ is defined in (3.2) and the process $\Delta_N^0(\cdot,\cdot)$ in (5.37).

Proof: Under the alternative, Lemma 5.12 implies that $Z_N(x,t)$ can be expressed as $Z_N(x,t) = Z_N^0(x,t) + Z_N^A(x,t)$, where
$$
Z_N^0(x,t) = V_N(x,t) - \frac{\lfloor Nx \rfloor}{N} V_N(1,t)
- \frac{1}{2} \cdot \frac{1}{N^{-3} s_N} \Big\{ \frac{N-1}{2N} V_N(1,t)
- \frac{1}{N} \sum_{k=1}^{N-1} V_N\Big(\frac{k}{N}, t\Big) \Big\}
\frac{\lfloor Nx \rfloor}{N} \Big( \frac{\lfloor Nx \rfloor}{N} - 1 \Big)
$$
and $Z_N^A(x,t)$ is defined by (5.36). Notice that $Z_N^0$ coincides with the decomposition of $Z_N$ obtained under the null hypothesis in Lemma 5.4. Hence, from Lemma 5.7, $\sup_{0\le x\le 1} \| Z_N^0(x,\cdot) - \Gamma_N^0(x,\cdot) \| \stackrel{P}{\to} 0$, implying
$$
\sup_{0\le x\le 1} \| Z_N^0(x,\cdot) \| = O_P(1), \quad \text{so} \quad
\frac{1}{N} \sup_{0\le x\le 1} \| Z_N^0(x,\cdot) \| = O_P(N^{-1}).
$$
Thus the claim follows from Lemma 5.13 and the triangle inequality. $\Box$

Lemma 5.15 Consider the process $\Lambda(\cdot,\cdot)$ defined by (5.34) and set
$$
(5.41) \qquad \Delta^0(x,t) = \int_0^x \Lambda(y,t)\, dy
+ ( 3x^2 - 4x ) \int_0^1 \Lambda(y,t)\, dy
+ ( -6x^2 + 6x ) \int_0^1 y\, \Lambda(y,t)\, dy.
$$
Then
$$
\int_0^1 \| \Delta^0(x,\cdot) \|^2\, dx = \sum_{i=1}^{\infty} \tau_i \int_0^1 \Delta_i^2(x)\, dx,
$$
where $\tau_1, \tau_2, \ldots$ are the eigenvalues of the long-run covariance function of the $u_i$, i.e. (3.5), and $\Delta_1, \Delta_2, \ldots$ are independent copies of the process $\Delta$ defined in (3.4).

Proof: Expansion (5.34) implies that
$$
\Delta^0(x,t) = \int_0^x \sum_{i=1}^{\infty} \sqrt{\tau_i}\, W_i(y) \psi_i(t)\, dy
+ ( 3x^2 - 4x ) \int_0^1 \sum_{i=1}^{\infty} \sqrt{\tau_i}\, W_i(y) \psi_i(t)\, dy
+ ( -6x^2 + 6x ) \int_0^1 y \sum_{i=1}^{\infty} \sqrt{\tau_i}\, W_i(y) \psi_i(t)\, dy
$$
$$
= \sum_{i=1}^{\infty} \sqrt{\tau_i} \Big\{ \int_0^x W_i(y)\, dy
+ ( 3x^2 - 4x ) \int_0^1 W_i(y)\, dy
+ ( -6x^2 + 6x ) \int_0^1 y\, W_i(y)\, dy \Big\} \psi_i(t)
= \sum_{i=1}^{\infty} \sqrt{\tau_i}\, \Delta_i(x) \psi_i(t).
$$

The claim then follows from the orthonormality of the eigenfunctions $\psi_i$. $\Box$

Proof of Theorem 3.3: Recall that the test statistic is $R_N = \iint Z_N^2(x,t)\, dx\, dt$, with $Z_N$ defined by (3.2). We want to show that under the alternative model (1.2),
$$
\frac{1}{N^2} R_N \stackrel{D}{\to} \sum_{i=1}^{\infty} \tau_i \int_0^1 \Delta_i^2(x)\, dx,
$$
where $\Delta_1, \Delta_2, \ldots$ are independent copies of the process defined by (3.4) and $\tau_1, \tau_2, \ldots$ are the eigenvalues of the long-run covariance kernel (3.5). By Lemma 5.14,
$$
\rho\big( N^{-1} Z_N(x,\cdot), \Delta_N^0(x,\cdot) \big)
= \sup_{0\le x\le 1} \| N^{-1} Z_N(x,\cdot) - \Delta_N^0(x,\cdot) \| \stackrel{P}{\to} 0.
$$
By construction, $\Delta_N^0 \stackrel{d}{=} \Delta^0$, so Theorem 5.1 implies that $N^{-1} Z_N \stackrel{D}{\to} \Delta^0$. By Lemma 5.15,
$$
\iint \big( \Delta^0(x,t) \big)^2\, dx\, dt \stackrel{d}{=} \sum_{i=1}^{\infty} \tau_i \int_0^1 \Delta_i^2(x)\, dx.
$$
Thus, by the continuous mapping theorem,
$$
\frac{1}{N^2} R_N = \iint \big( N^{-1} Z_N(x,t) \big)^2\, dx\, dt
\stackrel{D}{\to} \sum_{i=1}^{\infty} \tau_i \int_0^1 \Delta_i^2(x)\, dx. \qquad \Box
$$
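The limit under the alternative is almost surely strictly positive, which is what makes the test consistent: $R_N$ grows like $N^2$. The process $\Delta$ of (3.4) is straightforward to simulate; in the sketch below the grid size and replication count are our choices:

```python
import numpy as np

def delta_path(rng, n=1000):
    """One path of Delta(x) = int_0^x W + (3x^2 - 4x) int_0^1 W + (-6x^2 + 6x) int_0^1 y W(y) dy."""
    x = np.arange(1, n + 1) / n
    W = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), size=n))   # Wiener path
    IW = np.cumsum(W) / n                                      # int_0^x W(y) dy, Riemann sum
    return IW + (3 * x**2 - 4 * x) * IW[-1] + (-6 * x**2 + 6 * x) * np.mean(x * W)

rng = np.random.default_rng(2)
# Approximate int_0^1 Delta^2(x) dx over independent replications.
vals = np.array([np.mean(delta_path(rng)**2) for _ in range(500)])
print(vals.min())   # strictly positive in every replication
```

Since every simulated value of $\int_0^1 \Delta^2(x)\, dx$ is strictly positive, $N^{-2} R_N$ converges to a positive limit and $R_N$ exceeds any fixed critical value with probability tending to one.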

References

Berkes, I., Hörmann, S. and Schauer, J. (2011). Split invariance principles for stationary processes. The Annals of Probability, 39, 2441–2473.

Berkes, I., Horváth, L. and Rice, G. (2013). Weak invariance principles for sums of dependent random functions. Stochastic Processes and their Applications, 123, 385–403.

Billingsley, P. (1968). Convergence of Probability Measures. Wiley, New York.

Bosq, D. (2000). Linear Processes in Function Spaces. Springer, New York.

Campbell, J. Y., Lo, A. W. and MacKinlay, A. C. (1997). The Econometrics of Financial Markets. Princeton University Press, New Jersey.

Cavaliere, G. and Xu, F. (2014). Testing for unit roots in bounded time series. Journal of Econometrics, 178, 259–272.

Chambers, M. J., Ercolani, J. S. and Taylor, A. M. R. (2014). Testing for seasonal unit roots by frequency domain regression. Journal of Econometrics, 178, 243–258.

Chen, Y. and Niu, L. (2014). Adaptive dynamic Nelson–Siegel term structure model with applications. Journal of Econometrics, 180, 98–115.

Dickey, D. A. and Fuller, W. A. (1979). Distributions of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74, 427–431.

Dickey, D. A. and Fuller, W. A. (1981). Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica, 49, 1057–1074.

Diebold, F. and Rudebusch, G. (2013). Yield Curve Modeling and Forecasting: The Dynamic Nelson–Siegel Approach. Princeton University Press.

Giraitis, L., Kokoszka, P. S., Leipus, R. and Teyssière, G. (2003). Rescaled variance and related tests for long memory in volatility and levels. Journal of Econometrics, 112, 265–294.

Hays, S., Shen, H. and Huang, J. Z. (2012). Functional dynamic factor models with application to yield curve forecasting. The Annals of Applied Statistics, 6, 870–894.

Hörmann, S. and Kokoszka, P. (2010). Weakly dependent functional data. The Annals of Statistics, 38, 1845–1884.

Hörmann, S. and Kokoszka, P. (2012). Functional time series. In Time Series (eds C. R. Rao and T. Subba Rao), Handbook of Statistics, volume 30. Elsevier.

Horváth, L. and Kokoszka, P. (2012). Inference for Functional Data with Applications. Springer.

Horváth, L., Kokoszka, P. and Reeder, R. (2013). Estimation of the mean of functional time series and a two sample problem. Journal of the Royal Statistical Society (B), 75, 103–122.

Horváth, L., Kokoszka, P. and Rice, G. (2014a). Testing stationarity of functional time series. Journal of Econometrics, 179, 66–82.

Horváth, L., Rice, G. and Whipple, S. (2014b). Adaptive bandwidth selection in the long run covariance estimator of functional time series. Computational Statistics and Data Analysis, forthcoming.

de Jong, R. M., Amsler, C. and Schmidt, P. (1997). A robust version of the KPSS test based on indicators. Journal of Econometrics, 137, 311–333.

Kargin, V. and Onatski, A. (2008). Curve forecasting by functional autoregression. Journal of Multivariate Analysis, 99, 2508–2526.

Kokoszka, P., Miao, H. and Zhang, X. (2014). Functional dynamic factor model for intraday price curves. Journal of Financial Econometrics, forthcoming.

Kwiatkowski, D., Phillips, P. C. B., Schmidt, P. and Shin, Y. (1992). Testing the null hypothesis of stationarity against the alternative of a unit root: how sure are we that economic time series have a unit root? Journal of Econometrics, 54, 159–178.

Lee, D. and Schmidt, P. (1996). On the power of the KPSS test of stationarity against fractionally integrated alternatives. Journal of Econometrics, 73, 285–302.

Lo, A. W. (1991). Long-term memory in stock market prices. Econometrica, 59, 1279–1313.

MacNeill, I. B. (1978). Properties of sequences of partial sums of polynomial regression residuals with applications to tests for change of regression at unknown times. The Annals of Statistics, 6, 422–433.

Müller, H.-G., Sen, R. and Stadtmüller, U. (2011). Functional data analysis for volatility. Journal of Econometrics, 165, 233–245.

Politis, D. N. (2011). Higher-order accurate, positive semidefinite estimation of large sample covariance and spectral density matrices. Econometric Theory, 27, 703–744.

Politis, D. N. and Romano, J. P. (1996). On flat-top spectral density estimators for homogeneous random fields. Journal of Statistical Planning and Inference, 51, 41–53.

Politis, D. N. and Romano, J. P. (1999). Multivariate density estimation with general flat-top kernels of infinite order. Journal of Multivariate Analysis, 68, 1–25.

Pötscher, B. and Prucha, I. (1997). Dynamic Nonlinear Econometric Models: Asymptotic Theory. Springer.

Ramsay, J. O. and Silverman, B. W. (2002). Applied Functional Data Analysis. Springer.

Said, S. E. and Dickey, D. A. (1984). Testing for unit roots in autoregressive–moving average models of unknown order. Biometrika, 71, 599–608.

Shao, X. and Wu, W. B. (2007). Asymptotic spectral theory for nonlinear time series. The Annals of Statistics, 35, 1773–1801.

Wu, W. (2005). Nonlinear system theory: another look at dependence. Proceedings of the National Academy of Sciences of the United States, volume 102. National Academy of Sciences.