Value-at-Risk scaling for long-term risk estimation

L. Spadafora¹,², M. Dubrovich¹, M. Terraneo¹

¹ UniCredit S.p.A.
² Faculty of Mathematical, Physical and Natural Sciences, Università Cattolica del Sacro Cuore, Brescia
XVI Workshop on Quantitative Finance, Parma, January 29-30, 2015
The views and opinions expressed in this presentation are those of the authors and do not necessarily represent the official policy or position of UniCredit S.p.A.
Main reference: L. Spadafora, M. Dubrovich and M. Terraneo, "Value-at-Risk time scaling for long-term risk estimation", arXiv:1408.2462, 2014.
Outline

1. Introduction and Motivation
2. Value-at-Risk Scaling
3. Modelling P&L Distributions
4. Time Scaling
5. VaR scaling on a real portfolio
6. Summary and Conclusions
Introduction and Motivation
Introduction: Value-at-Risk vs Economic Capital

Regulatory Capital: 99% Value-at-Risk at a short time-horizon (1 day).
Economic Capital (EC): capital required to face losses within a 1-year time-horizon at a more conservative percentile (we refer to 99.93%).

Possible estimation approaches for EC:
1. Scenario generation (for the risk factors) and portfolio revaluation to obtain a 1-year profit-and-loss (P&L) distribution
2. Extension of the short-term market-risk measures to longer time-horizons/higher percentiles

The first approach has the following drawbacks:
- Assumptions are needed to generate scenarios at 1 year (both with historical simulation and with Monte Carlo methods)
- No rebalancing of the portfolio (i.e. the unrealistic assumption of freezing positions for 1 year)

The second approach bypasses such difficulties:
- It assumes hedging/rebalancing of the portfolio
- It relies on models already approved and used in day-to-day activities
Introduction and Motivation
Motivation: Economic Capital as a scaled Value-at-Risk

Main idea: follow the second approach and develop a scaling mechanism to compute Economic Capital efficiently from Regulatory Value-at-Risk measures:
- Model the short-term P&L using iid RVs distributed according to some benchmark PDFs
- Apply the convolution theorem to subsequent time-steps and interpret the scaling in light of the Central Limit Theorem (CLT), to derive the conditions needed for normal convergence

Main result: a generalized VaR-scaling methodology to be used for calculating EC, depending on the short-term PDF's properties:
- If the P&L distribution has exponential decay, the VaR-scaling can be correctly inferred using the √T-rule, even if the starting distribution is not Normal
- With power-law decay, the √T-rule can be applied naively only if the tails are not too fat. Otherwise the long-term P&L distribution needs to be determined explicitly, and EC can be significantly larger than what would have been inferred under Normal assumptions

The theoretical results are complemented by a numerical simulation performed on a test equity trading portfolio.
Value-at-Risk Scaling
The VaR-scaling approach

Given x(t) the P&L over time-horizon t (e.g. 1 day) and its PDF p(x(t)), the VaR at confidence level (CL) 1 − α (e.g. 99%) is defined by:

1 − α = ∫_{−∞}^{VaR(α,t)} p(x(t)) dx(t)    (1)

General VaR-scaling approach: find h(·) such that, given α₂ ≠ α and T ≠ t,

VaR(α₂, T) = h(VaR(α, t))    (2)

For EC, i.e. VaR at CL 1 − α₂ = 99.93% over horizon T = 1y, this is commonly done assuming normality of the PDF and applying the √T-rule:

VaR_N(99.93%, 1y) = [Φ_N^{−1}(0.07%) / Φ_N^{−1}(1%)] √250 VaR_N(99%, 1d)    (3)

where Φ_N denotes the CDF of the Normal distribution.

We propose a generalization:
1. Fit the short-term P&L distribution and choose the PDF with the best explanatory power
2. Calculate the long-term P&L distribution (analytically or numerically), given the chosen PDF
3. Compute EC as the desired extreme percentile
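As a numerical sketch of Eq. (3) (function and parameter names are ours, not from the presentation):

```python
from scipy.stats import norm

def scale_var_normal(var_1d_99, alpha_short=0.01, alpha_long=0.0007,
                     horizon_days=250):
    """Eq. (3): rescale a 1-day 99% VaR to a 1-year 99.93% VaR under the
    Normal assumption, via the quantile ratio and the sqrt(T)-rule."""
    quantile_ratio = norm.ppf(alpha_long) / norm.ppf(alpha_short)
    return quantile_ratio * horizon_days ** 0.5 * var_1d_99
```

With these defaults the overall multiplier is the quantile ratio (about 1.37) times √250, i.e. roughly 21.7.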
Modelling P&L Distributions
Modelling real-world P&L distributions: candidates

Which theoretical PDF class better fits empirical P&L data? We benchmark the basic normal assumption against leptokurtic distributions:

1. Normal distribution (N):

   p_N(x; μ, σ, T) = 1/√(2πσ²T) exp(−(x − μT)²/(2σ²T))    (4)

2. Student's t-distribution (ST, power-law decay):

   p_ST(x; μ, σ, ν) = Γ((ν+1)/2) / (σ√(νπ) Γ(ν/2)) [1 + ((x − μ)/σ)²/ν]^{−(ν+1)/2}    (5)

3. Variance-Gamma distribution (VG, exponential decay):

   p_VG(x; μ, σ, k, θ, T) = 2 exp(θ(x − μT)/σ²) / (σ√(2π) k^{T/k} Γ(T/k)) · (|x − μT| / √(2σ²/k + θ²))^{T/k − 1/2} · K_{T/k−1/2}(√(2σ²/k + θ²) |x − μT| / σ²)    (6)

   where K denotes the modified Bessel function of the second kind.
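Eq. (6) can be evaluated directly with SciPy's modified Bessel function K; a minimal sketch, in the standard Madan-Carr-Chang parameterization matched to the symbols above (the parameter values in the test are illustrative, not fitted):

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function K

def vg_pdf(x, mu, sigma, k, theta, T):
    """Variance-Gamma density of Eq. (6) at horizon T (same units as k)."""
    z = np.asarray(x, dtype=float) - mu * T
    c = np.sqrt(2.0 * sigma**2 / k + theta**2)
    pref = 2.0 * np.exp(theta * z / sigma**2) / (
        sigma * np.sqrt(2.0 * np.pi) * k**(T / k) * gamma(T / k))
    return pref * (np.abs(z) / c)**(T / k - 0.5) * kv(T / k - 0.5,
                                                      c * np.abs(z) / sigma**2)
```

For θ = 0 the density is symmetric in x − μT, and it integrates to 1 over a sufficiently wide grid (the Bessel term diverges at z = 0, so grids should avoid that single point).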
Modelling P&L Distributions
Fit performances over time

Test 1: fitting performance on a 250-day P&L strip (each P&L distribution made of N = 500 obs.)
Test 2: fitting performance on a single P&L distribution with N = 8000 obs.

[Figure: empirical CDFs of returns (log scale) with the N, ST and VG fits overlaid, for the 250-day strip (left) and the N = 8000 sample (right); legend: Actual Data, N, ST, VG]

- N performs much worse than ST and VG in explaining empirical P&L data
- VG and ST: comparable performances when N = 500
- Raising the number of observations clarifies which PDF better fits the P&L dataset: in the IBM example the winner is ST
- Takeaway: though a challenging task, the determination of the PDF to fit P&L data is crucial to implement any efficient VaR-scaling methodology
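This kind of comparison can be sketched with SciPy's maximum-likelihood fitters. The data below are synthetic heavy-tailed returns (the empirical strips are not public), so only the mechanics, not the numbers, carry over:

```python
import numpy as np
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(42)
# Synthetic heavy-tailed "returns" standing in for an empirical P&L sample
data = student_t.rvs(df=3.0, scale=0.01, size=8000, random_state=rng)

# Maximum-likelihood fits of the Normal and Student's t candidates
mu_n, sig_n = norm.fit(data)
nu, mu_st, sig_st = student_t.fit(data)

# Compare explanatory power via the total log-likelihood
ll_n = norm.logpdf(data, mu_n, sig_n).sum()
ll_st = student_t.logpdf(data, nu, mu_st, sig_st).sum()
```

On fat-tailed samples the ST log-likelihood dominates the Normal one, mirroring the qualitative outcome of the tests above.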
Time Scaling
Convolution and the CLT (1)

The long-term PDF can be calculated (analytically or numerically) by convolving the short-term PDF p(x_k(t)):
- X_k(t) is the RV (with values x_k(t)) describing the P&L over horizon t at time (day) k
- The P&L over the long horizon T = nt is given by:

P&L_{0→T} = x_1(t) + x_2(t) + ... + x_n(t) = Σ_{k=1}^{n} x_k(t)    (7)

The PDF of the sum of two independent (as we assume the P&Ls) RVs is given by:

p(y) = ∫_{−∞}^{+∞} p(y − x_1(t)) p(x_1(t)) dx_1(t)    (8)

where the RV Y = X_1 + X_2 takes values y = x_1 + x_2. Apply this n times to the short-term PDF to obtain the long-term PDF.

What about our benchmark distributions?
- Normal: well-known √T-rule
- VG: analytic expression (see Eq. (6))
- ST: numerical convolution (we apply Eqs. (7) and (8) with the FFT algorithm)
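The ST branch (repeated application of Eq. (8) via the FFT) can be sketched as follows; the grid and the daily parameters are illustrative choices of ours:

```python
import numpy as np
from scipy.stats import t as student_t

nu, sigma, n = 3.0, 0.01, 250        # illustrative daily ST parameters, mu = 0
L, m = 4.0, 2**18                    # grid half-width and number of points
dx = 2.0 * L / m
x = (np.arange(m) - m // 2) * dx     # symmetric grid with x = 0 on a node

p_short = student_t.pdf(x, df=nu, scale=sigma)

# n-fold convolution: raise the discrete characteristic function to power n
# (ifftshift puts x = 0 at index 0 so circular convolution keeps the origin)
phi = np.fft.fft(np.fft.ifftshift(p_short)) * dx
p_long = np.fft.fftshift(np.real(np.fft.ifft(phi ** n))) / dx

# Long-term VaR at CL 99.93%: the 0.07% percentile of the convolved PDF
cdf = np.cumsum(p_long) * dx
var_9993 = -x[np.searchsorted(cdf, 0.0007)]
```

Since ν = 3 lies below the critical value derived later, this convolved VaR comes out above the Normal √T value with the same daily variance.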
Time Scaling
Convolution and the CLT (2)

Is it possible to obtain an asymptotic behaviour (n → ∞)? Yes: use the CLT. Given a RV X distributed as p_D(x; ·), with E(X) = μΔt and Var(X) = σ²Δt, the n-times convolved distribution satisfies (for all finite α and β):

lim_{n→+∞} P(α < Σ_{i=1}^{n} x_i < β) = ∫_{α}^{β} 1/√(2πσ²nΔt) exp(−(x − μnΔt)²/(2σ²nΔt)) dx    (9)

The above holds for n → ∞. For finite n it is understood that convergence takes place only in the central region of the PDF, which needs to be quantified somehow (see next slides).

Therefore, we have the crucial result: if the percentile x_α (considered for VaR estimation) falls into the central region of the PDF, in the sense of the CLT, after n = T/Δt convolutions, the normal approximation holds:

VaR_D(α, T) ≃ Φ_N^{−1}(α; μT, σ√T) = VaR_N(α, T)    (10)

Otherwise (convergence not achieved) the long-term P&L distribution has to be computed by explicit convolution (n times).
Time Scaling
The Normal limit: ST distribution

Which conditions define the central region of the ST PDF as "Normal"? We propose a quantitative method following Bouchaud et al.¹:
1. Define a critical value x* beyond which the two PDFs become substantially different
2. Intuitively, take x* as the point where the two PDFs intersect
3. After some math we find that, as expected, the region where the CLT holds enlarges slowly:

   x* = σ √(ν T log(T))    (11)

4. Using Eq. (11) we estimate the percentile at which the convergence condition is satisfied after exactly 1 year, as a function of ν:

   P(σ√(ν T log(T)) < x < +∞) = (ν + 1) Γ((ν+1)/2) / [2 (ν − 2) √π Γ(ν/2) T^{(ν−2)/2} (log(T))^{ν/2}]    (12)

Imposing P = 0.07% and T = 250 days = 1 year, we obtain ν* = 3.41

Using the above-defined criterion we have a discrimination:
- convergence regime (ν > ν*): the ST distribution becomes sufficiently "normal" for our purposes²
- non-convergence regime (ν < ν*): the ST distribution cannot be approximated with a Normal. Accordingly, the lower ν, the fatter the tails.

¹ J. P. Bouchaud and M. Potters, Theory of Financial Risks - From Statistical Physics to Risk Management, Cambridge University Press, 1998
² Recall that, for ν → ∞, the ST is a Normal.
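A rough order-of-magnitude cross-check of ν* (not an implementation of Eq. (12)): under the subexponential approximation that the T-fold convolved ST tail is about T times the one-day tail, one can solve for the ν at which the mass beyond x* = σ√(ν T log T) equals 0.07%. The construction below is ours; it lands in the same region as ν* = 3.41 without reproducing it exactly:

```python
from math import log, sqrt
from scipy.optimize import brentq
from scipy.stats import t as student_t

T, target = 250, 0.0007   # 1-year horizon in days, P = 0.07%

def tail_beyond_xstar(nu):
    """Approximate mass beyond x* = sigma*sqrt(nu*T*log T) after T
    convolutions: about T times the one-day ST tail (a heuristic for
    subexponential tails, not Eq. (12) itself)."""
    z = sqrt(nu * T * log(T))          # x*/sigma in standardized units
    return T * student_t.sf(z, df=nu)

# Critical nu where exactly the target mass is left outside the CLT region
nu_star = brentq(lambda nu: tail_beyond_xstar(nu) - target, 2.2, 8.0)
```

The residual tail mass falls quickly with ν, which is why a single critical ν* separates the two regimes.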
Time Scaling
The Normal limit: VG distribution

[Figure: relative deviation |F_VG(x) − F_N(x)| / F_N(x) of the convolved VG CDF from the target Normal CDF, plotted against x/σ]

- In the VG case, convergence takes place much more quickly, due to the exponential decay
- We present a proof just by numerical example: after convolving the VG PDF a number of times, we compare its CDF with the target Normal CDF
- Already for n = 50 iterations, the relative deviation at 4σ (corresponding to P_N(|x| > 4σ) ≃ 0.006%) is smaller than 2%
VaR scaling on a real portfolio
The methodology test: setup

To assess our VaR-scaling methodology we built a test equity trading portfolio composed of 10 FTSE stocks and ATM European calls to achieve Δ-hedging: representative of real portfolios, convex and asymmetric.

1. Perform a (1-day) historical simulation to infer the short-term P&L distribution
2. Fit the P&L distribution with the benchmarks (N, VG and ST)
3. VaR-scaling calculation:
   - Normal VaR: through application of the CLT, the 1-year P&L distribution is Normal with μ(T) = μ(t)T = 0 (by assumption) and σ²(T) = σ²(t)T, where μ(s) and σ²(s) are the mean and variance of the PDF over horizon s
   - Convoluted VaR: given the short-term fitted PDF p_D(x; ·), convolve it n = 250 times to extract the long-term PDF:
     - If p_D is VG, the long-term PDF is given by Eq. (6)³
     - If p_D is ST, the long-term PDF can be estimated numerically by explicitly convolving p_D
4. Repeat steps (1-3) for 10000 different (random) portfolio weight combinations to derive the statistical properties w.r.t. asset allocation

³ As mentioned before, in our case it always reaches convergence to the normal limit.
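Step 4 can be sketched with synthetic single-asset P&L strips (the historical-simulation data are not public; all names and parameters here are illustrative):

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(7)

# Synthetic stand-ins for the 10 single-asset P&L strips of step 1
n_days, n_assets = 500, 10
asset_pnl = student_t.rvs(df=3.0, scale=0.01,
                          size=(n_days, n_assets), random_state=rng)

nu_star = 3.41          # critical tail index from the ST analysis
fitted_nu = []
for _ in range(50):     # 10000 weight combinations in the presentation
    w = rng.dirichlet(np.ones(n_assets))   # random long-only weights
    pnl = asset_pnl @ w                    # portfolio P&L strip
    nu, loc, scale = student_t.fit(pnl)    # ST fit of the short-term P&L
    fitted_nu.append(nu)

frac_unsafe = np.mean(np.array(fitted_nu) < nu_star)
```

The fraction of portfolios with fitted ν below ν* is the quantity the next slide reports (about 70% on the real portfolio).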
VaR scaling on a real portfolio
The methodology test: outcomes

[Figure: left panel, histogram of the fitted ν values across the portfolios; right panel, VaR_ST / VaR_N as a function of ν]

- As in the single-stock case, VG and ST provide comparable goodness-of-fit; again, N yields the worst performance
- In the VG case, the Normal approximation always holds (as expected)
- The majority (∼ 70%) of fitted ν values for the ST case lies below the critical value ν* = 3.41: the assumption of normal convergence is often unsafe
- When ν > ν*, the ST has reached Normal convergence and VaR_ST/VaR_N ≃ 1
- When ν < ν*, the scaled ST-VaR is greater than the scaled N-VaR, and, in the ν → 2 limit, VaR_ST/VaR_N ∼ 4: the assumption of normal convergence underestimates risk!
Summary and Conclusions
Summary and Conclusions (1)
- We derived a generalized VaR-scaling methodology for calculating Economic Capital (i.e. 1-year 99.93% VaR)
- Chosen as benchmarks for explaining empirical (daily) P&L data were the Normal, Student's t- (leptokurtic, power-law) and Variance-Gamma (leptokurtic, exponential) distributions
- We defined the long-term P&L distribution by means of convolution and explored its asymptotic properties using the Central Limit Theorem (CLT)
- The theoretical results are a range of possible VaR-scaling approaches, depending on the PDF chosen as best fit, on the given confidence level and on the given time horizon
Summary and Conclusions
Summary and Conclusions (2)
Main discriminant: whether the chosen PDF reaches Normal convergence (in the sense of the CLT):
- If assuming exponential decay (Variance-Gamma case), the CLT can be safely applied for the typical time-horizons and percentiles
- If assuming power-law decay (Student's t- case), the CLT can be applied only if the number of degrees of freedom ν exceeds a critical value ν* depending on the chosen percentile and time horizon

Outcome of the methodology test by portfolio simulation:
- In the VG case, Normal convergence is always reached and the √T-rule is safe for scaling VaR (even if the short-term distribution is not Normal)
- In the ST case, Normal convergence is often not achieved; in this case the CLT cannot be applied. The naive usage of the √T-rule in the non-convergence regime (ν < ν*, the most likely in our simulation) can lead to severe underestimation of the risk measure