Spatial Statistics and Uncertainty Quantification on Supercomputers

The Centre for Numerical Algorithms and Intelligent Software

Workshop on

Spatial Statistics and Uncertainty Quantification on Supercomputers Department of Mathematical Sciences University of Bath May 19-21, 2014

This workshop is sponsored by NAIS (The Centre for Numerical Algorithms & Intelligent Software)

Aims of the Workshop

Societal and environmental challenges, not least the potential dangers of climate change, have led to a surge of interest in the development of novel statistical tools for very big data sets on the one hand, and of efficient uncertainty quantification (UQ) methods for complex engineering applications on the other. This meeting has two main aims. The first is to bring together key researchers from statistics and from applied mathematics working in spatial statistics and in UQ, who often approach similar questions from quite different angles. The second is to go beyond the discussion and analysis of algorithms applied to toy problems, and instead to focus on real, large-scale applications and on the scalability of novel statistical and UQ tools on modern supercomputers.

Workshop Organizers

(all University of Bath)

Finn Lindgren Rob Scheichl (Chair) Tony Shardlow Simon Wood

cover pictures of conditioned KL-modes for WIPP test case courtesy of Oliver Ernst, TU Dresden

Information

Workshop Information. The workshop will commence on Monday 19th May (at about 10.30am) and will conclude on Wednesday 21st May (at about 3.30pm).

Registration. The workshop registration desk will be open from 9.30am in the Atrium on Level 1 of the Maths Building 4 West.

Registration Fee. There is no registration fee.

Campus Plan. Our campus is located on Claverton Down, which sits on the south-east hilltop edge of the city of Bath. The campus is designed around a central pedestrian thoroughfare, known as the Parade. See the next page for a campus map and a local area map.

Seminar Room. The workshop will be held in the Wolfson Lecture Theatre, Department of Mathematical Sciences, University of Bath. The room is situated on Level 1 of the Maths Building 4 West.

Coffee Breaks. The coffee breaks will be in the Atrium space directly outside the Wolfson Lecture Theatre.

Internet Access.
• If you are a visitor to the University of Bath and have an eduroam account at your home institution, please follow your home institution's instructions for accessing eduroam.
• If you are a visitor to the University of Bath with no eduroam account, please use our free wireless service, The Cloud.

Social Events.
• There will be a Wine Reception on Monday evening from 5.30pm until 7pm in the Atrium on Level 1 of the Maths Building 4 West, combined with a poster session.
• There is also a dinner planned for the invited speakers and the organisers at Woods Restaurant in town. Invited speakers will receive a separate email with details and directions.

Local Transport. As you exit Bath Spa railway station on Dorchester Street, turn left and walk 25 yards; you will see the modern bus station. The university pick-up is on the opposite side of the road to this building. You can take the 18 or U18 bus to the University of Bath. For the invited speakers, there is also a bus stop closer to the hotel you are booked in; see the email you received concerning your accommodation.

Local Taxi Firm. Abbey Taxis +44 (0)1225 444 444





Figure 1: Campus Map

Figure 2: Local Area Map


Program

Monday, May 19th


Registration & Coffee


Stephen Sain (National Center for Atmospheric Research, Boulder) Impact of model resolution for regional climate experiments


Mike Christie (Petroleum Engineering, Heriot-Watt University) Multi-objective stochastic sampling algorithms for uncertainty quantification in porous media


Peter Challenor (Environmental Statistics, University of Exeter) Combining data and models to reconstruct the climate of the recent past


Lunch Break


Mark Girolami (Statistics, University of Warwick) Differential geometric simulation methods for uncertainty quantification in large scale PDE systems


Kody Law (Stochastic Numerics Group, KAUST, Saudi Arabia) Dimension-independent likelihood-informed MCMC sampling algorithms for Bayesian inverse problems


Tea/Coffee Break


Geoff Nicholls (Statistics, University of Oxford) Lateral transfer of traits on phylogenetics: likelihood evaluation via large sparse linear systems of DEs


Annika Lang (Mathematical Statistics, Chalmers University) Gaussian random fields on the sphere: one class of random fields on manifolds


Wine Reception & Poster Session (Atrium, Level 1 of 4 West)

Tuesday, May 20th


Tim Sullivan (Mathematics Institute, University of Warwick) Brittleness and robustness of Bayesian inference in complex systems


Coffee/Tea Break


Gavin Shaddick (Statistics, University of Bath) Challenges in the integration of air pollution estimates from multiple data sources and methods


Daniel Tartakovsky (Mechanical & Aerospace Engineering, University of California at San Diego) Uncertainty quantification in nonlinear models of multiphase flow


Lunch Break


Daniel Simpson (Statistics, NTNU Trondheim) Barriers to scalable spatial statistics


Roger Ghanem (Aerospace & Mechanical Engineering, University of Southern California) High performance algorithms for PDEs with spatially varying stochastic parameters


Tea/Coffee Break


Aretha Teckentrup (Scientific Computing, Florida State University) Multilevel Markov chain Monte Carlo algorithms for uncertainty quantification


Clayton Webster (Computer Science & Mathematics, Oak Ridge National Laboratory) A hierarchical, multilevel stochastic collocation method for adaptive acceleration of PDEs with random input data


Wednesday, May 21st


Paul Constantine (Applied Mathematics, Colorado School of Mines) Active subspace methods for high-dimensional sensitivity analysis


Coffee/Tea Break


Jocelyne Erhel (Environmental Simulation, INRIA Rennes Bretagne Atlantique) Uncertainty quantification and high performance computing for flow and transport in porous media


Björn Gmeiner (System Simulation, University of Erlangen-Nuremberg) Massively parallel multilevel Monte Carlo


Lunch Break


Simon Cotter (School of Mathematics, University of Manchester) Parallel adaptive importance sampling: parallelism PAIS


Omar Ghattas (Computational Geosciences, ICES, University of Texas at Austin) Large-scale Bayesian inversion with application to antarctic ice sheet flow


Tea/Coffee & End of Workshop

Posters

The poster session will take place in the Atrium on Level 1 of 4 West at 5:30pm, Monday, May 19th.

Georgios Arampatzis (University of Crete): “Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations”

D. Bigoni, A.P. Engsig-Karup (both Technical University of Denmark, Lyngby) and Y.M. Marzouk (MIT): “Spectral tensor-train decomposition for low-rank surrogate models”

Evangelos Evangelou (University of Bath) and Vasileios Maroulas (University of Tennessee): “Filtering and estimation for dynamic spatiotemporal processes”

David Titley-Peloquin (CERFACS, Toulouse) and Serge Gratton (ENSEEIHT-IRIT and CERFACS): “Stochastic conditioning of matrix functions”


Abstracts

Peter Challenor, University of Exeter
Combining Data and Models to Reconstruct the Climate of the Recent Past
If we are to forecast future climate successfully, we need to understand the climate of the past. Although we have been taking measurements for the last century or so, until the last 20 years the data were scarce, and even in the modern era the system is seriously undersampled. However, we have a good understanding of the equations that govern fluid flow on a sphere, and these are implemented in a variety of numerical models. A reconstruction that is a solution to these equations conserves physical quantities through time and is known as “dynamically consistent”. To produce good simulations we need to run at high resolution; the computational expense is therefore high, and traditional calibration methods are infeasible. Data assimilation methods can be used but fail to produce dynamically consistent solutions. We show how methods developed for uncertainty quantification (emulators and history matching) can be used to combine data and model efficiently to produce both dynamically consistent reconstructions and calibrated models for prediction.

Mike Christie, Heriot-Watt University
Multi-Objective Stochastic Sampling Algorithms for Uncertainty Quantification in Porous Media
There are various approaches to uncertainty quantification for porous media simulations: gradient methods with a linearisation about the MAP, ensemble methods such as the ensemble Kalman filter, and Markov chain Monte Carlo methods. Another approach is the use of stochastic samplers such as genetic algorithms, differential evolution or particle swarm optimisation. In this talk, we will look at multi-objective versions of stochastic optimisers and compare the convergence rate of both single- and multi-objective algorithms for synthetic and real problems.
Paul Constantine, Colorado School of Mines
Active Subspace Methods for High-Dimensional Sensitivity Analysis
Science and engineering models typically contain multiple parameters representing input data, e.g., boundary conditions or material properties. The map from model inputs to model outputs can be viewed as a multivariate function, and one is naturally interested in how the function changes as the inputs are varied. However, if computing the model output for a given set of inputs is expensive, then exploring the high-dimensional input space is infeasible. Such issues arise in the study of uncertainty quantification, where uncertainty in the inputs begets uncertainty in model predictions. Fortunately, many practical models with high-dimensional inputs vary primarily along only a few directions in the space of inputs. I will describe a method for detecting and exploiting these directions of variability to construct a response surface on a low-dimensional linear subspace of the full input space; detection is accomplished through analysis of the gradient of the model output with respect to the inputs, and the subspace is defined by a projection. I will show error bounds for the low-dimensional approximation that motivate computational heuristics for building a kriging response surface on the subspace. As a demonstration, I will apply the method to a nonlinear heat transfer model on a turbine blade, where a 250-parameter model for the heat flux represents uncertain transition to turbulence of the flow field. I will also discuss the range of existing applications of the method, including the motivating application from Stanford’s NNSA PSAAP center, and the future research challenges.

Simon Cotter, University of Manchester
Parallel Adaptive Importance Sampling: Parallelism PAIS
Standard MCMC algorithms can be easily parallelised, simply by executing many independent chains across a cluster. However, since a priori we do not have much information about the target distribution, each chain does not start in equilibrium and as such must be “burned in”. Since it takes each chain the same amount of time to burn in, this process takes the same amount of time no matter how many processors are used. This motivates a method which allows communication across the chains. In this talk, we will introduce the Parallel Adaptive Importance Sampling (PAIS) algorithm, which incorporates function-space MALA proposals and optimal transport resampling methods from particle filtering in order to construct a proposal regime that uses information from the previous state of the chain on all processors, leading to better mixing properties. We will present some very basic preliminary results.
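The gradient-based detection step in Constantine's abstract can be illustrated with a minimal sketch. The toy model, dimensions and sample counts below are hypothetical assumptions for illustration, not taken from the talk: one estimates the matrix C = E[∇f ∇fᵀ] by sampling gradients and reads the active subspace off the dominant eigenvectors of C.

```python
import numpy as np

# Hypothetical toy model f(x) = exp(w.x): it varies only along the
# single direction w, so its active subspace is one-dimensional.
rng = np.random.default_rng(0)
m = 10                                  # input dimension
w = rng.standard_normal(m)
w /= np.linalg.norm(w)

def grad_f(x):
    # Gradient of f(x) = exp(w @ x) with respect to x.
    return np.exp(w @ x) * w

# Monte Carlo estimate of C = E[grad f grad f^T] over the input box.
X = rng.uniform(-1.0, 1.0, size=(500, m))
C = sum(np.outer(grad_f(x), grad_f(x)) for x in X) / len(X)

# The dominant eigenvectors of C span the active subspace; a large
# spectral gap after the first eigenvalue signals that a single
# direction suffices for this toy model.
eigvals, eigvecs = np.linalg.eigh(C)
active_dir = eigvecs[:, -1]             # eigh sorts eigenvalues ascending
```

On this toy problem the recovered direction coincides (up to sign) with w; the kriging response surface described in the abstract would then be built on the projection of the inputs onto `active_dir`.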

Jocelyne Erhel, INRIA Rennes Bretagne Atlantique
Uncertainty Quantification and High Performance Computing for Flow and Transport in Porous Media
Stochastic models use random fields to represent heterogeneous porous media. Quantities of interest, such as macro-dispersion, are then analysed from a statistical point of view. In order to obtain asymptotic values, large-scale simulations must be performed using high performance computing. Non-intrusive methods are well suited for two-level parallelism. Indeed, for each simulation, parallelism is based on domain decomposition for generating the random input and the flow matrix, parallel linear solvers and a parallel particle tracker. In addition, several simulations, corresponding to randomly drawn samples, can run in parallel. The balance between these two levels depends on the resources available. The software PARADIS implements flow and transport with random data and the computation of macro-dispersion. Simulations run on supercomputers with large 3D domains.

Roger Ghanem, University of Southern California
High Performance Algorithms for PDEs with Spatially Varying Stochastic Parameters
The solution of PDEs with stochastic parameters typically yields systems of equations whose dimension grows exponentially with the resolution of the spatial statistics of the parameters. I will review standard approaches and present recent results to address this curse of dimensionality, stressing algorithmic challenges and resolutions, and surveying software implementations that facilitate such explorations.

Omar Ghattas, University of Texas at Austin
Large-Scale Bayesian Inversion with Application to Antarctic Ice Sheet Flow
The flow of ice from the interior of polar ice sheets is the primary contributor to projected sea level rise. One of the main difficulties faced in modelling ice sheet flow is the uncertain, spatially varying boundary condition that describes the resistance to sliding at the base of the ice. Satellite observations of the surface ice flow velocity, along with a model of ice as a creeping, incompressible, non-Newtonian fluid, can be used to infer the uncertain basal sliding parameter field. We address this ill-posed inverse problem using Bayesian methods, which provide a systematic framework to infer not only the basal sliding parameters, but also their associated uncertainty. Unfortunately, the solution of Bayesian inverse problems remains prohibitive for expensive models and high-dimensional parameters. However, observational data, while large-scale, typically provide only sparse information on the model parameters. We exploit this property to construct low-rank approximations of the Hessian of the negative log-likelihood, leading to tractable Gaussian approximations of the posterior probability for large-scale ice sheet flow inverse problems. These Gaussian approximations serve as proposals for MCMC sampling. Moreover, they form the basis for the solution of optimal experimental design problems that seek the optimal locations of new observations to reduce the uncertainty in the inferred model parameters. The resulting computations scale independently of the parameter, state, and data dimensions. This work is joint with Alen Alexanderian, Tan Bui-Thanh, Tobin Isaac, James Martin, Noemi Petra, and Georg Stadler.

Mark Girolami, University of Warwick
Differential Geometric Simulation Methods for Uncertainty Quantification in Large Scale PDE Systems
Characterising uncertainty, from a Bayesian perspective, in computer models comprised of large-scale and stiff systems of partial differential equations (PDEs) becomes challenging when fine meshes and distributed parameters have to be defined and inferred in an inverse problem setting.
This talk presents recent work which exploits the use of geodesic flows on the manifold of statistical models defined by the PDE sensitivity equations to sample from the desired Bayesian posterior distribution over all unknowns. The talk will consider the role that deterministic approximations have to play in this scheme and will illustrate the ideas presented by considering system models of elliptic and parabolic PDEs.

Björn Gmeiner, University of Erlangen-Nuremberg
Massively Parallel Multilevel Monte Carlo
The sampling in Monte Carlo based methods is inherently parallel. However, for large-scale finite element simulations, it is inevitable to additionally parallelise each sample computation. Multilevel Monte Carlo (MLMC) methods operate on different resolutions of the problem, which leads to drastically different computational cost for each sample. The cost of a sample can often be estimated in advance; however, it is in general not known a priori how many samples have to be drawn on each level. This leads to a dynamic load-balancing problem. This presentation will first focus on the parallelisation of large FE problems using hierarchical hybrid multigrid. This solver is well suited to implement an efficient parallel MLMC method. We will introduce and compare different scheduling strategies in order to solve the load-balancing problem. The performance of the method is analysed by numerical experiments on massively parallel high performance computing clusters.

Annika Lang, Chalmers University
Gaussian Random Fields on the Sphere: One Class of Random Fields on Manifolds
Hölder continuity and differentiability of random fields on manifolds are, besides their interest for the mathematical theory itself, also of great interest in applications. In this talk we derive a version of the Kolmogorov-Chentsov theorem for random fields on manifolds and extend it to differentiability. Special emphasis will be given to the class of Gaussian random fields on the sphere. Besides regularity results, we discuss for this type of random field convergence results for approximations and applications to stochastic partial differential equations.

Kody Law, KAUST
Dimension-Independent, Likelihood-Informed (DILI) MCMC Sampling Algorithms for Bayesian Inverse Problems
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting the intrinsic low dimensionality of the likelihood function, we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well defined on function space. Posterior sampling in a nonlinear inverse problem and a conditioned diffusion process are used to demonstrate the efficiency of these dimension-independent, likelihood-informed samplers.
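The multilevel Monte Carlo telescoping estimator at the heart of the MLMC abstracts above can be sketched in a few lines. This is a hedged toy illustration, not the implementation discussed in any of the talks: the level-dependent perturbation below is a hypothetical stand-in for the discretisation error of a finite element solve on mesh level l.

```python
import numpy as np

rng = np.random.default_rng(1)

def P(omega, level):
    # Toy level-dependent quantity of interest: the perturbation
    # 2^-level * sin(omega) stands in for discretisation error that
    # shrinks as the mesh (level) is refined.
    return omega**2 + 2.0**(-level) * np.sin(omega)

def mlmc_estimate(num_levels, samples_per_level):
    # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    est = np.mean(P(rng.standard_normal(samples_per_level[0]), 0))
    for l in range(1, num_levels):
        omega = rng.standard_normal(samples_per_level[l])
        # Coupling: the SAME samples feed both levels, so the
        # correction P_l - P_{l-1} has small variance and needs far
        # fewer samples on the expensive fine levels.
        est += np.mean(P(omega, l) - P(omega, l - 1))
    return est

# Decreasing sample counts per level mirror the MLMC cost balance;
# here E[omega^2] = 1 for omega ~ N(0,1).
est = mlmc_estimate(4, [20000, 2000, 500, 100])
```

The scheduling problem in Gmeiner's abstract arises precisely because the per-sample cost grows with the level while the required sample counts (here fixed in advance for simplicity) are normally chosen adaptively from estimated level variances.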
Geoff Nicholls, University of Oxford
Lateral Transfer of Traits on Phylogenetics: Likelihood Evaluation via Large Sparse Linear Systems of DEs
We show how to carry out sample-based Bayesian inference for an inverse problem in which the parameters of a large sparse system of linear differential equations are a function of an unknown tree graph. The forward problem is an initial value problem structured by the tree. The tree must be reconstructed from noisy and indirect measurements of the system state at a snapshot in time. It is known that complex evolutionary traits may transfer between contemporary species. This “lateral transfer” of traits contrasts with vertical inheritance. We extend the binary stochastic-Dollo trait-evolution model to allow for the lateral transfer of traits on a tree with L leaves. The data are binary L-component vectors recording the presence or absence of traits across leaves. We show that the number of occurrences of each of these site patterns has a Poisson distribution. The O(2^L)-component vector of site-pattern means is given as the solution of a coupled sequence of systems of sparse linear differential equations, the largest of which has dimension O(2^L). For several problems of real interest, with up to around 20 taxa, brute-force methods are sufficient. We illustrate our methods on language trees with possible word borrowing and discuss extensions.

Stephen Sain, National Center for Atmospheric Research, Boulder, CO
Impact of Model Resolution for Regional Climate Experiments
Understanding the gain from increased model resolution and the interaction of resolution with model components is becoming an increasingly important aspect of climate modelling. In this talk, I will examine two different analyses of regional climate model experiments. The first focuses on monthly precipitation and understanding the interaction between model resolution and convective parameterizations. The second examines an approach for quantifying the added value of increased resolution associated with regional climate modelling and other downscaling approaches. In both cases, I will discuss connections to spatial statistics and opportunities for high-performance computing to facilitate such analyses.


Gavin Shaddick, University of Bath
Challenges in the Integration of Air Pollution Estimates from Multiple Data Sources and Methods
Air pollution is an important determinant of health. There is convincing, and growing, evidence linking the risk of disease and premature death with exposure to fine particulate matter (PM2.5) and ozone (O3). The public health burden of present exposure is substantial. Recently published Global Burden of Disease assessments indicated that ca. 3.2 million premature deaths per year and 3.1% of the global disease burden could be attributed to ambient particulate matter pollution, placing it among the top health risk factors globally. In order to estimate health risks and inform policy, there is a need for accurate measures of the air pollution that might be experienced by populations. Although ground-level air pollution monitoring is prevalent in areas such as Western Europe and the USA, information is sparse in many other parts of the world, and alternatives must be developed. One approach that lends itself to this global setting is to integrate data from surface monitoring of air quality, atmospheric transport models and satellite remote sensing in order to produce integrated estimates of population exposures to air pollution. A Bayesian hierarchical model framework provides a natural setting in which to combine information from different sources, measured at varying spatial and temporal scales, together with associated uncertainties. This approach results in probability distributions for all predictions, which provides a straightforward method of uncertainty quantification, particularly important for estimating PM2.5 concentrations in data-limited regions.

Daniel Simpson, NTNU Trondheim
Barriers to Scalable Spatial Statistics
In this talk, I will discuss the barriers faced when trying to scale up current methods in spatial statistics.
Tim Sullivan, University of Warwick
Brittleness and Robustness of Bayesian Inference in Complex Systems
The flexibility of the Bayesian approach to uncertainty, and its notable practical successes, have made it an increasingly popular computational tool for uncertainty quantification. The scope of application has widened from the finite sample spaces considered by Bayes and Laplace to very high-dimensional systems, or even infinite-dimensional ones such as PDEs. It is natural to ask about the accuracy of Bayesian procedures from several perspectives: e.g., the frequentist questions of well-specification and consistency, or the numerical analysis questions of stability and well-posedness with respect to perturbations of the prior, the likelihood, or the data. This talk will outline positive and negative results, both old and new, about the accuracy of Bayesian inference. There will be a particular emphasis on the consequences for high- and infinite-dimensional complex systems. In particular, for such systems, subtle details of geometry and topology play a critical role in determining the accuracy or deep instability of Bayesian procedures.

Daniel Tartakovsky, University of California at San Diego
Uncertainty Quantification in Nonlinear Models of Multiphase Flow
Nonlinear parabolic equations with uncertain (random) coefficients play an important role in science and engineering, including the geosciences, where they are used to describe multiphase flow in heterogeneous porous media. We review several alternative probabilistic approaches for uncertainty quantification (UQ) in such problems, such as global stochastic collocation (SC) strategies, the current method of choice in the UQ community, and other techniques based on a spectral decomposition of state variables in the probability space. We demonstrate that the performance of SC is strongly tied to the way the stochastic properties of the random input parameters affect the regularity of the system's stochastic response in the probability space. If random input fields have low variance and large correlation lengths, SC strategies are competitive against alternative uncertainty quantification methods, such as Monte Carlo simulations (MCS). Increasing variance affects the regularity of the stochastic response, requiring higher-order quadrature rules to accurately approximate the moments of interest, thus increasing the overall computational cost beyond that of MCS. We develop reduced-complexity models, which yield closed-form semi-analytical expressions for the single- and multi-point probability density functions (PDFs) of the state variables. These solutions enable us to investigate the relative importance of uncertainty in various input parameters (e.g., hydraulic soil properties) and the effects of their cross-correlation. Our simulation results demonstrate that the reduced-complexity models provide conservative estimates of predictive uncertainty.
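The collocation-versus-sampling trade-off described above can be illustrated in one stochastic dimension. This is a hedged toy sketch; the smooth response u and every parameter below are hypothetical assumptions, not the models from the talk. For a smooth response, a handful of Gauss-Hermite collocation points beats a large Monte Carlo ensemble.

```python
import numpy as np

rng = np.random.default_rng(3)

def u(xi):
    # Hypothetical smooth stochastic response of a model with one
    # standard Gaussian random input xi.
    return np.exp(0.3 * xi)

# Exact mean: E[exp(s*xi)] = exp(s^2/2) for xi ~ N(0,1).
exact = np.exp(0.3**2 / 2)

# Stochastic collocation: probabilists' Gauss-Hermite quadrature
# with only 5 deterministic "model runs".
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
sc_mean = np.sum(weights * u(nodes)) / np.sqrt(2 * np.pi)

# Monte Carlo needs many more model runs for comparable accuracy.
mc_mean = np.mean(u(rng.standard_normal(10000)))
```

When the response loses regularity (the high-variance regime in the abstract), the rapid quadrature convergence breaks down, which is exactly when sampling-based methods regain the advantage.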


Aretha Teckentrup, Florida State University
Multilevel Markov Chain Monte Carlo Algorithms for Uncertainty Quantification
The parameters in mathematical models for many physical processes are often impossible to determine fully or accurately, and are hence subject to uncertainty. By modelling the input parameters as stochastic processes, it is possible to quantify the uncertainty in the model outputs. Based on the information available, a prior distribution is assigned to the input parameters. If, in addition, some dynamic data (or observations) related to the model outputs are available, a better representation of the parameters can be obtained by conditioning the prior distribution on these data, leading to the posterior distribution. In most situations, the posterior distribution is intractable in the sense that exact sampling from it is unavailable, and Markov chain Monte Carlo (MCMC) methods are hence frequently used. However, in large-scale applications, where the number of input parameters is typically very high and the computation of the likelihood very expensive, conventional MCMC methods quickly become infeasible. In this talk, we therefore develop a new multilevel version of a Metropolis-Hastings algorithm, based on a hierarchy of model discretisations. The new multilevel algorithm is generally applicable, and independent of the underlying mathematical model. For a typical problem in subsurface flow, we will demonstrate the gains with respect to conventional MCMC that are possible with this new approach, and provide a full convergence analysis of the new algorithm.
Clayton Webster, Oak Ridge National Laboratory
A Hierarchical, Multilevel Stochastic Collocation Method for Adaptive Acceleration of PDEs with Random Input Data
The bulk of the computational cost of extreme-scale stochastic simulations is typically associated with linear or nonlinear iterative solvers, and the convergence of such methods can be dramatically improved by constructing hierarchical approximations that reuse and recycle information in the probabilistic domain. As such, one aim of this talk is to present a multilevel sparse grid stochastic collocation (MLSC) method that produces a hierarchical sequence of interpolants, where, at each level, one introduces new sample points. Taking advantage of the hierarchical structure, we build new iterates and improved preconditioners at each level by using the interpolant from the previous level. We also provide a rigorous convergence analysis of the fully discrete problem and demonstrate the increased computational efficiency, as well as bounds on the total number of iterations used by the underlying deterministic solver.

Abstracts for Posters

Georgios Arampatzis, University of Crete
Goal-Oriented Sensitivity Analysis for Lattice Kinetic Monte Carlo Simulations
We propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, and in particular for lattice kinetic Monte Carlo. The novelty of our construction is that the sensitivity method depends on the targeted observables, hence the name “goal-oriented”, and it is obtained as the solution of an optimization problem. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples of diffusion-reaction lattice models that the proposed goal-oriented algorithm can be two orders of magnitude faster than existing algorithms for spatial KMC.

Daniele Bigoni and A.P. Engsig-Karup, both Technical University of Denmark, and Youssef Marzouk, MIT
Spectral Tensor-Train Decomposition for Low-Rank Surrogate Models
The construction of surrogate models is very important as a means of acceleration in computational methods for uncertainty quantification. In particular, surrogates can be used for the forward propagation of uncertainty and the solution of inference problems when the forward model is particularly expensive, as for example in computational fluid dynamics (CFD). The low-rank approximation of high-dimensional functions has become increasingly popular in recent years, thanks to advances in the field of tensor decomposition. We use the tensor-train decomposition as a compressed format for the representation

of discrete high-dimensional tensors. This is constructed using a deterministic sampling method, that leads to a linear scaling of the computational and memory complexity, thus addressing the curse of dimensionality. The theory of classical polynomial approximation is then used in order to construct the surrogate, attaining spectral convergence on smooth functions. The methods hinges on the theory on the decomposition of non-symmetric kernels in L2 spaces. The main results of this work highlights that the presented method will work for a wide set of functions, namely the H¨older continuous functions with exponent > 1/2. The efficiency of the method is strongly affected by the order of the input dimensions of the function. This is a well known problem in tensor-train decomposition and here we propose for the first time a strategy to tackle it. This leads to a robust, efficient and accurate algorithm for the construction of surrogate functions in high-dimensional spaces. The method has been tested on the Genz functions with d ∈ [10, 200], confirming its spectral convergence. Ongoing works are investigating its application on CFD problems for the construction of full-field surrogates in the fields of coastal engineering and computational geoscience. The open-source software is made available with examples at Evangelos Evangelou, University of Bath, and Vasileios Maroulas, University of Tennessee Filtering and Estimation for Dynamic Spatiotemporal Processes We consider a latent dynamic spatiotemporal process whose temporal dynamics evolve according to an autoregressive process of order one. Based on noisy data, we focus on the problem of finding online the best estimate of the process and estimating the associated parameters. For the process estimation, we employ a novel particle filtering algorithm based on an approximation of the optimal filtering distribution by a skewed normal density. 
Some of the model parameters are estimated by augmenting the aforementioned particle filter with a Gibbs sampler and updating certain sufficient statistics. These sufficient statistics depend on the spatial correlation matrix. The spatial correlation, in turn, is estimated by a novel online implementation of an empirical Bayes method which makes use of the particle filter and the Gibbs samples. Theoretical and simulation results verify the accuracy and the robustness of our algorithm.

David Titley-Peloquin, CERFACS, Toulouse
Stochastic Conditioning of Matrix Functions
How sensitive is a matrix function F : Ω ⊂ R^{m×n} → R^{p×q} to perturbations in its input? This is a fundamental question in numerical linear algebra. Sensitivity analyses are usually performed using condition numbers, which lead to asymptotic worst-case bounds on ‖F(A + H) − F(A)‖_F/‖H‖_F that can be very pessimistic in practice. Here we are interested in random perturbations. We attempt to quantify some of the properties of ‖F(A + H) − F(A)‖_F, where F is Fréchet differentiable, A ∈ Ω is fixed, and the elements of H are random variables following various distributions. We define a stochastic condition number that measures the typical sensitivity (in a well-defined sense) of F to random noise in its input. We obtain a bound on the stochastic condition number that can be computed efficiently using small-sample estimation techniques. Numerical experiments show the effectiveness of the resulting stochastic error estimate. The poster is based on joint work with Serge Gratton (ENSEEIHT-IRIT and CERFACS, Toulouse).
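The quantity ‖F(A + H) − F(A)‖_F/‖H‖_F under random perturbations can be illustrated by a small-sample Monte Carlo sketch (an editorial toy, not the poster's estimator: the function F(A) = A², the Gaussian perturbation model, and all names here are this sketch's own assumptions):

```python
# Schematic small-sample estimate of the typical sensitivity of a
# matrix function to random perturbations, using pure-Python matrices.
import math
import random

def frob(M):
    """Frobenius norm of a list-of-lists matrix."""
    return math.sqrt(sum(x * x for row in M for x in row))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def F(A):
    """Example matrix function: F(A) = A^2 (stands in for a general F)."""
    return matmul(A, A)

def stochastic_sensitivity(A, eps=1e-6, samples=200, seed=0):
    """Average of ||F(A+H) - F(A)||_F / ||H||_F over small random Gaussian H."""
    rng = random.Random(seed)
    n, FA = len(A), F(A)
    total = 0.0
    for _ in range(samples):
        H = [[eps * rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
        AH = [[A[i][j] + H[i][j] for j in range(n)] for i in range(n)]
        FAH = F(AH)
        D = [[FAH[i][j] - FA[i][j] for j in range(n)] for i in range(n)]
        total += frob(D) / frob(H)
    return total / samples

I2 = [[1.0, 0.0], [0.0, 1.0]]
```

For F(A) = A² the Fréchet derivative is L(A, H) = AH + HA, so at A = I every perturbation direction is stretched by a factor of 2 and the Monte Carlo average lands very close to 2; the point of the small-sample idea is that a handful of random directions already reveals this typical sensitivity.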


List of Participants

Sondipon Adhikari, Swansea University
Karim Anaya-Izquierdo, University of Bath
Georgios Arampatzis, University of Crete
Daniele Bigoni, Technical University of Denmark
Peter Challenor, University of Exeter
Jon Chesterfield, University of Bath
Mike Christie, Heriot-Watt University
Paul Constantine, Colorado School of Mines
Simon Cotter, University of Manchester
Tim Dodwell, University of Bath
Allan Engsig-Karup, Technical University of Denmark
Jocelyne Erhel, INRIA Rennes
Evangelos Evangelou, University of Bath
Melina Freitag, University of Bath
Roger Ghanem, University of Southern California
Omar Ghattas, The University of Texas at Austin
Mark Girolami, University of Warwick
Björn Gmeiner, University of Erlangen-Nuremberg
Horacio Gonzalez, University of Bath
Ivan Graham, University of Bath
Alex Griffiths, University of Bath
Samuel Johnson, DNV-GL
Tatiana Kim, University of Bath
Dinesh Kumar, Vrije Universiteit Brussels
Annika Lang, Chalmers University of Technology
Kody Law, KAUST
Finn Lindgren, University of Bath
Vasileios Maroulas, University of Tennessee
Paul Milewski, University of Bath
Geoff Nicholls, University of Oxford
Nathan Owen, University of Exeter
Natalya Pya, University of Bath
Stephen Sain, National Center for Atmospheric Research
Rob Scheichl, University of Bath
Gavin Shaddick, University of Bath
Tony Shardlow, University of Bath
Daniel Simpson, NTNU Trondheim
Tim Sullivan, University of Warwick
Daniel Tartakovsky, University of California at San Diego
Aretha Teckentrup, Florida State University
David Titley-Peloquin, CERFACS Toulouse
Elisabeth Ullmann, University of Bath
Alison Walker, University of Bath
Clayton Webster, Oak Ridge National Laboratory
Simon Wood, University of Bath
Andrew Zammit Mangion, University of Bristol
Guannan Zhang, Oak Ridge National Laboratory
