Professor Hans Künsch (ETHZ)
Title : State space models, filtering and environmental applications
A state space model consists of a latent Markov process, called the state, together with partial and noisy observations. Such models are very flexible and have applications in many fields, from engineering (tracking problems) to finance (stochastic volatility), biology (stochastic reaction networks) and environmental sciences (atmospheric physics, petroleum reservoir modeling). The basic tasks one needs to be able to solve for these models are the computation of conditional distributions of the latent process given the observations (called data assimilation, or filtering and smoothing) and the estimation of unknown parameters in the model.
In the first lecture, I will present some examples and introduce the basic recursions for the conditional distributions of interest and for the likelihood. I will discuss the linear Gaussian case where the Kalman filter provides fairly explicit recursions and introduce the particle filter which is a recursive Monte Carlo approximation of the conditional density of the state at some time point given observations up to the same time.
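As a concrete illustration of the recursion, here is a minimal bootstrap particle filter for a toy linear Gaussian model; the model coefficients, noise scales, and particle count are illustrative assumptions, not material from the lectures. In this Gaussian case the Kalman filter would give the exact filtering distribution, which makes the model a convenient sanity check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear Gaussian model (all numbers here are illustrative assumptions):
#   state:       x_t = 0.9 x_{t-1} + v_t,   v_t ~ N(0, 1)
#   observation: y_t = x_t + w_t,           w_t ~ N(0, 0.5^2)
T, N = 50, 1000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal()
y = x + rng.normal(scale=0.5, size=T)

# Bootstrap particle filter: propagate particles through the state equation,
# weight them by the observation likelihood, then resample.
particles = rng.normal(size=N)
filter_mean = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = 0.9 * particles + rng.normal(size=N)  # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2         # observation log-density
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_mean[t] = np.sum(w * particles)                # approximates E[x_t | y_1:t]
    particles = particles[rng.choice(N, size=N, p=w)]     # multinomial resampling
```

The weighted particles before resampling give the Monte Carlo approximation of the filtering density; resampling keeps the weights from degenerating over time.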
In the second lecture, I will present refinements of the basic particle filter and discuss why the particle filter typically performs poorly in high dimensions. I will then present the Ensemble Kalman filter, which is typically inconsistent for non-Gaussian or non-linear Markovian dynamics but suffers less from sample depletion. Recent proposals that bridge the particle filter and the Ensemble Kalman filter will close the second lecture.
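To make the contrast with the particle filter concrete, here is a sketch of one analysis step of the stochastic (perturbed-observation) variant of the Ensemble Kalman filter; the dimensions, observation operator, and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# One analysis step of the stochastic (perturbed-observation) Ensemble Kalman
# filter for a linear observation y = H x + noise.  All dimensions and noise
# levels are illustrative assumptions.
d, m, N = 10, 3, 200              # state dim, obs dim, ensemble size
H = np.eye(m, d)                  # observe the first 3 state coordinates
R = 0.25 * np.eye(m)              # observation error covariance

x_true = rng.normal(size=d)
y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

# Forecast ensemble: truth plus unit noise (an assumption for the sketch)
ens = x_true + rng.normal(size=(N, d))

# Kalman gain built from the ensemble sample covariance
X = ens - ens.mean(axis=0)
P = X.T @ X / (N - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member with its own perturbed copy of the observation
y_pert = y + rng.multivariate_normal(np.zeros(m), R, size=N)
analysis = ens + (y_pert - ens @ H.T) @ K.T
```

Each ensemble member moves toward the observation by the same linear gain, so the update only uses means and covariances; this is the source of both the method's robustness and its inconsistency outside the linear Gaussian setting.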
In the third lecture, I will present some ideas for approximating the smoothing distribution, which is the conditional distribution of the state at some time given all observations up to some later time point, and for computing parameter estimates. Finally, I will discuss more general sequential Monte Carlo methods, which sample not from a fixed distribution but from a sequence of related distributions, and make the connection to approximate Bayesian computation (ABC).
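The ABC connection can be illustrated with the simplest member of that family, a rejection sampler that never evaluates a likelihood; the toy model, summary statistic, and tolerance below are all assumptions made for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# ABC rejection sampling for a toy model (all choices are assumptions):
# infer the mean theta of a N(theta, 1) sample using only forward simulation
# and the sample mean as summary statistic.
obs = rng.normal(loc=2.0, size=100)        # "observed" data, true theta = 2
s_obs = obs.mean()                         # summary statistic
eps = 0.05                                 # tolerance

accepted = []
while len(accepted) < 500:
    theta = rng.uniform(-5, 5)             # draw from the prior
    sim = rng.normal(loc=theta, size=100)  # simulate data from the model
    if abs(sim.mean() - s_obs) < eps:      # keep theta if summaries match
        accepted.append(theta)
posterior = np.array(accepted)
```

Sequential ABC methods replace the single tolerance with a decreasing sequence of tolerances, which is where the link to sequential Monte Carlo over a sequence of related distributions arises.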
Professor Art B. Owen (Stanford University)
Title : Lectures on Empirical Likelihood
Empirical likelihood is an inferential method that provides the benefits of a likelihood function without requiring the user to know a parametric family for the data. Because it is formed as a likelihood, it can be used to pool multiple data sources and can be combined with prior distributions and prior constraints on the parameters.
Empirical Likelihood Part I: the basics
This lecture introduces the idea of nonparametric maximum likelihood and the empirical likelihood. It considers empirical likelihood for the mean of a vector random variable. Empirical likelihood provides confidence intervals comparable to those obtained by the bootstrap, but without requiring any resampling. Instead of Monte Carlo, the computations require numerical optimization: a convex problem with a self-concordant criterion.
For multidimensional data, empirical likelihood chooses the shape of a confidence region, not just its size, something which is difficult to achieve by resampling.
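For a scalar mean, the convex dual of the empirical likelihood optimization is one-dimensional and can be solved by bisection. The following sketch (the data and the helper name `el_logratio` are illustrative assumptions, not from the lectures) computes the -2 log empirical likelihood ratio and returns infinity when the hypothesized mean falls outside the convex hull of the data.

```python
import numpy as np

def el_logratio(x, mu):
    """-2 log empirical likelihood ratio for the mean of scalar data x.

    Dual form: the optimal weights are w_i = 1 / (n (1 + lam (x_i - mu))),
    with lam chosen so that sum_i w_i (x_i - mu) = 0.
    """
    z = np.asarray(x, dtype=float) - mu
    n = len(z)
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                      # mu lies outside the convex hull
    # g(lam) = sum z / (1 + lam z) is strictly decreasing on the feasible
    # interval, so bisect for its unique root
    lo, hi = -1.0 / z.max() + 1e-10, -1.0 / z.min() - 1e-10
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.sum(z / (1.0 + mid * z)) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = 1.0 / (n * (1.0 + lam * z))
    return -2.0 * np.sum(np.log(n * w))    # ~ chi-square(1) at the true mean

rng = np.random.default_rng(3)
x = rng.exponential(size=50)               # skewed data, true mean 1 (assumption)
```

The statistic is zero at the sample mean and grows as the hypothesized mean moves away, so a confidence interval is the set of means with statistic below a chi-square quantile.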
Empirical Likelihood Part II: estimating equations
This lecture describes some extensions of empirical likelihood. The main tool is the use of estimating equations, which allows us to consider regressions, GLMs, and some time series models. It shows how to make use of known constraints on the parameters. Estimating equations also let one account for biased sampling, censoring, and truncation.
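As a small worked case of the estimating-equation formulation, taking m(x, theta) = 1{x <= theta} - p gives empirical likelihood for the p-th quantile, and there the optimal weights have a closed form (mass p spread over the observations at or below theta, 1 - p over the rest). The function name below is an illustrative assumption.

```python
import numpy as np

def el_quantile_logratio(x, theta, p=0.5):
    """-2 log empirical likelihood ratio for the p-th quantile of data x.

    Uses the estimating equation m(x, theta) = 1{x <= theta} - p.  The
    optimal weights put p/k on each of the k observations <= theta and
    (1 - p)/(n - k) on each of the others, giving a closed form.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.sum(x <= theta)                 # observations at or below theta
    if k == 0 or k == n:
        return np.inf                      # theta outside the data range
    return -2.0 * (k * np.log(n * p / k)
                   + (n - k) * np.log(n * (1 - p) / (n - k)))
```

This reduces to the binomial deviance of the sign test, a reminder that plugging different estimating equations into the same EL machinery recovers familiar procedures.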
Empirical Likelihood Part III: next steps
This lecture presents recent work and open problems in empirical likelihood. The emphasis is on methods for escaping the convex hull and for getting better calibration of confidence intervals. There are also connections to recent work in approximate Bayesian computation. It ends with some open problems.
Professor Simon Wood (University of Bath)
Title : Additive smooth modelling with reduced rank splines.
Generalized additive models are generalized linear models in which the linear predictor depends linearly on unknown smooth functions of predictor variables, and the statistical interest lies in making inferences about those functions. GAMs are widely used wherever it is useful to be able to specify a regression model in rather flexible terms. Relative to GLMs, the main extra technical difficulty introduced by GAMs is the need to estimate the smooth functions, including estimating how smooth they should be. These lectures will cover a framework for GAMs based on representing the smooth functions using reduced rank penalized regression splines. The term GAM will be taken to include varying coefficient models, structured additive regression models, geoadditive models, geographically weighted regression models, generalized additive mixed models, etc., all of which fall within the same computational and inferential framework. The lectures will cover basis-penalty smoothing, penalized likelihood and empirical Bayes views of the smoothing process, cross validation and likelihood estimation of the degree of smoothing, effective degrees of freedom, confidence interval estimation, testing and model selection, as well as model checking.
* Introduction to GAMs via some motivating examples presented as live data analysis, using R package mgcv.
* Basis penalty smoothing: bases, penalties, estimation and the Bayesian model of smoothing, smoothing parameter selection via GCV and REML, interval estimation.
* Building GAMs from basis-penalty smoothers. Generalization from univariate to additive to GAMs and GAMMs. Identifiability constraints.
* Toolbox of smoothers. Spline type smoothers. Rank reduction approaches. 1-D smoothers. Isotropic and tensor product smoothing in higher dimensions. Gaussian random effects as smoothers. Special smoothers for geographic data.
* Model checking and selection strategies.
* Advanced topics. Functional data analysis, correlated data, beyond exponential family.
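The basic machinery, penalized least squares on a reduced rank basis with GCV selection of the smoothing parameter, can be sketched in a few lines of Python. The truncated power basis, penalty, and all numbers below are illustrative stand-ins for mgcv's richer thin plate and tensor product constructions, not the lectures' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated data: a smooth signal plus noise (choices are assumptions)
n = 200
x = np.sort(rng.uniform(0, 1, n))
f = np.sin(2 * np.pi * x)                  # true smooth function
y = f + rng.normal(scale=0.3, size=n)

# Reduced rank basis: cubic truncated power basis with 10 interior knots
knots = np.linspace(0, 1, 12)[1:-1]
B = np.column_stack([np.ones(n), x, x**2, x**3] +
                    [np.clip(x - k, 0, None)**3 for k in knots])
D = np.diag([0.0] * 4 + [1.0] * len(knots))  # penalize only the spline part

def fit(lam):
    # Penalized least squares: minimize ||y - B beta||^2 + lam * beta' D beta
    beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    A = B @ np.linalg.solve(B.T @ B + lam * D, B.T)  # influence (hat) matrix
    edf = np.trace(A)                                # effective degrees of freedom
    rss = np.sum((y - B @ beta) ** 2)
    gcv = n * rss / (n - edf) ** 2                   # GCV score
    return beta, gcv

# Grid search for the GCV-optimal smoothing parameter
lams = 10.0 ** np.arange(-6, 4)
best = min(lams, key=lambda lam: fit(lam)[1])
beta, _ = fit(best)
fhat = B @ beta
```

The trace of the influence matrix is the effective degrees of freedom discussed in the lectures; as lam grows the fit shrinks toward a cubic polynomial and the edf falls toward 4.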
Here is a biased selection of references. The first gives an overview up until 2006.
Wood SN (2006) Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC.
Wood SN (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36.
Wood SN, MV Bravington, SL Hedley (2008) Soap film smoothing. Journal of the Royal Statistical Society (B) 70(5):931-955.
Augustin NH, M Musio, K von Wilpert, E Kublin, SN Wood, M Schumacher (2009) Modeling spatiotemporal forest health monitoring data. Journal of the American Statistical Association 104(487):899-911.
Marra G, SN Wood (2011) Coverage properties of confidence intervals for generalized additive model components. Scandinavian Journal of Statistics 39(1):53-74.
Wood SN, F Scheipl, JJ Faraway (2013) Straightforward intermediate rank tensor product smoothing in mixed models. Statistics and Computing 23:341-360.
Wood SN (2013) On p-values for smooth components of an extended generalized additive model. Biometrika 100(1):221-228.
Wood SN (2013) A simple test for random effects in regression models. Biometrika (online early).
[Timetable fragment: workshop sessions by Simon Wood, lectures by Art Owen and Hans Künsch (14h00-15h30), with a welcome tea, coffee and lunch breaks, a scientific committee meeting, and evening meals.]