Today saw the end of the workshop at MBI on Recent Advances in Statistical Inference for Mathematical Biology. It has been a very enjoyable and thought-provoking workshop – definitely well worth the visit. My own talk received a good number of questions and plenty of interesting discussion. It was definitely at the more ‘applied’ end of the talks given; many of the talks described new methodologies, and it is these that I found particularly useful.

Perhaps the most interesting feature to emerge from this workshop is the work on identifiability or estimability of parameters: it is the four talks most focussed on this topic that I will review very briefly below. The difference between the two terms is that non-identifiability is a structural issue with the model: no amount of additional data could help. Non-estimability is a feature of the model and the data together: the parameters cannot be estimated from the data at hand, but perhaps with different data they could be. This issue has certainly become an important concern in our own work: we meet situations in which the Markov chain is unable to provide meaningful estimates for one or more parameters. On one level, this is useful; indeed, it is one of the reasons why we are using these approaches: if we cannot estimate two parameters but could estimate (say) the ratio of the two, then we want to know that, and the joint posterior distributions give us that information. But in other cases it is holding us back: we have inference schemes that do not converge for one or more parameters, limiting our capacity to make scientific inductions, and we need good methods both to diagnose a problem and to suggest sensible resolutions.
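To make the structural case concrete, here is a minimal (and entirely made-up) Python sketch of non-identifiability: a toy model in which only the product of two parameters ever enters the fit, so no data set, however large, could separate them – exactly the flat ridge a Markov chain then wanders along.

```python
import numpy as np

# Minimal illustration of structural non-identifiability: the model only
# ever sees the product a*b, so no data can separate a from b.
x = np.linspace(0, 1, 10)
y = 1.2 * x                            # data generated with a*b = 1.2

def sse(a, b):
    """Sum-of-squares misfit for the toy model y = a*b*x."""
    return np.sum((y - a * b * x) ** 2)

# Very different (a, b) pairs with the same product fit identically,
# so the likelihood has a flat ridge along a*b = 1.2: the product is
# identifiable, the individual parameters are not.
print(sse(1.2, 1.0), sse(0.3, 4.0))
```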

Two talks discussed approaches to simulation based on the geometric structure of the likelihood space. Mark Transtrum’s talk considered Riemannian-geometric approaches to search and optimization. The solution space often describes a manifold in data coordinates that can have a small number of ‘long’ dimensions and many ‘narrow’ dimensions. The issue he was addressing is the long canyons of ‘good’ solutions that are difficult for a classical MCMC or optimization scheme to follow. Interestingly, this leads to the classical Levenberg-Marquardt algorithm, which allows optimal and rapid searching along the long dimensions – and Mark described an improvement to the algorithm. In discussions afterwards, he also mentioned that following geodesics along the narrow dimensions to the manifold boundary can help identify combinations of parameters that cannot be estimated well from the data. Mark’s paper is Transtrum, M.K. *et al.* 2011. Phys. Rev. E. **83**, 036701.
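As a toy illustration of the setting Mark described – not his improved algorithm, just a bare-bones hand-rolled Levenberg-Marquardt applied to the classic ‘sloppy’ two-exponential model, where the two rates trade off along a long, narrow canyon:

```python
import numpy as np

def lm_fit(residual, jacobian, theta0, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton steps with an
    adaptive damping parameter lam."""
    theta = np.asarray(theta0, dtype=float)
    cost = 0.5 * np.sum(residual(theta) ** 2)
    for _ in range(iters):
        r, J = residual(theta), jacobian(theta)
        A = J.T @ J + lam * np.eye(len(theta))    # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        new_cost = 0.5 * np.sum(residual(theta + step) ** 2)
        if new_cost < cost:                        # accept: towards Gauss-Newton
            theta, cost, lam = theta + step, new_cost, lam * 0.3
        else:                                      # reject: towards gradient descent
            lam *= 2.0
    return theta

# Toy sloppy model: a sum of two exponentials, the standard example of a
# manifold with one long and one narrow direction (the rates compensate).
t = np.linspace(0, 3, 20)
data = np.exp(-1.0 * t) + np.exp(-0.3 * t)         # noise-free, rates 1.0 and 0.3

def resid(th):
    return np.exp(-th[0] * t) + np.exp(-th[1] * t) - data

def jac(th):
    return np.column_stack([-t * np.exp(-th[0] * t), -t * np.exp(-th[1] * t)])

theta_hat = lm_fit(resid, jac, [2.0, 0.1])
```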

Similar work was described by Ben Calderhead, on inference for models with oscillatory dynamics, which leads to difficult multi-modal likelihood functions. The approach was also to use Riemannian-manifold MCMC, combined with running parallel chains at different temperatures that give different weights to the (difficult) likelihood relative to the (smooth) prior. The aim again is to follow difficult ridges in the solution space, while also being able to escape and explore other regions. Ben’s methodological paper is Girolami, M. and Calderhead, B. 2011. J. Roy. Stat. Soc. **73**: 123-214.
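Ben’s Riemannian-manifold machinery is well beyond a blog snippet, but the tempering half of the idea can be sketched in a few lines: parallel Metropolis chains in which the likelihood is down-weighted by a temperature parameter while the prior keeps full weight, with occasional state swaps between adjacent temperatures. This is a generic parallel-tempering sketch on a made-up bimodal likelihood, not Ben’s actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):
    return -0.5 * (x / 3.0) ** 2               # smooth N(0, 3^2) prior

def log_like(x):
    # toy bimodal likelihood standing in for a multi-modal oscillator fit
    return np.logaddexp(-0.5 * ((x - 2.0) / 0.3) ** 2,
                        -0.5 * ((x + 2.0) / 0.3) ** 2)

temps = [0.0, 0.1, 0.25, 0.5, 1.0]             # 0 = prior only, 1 = full posterior
state = [0.0] * len(temps)
cold = []                                       # samples from the beta = 1 chain

for it in range(20000):
    for i, beta in enumerate(temps):           # Metropolis update per tempered chain
        prop = state[i] + rng.normal(scale=1.0)
        logr = (beta * log_like(prop) + log_prior(prop)
                - beta * log_like(state[i]) - log_prior(state[i]))
        if np.log(rng.random()) < logr:
            state[i] = prop
    j = int(rng.integers(len(temps) - 1))      # propose swapping adjacent chains
    logr = (temps[j] - temps[j + 1]) * (log_like(state[j + 1]) - log_like(state[j]))
    if np.log(rng.random()) < logr:
        state[j], state[j + 1] = state[j + 1], state[j]
    if it >= 5000:
        cold.append(state[-1])

cold = np.array(cold)
```

The hot chains cross freely between the modes (the flattened likelihood has no deep valleys) and the swaps carry those crossings down to the cold chain, which on its own would be stuck in one mode.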

A very different approach was described by Subhash Lele. Here, the issue is diagnosing estimability and convergence of a chain using a simple observation: if you imagine ‘cloning’ the data, i.e. repeating the inference using two or more copies (N say) of your original data, then the more copies of the data you use, the more the posterior will concentrate around the maximum likelihood estimate. Fewer copies will weight the prior more. This means that if all is working well: (i) as N increases, the variance of the posterior should decrease; (ii) if you start with different priors, then as N increases, the posteriors should become more similar. If these do not happen, then you have a problem. The really nice thing about this approach is that it is very easy to explain and implement: methods based on Riemannian geometry are not for the faint-hearted and can only really be used by people with a strong mathematical background; data cloning methods are more accessible! Subhash’s papers on data cloning can be downloaded from his web site.
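Here is a small conjugate-normal sketch of the two diagnostics (my own toy example, not Subhash’s code): because the normal model has a closed-form posterior, the effect of cloning the data N times can be computed exactly, and both behaviours – shrinking posterior variance and agreement between different priors – drop out directly.

```python
import numpy as np

# Toy data-cloning check: n observations from a N(mu, 1) model.
# Cloning the data N times multiplies the likelihood's information by N.
data = np.array([1.2, 0.8, 1.5, 1.1, 0.9])

def posterior(prior_mean, prior_var, clones):
    """Closed-form posterior for mu under a N(prior_mean, prior_var) prior,
    with the data set repeated `clones` times (unit observation variance)."""
    n = clones * len(data)
    post_var = 1.0 / (1.0 / prior_var + n)
    post_mean = post_var * (prior_mean / prior_var + n * data.mean())
    return post_mean, post_var

for N in (1, 10, 100):
    m1, v1 = posterior(0.0, 10.0, N)    # vague prior
    m2, v2 = posterior(5.0, 2.0, N)     # very different prior
    print(N, m1, m2, v1)                # means converge to the MLE, variance -> 0
```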

Finally, another approach to identifiability was described by Clemens Kreutz. He described ways of producing confidence intervals for parameters by profiling: stepping an individual parameter through a range of fixed values and re-optimizing over the other parameters at each step. Although more computationally intensive, this looks useful for producing more reliable estimates of both parameter and model-fit variability. Clemens’s work is available at http://arxiv.org/abs/1107.0013.
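My understanding of the idea, sketched on a deliberately trivial straight-line model (the toy is mine, not Clemens’s): fix the parameter of interest on a grid, re-optimize the remaining parameters at each grid point, and keep the values whose profiled likelihood stays within a chi-square cutoff of the optimum.

```python
import numpy as np

# Profile-likelihood confidence interval for the slope of a straight line:
# fix the slope b, re-optimise the intercept a, and keep the slopes whose
# profiled fit stays within the chi-square cutoff of the best fit.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = 0.5 + 2.0 * x + rng.normal(scale=0.2, size=x.size)   # true slope 2.0

def neg2loglik(a, b, sigma=0.2):
    return np.sum(((y - a - b * x) / sigma) ** 2)

def profile(b):
    a_hat = np.mean(y - b * x)          # closed-form re-optimised intercept
    return neg2loglik(a_hat, b)

grid = np.linspace(1.0, 3.0, 401)
prof = np.array([profile(b) for b in grid])
inside = grid[prof - prof.min() < 3.84]  # 95% cutoff for one parameter
ci = (inside.min(), inside.max())
```

In higher dimensions the inner re-optimization is a full numerical fit rather than a one-line formula, which is where the extra computational cost comes from.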

There were many more applied talks too, which I very much enjoyed, covering a range of interesting applications and data. Barbel Finkenstadt gave a talk that included, in part, work carried out by Dafyd Jenkins, and I was filled with an up-welling of pride to see him doing so well! I also particularly appreciated Richard Boys’s honest attempt to build an inference scheme for a messy model and messy data, obtaining mixed results.

All-in-all, an enjoyable and interesting week, well worth the trip, and I look forward to following up on some interesting new methodologies.