# Evolution of Novelty and Diversity: an open research idea

Spurred by the conference I have been attending (see previous post), and specifically by work on the evolution of diversity and complexity, I have decided to post one of my unsuccessful research grant applications as an open research idea. This is an application to the Leverhulme Trust that did not get past the first stage a couple of years ago. I really like this idea, but will never have the time to pursue it by myself, nor am I likely to find a suitable funder. So I am putting the idea into the public domain: after all, it does nobody any good sitting on my hard disk. If you or someone you know would like to pick up the idea, collaborate with me, maybe even come to work with me (perhaps through a Marie Curie application?), or work completely independently of me, here it is.

#### Evolution of unbounded novelty and diversity using computer models of metabolism

Evolution has led to a continuous emergence of novel species, resulting in the diversity and complexity of life that we observe today. It is commonly presumed that the conditions set out by Darwin, of diversity, heredity and selection, are sufficient to explain this emergence. However, this cannot be tested in laboratory timescales, and computer simulations of evolution have been unsuccessful in producing an unending progression of novel, diverse and increasingly complex species, referred to as open-ended evolution (1). The formulation and validation of necessary and sufficient conditions for open-ended evolution is one of the biggest unsolved problems in biology (2).

We aim to address this research gap. Our hypothesis is that to obtain open-ended evolution, there must be positive feedback from the development of novelty in one species leading to the construction of new niches for other species to exploit. We propose that this positive feedback is a missing component from the current formulation of the theory of evolution. We will test this hypothesis using a computer evolution approach, by building a simulation of evolvable artificial single-cell organisms.

The novelty of our proposed approach is the use of real biochemistry to provide a rich and varied context for evolution. In the simulations, different ‘species’ of ‘organism’ will be distinguished because they possess different sets of enzymes. To survive and reproduce, organisms will be required to produce certain quantities and proportions of key chemicals: fatty acids, amino acids, nucleic acids and carbohydrates. Organisms will live in spatial patches, grow on available nutrients, and die to release chemicals that can be reused by other organisms. Fitness in the simulations will be identical to fitness in real biology: species that are better able to survive and reproduce are fitter than those that are not. Diversity will be driven by the gain or loss of enzymes, enabling organisms to use different resources and/or manufacture different biochemicals.

Our approach is radically different from previous attempts to evolve open-ended novelty. Laboratory approaches, commonly using the bacterium Escherichia coli, have been successful in evolving novel carbon source utilization (3) and diversification into two sub-types (4, 5). These results are exciting. However, experimental approaches are fundamentally limited because it is impossible to study millions of years of evolution in a laboratory setting. We cannot make broader generalizations from these results.

Computer simulations allow study of evolution on longer timescales and under more focussed conditions than is possible in the laboratory. The most celebrated examples are Tierra (6) and Avida (7). However, these do not exhibit open-ended evolution, but instead converge to a small number of species (8). Other approaches evolve models that already contain complex building blocks, such as neural controllers or locomotive mechanisms (9, 10). Even though these approaches have evolved novelty, they start with considerable granularity and complexity. It is not clear to what extent these systems allow for unbounded novelty beyond the complexity inherent in the components. Moreover, all of these approaches lack the reusability of compounds and functional plasticity of enzymes present in real chemistry, which underpins biological evolution.

Although it is tempting to perceive the diversity of life in terms of the range of plants and animals familiar to us, single-cell organisms (bacteria and archaea) account for over 95% of genetic diversity (11). Even the bacterial species E. coli contains more genetic diversity than that which distinguishes humans from social amoebae (12). Single-cell organisms differ not by their developmental complexity (limbs, organs etc.), but their metabolic complexity: their capacity to use different biochemical sources to provide energy and reproduce. Thus we propose that it is sufficient to consider the richness of the biochemical world of single-cell creatures to obtain open-ended evolution. The positive feedback required by our hypothesis arises because the emergence of a new metabolic pathway in one species, leading to biosynthesis of novel compounds, provides opportunities for other species to evolve to exploit those compounds.

The use of an appropriately rich model chemistry is central for a successful simulation environment. A common approach is to use an artificial chemistry, such as string chemistries (13), or rich chemistries using molecular graphs (14, 15). A novelty of our approach is the use of real biochemistry. This bridges the gap between previous unrealistic computational approaches (6,7), and real-life evolution (3-5). There are three advantages of moving closer to the biology. First, we know that biology has the open-ended evolution property that we seek to reproduce. Second, there are now considerable data available that we can utilize. And third, the results obtained will be more applicable to the biological world.

Preliminary research from DJS’s group on modelling evolution has focussed on the evolution of transcription regulation. We have found that networks evolve repressor functions and hierarchical regulation to control energy usage (16-19) and that basal expression is necessary to obtain realistic network evolution (17). These results will inform the architecture of the transcription regulatory elements in the proposed simulations.

#### Technical Programme of Work

1. Compilation of biochemical compound and enzyme reaction database, to include all known biochemicals and reactions, using relevant sources including KEGG (20) and MetaCyc (21).
2. Compilation of data of free energies of formation for each biochemical compound. Measured free energies are available in databases including BRENDA (22), XPDB (23). Where measurements are unavailable, the group contribution method will be used to make suitable estimates (24, 25).
3. Development of evolvable system for simulation of a lineage of organisms with a given set of enzymes, and given any input (local concentrations of biochemicals). Ordinary differential equations will be used to model both the metabolic reactions, and the transcription control mechanisms associated with expression of relevant enzymes.
4. Development of spatial array that includes organisms and biochemical concentrations in each compartment, and appropriate levels of mixing between neighbouring spatial compartments.
5. Development of evolutionary timescale simulation, to include mutations in enzyme sets and regulatory control, competition for resources, growth and death, and thus selection of most successful strains.
6. Analysis for diversity and novelty in simulation results, using evolutionary activity statistics (8).
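A toy end-to-end sketch of steps 3–5 might look like the following. This is a deliberately minimal illustration, not the proposed system: the compounds, the enzymes "E1"–"E4", all rate constants, and the biomass-based fitness rule are hypothetical placeholders rather than data from KEGG or MetaCyc, a forward-Euler scheme stands in for a proper ODE solver, and a single mutating lineage stands in for the spatial, competing population.

```python
import random

# Toy reaction set: substrate -> product with a rate constant.
# All names and numbers are illustrative placeholders, not real
# biochemistry from KEGG/MetaCyc.
REACTIONS = {
    "E1": ("glucose", "pyruvate", 0.8),
    "E2": ("pyruvate", "acetate", 0.5),
    "E3": ("acetate", "biomass", 0.3),
    "E4": ("glucose", "biomass", 0.1),   # direct but inefficient route
}

def grow(enzymes, env, steps=200, dt=0.1):
    """Step 3: forward-Euler integration of mass-action ODEs.
    Returns the biomass produced by an organism with this enzyme set."""
    conc = dict(env, biomass=0.0)
    for _ in range(steps):
        for e in sorted(enzymes):
            sub, prod, k = REACTIONS[e]
            flux = min(k * conc[sub] * dt, conc[sub])  # no negative pools
            conc[sub] -= flux
            conc[prod] = conc.get(prod, 0.0) + flux
    return conc["biomass"]

def mutate(enzymes):
    """Step 5: mutation as gain or loss of a single enzyme."""
    return enzymes ^ {random.choice(list(REACTIONS))}  # toggle one enzyme

def evolve(generations=50, seed=0):
    """Step 5 continued: selection keeps the lineage with higher biomass."""
    random.seed(seed)
    env = {"glucose": 10.0, "pyruvate": 0.0, "acetate": 0.0}
    best = {"E4"}                        # ancestor: single wasteful pathway
    for _ in range(generations):
        child = mutate(best)
        if grow(child, env) >= grow(best, env):   # >= permits neutral drift
            best = child
    return best

if __name__ == "__main__":
    env = {"glucose": 10.0, "pyruvate": 0.0, "acetate": 0.0}
    winner = evolve()
    print(sorted(winner), round(grow(winner, env), 2))
```

Because acceptance requires fitness at least as high as the parent, neutral gains (an enzyme with no substrate yet) can accumulate and later combine into a more efficient glucose-to-biomass route, a small-scale version of the niche-construction feedback the proposal hypothesizes.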

#### References

1. Bedau, M.A. 2008. In S. Bullock et al. (eds.) Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, p. 750. MIT Press, Cambridge, MA.

2. Bedau, M.A. et al. 2000. Artificial Life 6: 363–376.

3. Blount, Z.D. et al. 2008. Proc Natl Acad Sci USA 105:7899-7906.

4. Rozen, D.E. and Lenski, R.E. 2000. Am Nat. 155: 24-35.

5. Rozen, D.E. et al. 2009. Ecol Lett. 12:34-44.

6. Ray, T.S. 1992. In Langton, C.G. et al. (eds.) Artificial Life II, pp. 371–408. Addison-Wesley, Redwood City, CA.

7. Ofria, C. and Wilke, C.O. 2004. Artificial Life 10:191-229.

8. Bullock, S. and Bedau, M.A. 2006. Artificial Life 12: 1–5.

9. Channon, A. 2001. In J. Kelemen & P. Sosik (Eds.). Advances in Artificial Life 2159 pp. 417-426. Springer-Verlag.

10. Turk, G. 2010. In Hellerman, H. et al. (eds.) Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems, pp. 496–503. MIT Press.

11. Ciccarelli F.D. et al. 2006. Science 311:1283-1287.

12. Lukjacenko, O. et al. 2010. Microb Ecol. 60: 708-20.

13. Hickinbotham, S. et al. 2010. In Hellerman, H. et al. (eds.) Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems, pp. 24–31. MIT Press.

14. Benko, G, et al. 2003. J Chem Inf Comput Sci, 43:1085–93.

15. Ullrich, A. et al. 2011. Artificial Life 17: 87-108.

16. Jenkins, D.J. and Stekel, D.J. 2010. Journal of Molecular Evolution 71: 128-40.

17. Jenkins, D.J. and Stekel, D.J. 2010. Journal of Molecular Evolution 70: 215-231.

18. Jenkins, D.J. and Stekel, D.J. 2009. Artificial Life 15: 259-91.

19. Stekel, D.J. and Jenkins, D.J. 2008. BMC Systems Biology 2:6.

20. Kanehisa, M. et al. 2008. Nucleic Acids Res. 36, D480-484.

21. Caspi, R. et al. 2008. Nucleic Acids Res. 36: D623–631.

22. Scheer, M. et al. 2011. Nucleic Acids Res. 39: D670-676.

23. Goldberg, R.N. et al. 2004. Bioinformatics 20: 2874-2877.

24. Jankowski, M.D. et al. 2008. Biophys. J. 95: 1487-99.

# Modelling Biological Evolution 2013: Conference Highlights

Over the last couple of days I have been attending the Modelling Biological Evolution conference at the University of Leicester organized by Andrew Morozov.

For me, the most interesting theme to have emerged is work on evolutionary branching: conditions under which polymorphisms (or even speciation) might arise. These were all talked about in the context of mathematical models (ODE-type formulations based on generalized Lotka-Volterra systems). The best talk I attended was by Andrew White (Heriot Watt University). He described various systems of parasite-host co-evolution, the most interesting of which demonstrated increases in diversity: a new host could emerge that was resistant to current parasites, following which a new parasite could emerge that would infect that host. He rather nicely linked that work to experimental work from Mike Brockhurst (University of York) on phage infections of bacteria showing similar patterns. The results could of course be interpreted at a speciation level, or, probably more fairly, at the level of molecular diversification (e.g. of MHC types in an immune system). What I really appreciated about this result is that it spoke to the idea that increased diversity can result through a positive feedback mechanism: diversification leads to new niches and thus the potential for further diversification. I have thought for some time that this is the most important mechanism that drives diversification / speciation in natural systems, and it was nice to see an example of the mechanism in action.

The other talk I particularly appreciated on the subject was by Claus Rueffler (University of Vienna). He spoke about a result on complexity and diversity in Doebeli and Ispolatov 2010 that also contains this feedback idea. This paper relies on a specific model to obtain its result on conditions for evolutionary branching. Rueffler demonstrated general conditions under which branching might take place that depend only upon the properties of the Hessian matrix associated with key parameters in model space. The important point is that the analysis is model-independent: it only considers the properties of the model forms needed to obtain the result.

Similar ideas were presented by Eva Kisdi (University of Helsinki). She focussed on models that include evolutionary trade-offs (e.g. between virulence and transmissibility): her point was that instead of choosing a function and analyzing its consequences, one could consider desired properties of a model (e.g. branching or limit cycles) and then use “critical function analysis” to derive conditions for possible trade-off functions that would admit the desired behaviour. Eva made the important point that many models make ad hoc choices of functions and thus lead to ad hoc results of little predictive value.

I think Eva’s point really touched on some of the weaknesses that emerged in many of the talks that I attended: there was a great deal of theory (some of which was very good), but very little interface with real biological data. I find this somewhat surprising: modelling in ecology and evolution has been around for very much longer than modelling in, say, molecular biology (where I currently work), and yet seems to be less mature. I think that the field would really benefit from far greater interaction between theoretical and experimental researchers. Ideally, models should be looking to generate empirically falsifiable hypotheses.

Perhaps the most entertaining talks were given by Nadav Shnerb and David Kessler (both Bar Ilan University). Nadav’s first talk was about power-law-like distributions observed in genus/species distributions. Core to his work is Stephen Hubbell’s neutral theory of biodiversity.
Nadav showed that distributions of the number of species within genera could be explained by a neutral model of radiation at the genus and species levels, coupled with extinction. Nadav’s most important point was that if you wish to argue that a certain observed trait is adaptive, then you have to rule out the null hypothesis that it could arise neutrally through mutation/drift. I hope that is something we addressed with regard to global regulators in gene regulatory networks in Jenkins and Stekel 2010. David also spoke about biodiversity distributions, showing that adaptive forces could explain biodiversity data (adaptive models are generally poor at this, due to the competitive exclusion that occurs in many of them) if the fitness trait is allowed a continuous rather than a discrete distribution.

Nadav’s second talk was about the first names of babies. This was very interesting – especially as I have a young family (and a daughter with a very old-fashioned name). He looked at the error distribution (easily shown to be binomial-like noise proportional to the square root of the mean) that is superimposed on a deterministic rise and fall in the popularity of a name over a 60-year period. His thesis was that an error distribution due to external events would be proportional to the mean (not its square root), and, as only 5 names in his data set (Norwegian names in the 20th Century) did not fit binomial noise, he ruled out external events (e.g. celebrity) as a major driver. The problem I have with this is that he didn’t rule out external events in the deterministic part of the data (e.g. initiating a rise in popularity of a name that then follows the deterministic feedback law he proposed).
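Nadav’s square-root argument is easy to check numerically. The sketch below is my own reconstruction of the statistical point, not his analysis: each year a hypothetical cohort of babies independently receives a given name with fixed probability, so the yearly count is binomial and its noise grows like the square root of the mean, whereas a multiplicative external shock (my stand-in for celebrity effects) produces noise proportional to the mean itself.

```python
import random
import statistics

random.seed(42)

def binomial_counts(n_babies, p, years=500):
    # each baby independently receives the name with probability p
    return [sum(random.random() < p for _ in range(n_babies))
            for _ in range(years)]

def shocked_counts(mean, rel_sd=0.3, years=500):
    # external events act multiplicatively on the whole cohort
    return [mean * random.gauss(1.0, rel_sd) for _ in range(years)]

small = binomial_counts(400, 0.05)     # mean count 20
large = binomial_counts(6400, 0.05)    # mean count 320, i.e. 16x larger
ratio_binom = statistics.stdev(large) / statistics.stdev(small)
ratio_shock = (statistics.stdev(shocked_counts(320))
               / statistics.stdev(shocked_counts(20)))
print(f"binomial sd ratio {ratio_binom:.1f}; "
      f"external-shock sd ratio {ratio_shock:.1f}")
```

Scaling the mean up 16-fold should scale the binomial noise roughly 4-fold (the square root of 16) but the shock noise roughly 16-fold, which is the signature used to discriminate the two hypotheses.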

# Invitation to Contribute a Talk to Modelling Biological Evolution 2013: Recent Progress, Current Challenges and Future Directions

The International Conference Modelling Biological Evolution 2013: Recent Progress, Current Challenges and Future Directions will be held at the University of Leicester on May 1-3, 2013. The topics sound very interesting, and are:

• Evolutionary Epidemiology of Infectious Disease
• Models of Somatic Evolution of Cancer
• Evolutionary Population Ecology
• Models in Behavioural Ecology and Sociobiology
• Solving Social Dilemmas
• Models of Evolution of Language
• Population and Quantitative Genetics

I have been invited to contribute, and will give a talk with the following title and abstract:

Title:

Adaptation for protein synthesis efficiency in natural and artificial gene regulatory networks

Authors:

Dorota Herman, Dafyd Jenkins, Chris Thomas and Dov Stekel

Abstract:

In this talk, we will summarize work on the use of mathematical and computer models to explore the evolution and adaptation of gene regulatory network architectures.

First, we will look at a natural system, the korAB operon in RK2 plasmids, which is a beautiful natural example of a negatively and cooperatively self-regulating operon. We use a biologically grounded mechanistic multi-scale stochastic model to compare four hypotheses for the action of the regulatory mechanism: increased robustness to extrinsic factors, decreased protein fluctuations, faster response-time of the operon and reduced host burden through improved efficiency of protein production. We find that the strongest impact of all elements of the regulatory architecture is on improving the efficiency of protein synthesis by reduction in the number of mRNA molecules needed to be produced, leading to a greater than ten-fold reduction in host energy required to express these plasmid proteins.

Next, we summarize results from two different artificial gene regulatory network models that are free to evolve: a fine-grained model that allows detailed molecular interactions, and a coarse-grained model that allows rapid evolution of many generations. A similar theme emerges in these models too: the control of cell energy and resources is a major driver of gene network topology and function. This is demonstrated in the fine-grained model by the emergence of biologically realistic mRNA and protein turnover rates that optimize energy usage and cell division time, and the evolution of basic repressor activities, and in the coarse-grained model by the emergence of global regulators keeping all cellular systems under negative control.

So far as I am aware, the conference organizers have extended their registration deadline, so places are still available for the next few days.

# Westminster Food and Nutrition Forum Keynote Seminar

Yesterday I attended the Westminster Food and Nutrition Forum Keynote Seminar: meeting the challenges of food security: implementing the Green Food Project, innovation, biodiversity and land use. The forum was extremely interesting and well attended, and pointed to a wide range of potential research opportunities within my lab and school. In this post I will summarize the talks as well as my notes allow. We will get transcripts and slides in due course, but I would rather get my thoughts into this blog in a timely fashion. Very brief summaries of the talks appear in black, and I have added some follow-up thoughts, often relevant to my place of work, in blue.

The first session was chaired by Lord Cameron of Dillington, who is clearly passionate about, and committed to, the food security cause. He introduced the food security agenda by stating that we could focus on national or local food security issues (different speakers did each), but wanted to give a global background. The main issues are:

• Current global population is 7B people, predicted to rise to 9B by 2050, but there is a deficit in food production growth relative to population growth.
• Moreover, world GDP is set to rise by 400%, leading to likely changes in diet – a shift from arable to livestock products creating an extra draw on resources.
• Climate change is likely to have two impacts: drought in major continents; loss of good agricultural land close to sea level.
• Pressure on world water supplies: 1.5M preventable deaths per annum due to lack of sanitized water; unsustainable aquifer use, especially in South Asia; but only 2% of rainfall is used for food in Africa (vs 40% in SE Asia or 70% in California), leading to good opportunities in Africa.

The talks in the session followed very well from this introduction, addressing many of the issues raised in greater detail.

Michael Winter focussed on the UK agenda, drawing on the National Ecosystem Assessment (2011). His main focus was a plea for strong cross-disciplinarity between ecosystems and agriculture. He described how “ecosystem services” is the language of nature conservationists (with one set of agendas) and “food security” is the language of agriculturalists (with a different set of agendas), but green food and sustainable agriculture actually need a broader vision that encompasses both.

Given Michael’s influence in Defra, I think that there is a good opportunity within my division (Agricultural and Environmental Sciences) and the University of Nottingham’s School of Biosciences as a whole to engage with this agenda in our research strategy.

John Ingram, who is NERC’s food security leader, started by stating that agriculture is the largest driver of land-cover change, with particular risks in the rate of biodiversity loss, the nitrogen cycle and climate change. He referred to the Feed the Future Report (2012). His main point was that the biggest impact could be through reduction of food waste. This occurs at all levels of the supply chain, but is difficult to measure because of many complicating factors. However, as an approximation, 4600 kcal of food are grown per person per day (pppd), but only 2000 kcal pppd are consumed. Losses are at every level: harvest losses, animal feed (for meat/milk production), distribution losses and consumer waste. In the UK/US most losses are at the home / municipal level, whereas in the developing world, most losses are on farm. Thus he concludes that a major research priority is not in the sciences, but in social research, to identify the public perceptions, attitudes etc. that lead to the high level of food waste.

This last point is a clear research opportunity. What is interesting is that, with a few exceptions (see Tara Garnett and her analysis below), most GFS awareness and research activity is taking place within science departments (for example the one I work in), but we actually need people who work in sociology and policy more actively engaged in this research. The divides between these sorts of departments in most traditional universities is very large, but if we can cross these divides, then we have an opportunity for very high impact cross-disciplinary research.

Chris Fawcett from AMEC focussed his talk on global water security. Water stress impacts 40% of the global population. The key challenge is to balance the needs for water between the competing demands of domestic, industrial and agricultural use, the latter using 70% of blue water globally. He referred to statistics on http://www.waterfootprint.org: that 1 kg of beef requires 15,400 litres of water, whereas 1 kg of wheat requires 1,800 litres. From the UK’s perspective, while we have sufficient water resources for our domestic use, we are the world’s 5th largest importer of “virtual water”, with 62% of our water footprint being from imported goods.

Chris mentioned a number of technology-based solutions, including catchment water stewardship, balance distribution (transfers and storage), re-use of treated waste water, urban storm water, surface-drip irrigation and drought- and salt- tolerant crops.

We have no water expertise within AES, and maybe this is a gap from both a research and a teaching perspective, given that AMEC and other such institutions are potential employers for our graduates.

Yuelai Lu spoke for the UK-China Sustainable Agriculture Innovation Network, SAIN. He spoke about the situation in China, which has seen continuous growth in grain production and especially livestock production over the last 30 years. He stated that he did not foresee a food security risk in China over the next 20/30 years (!), following which the Chinese population size would stabilize, and saw growth in that period coming through intensification. The challenges facing China are with an ageing labour force, loss of farmland, resource use inefficiency (especially N and P), greenhouse gas emissions and “institutional constraints”.

The next talk, from Phil Bloomer at Oxfam, provided a completely different perspective. He spoke about food security in terms of unsustainable inequality. His first point was that increased food prices have led to increased land prices, which in turn have led to an unprecedented scale of international land purchase. This has tended to occur in countries with the least governance, and has led to large-scale dispossession, resulting in poverty and food insecurity. He stressed the need for global policy to change from consultation of local people to consent of local people.

On the impact of climate change, he said that there was a need for financial assistance.

The impact of biofuels is on food price stability and on climate change, the latter because food displacement gives biofuels a net polluting effect. He believes that biofuel cut-backs are essential, including a complete phase-out of biofuels that compete with food.

He also mentioned the importance of small-holder farms: 2B people depend on small-holdings, with a labour force of 500M people, mostly women. He spoke of the efficiency of small-holdings (presumably in terms of yield per hectare?) because of the high level of labour involved.

The last talk in the first session was from Nick von Westenholz of the Crop Protection Association. Nick’s main point was about the importance of policy makers taking an evidence-based approach to decisions. This is particularly the case with GM, where the technologies are not even being investigated due to non-evidence-based concerns.

The second session was chaired by Barry Gardiner, MP. His emotive opening statement focussed on the mix of food, energy and water security and its impact on global justice.

James Marsden from Natural England spoke mainly about biodiversity loss. He referred to the Natural Environment White Paper (2011) and the EU target to halt biodiversity loss by 2020. He focussed on two measures: that at least 50% of SSSIs should be in favourable condition (currently 37.6%); and that farmland birds should recover, this being an important measure of ecosystem health. On the latter, he collated data from RSPB, BTO, JNCC and Defra showing that while numbers of generalist birds had remained stable, the number of specialist birds is in steep decline.

This did lead me to wonder what mathematical / computer modelling has been carried out on farm bird populations in the UK or elsewhere. This is an area where model predictions could clearly be beneficial, impacting on policy and practice. Something to investigate.

Andrea Graham from the NFU spoke briefly about NFU programmes and industry-led initiatives. Her most interesting point was the use of the term “knowledge exchange” instead of “knowledge transfer”, which I think best reflects the sorts of relationships that we as university academics with an interest in agriculture and environment would want to develop with the farming community and industry.

Daniel Crossley of the Food Ethics Council spoke briefly about the major challenges of hunger and unsustainable production methods. He stated that there was progress in shifting the discourse away from greater global productivity and towards other factors, including demand (consumption), food waste and issues of profit as the sole motive in affordability, justice, access to foods and food fairness.

Jim Kirke from British American Tobacco had some interestingly different perspectives. He stressed the need for robust ecosystem services supported by resilient biodiversity to meet the demands of society, and emphasised that agriculture is not just about food but also other products. Specifically, he stated that this requires effective stewardship, and, somewhat controversially in my view, said that subsistence farmers do not have the resources to carry this out. He also stated that cropping and resource management depends on farmers’ livelihoods, and that this is an important factor in prioritisation. He stated that we need improvements in the way that demand volume and market prices can feed into cropping decisions in order to avoid chronic waste.

Tim Benton, who is RCUK’s GFS champion, focussed his talk on the importance of spatial scales beyond the farm. Sustainability is a global issue: the ecology of a field does not just depend on the management of the field, but on a larger scale encompassing landscape, national and global scales.

An important point he made is that for a set level of global demand, a reduction of yield in one location would lead to increased yield in another location, and so no net environmental benefit. Thus he criticised organic farming, stating that increased sustainability could be achieved either through maintaining yields while increasing environmental benefits, or by increasing yields for level environmental benefits. Reduction in yield for increased environmental benefit is not sustainable, in his view.

It may be politically unwise to disagree too strongly with RCUK’s GFS champion, but I do not agree with his last point. This analysis is predicated on sustainability given a certain level of global demand. There could be a role for reducing yield while increasing environmental benefits if this were coupled with decreased demand, for example by reducing food waste (as per John Ingram’s talk), a shift from consumption of animal-based to plant-based foods, or a concerted effort to reduce human population growth.

Tara Garnett, of the Food Climate Research Network, gave what in my view was the most impressive talk of the day. It surprised me, as a scientist, to take this view, given that Tara is carrying out social research, but actually, and importantly, her analysis demonstrated with utter clarity that GFS is as much a social issue as a scientific one, and thus that the solutions are going to be as much social as scientific. Moreover, Tara’s analysis provided a conceptual framework into which all of the previous talks could be integrated.

Tara started by emphasising that food production and consumption are at the centre of multiple societal concerns, including economics, health, environment and ethics. She then stated that there were three broad approaches to food sustainability, which could be analysed in terms of several concerns, including food and environmental sustainability, animal welfare and nutrition, as well as key stakeholders and values. She covered a great deal of material at quite a rapid rate, so my synopsis is patchy at best – although I can see that she has published this analysis here. The three approaches are:

1. The supply-side challenge (efficiency): this is a macro-level, and the key stakeholders tend to be governments, corporations, scientists etc. The focus is on delivering more food, or greater environmental sustainability, using better technologies. Food security is met through increased supply. Animal welfare is considered from a scientific perspective. Nutritional concerns are about making ‘inevitable’ consumption more healthy.
2. The demand-side challenge: needing more sustainable food. The key stakeholders tend to be NGOs. Much of the focus is targeted at the high impacts of meat and dairy products, and so the need to eat a more sustainable diet. Nutritional benefits of meat and dairy are often ignored. Animal welfare focusses on a more romantic view: “cows belong in fields”.
3. The equity challenge: not so much on production and consumption, but on the inequity of power structures. The spectrum of stakeholders is broader, and the focus of food security is on socio-economic systems and small-holders. Sustainability is assumed to be an outcome of greater equity, but there is little engagement with environmental metrics or animal welfare.

People strongly aligned with one of these positions disagree with each other because they have different views on how the world works, what things might be possible as opposed to be inevitable, and what things might be desirable.

Tara’s conclusion is that each perspective alone is too simplistic, and that the challenge of GFS needs all three perspectives. Ultimately, GFS is not just a scientific or technical problem: values matter, and we need to be open to different values.

As I said above, I particularly enjoyed Tara’s analysis, and again it highlights the importance of us scientists building appropriate research collaborations with social researchers. What is also interesting about Tara’s analysis is that each of the challenges is progressively harder to meet. With regard to the supply-side challenge, we are actually very good at using science and technology to improve outcomes. The demand-side challenge is much harder, but there are precedents of success in large-scale behavioural change. Recycling is an excellent example: combined with appropriate infrastructure, most people and businesses are happy to dispose of recyclable waste in more sustainable ways. The equity challenge is by far the hardest of the three, and one that as a human society we have not yet met.

The last talk of the day was a brief update from Karen Morgan of Defra on implementing the Green Food Project. Karen emphasised the importance of working in partnership with key organizations.

# Inferring the error variance in Metropolis-Hastings MCMC

One of the great joys of working with two talented post-docs in the research group – Mike Stout and Mudassar Iqbal – as well as a great collaboration with Theodore Kypraios, is that they are often one step ahead of me and I am playing catch-up. Recently, Theo has discussed with them how to estimate the error variance associated with the data used in Metropolis-Hastings MCMC simulations.

The starting point, usually, is that we have some data, let us say $y_i$ for $i=1, \cdots, n$, and a model – usually, in our case, a dynamical system – which we are trying to fit to the data. For any given set of parameters $\theta$, our model will provide estimates for the data points that we will call $\hat y_i$. Now, assuming uniform Gaussian errors, our likelihood function $L(\theta)$ looks like:

$L(\theta) = \prod_{i=1}^n \frac{1}{\sqrt{2 \pi\sigma^2}}e^{-\frac{1}{2}(\frac{y_i - \hat y_i}{\sigma})^2}$

where $\sigma^2$ is the error variance associated with the data. Now, when I first started using MCMC, I naively thought that we could use values for $\sigma^2$ provided by our experimental collaborators, and so we could use different values of $\sigma^2$ according to how confident our collaborators were in the measurements, equipment etc. What I found in practice was that these values rarely worked (in terms of convergence of the Markov chain) and we have had to make up error variances using trial and error.

So I was delighted when I heard that Theo had briefed both Mike and Mudassar on a method for estimating the error variance as part of the MCMC. Since I had not tried it before, I thought I would give it a go. I am posting the theory and some of my simulations, which give helpful results.

#### Theory

The theory behind estimating $\sigma^2$ is as follows. First, set

$\tau = \frac{1}{\sigma^2}$

We can then re-write the likelihood, now for the model parameters $\theta$ and also the unknown value $\tau$, as

$L(\theta, \tau) = \frac{\tau^{n/2}}{(2 \pi)^{n/2}}e^{-\frac{\tau}{2}\sum_{i=1}^n(y_i - \hat y_i)^2}$

Now observe that this has the functional form of a Gamma distribution for $\tau$, as the p.d.f. for a Gamma distribution is given by:

$f(x; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}$

So if we set a prior distribution for $\tau$ as a Gamma distribution with parameters $\alpha$ and $\beta$, then the conditional posterior distribution for $\tau$ is given by:

$p(\tau | \theta) \propto \tau^{(n/2)+ \alpha - 1}e^{-\tau(\frac{1}{2}\sum_{i=1}^n(y_i - \hat y_i)^2+\beta)}$

We observe that this is itself a Gamma distribution, with parameters $\alpha' = \alpha + n/2$ and $\beta' = \beta + \frac{1}{2} \sum_{i=1}^n (y_i - \hat y_i)^2$. Thus the parameter $\tau$ can be sampled with a Gibbs step as part of the MCMC simulation (usually alongside Metropolis-Hastings steps for the other parameters).
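In code, the Gibbs step for $\tau$ is then a single draw from this conditional Gamma distribution. Here is a minimal sketch in Python (my own illustration, not code from the original analysis; note that numpy parameterises the Gamma distribution by shape and scale, where the scale is the reciprocal of the rate $\beta'$):

```python
import numpy as np

def gibbs_tau(y, y_hat, alpha, beta, rng):
    """Draw tau from its conditional posterior, Gamma(alpha + n/2, beta + SS/2).

    numpy's Gamma uses (shape, scale), so the rate beta' becomes scale = 1/beta'.
    """
    n = len(y)
    ss = np.sum((y - y_hat) ** 2)  # residual sum of squares
    return rng.gamma(shape=alpha + n / 2, scale=1.0 / (beta + ss / 2))

# Illustrative call with made-up model predictions (not real fitted values)
rng = np.random.default_rng(0)
y = np.array([1100.0, 1400.0, 1700.0, 2100.0, 2150.0])
y_hat = y + rng.normal(0.0, 90.0, size=len(y))  # pretend residuals
tau = gibbs_tau(y, y_hat, alpha=0.01, beta=100.0, rng=rng)
```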

#### Simulations

The simulations I have run are with a toy model that I use a great deal for teaching. Consider a constitutively-expressed protein that is produced at constant rate $k$ and degrades (or dilutes) at constant rate $\gamma$ per protein. A differential equation for protein concentration $P$ is given by:

$\frac{dP}{dt} = k - \gamma P$

This ODE has the closed form solution:

$P = \frac{k}{\gamma} + (P_0 - \frac{k}{\gamma}) e^{-\gamma t}$

where $P_0$ is the concentration of protein at $t=0$. For the purposes of MCMC estimation, mixing is improved by reparameterising with $P_1 = \frac{k}{\gamma}$, so that the closed form solution is:

$P = P_1 + (P_0 - P_1) e^{-\gamma t}$
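As a quick numerical sanity check, the closed form can be compared with a naive forward-Euler integration of the ODE (the parameter values here are illustrative only, not fitted to any data):

```python
import numpy as np

# Illustrative parameter values (not fitted)
k, gamma, P0 = 150.0, 0.07, 800.0
dt, T = 1e-3, 24.0

# Naive forward-Euler integration of dP/dt = k - gamma * P
ts = np.arange(0.0, T + dt, dt)
P = np.empty_like(ts)
P[0] = P0
for i in range(1, len(ts)):
    P[i] = P[i - 1] + dt * (k - gamma * P[i - 1])

# Closed-form solution P(t) = k/gamma + (P0 - k/gamma) * exp(-gamma * t)
closed = k / gamma + (P0 - k / gamma) * np.exp(-gamma * ts)
max_err = np.max(np.abs(P - closed))  # small for small dt
```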

Some data I have used for teaching purposes comes from the paper Kim, J.M. et al. 2006. Thermal injury induces heat shock protein in the optic nerve head in vivo. Investigative ophthalmology and visual science 47: 4888-94. The data is quantitative Western blots of Hsp70 in the optic nerve of rats, as induced by laser damage. (Apologies for the unpleasantness of the experiment):

| Time / hours | Protein / au |
| --- | --- |
| 3 | 1100 |
| 6 | 1400 |
| 12 | 1700 |
| 18 | 2100 |
| 24 | 2150 |

The aim is to use a Metropolis-Hastings MCMC, together with a Gibbs step for the $\tau$ parameter, to fit the data. The issue that immediately arises is how to set the parameters $\alpha$ and $\beta$. This may seem arbitrary, but it is already better than choosing a fixed value for $\sigma^2$, as the Gamma prior allows the MCMC to explore that parameter. For my first simulation, I thought that $\sigma = 100$ would be sensible (this turned out to be a remarkably good choice, as we will see), so I set $\alpha = 0.01$ and $\beta = 100$, and lo and behold, the whole MCMC worked beautifully. (Incidentally, I used independent Gaussian proposals for the other three parameters, with standard deviations of 100 for $P_0$ and $P_1$ and a standard deviation of 0.01 for $\gamma$. These parameters were forced to be positive – Darren Wilkinson has an excellent post on doing that correctly. Use of log-normal proposals in this case leads to very poor mixing, with the chain taking some large excursions for the $P_1$ and $\gamma$ parameters.)

The median parameter values are $P_0 = 786$, $P_1 = 2526$, $\gamma = 0.0686$ and $\tau = 0.000122$. The latter corresponds to $\sigma = 90.6$. With these values, we can see a good fit to the data: below are plotted the data points (in red), the best fit (with median parameter values) in blue, and model predictions from a random sample of 50 parameter sets from the posterior distribution in black.
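For concreteness, here is a minimal sketch of such a Metropolis-within-Gibbs sampler in Python. The starting values and the joint Gaussian proposal are my own illustrative choices, not necessarily the exact implementation used for the results above; out-of-range proposals are simply rejected, which is one valid way of enforcing positivity with a symmetric proposal:

```python
import numpy as np

# Hsp70 data from Kim et al. 2006, as tabulated above
t = np.array([3.0, 6.0, 12.0, 18.0, 24.0])
y = np.array([1100.0, 1400.0, 1700.0, 2100.0, 2150.0])

def model(t, P0, P1, gamma):
    """Closed-form solution P(t) = P1 + (P0 - P1) * exp(-gamma * t)."""
    return P1 + (P0 - P1) * np.exp(-gamma * t)

def log_likelihood(ss, n, tau):
    """Gaussian log-likelihood up to an additive constant, in terms of the
    residual sum of squares ss and the precision tau."""
    return (n / 2) * np.log(tau) - (tau / 2) * ss

def run_mcmc(n_iter=20000, alpha=0.01, beta=100.0, seed=1):
    rng = np.random.default_rng(seed)
    theta = np.array([500.0, 2000.0, 0.05])  # illustrative start for P0, P1, gamma
    steps = np.array([100.0, 100.0, 0.01])   # proposal s.d.s quoted in the post
    tau = 1e-4
    n = len(y)
    ss = np.sum((y - model(t, *theta)) ** 2)
    samples = np.empty((n_iter, 4))
    for i in range(n_iter):
        # Metropolis-Hastings step for (P0, P1, gamma); proposals outside the
        # positive orthant are rejected outright
        prop = theta + steps * rng.standard_normal(3)
        if np.all(prop > 0):
            ss_prop = np.sum((y - model(t, *prop)) ** 2)
            if np.log(rng.uniform()) < log_likelihood(ss_prop, n, tau) - log_likelihood(ss, n, tau):
                theta, ss = prop, ss_prop
        # Gibbs step for tau from its conditional Gamma posterior
        tau = rng.gamma(alpha + n / 2, 1.0 / (beta + ss / 2))
        samples[i] = [*theta, tau]
    return samples

samples = run_mcmc()
P0_med, P1_med, gamma_med, tau_med = np.median(samples[10000:], axis=0)  # discard burn-in
```

Taking medians over the second half of the chain should give values in the same ballpark as those quoted above.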

#### Considerations

However, an obvious question arises: how sensitive is this procedure to the choice of $\alpha$ and $\beta$? I will confess: I use Bayesian approaches fairly reluctantly, being more comfortable with classical frequentist statistics. What I like about Bayesian approaches is, firstly, the description of unknown parameters with a probability distribution, and secondly, the availability of highly effective computer algorithms (i.e. MCMC). What makes me uncomfortable is the potential for introducing bias through the prior distributions. So I have carried out some investigations with different values of $\alpha$ and $\beta$. In particular, I wanted to know: (i) what happens if I keep the mean (equal to $\alpha / \beta$) the same but vary the parameters? (ii) what happens if I vary the mean of the distribution? The table below summarizes the (positive) results:

| $\alpha$ | $\beta$ | $P_0$ | $P_1$ | $\gamma$ | $\sigma$ |
| --- | --- | --- | --- | --- | --- |
| 0.01 | 100 | 786 | 2526 | 0.0686 | 90.6 |
| 1 | 10000 | 747 | 2428 | 0.0795 | 98.0 |
| 0.0001 | 1 | 797 | 2533 | 0.0681 | 96.3 |
| 0.1 | 10 | 822 | 2603 | 0.0623 | 97.9 |
| 0.001 | 1000 | 760 | 2455 | 0.0758 | 94.8 |
| 1 | 1 | 792 | 2539 | 0.0676 | 64.9 |
| 0 | 0 | 805 | 2565 | 0.0653 | 98.3 |

As you can see (please ignore the last row for now), the results are robust to a very wide range of $\alpha$ and $\beta$, even producing a good estimate for $\sigma$ when that estimate is a long way from the mean of the prior distribution. But then we can make the following observation. Consider the sum of squares for a ‘best-fit’ model, for example using the parameters from the first row (this is 12748). As long as $\alpha \ll n/2$ and $\beta \ll 12748/2$, the prior will introduce very little bias. But if you try to use values of $\alpha$, and especially $\beta$, very much larger than the sum of squares estimated from well-fitted model parameters, then things can go wrong. For example, when I set $\alpha = 1$ and $\beta = 10^6$, my MCMC did not converge properly.

This leads to my final point, and the final row in the table. Would it be possible to remove prior bias altogether? If we look at the conditional posterior for $\tau$ and set $\alpha = \beta = 0$ (equivalent to the improper Jeffreys prior $p(\tau) \propto 1/\tau$), we still obtain a proper Gamma distribution, whose reciprocal mean $\frac{\beta'}{\alpha'}$ is precisely the mean squared error of the fit, as, in this case,

$\frac{\beta'}{\alpha'} = \frac{\sum_{i=1}^n(y_i - \hat y_i)^2}{n}$

The algorithm should work perfectly well sampling from this Gamma distribution, and indeed it does, producing comparable results to when an informative prior is used.

#### Conclusions

In summary, I am happy to conclude that this method is good for estimating error variance. Clear advantages are:

1. It is simple to implement and fairly fast to run – adding a Gibbs step is no big deal.
2. It is clearly preferable to making up a fixed number for the error variance – which was what we were doing before.
3. The prior parameters allow you to make use of information you might have from experimental collaborators on likely errors in the data.
4. The level of bias from the priors is relatively low, and can be eliminated altogether.

# Some thoughts on Thomas et al. 2012. Directional Migration of Recirculating Lymphocytes through Lymph Nodes via Random Walks.

This article [1] was published just over 4 months ago by Benny Chain’s laboratory in UCL, based on work carried out for Niclas Thomas’s PhD. It is a rare privilege to read an article that so clearly relates itself to work that I carried out – indeed work carried out during my own PhD. Therefore I have decided to post some thoughts on this (and related) articles, which I will also post on the PLoS ONE web site.

Thomas et al.’s work contains new data on the distribution of lymphocyte transit times, together with rigorous fitting of mathematical models to their data. Importantly, they show that their data can be fitted by a random walk model that allows for motion orthogonal to the main direction of motion (i.e. through the lymph node tissue). This random walk model, although implemented in one dimension, is intended to reflect a three dimensional motion in which the cells move either along the main axis of motion, or in dimensions perpendicular to it.

There were two models for lymphocyte recirculation that I proposed during my PhD, both implemented in a one-dimensional domain (along the lymph node). The first was a convection-diffusion model [2], which could be thought of as a biased random walk, although it is sufficiently general to include other mechanisms. The second proposed that lymphocyte migration was halted by encounters with dendritic or other cell types [3, 4]. Both models could explain the available data on lymphocyte recirculation transit times.

Following my own PhD, two-photon microscopy technology developed to the point where the motion of individual lymphocytes could be tracked [5, 6, 7]. This led to the discovery that lymphocyte migration is essentially random, and that the hypothesis I set forward in [3, 4] is false. Thomas et al.’s work, following work by Textor et al. [7], has shown that the distribution of lymphocyte recirculation times can indeed be explained well by a three-dimensional diffusion model.

So, how do I feel to have had some of my research empirically falsified? Well, actually, it makes me rather happy! Now don’t get me wrong: I would much rather that the two photon microscopy had shown T cells moving along the lymph nodes, stopping at dendritic cells for a while, and then moving along again, as my work of [3, 4] proposed. That is not the case. I take solace in two things. First, the convection-diffusion model of [2] is essentially correct in its most naive form. But more significantly, I can hear Karl Popper cheering me on from the side-lines: my hypotheses were good science, even if incorrect. The important point to learn from [2, 3, 4] is that the observed distribution of lymphocyte transit times is non-trivial: it demands a mechanistic explanation, and, at that time, no such explanations had been proposed. The dendritic cell hypothesis was perfectly plausible (in fact, I came up with it through discussions with Benny Chain and David Katz, with whom Benny Chain shared a lab) – and worthy of experimental testing. (Parenthetically, this was in the bad old days of “mathematical biology”, when theoreticians generally worked independently of experimentalists. I remember once, at that time, a theoretician proudly stating at a conference that no experimental data could falsify their model. Thankfully things are much better today under the “systems biology” paradigm, and Thomas et al’s article is an excellent example of experiment and theory working so well together).

To be honest, the thing that annoys me a little is that I didn’t think to check myself, during my PhD, whether three-dimensional diffusion could explain the distribution of recirculation times (and full credit to Niclas Thomas and others for investigating this!). At the time, I was unhappy with the value of the one-dimensional diffusion coefficient that my first model needed to fit the data: based on a Brownian motion calculation for T-cell diffusivity, I thought that it was two orders of magnitude too fast to be realistic as random motion, and so I looked to other explanations. Indeed, this was discussed at my PhD viva, and my examiners (an eminent modelling-friendly immunologist and an eminent mathematical biologist) made me correct my thesis to include the Brownian motion calculation. But the assumptions behind the calculation were completely wrong, for reasons that my examiners and I should have known. First, on biological grounds, the calculations were based on Brownian motion of T-cells – when of course we knew that T-cells move actively – and even then there was some data available on the speed of such movement (e.g. from Tim Springer’s lab). Second, the calculations were based on 1D diffusion – when of course we know that 3D diffusion is qualitatively and quantitatively different (e.g. the recurrence / transience of 1D / 3D random walks taught in undergraduate Markov chain courses). I had even written a 3D diffusion simulator (for another part of my PhD) which could easily have been used to test the hypothesis. Hindsight is a wonderful thing!
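The recurrence/transience contrast can be illustrated numerically: a simple symmetric random walk on the 1D lattice returns to the origin with probability 1, whereas in 3D it escapes with positive probability (Pólya’s theorem). A small sketch (purely illustrative, with arbitrary parameters; not from any of the cited papers):

```python
import numpy as np

def returns_to_origin(dim, n_steps, rng):
    """Simulate a simple symmetric lattice random walk started at the origin;
    return True if it revisits the origin within n_steps."""
    axes = rng.integers(0, dim, size=n_steps)   # which coordinate moves
    signs = rng.choice([-1, 1], size=n_steps)   # direction of each move
    pos = np.zeros(dim, dtype=int)
    for a, s in zip(axes, signs):
        pos[a] += s
        if not pos.any():
            return True
    return False

rng = np.random.default_rng(42)
n_walks, n_steps = 500, 2000
frac_1d = np.mean([returns_to_origin(1, n_steps, rng) for _ in range(n_walks)])
frac_3d = np.mean([returns_to_origin(3, n_steps, rng) for _ in range(n_walks)])
```

With these settings, almost all 1D walks revisit the origin within the simulated steps, whereas roughly two-thirds of the 3D walks never do.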

All-in-all, I congratulate Niclas Thomas on this work. I think it is wonderful and enhances our knowledge of this extremely important field.

1. Thomas N, Matejovicova L, Srikusalanukul W, Shawe-Taylor J, Chain B. 2012. Directional Migration of Recirculating Lymphocytes through Lymph Nodes via Random Walks. PLoS ONE 7(9): e45262.

2. Stekel, D.J., Parker, C.E. and Nowak, M.A. 1997. A model of lymphocyte recirculation. Immunology Today, 18:216-21.

3. Stekel, D.J. 1997. The role of inter-cellular adhesion in the recirculation of T lymphocytes. Journal of Theoretical Biology, 186:491-501.

4. Stekel, D.J. 1998. The simulation of density-dependent effects in the recirculation of T lymphocytes. Scandinavian Journal of Immunology, 47:426-30.

5. Miller MJ, Hejazi AS, Wei SH, Cahalan MD, Parker I. 2004. T cell repertoire scanning is promoted by dynamic dendritic cell behavior and random T cell motility in the lymph node. Proceedings of the National Academy of Sciences of the United States of America 101: 998.

6. Beltman JB, Mare AFM, Lynch JN, Miller MJ, de Boer RJ. 2007. Lymph node topology dictates t cell migration behavior. The Journal of Experimental Medicine 204: 771–780.

7. Textor J, Peixoto A, Henrickson SE, Sinn M, von Andrian UH, Westermann, J. 2011. Defining the quantitative limits of intravital two-photon lymphocyte tracking. Proceedings of the National Academy of Sciences of the United States of America 108: 12401–6.

# Job opportunity: two-month postdoctoral position in mathematical modelling / inference

## Research Associate/ Fellow

**Reference:** SCI10709
**Closing Date:** Friday, 8th February 2013
**Job Type:** Research & Teaching
**Department:** School of Biosciences – Division of Agricultural & Environmental Science, Multidisciplinary Centre for Integrative Biology
**Salary:** £24,766 to £29,541 per annum, depending on skills and experience; minimum £27,854 per annum with relevant PhD.

This full-time post is available on a fixed term contract for a period of two months.

Applications are invited to join a highly motivated multi-disciplinary team of research scientists working across the Universities of Nottingham and Birmingham. The successful candidate will join a jointly funded project to carry out modelling of occludin trafficking during epithelial polarization and wound healing. The post could be located either in the School of Biosciences at the University of Nottingham’s Sutton Bonington Campus, or in the School of Biosciences at the University of Birmingham.

The work will include (i) developing mathematical models (using ODEs) to describe the turnover of occludin protein in the cell, as well as the kinetic trafficking of occludin between cellular compartments; (ii) estimating model parameter values from experimentally derived data using Markov chain Monte Carlo approaches; and (iii) iteratively improving the model, through cycles of model and data comparison, in order to provide greater certainty about the important mechanisms that can explain the experimental data. Other duties will include contributing to publication of this research in peer-reviewed journals, contributing to the writing of research grant applications, and generally collaborating between disciplines and institutions.

The successful candidate must have a PhD or equivalent in mathematical modelling, statistics or a related area. Research experience within a mathematical biology or systems biology research area would be desirable but not essential. Candidates must be able to demonstrate excellent mathematical ability, especially in the areas of ordinary differential equations and statistical analysis of data; experience of applying these skills to biological research would be desirable. Candidates must also be able to evidence excellent computing skills in a suitable environment (e.g. R or Matlab). Excellent English language oral and written communication skills are also essential. The post will require the person appointed to be able to work independently and as part of a multi-disciplinary team, and to be motivated, flexible and willing to learn.

Full details, including how to apply, can be found on the University of Nottingham’s vacancy system.

Informal enquiries may be addressed to Dr Dov Stekel, email: dov.stekel@nottingham.ac.uk or Dr Josh Rappoport, email: j.rappoport@bham.ac.uk.