I am pursuing this strand of my research at the London Mathematical Laboratory. (They do lots of other cool stuff there too.)
INFERENCE FROM MODELS
Models of complex systems are complex beasts themselves. The aim of this project is to explore the epistemic foundations of modelling methods, model evaluation metrics, and the expert judgement used both to construct and to calibrate models. How can we robustly estimate the uncertainty in decision-relevant model output? Is there a wider paradigm for uncertainty assessment which reflects the formal inapplicability of Bayesian methods but does not fall back solely on arbitrary expert judgement? And how can we account for the co-development of expert judgement with the models themselves (if my model does not do what I expect, I fix it; but my expectations have themselves been shaped partly by experience of the model)?
Model land is not the real world
The map is not the territory: we make models precisely because the domain itself is not amenable to certain kinds of exploration. An economist may make a model to explore parallel worlds of policy intervention; a climate scientist may make a model to explore the uncharted territory of greenhouse gas forcing; a neurophysiologist may make a model to avoid the ethical constraints of in vivo experimentation. In making a model, we rely on judgements about which features of the situation are important and which can be neglected, and many different model structures are possible. If models are to be used to inform real-world decision-making, then it is important to have some idea of how good the model is at reproducing the relevant behaviours of the system, perhaps even to assess this quantitatively. As George Box put it, “all models are wrong, but some are useful”.
Every statistical method is an epistemology
Formal statistical analyses of model output embody assumptions about the nature of the model and its relationship with the real world. For example, Bayesian methods of model analysis assume that the system itself lies within the class of model structures considered. The use of ensemble or Monte Carlo multi-model methods to generate probability distributions assumes that “parallel universes” in model space are in some way informative about our lack of information in our single unparameterised universe. Some of these methods appear to work when the model and/or data are sufficiently imperfect, but on closer inspection would be subject to a reductio ad absurdum. Where out-of-sample data are available for evaluation, some methods can be given a clean bill of health; others would benefit from clearer warnings.
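To make the first of these assumptions concrete, here is a minimal toy sketch (my own illustration, not part of the project) of what happens when the real system lies outside the model class: the data come from a quadratic “world”, while the Bayesian analysis only entertains straight lines through the origin. The posterior over the slope still narrows confidently as observations accumulate.

```python
# Toy illustration (assumed setup, not from the text): Bayesian updating
# presumes the true system is inside the model class. Here the "real world"
# is quadratic, the model class contains only lines through the origin, and
# the posterior narrows anyway as data accumulate.
import numpy as np

rng = np.random.default_rng(0)

def real_world(x):
    # The data-generating system, which lies outside the model class.
    return x**2 + rng.normal(0.0, 0.1, size=x.shape)

def log_posterior(slope_grid, x, y, sigma=0.1):
    # Gaussian likelihood for the misspecified linear model y = slope * x,
    # with a flat prior over the slope grid.
    resid = y[None, :] - slope_grid[:, None] * x[None, :]
    return -0.5 * np.sum((resid / sigma) ** 2, axis=1)

slopes = np.linspace(0.0, 2.0, 2001)
for n in (10, 100, 1000):
    x = rng.uniform(0.0, 1.0, size=n)
    y = real_world(x)
    logp = log_posterior(slopes, x, y)
    post = np.exp(logp - logp.max())
    post /= post.sum()
    mean = np.sum(slopes * post)
    sd = np.sqrt(np.sum((slopes - mean) ** 2 * post))
    print(f"n={n:5d}  posterior slope = {mean:.3f} +/- {sd:.3f}")
# The posterior spread shrinks roughly like 1/sqrt(n), signalling growing
# confidence in a model that cannot reproduce the system's curvature at all.
```

The point is not that Bayesian updating is broken, but that its probabilities are conditional on the assumption that the model class contains the truth, an assumption which complex-system models do not satisfy.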
The Hawkmoth Effect
The Butterfly Effect is well known as the sensitivity to initial conditions displayed by some dynamical systems: a small perturbation to the initial conditions can result in a large change to the state of the system after some length of time (dynamical instability). The Hawkmoth Effect, by analogy, is sensitivity to the structural formulation of the model: a small perturbation to the model equations themselves can result in a large change to the simulated state of the system after some length of time (structural instability). Consequences for modelling and simulation include the problem of calibrating complex models and the difficulty of equating “improvements” in quantitative performance with “improvements” in physical representation.
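The two effects can be put side by side in a toy dynamical system. The sketch below (a standard logistic-map illustration of my own, not taken from the text) compares a pair of trajectories that differ only by 1e-8 in their initial condition with a pair that differ only by a term of size 1e-8 added to the map itself; in both cases the discrepancy grows to order one within a few dozen steps.

```python
# Toy illustration (my own, assumed setup): Butterfly vs Hawkmoth Effect in
# the logistic map. "Butterfly": same map, initial conditions differing by
# 1e-8. "Hawkmoth": same initial condition, but one map carries a tiny extra
# structural term in its equation.
import numpy as np

R = 3.99         # chaotic regime of the logistic map
EPS = 1e-8       # size of both the initial-condition and structural perturbations
STEPS = 60

def logistic(x):
    return R * x * (1.0 - x)

def perturbed(x):
    # A tiny change to the model equations themselves, not to the parameters.
    return R * x * (1.0 - x) + EPS * np.sin(2.0 * np.pi * x)

x0 = 0.3
a = b = x0       # same map, nudged initial condition
b += EPS
c = d = x0       # same initial condition, nudged structure
for n in range(STEPS):
    a, b = logistic(a), logistic(b)
    c, d = logistic(c), perturbed(d)
    if (n + 1) % 10 == 0:
        print(f"step {n+1:3d}  butterfly gap = {abs(a - b):.2e}  hawkmoth gap = {abs(c - d):.2e}")
# Both gaps grow from 1e-8 to order one within a few dozen steps, so tuning
# the structurally perturbed model to reproduce the original's trajectory is
# a hopeless exercise.
```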
Models and expert judgements are co-developed
When developing a complex model of a complex system, how does a domain expert decide which elements to include and which to leave out? In part, the decision is an a priori judgement; in part, it is informed by experimentation with the model itself. The behaviour of the model shapes the expectations of the modeller, which in turn shape the further development, and hence behaviour, of the model.
Working with models
Making good decisions based on information from models or simulations, then, is not as straightforward as we might like. How should we proceed in this epistemically uncertain situation? First, it is necessary to be clear about the aims of the modelling endeavour and to define well-targeted evaluation procedures that are informative about the adequacy of the model for those aims. Second, to carry out these evaluations and make use of the results. Third, to use the evaluation process itself as an integral part of model development and criticism, so that internal sources of uncertainty can be quantified and external sources acknowledged and estimated. Only then can the model be used with confidence to inform decision support.
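As a purely hypothetical illustration of the first two steps, the sketch below scores out-of-sample forecasts of a decision-relevant quantity against a naive climatological baseline, so that “adequacy for purpose” is judged against an explicit reference rather than by a generic goodness-of-fit number. All names, data and numbers are invented.

```python
# Hypothetical sketch of a well-targeted, out-of-sample evaluation: compare a
# model's forecast errors for the decision-relevant quantity against a naive
# climatological baseline. Every quantity below is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Held-out observations of the decision-relevant quantity.
observations = rng.normal(20.0, 3.0, size=50)
# Forecasts from the model under evaluation: biased but informative
# (simulated here purely for the sake of the example).
model_forecasts = observations + rng.normal(1.0, 1.5, size=50)
# Naive baseline: always forecast the climatological mean known from a
# (hypothetical) training period.
baseline_forecasts = np.full_like(observations, 20.0)

def mse(forecast, obs):
    return float(np.mean((forecast - obs) ** 2))

model_error = mse(model_forecasts, observations)
baseline_error = mse(baseline_forecasts, observations)
skill = 1.0 - model_error / baseline_error   # > 0 means the model beats the baseline

print(f"model MSE    = {model_error:.2f}")
print(f"baseline MSE = {baseline_error:.2f}")
print(f"skill score  = {skill:.2f}")
# A positive skill score is evidence of adequacy for this particular purpose;
# it says nothing about quantities or regimes the evaluation never probed.
```

The third step is then to treat scores like these not as a final verdict but as raw material for model criticism and development, with the remaining, unquantified sources of uncertainty acknowledged alongside them.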