Just as mice and rats are not humans, a mathematical model is not a human, but, like mice and rats, a model can be thought of as representative of human response to chemical exposure. NC3Rs Strategic Award holder Dr John Paul Gosling, from the University of Leeds, explains.

Mathematical models that could be of great use in toxicological safety assessments are being developed across a broad spectrum of areas. Among the models currently in use, there are models that simulate exposure to chemicals, such as the Monte Carlo Risk Assessment model, which simulates people’s eating habits.
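To give a flavour of how such exposure models work, here is a minimal Monte Carlo sketch in Python. It is not the actual Monte Carlo Risk Assessment model; the distributions and parameter values are invented purely for illustration. Each simulated person gets a random food intake, a random chemical concentration in that food, and a random body weight, and these are combined into a daily exposure estimate.

```python
import random

random.seed(42)

def simulate_daily_exposure(n_people=10000):
    """Illustrative Monte Carlo exposure sketch (not the actual MCRA model):
    sample each person's food intake and the chemical concentration in that
    food, then combine them into an exposure in mg per kg body weight per day.
    All distributions and parameters below are invented for illustration."""
    exposures = []
    for _ in range(n_people):
        intake_g = random.lognormvariate(5.0, 0.5)        # daily intake of the food, grams
        conc_mg_per_g = random.lognormvariate(-7.0, 0.8)  # chemical concentration, mg/g
        body_weight_kg = random.gauss(70.0, 12.0)         # person's body weight, kg
        exposures.append(intake_g * conc_mg_per_g / max(body_weight_kg, 30.0))
    return exposures

exposures = sorted(simulate_daily_exposure())
p95 = exposures[int(0.95 * len(exposures))]  # high-end ("95th percentile") consumer
print(f"95th percentile exposure: {p95:.2e} mg/kg bw/day")
```

Repeating the simulation many times over a whole population gives a distribution of exposures rather than a single number, which is exactly what a safety assessor needs when deciding how protective a limit must be for high-end consumers.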

There are physiologically-based pharmacokinetic (PBPK) models that aim to simulate the distribution of chemicals to different organs (e.g., the Simcyp model calculates the distribution of a chemical for human sub-populations with varying physiological characteristics).
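The flavour of the kinetics such models compute can be seen in a drastically simplified example: a one-compartment model with first-order absorption and elimination (the Bateman equation). A real PBPK model tracks many organ compartments, but this sketch shows the basic idea; the dose, volume of distribution and rate constants below are invented for illustration.

```python
import math

def plasma_concentration(t, dose_mg=100.0, vd_l=40.0, ka=1.2, ke=0.15):
    """One-compartment model with first-order absorption (ka, per hour) and
    elimination (ke, per hour) -- a drastic simplification of a multi-organ
    PBPK model, shown only to illustrate the kind of kinetics such models
    compute. All parameter values are invented for illustration."""
    # Bateman equation: plasma concentration (mg/L) at time t hours after an oral dose
    return (dose_mg * ka) / (vd_l * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

for t in [0.5, 1, 2, 4, 8, 24]:
    print(f"t = {t:4.1f} h   C = {plasma_concentration(t):.3f} mg/L")
```

The concentration rises as the chemical is absorbed, peaks, and then declines as it is eliminated; a PBPK model produces curves like this for each organ, and for sub-populations with different physiological parameters.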

There are also models of the toxicological effect of chemicals on different parts of the body (e.g., DNA damage from exposure to radiation).

Crucially, the costs of running computer-based mathematical models are far less than the costs of laboratory experimentation. The increasing realisation that mathematical models have a significant role to play in reducing the use of animals in research can be seen in the joint call for proposals by the NC3Rs and the Engineering and Physical Sciences Research Council under the title “Mathematical modelling in toxicology”.

Despite this progress and investment in developing relevant mathematical models, there is not a general acceptance of the value of mathematical models in a safety assessment context. The difficulty is in bringing the results from these mathematical models into toxicological safety assessments that have been driven historically by animal data.

I think of a mathematical model as being a window on what the model builder believes is going on in a system of interest. In the context of human toxicological safety assessments, I would argue that investigating a mathematical model is analogous to experimenting in the laboratory. When an experiment is performed, the experimenter believes that the observed response (or lack of it) is informative about the human response. However, this is a subjective judgement, and safety assessors will have their own views on which experiments are most informative for different toxicological endpoints.

Similarly, the model builder believes that the mathematically-modelled effect is informative about the human response. Of course, this is a subjective judgement. Also, like an experimental protocol, a modeller can catalogue the choices that are made when building the model and this allows others the opportunity to challenge and improve the model in the future.

**Developing a coherent framework**

One way to consider mathematical models alongside more traditional data sources is to think of the mathematical model as a framework for understanding the processes in the human body. We could then view experimental data as being observations of the modelled process.

By having a mathematical model, we can relate different data sources and understand their relative importance. A sound quantitative framework for bringing both mathematical toxicological models and other non-animal data sources to bear on a toxicological safety assessment could make research using animals to support the safety of consumer products unnecessary. If this were realised, it would have a significant effect on overall animal use. In the UK alone, 44,787 animals were used for industry safety assessments in 2012.

In my NC3Rs-funded research project, I am building a coherent framework using statistical techniques within which experiments and mathematical models can be brought together to develop new approaches for safety assessment.
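One common statistical route for bringing a model and experimental data together is Bayesian calibration: treat the data as noisy observations of the model's output, and update the uncertainty about the model's parameters accordingly. The sketch below does this on a toy first-order decay model with synthetic data; the model form, noise level and parameter values are all invented for illustration, not taken from any real assessment.

```python
import math
import random

random.seed(1)

def model(k, t):
    """Toy first-order decay model (invented) standing in for a toxicological model."""
    return math.exp(-k * t)

# Synthetic "experimental" observations of the modelled process (invented data):
# the true rate constant is 0.3, and measurements carry Gaussian noise.
true_k, noise_sd = 0.3, 0.05
times = [1, 2, 4, 8]
obs = [model(true_k, t) + random.gauss(0, noise_sd) for t in times]

# Grid-based Bayesian update with a flat prior over candidate values of k:
# weight each candidate by how well the model matches the observations.
grid = [i / 1000 for i in range(1, 1001)]

def log_likelihood(k):
    return sum(-0.5 * ((y - model(k, t)) / noise_sd) ** 2 for t, y in zip(times, obs))

weights = [math.exp(log_likelihood(k)) for k in grid]
total = sum(weights)
posterior_mean = sum(k * w for k, w in zip(grid, weights)) / total
print(f"posterior mean for k: {posterior_mean:.3f} (true value {true_k})")
```

The posterior distribution over the parameter is exactly the kind of object a coherent framework needs: it carries both what the experiments say and how much uncertainty remains, and it can be propagated through the model into the safety assessment.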

**Utilising statistics to characterise uncertainty**

Statistics has a major role in incorporating mathematical models into toxicological safety assessments due to the amount of uncertainty about how the human body responds to chemicals and how best to model the responses. Uncertainty appears in many guises including lack of knowledge about the model settings, discrepancy between the models and reality, and uncertainty about the linkage between the models and experimental datasets.

Using statistics, we can characterise these uncertainties and help focus future research on areas where reduction of uncertainty will help safety assessors. This falls under the umbrella of sensitivity analysis. The aim of sensitivity analysis is to identify the inputs of mathematical models that have the greatest effect on the model’s outputs. These are general statistical techniques that have been applied successfully to toxicological models in the past.
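A minimal sketch of the idea: sample the uncertain inputs of a model, run the model, and measure how much of the output's variation each input explains on its own. Here the squared correlation is used as a crude stand-in for a first-order sensitivity index (proper methods, such as Sobol indices, handle nonlinearity and interactions more carefully); the toy model and input distributions are invented for illustration.

```python
import random

random.seed(0)

def model(dose, clearance, body_weight):
    """Toy surrogate for a toxicological model output (invented form)."""
    return dose / (clearance * body_weight)

# Sample the uncertain inputs (distributions are invented for illustration)
n = 5000
samples = {
    "dose": [random.uniform(50, 150) for _ in range(n)],
    "clearance": [random.uniform(0.5, 2.0) for _ in range(n)],
    "body_weight": [random.gauss(70, 10) for _ in range(n)],
}
outputs = [model(samples["dose"][i], samples["clearance"][i], samples["body_weight"][i])
           for i in range(n)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Squared correlation as a crude first-order sensitivity measure:
# the share of output variance each input explains on its own.
sens = {name: corr(xs, outputs) ** 2 for name, xs in samples.items()}
for name, s in sens.items():
    print(f"{name:12s}  sensitivity ~ {s:.2f}")
```

In this toy setting the wide uncertainty in clearance dominates the output, so an assessor would direct experimental effort there first; that is precisely the kind of prioritisation sensitivity analysis offers for real toxicological models.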

Although the ultimate aim of developing mathematical models and frameworks for their use is to reduce the need for animal use in research, it would be a mistake to think that mathematical models sit outside the world of experimentation. In fact, most mathematical models are built using information that has come from laboratory experiments.

In 2012, 15,963 animals were used in toxicokinetic studies, which have a great bearing on the construction of PBPK models. Here, sensitivity analysis techniques and careful characterisation of uncertainty could help by directing researchers towards the experiments that most reduce the uncertainty in the models.

Data are also needed to help validate models and to improve confidence in them, but data alone are not enough. The modellers and assessors must have a scientific understanding of how the data relate to the processes that are being modelled.

**Improving collaborations**

To boost confidence amongst safety assessors in using mathematical modelling for toxicological safety assessments, there must be communication between the mathematical modellers and the safety assessors.

The human body is complicated, and most realistic models are also mathematically complicated. I have found in past collaborations that models have a much better chance of acceptance if the model builders communicate regularly with the end users and are clear on how their modelling assumptions affect model behaviour. The level of acceptance is also helped when the modeller connects their model with the questions that the assessors want to ask.

In the context of human toxicological safety assessments, this means identifying how the model relates to risk of harm in the population of interest.

Confidence can also be improved by assessors realising that another benefit of accounting for uncertainty is that they can determine how conservative the risk management decisions are: risk managers want to identify a safe level of chemical use without being unnecessarily over-conservative.

Mathematical models have the potential to be much more than extra strands in a safety assessor’s weight-of-evidence approach. The models could form the backbone of toxicological safety assessment by providing transparent mechanisms for linking disparate data sources and laying bare expert judgement. By helping toxicological safety assessors to understand the inherent uncertainties and the gap between models and reality, we can help them to understand the value of mathematical models in their safety assessments.