To evaluate the safety of a new chemical, existing toxicological protocols typically require 4 years, £3-4m, and over 4,000 animals. This is inefficient, ethically questionable, and fails to exploit recent biological, mathematical, and computational advances.

We will use advanced computational and statistical methods to investigate how the evaluation process can be improved. Specifically, we will use Bayesian networks to exploit existing data both to identify the key studies necessary for efficient toxicological assessment and to quantify what levels of imprecision should be tolerated. Bayesian methods allow us to take existing knowledge about toxicity, even when that knowledge is incomplete or inaccurate, and use it to make better predictions for future chemicals. Because uncertainty and imprecision can be built naturally into the models (indeed, they are necessary), it becomes easier to make data-driven probabilistic statements about toxicological risk.

The outputs will provide a rigorous quantification of the value of each element of existing protocols. In doing so, we will seek to answer the question of how far the detailed data from a panel of 50+ well-studied toxicants can justify applying the 3Rs to the toxicological testing of future chemicals. Explicitly: reduction of the number of animals used for testing will be achieved where our models indicate that sufficient precision can be derived from a smaller battery of tests; refinement of animal testing protocols will become possible where our models identify efficiencies through the holistic assimilation of broad-spectrum data; and replacement of animal tests will be appropriate where it can be rationally and quantifiably justified early in a given chemical's testing strategy.
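To illustrate the Bayesian idea in miniature, the sketch below shows a single link of a two-node network (Toxic → AssayPositive): a prior belief about a chemical's toxicity is updated by one assay result via Bayes' rule. All numbers, names, and the network structure are hypothetical and for illustration only; the proposal's actual networks would span many studies and endpoints.

```python
# Hypothetical two-node Bayesian network: Toxic -> AssayPositive.
# A prior belief about toxicity is updated by one assay result.
# All probabilities below are illustrative, not from any real protocol.

def posterior_toxic(prior, sensitivity, false_positive_rate, assay_positive):
    """Return P(toxic | assay result) by Bayes' rule."""
    if assay_positive:
        p_result_given_toxic = sensitivity
        p_result_given_safe = false_positive_rate
    else:
        p_result_given_toxic = 1 - sensitivity
        p_result_given_safe = 1 - false_positive_rate
    numerator = p_result_given_toxic * prior
    return numerator / (numerator + p_result_given_safe * (1 - prior))

# Hypothetical inputs: 20% prior toxicity, 85% assay sensitivity,
# 10% false-positive rate.
p = posterior_toxic(0.20, 0.85, 0.10, assay_positive=True)
print(f"P(toxic | positive assay) = {p:.3f}")  # -> 0.680
```

Even this toy case shows why imprecision is tolerable: a single imperfect assay can still shift the probability of toxicity substantially, which is the kind of data-driven probabilistic statement the full networks would make across a whole battery of tests.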