Microdosing and the 3Rs
Professor Malcolm Rowland, University of Manchester
Any approach that has the potential to reduce the number of animals used in research is to be welcomed. One of these that has caught the eye of various groups, especially those involved in the discovery and development of new medicines, has the quaint name of microdosing. But what is microdosing and why is there currently such a great interest in it? To answer these questions we need to consider the task at hand in trying to bring to the market a new effective medicine with an acceptable safety profile, recognising that all compounds have some risk. This article sheds light on the current limitations of candidate drug selection and comments on the possible impact that microdosing can have in helping to transform the process by which new drugs are identified while simultaneously reducing the number of animals used in drug discovery and development (Fig 1).
Figure 1. Overview of the activities involved in modern drug discovery and development, adapted from (1).
Microdosing shifts human studies earlier in the drug development process and would reduce the number of unwanted drugs going through safety and toxicology testing in animals.
Pharmacokinetics and drug attrition
Despite the sequencing of the human genome, and the enormous progress made in our understanding of a whole array of biological processes and diseases, bringing a new medicine to market is still a lengthy and highly problematic process (Fig 2). It takes on average 10-12 years between conception and realisation, and during this process the vast majority of the compounds tested fail to become acceptable medicines.
Figure 2. The path to a modern medicine: phases of pre-clinical and clinical development, taken from (1).
Animal studies occur throughout drug development, in preclinical research and in clinical phases 1 to 3. The process involves identifying one compound with a suitable benefit-to-risk profile from a starting point of up to hundreds of thousands of compounds.
The reasons for potential drug candidates being dropped from the pharmaceutical pipeline are manifold. A suitable compound must demonstrate efficacy in the target patient population and have an acceptable safety profile, requirements which are themselves extremely demanding. One property of a compound that influences these and other factors is its pharmacokinetic (PK) profile. That is, how efficiently the compound is absorbed from the site of administration into the body, how well it is distributed to various sites within the body, including the site of action, and how rapidly and by what mechanism(s) it is eliminated, by excretion and metabolism (ADME - absorption, distribution, metabolism and excretion).
Furthermore, since the vast majority of compounds are metabolised, the fate of the newly formed metabolites must also be taken into account, as many of these are active and some have adverse side effects. It has been estimated that between 10% and 40% of potential drugs fail during early clinical trials because of unsuitable pharmacokinetic features (2,3).
Limitations of pharmacokinetic prediction from animal models
A poor pharmacokinetic profile may render a compound of so little therapeutic value as to be not worth developing. For example, very rapid elimination of a drug from the body would make it impractical to maintain the compound at a level sufficient to have the desired effect. Clearly, the ideal is to test in humans only those compounds that have desirable pharmacokinetic properties. However, this is no trivial task: despite significant progress, we are still unable to predict the human pharmacokinetic profile of many drug classes from in vitro and computer-based methods.
We are therefore reliant on information gained in animals, which based on past experience has been the most predictive, to help screen compounds for those with an appropriate PK profile. One commonly applied approach to predicting a human PK profile from animal data is allometric scaling, which extrapolates the animal data to humans on the assumption that the only difference between animals and humans is body size. While body size is an important determinant of pharmacokinetics, it is certainly not the only feature that distinguishes humans from animals; perhaps not surprisingly, therefore, this simple approach has been estimated to have less than 60% predictive accuracy.
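As a sketch of the idea (the species values and the 0.75 exponent below are textbook assumptions for illustration, not figures from this article), single-species allometric scaling amounts to a one-line power-law extrapolation:

```python
# Sketch of simple allometric scaling: pharmacokinetic parameters such as
# clearance are assumed to scale with body weight as CL = a * BW^0.75.
# All numbers here are hypothetical, for illustration only.

def scale_clearance(cl_animal, bw_animal_kg, bw_human_kg=70.0, exponent=0.75):
    """Predict human clearance from a single animal species by allometry.

    cl_animal is the measured animal clearance (e.g. in mL/min);
    the result is in the same units.
    """
    return cl_animal * (bw_human_kg / bw_animal_kg) ** exponent

# Hypothetical rat data: clearance of 10 mL/min at a body weight of 0.25 kg.
predicted_human_cl = scale_clearance(10.0, 0.25)  # roughly 685 mL/min
```

The simplicity of the calculation is exactly its weakness: it encodes the assumption that body size is the only relevant difference between species, which, as noted above, is what limits its predictive accuracy.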
Microdosing: a small dose of your own medicine (5)
This is where microdosing comes in. In phases 1 to 3, pharmacological doses are evaluated for efficacy and safety, generally first in human volunteers and then in patients. The hypothesis is that microdosing will help reduce or replace the extensive testing in animals of the many compounds that do not have desirable pharmacokinetic properties in humans and would subsequently be rejected. But what is a microdose, and how could it help?
A microdose is so small that it is not intended to produce any pharmacologic effect when administered to humans and therefore is also unlikely to cause an adverse reaction. For practical purposes this dose is defined as 1/100th of that anticipated to produce a pharmacological effect, or 100 micrograms, whichever is the smaller (6).
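This definition translates directly into a small calculation (a sketch only; the example doses in the comments are hypothetical):

```python
def microdose_micrograms(anticipated_dose_ug):
    """Return the microdose per the regulatory definition:
    1/100th of the dose anticipated to produce a pharmacological
    effect, or 100 micrograms, whichever is the smaller."""
    return min(anticipated_dose_ug / 100.0, 100.0)

# For a compound with an anticipated 50-mg (50,000 microgram) dose,
# 1/100th would be 500 micrograms, so the 100-microgram cap applies.
# For an anticipated 5-mg dose, 1/100th (50 micrograms) is the smaller.
```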
The interest in giving such a microdose to humans early in the drug development process is centred on the view that many of the processes controlling the pharmacokinetic profile of a compound are independent of dose level. Therefore, a microdose will provide sufficiently useful pharmacokinetic information to help decide whether it is worth continuing compound development, which includes, for example, toxicity testing in animals.
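The dose-independence view amounts to assuming linear (dose-proportional) pharmacokinetics, under which a concentration-time profile measured at one dose can be rescaled to another. A minimal sketch, with hypothetical numbers:

```python
def normalise_profile(concentrations, dose_mg, reference_dose_mg=1.0):
    """Rescale a plasma concentration-time profile to a reference dose,
    assuming dose-proportional (linear) pharmacokinetics: concentrations
    scale in direct proportion to the dose given."""
    factor = reference_dose_mg / dose_mg
    return [c * factor for c in concentrations]

# Hypothetical profile (ng/mL) after a 0.1-mg microdose,
# rescaled to a 1-mg reference dose (10-fold higher concentrations).
microdose_profile = [8.0, 4.0, 2.0, 1.0]
predicted_1mg = normalise_profile(microdose_profile, dose_mg=0.1)
```

This rescaling is exactly the comparison made when microdose and therapeutic-dose profiles are normalised to a common dose; where the assumption of linearity fails (for instance through saturable metabolism or limited solubility, discussed later), so does the prediction.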
Regulatory requirements for human pharmaceutical testing
For ethical and safety reasons, and as a requirement of regulatory authorities worldwide, no test substance can be given to humans without first demonstrating some evidence of its likely risk. This evidence is provided by a mixture of in vitro and animal data that conforms to requirements laid down by the regulatory agencies (7).
In vivo safety assessment is required to be conducted in at least two animal species, one rodent, usually the rat, and one non-rodent, usually, but not always, the dog. The safety profile of the compound must be tested at sufficiently high doses and durations to produce exposure of the compound and its metabolites in the body at, or in excess of, that likely to be experienced during human clinical use.
Any compound shown to produce unacceptable adverse effects under these test conditions is dropped; for example, a compound causing severe liver toxicity in the test animal would be removed from development. In this manner, some assurance is provided that compounds to be given to humans are likely to be reasonably safe. However, this comes at the cost of exposing animals to compounds that either will never be given to humans, because of their adverse profile in animals, or are subsequently dropped during clinical development, for example because of an inappropriate PK profile.
Microdosing and the impact on animal use
In this context, the major benefit of microdosing in humans becomes apparent; the benefits are threefold (Table 1).
Table 1. The effect of microdosing on animal use
The microdosing era: technical and regulatory advances
While conceptually appealing, microdosing has only become a reality in recent years, on two accounts. The first is technical: the availability of highly sensitive and specific methods capable of determining minute quantities of a drug and its metabolites in the body, usually in plasma or blood. The most common of these techniques is liquid chromatography coupled with tandem mass spectrometry (LC-MS-MS). However, in the context of microdosing, even this technique, capable of measuring down to hundreds of picograms per millilitre, is insufficiently sensitive. An alternative method, Accelerator Mass Spectrometry (AMS) (Fig 3), is therefore likely to be the analytical tool most widely used in microdosing (9). AMS has been used in carbon dating and is exquisitely sensitive, being capable of directly counting individual atoms; an idea of its sensitivity can be gained from its ability to detect a liquid compound even after one litre of it has been diluted in the entire oceans of the world. A feature of AMS is that compounds must be isotopically labelled, most commonly with 14C. This might be viewed as a disadvantage, but in practice it is not generally a problem and can indeed be an advantage.
One regulatory requirement for any drug is an accurate characterisation of its fate in the body over time after administration. This characterisation is undertaken by giving subjects the radiolabelled drug and following the fate of the total radioactivity and the individual metabolic components in plasma and excreta. As such, potentially promising compounds will always be radiolabelled in anticipation of this requirement. Furthermore, the dose of radiolabelled compound required for human AMS studies is only on the order of 100 nanocuries, which is extremely low, at most temporarily doubling the normal total body burden of 14C, ultimately derived from 14C in the air. Indeed, this dose of radioactivity is so minute as to be well below that considered by the regulatory authorities to be a significant source of material risk.
The second reason that microdosing is on the current agenda is regulatory. Drug development is highly regulated, and many in the pharmaceutical and biotech industries tend to be cautious about adopting new techniques without clear support from the regulatory agencies. An encouraging development was therefore the publication in 2003 by the EMEA of a document encouraging the evaluation and exploration of microdosing and laying out the non-clinical safety assessment expected to be undertaken before administration of a microdose to humans (6). The significance of this encouragement by the EMEA, which has very recently been supported by the FDA (8), should not be underestimated.
Scientific validation and limitations of microdosing
As yet, while preliminary results look promising (10,11,12) (Fig 4), the concept behind microdosing is by no means proven, and more extensive studies are essential to establish the uses and limitations of the technique. Beyond further trials, the power of microdosing in early-phase human drug evaluation increases when it is coupled with other ultra-sensitive techniques, such as Positron Emission Tomography (PET), which measures the ability of a labelled compound to reach its target site, such as a receptor in the brain or a tumour (9,12).
Figure 4. Semilogarithmic plot of the plasma concentration-time profile of midazolam following a microdose (○) and a therapeutic dose (♦), both normalised to a 1-mg dose to allow direct comparison, showing that the microdose does predict the profile after a therapeutic dose. The data were obtained as part of the CREAM trial.
However, there are still many questions surrounding the predictive accuracy of microdosing. There is still not a definitive answer to the question of whether the body's reaction to a particular compound is the same at microdose levels as it is at pharmacological doses. If not, this could lead to: a) false negatives, where a compound is rejected unnecessarily because its PK profile would have been acceptable had it been evaluated at higher therapeutic doses; or b) false positives, where the compound is acceptable based on microdosing data but fails subsequently when tested at therapeutic doses.
The limitations of microdosing relate to compound metabolism and compound solubility. Many processes within the body involve specialised transporters, enzymes and binding sites, which can become saturated, such that the pharmacokinetic profile at the higher therapeutic dose is very different from that seen with the microdose. Also, a compound needs to be in solution to pass across body membranes, be absorbed and act within the body. Most compounds dissolve extremely readily at the microdose level, yielding rapid and often extensive absorption. However, the limited solubility of many compounds at higher, therapeutic doses means that absorption becomes much more dependent on the rate and extent of dissolution, which cannot be predicted from the microdose. The EMEA maximum suggested dose of 100 micrograms may also be too low to achieve the full potential of microdosing.
While it is widely recognised that more research into microdosing is necessary, currently available data indicate that, used wisely, this technique will help to reduce and replace animal testing in identification of novel drug candidates. In addition to having implications for the 3Rs, microdosing would simultaneously benefit patients and the pharmaceutical industry alike, with quicker access to new medicines and reduced compound attrition at later stages of drug development. Future studies may involve a number of different drugs being administered and analysed consecutively. Moreover, microdosing has wider applications in various areas of the biomedical, biological and environmental sciences, such as the development of endogenous biomarkers to quantitatively evaluate the in vivo effects produced by drugs.
In conclusion, microdosing is a promising frontrunner in the search for alternatives to animals in drug discovery and development and, if scientifically validated, has potential advantages for animals, patients, and the pharmaceutical companies developing life-saving drugs for the future.
Thanks to Dr Kathryn Chapman for helpful comments on this article.
All views and opinions expressed in this article are those of the author and do not necessarily reflect the views and opinions of the NC3Rs.