A U.S. government scientist poses the following questions on dose-spacing and the inclusion of "human-relevant" dose groups for toxicity testing purposes ($500 for best answer):
Question: Proper dose spacing – Some groups want to test “human-relevant” exposure levels. On the other hand, there are recommendations for testing at doses that ensure the compound is being absorbed and that some toxicity will be observed. The MTD is criticized for possibly producing nonspecific effects, especially with relatively non-toxic compounds. What is the latest thinking on proper dose spacing that is feasible and functional for regulatory assessments? When should additional dose levels (>3) be added, and what dose spacing is recommended to capture both toxicity and human-relevant exposures (or extrapolations)?
This Ping will have two rounds of participation:
- Week 1: Answer the question (01/10/22 - 01/17/22)
- Week 2: Vote for best answer (01/17/22 - 01/24/22)
The accepted answer(s) will be the one(s) receiving the most votes from SciPinion's expert community.
Please remember you can choose to remain anonymous or not in answering the question (please set your display name accordingly when you answer).
Tanya
PREFACE: Regardless of the latest thinking on dosing regimens in animal studies, the regulatory view of the ultimate goal of tox testing remains skewed towards hazard IDENTIFICATION (serving C&L) rather than CHARACTERIZATION (to inform risk assessment). That is, to ensure no potential effects are missed, approaches like limit doses or the MTD are recommended or preferred. Additionally, human exposure is considered a very dynamic attribute (compared to intrinsic hazard properties); its estimates are usually associated with large variation and uncertainty, i.e. they are not reliable enough to drive tox study designs.
ANSWER: The most recently developed approach to top dose selection is the Kinetically Derived Maximum Dose (KMD) method, originally suggested for PPP hazard assessments (Saghir, 2015; https://doi.org/10.1016/j.yrtph.2015.05.009). The KMD was originally defined as the administered dose slightly above the onset of nonlinear TK, as informed by administered-dose versus systemic-exposure data. Further scientific debate around the KMD (see https://ntp.niehs.nih.gov/whatwestudy/niceatm/3rs-meetings/past-meetings/kmd-2020/kmd-2020.html?utm_source=direct&utm_medium=prod&utm_campaign=ntpgolinks&utm_term=kmd-2020) has concluded that the design of repeated-dose animal studies, including dose selection, should involve weighing data from available shorter-term in vivo and in vitro studies that inform both the TK and TD of a chemical, such as potency, target organ, toxic moiety, metabolism, and mode of action (OECD TG 116) (OECD, 2012/2014). This weight-of-evidence approach should also include an understanding of human exposures, such as routes and scenarios, and of how these exposures drive internal exposure. This integrative approach provides a better opportunity for longer-term animal studies to provide data that appropriately characterize the nature of specific toxic responses, describe human-relevant dose-response relationships, and elucidate the roles of TK and TD in toxicity pathways. The approach should also consider animal welfare aspects when determining the range and number of dose levels in testing (e.g., going beyond 3 dose levels may be problematic due to the increase in the number of animals required).
Another helpful resource, providing an overview of the requirements and approaches to dose selection across various chemical sectors (incl. pharmaceuticals, agrochemicals, food ingredients), was developed by ECETOC in 2021 (see https://www.ecetoc.org/wp-content/uploads/2021/03/ECETOC-TR-138-Guidance-on-Dose-Selection.pdf). In one of the case studies for agrochemicals in Appendix D, an approach to dose level selection based on predicted human exposure was investigated. That is, following identification of the worst-case exposure value, an uncertainty factor of 100x (10x each for inter- and intra-species differences) can be applied in reverse to arrive at a ‘target’ NOAEL for the study. Using this value as a guide for the low dose in a chronic study, relevant mid and high doses can be set at conservative intervals (e.g., 10x). Interestingly, the report also states that, based on an analysis of classification outcomes, no automatic relationship could be established between positive classification outcomes and increasing dose, especially approaching the limit dose (currently 1000 mg/kg/d) for repeat-dose toxicity studies, across all major endpoints measured. This observation provides reassurance that a workable and acceptable approach can be found that delivers accurate information for risk assessment and for assigning a hazard-based classification, and that these needs can co-exist and be served by the same study (see PREFACE).
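The reverse calculation from the agrochemical case study can be sketched in a few lines. The exposure value below and the 10x spacing are illustrative assumptions, not recommendations:

```python
# Sketch of the ECETOC TR 138 Appendix D idea: work backwards from a
# predicted human exposure to a 'target' NOAEL, then space doses upward.
# The exposure value is hypothetical, used only for illustration.

UF_INTERSPECIES = 10   # animal-to-human extrapolation
UF_INTRASPECIES = 10   # human variability
LIMIT_DOSE = 1000.0    # mg/kg bw/day, current repeat-dose limit

def dose_levels(worst_case_exposure, spacing=10.0, n_doses=3):
    """Return low/mid/high doses (mg/kg bw/day): low dose anchored at the
    'target' NOAEL (exposure x 100), higher doses spaced by `spacing`,
    all capped at the limit dose."""
    target_noael = worst_case_exposure * UF_INTERSPECIES * UF_INTRASPECIES
    return [min(target_noael * spacing**i, LIMIT_DOSE) for i in range(n_doses)]

print(dose_levels(0.05))   # exposure of 0.05 mg/kg/day -> [5.0, 50.0, 500.0]
```

A fourth dose level under this scheme would already hit the 1000 mg/kg/d limit dose, which is one reason >3 levels rarely add value at the top of the range.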
On a final note, it is worth mentioning that the lower end of the dose-response relationship (DRR) may be of greater importance. The regulatory view on the impact of non-monotonic dose responses (NMDR) on establishing reference values in human risk assessment is being reconsidered. For example, EFSA calls for an international effort to provide more detailed dose-response information/guidance for risk assessment, taking into account animal welfare considerations as well as developments in the field of NAMs, to facilitate capturing and concluding on the presence of NMDR (https://www.efsa.europa.eu/en/efsajournal/pub/6877).
Maria Dagli
In my opinion, it is correct to test human-relevant exposure levels. However, human-relevant exposure levels are not always known and may vary according to different factors. In addition, it is always important to include, in the toxicity study, doses that can cause adverse effects, in order to then test lower doses and determine the NOAEL. Therefore, dose range-finding studies are important to identify the doses that cause adverse effects; from these, the lower doses can be defined.
Chester
The use of pharmacokinetics has been largely overlooked when selecting doses for a toxicity study. Bioavailability (fraction actually bioavailable for toxicity), clearance half-life, and measures of internal dosimetry (e.g., plasma levels) are critical parameters for dose spacing and the inclusion of the most relevant doses for humans. When actual pharmacokinetic information is not available for dose selection/spacing, various predictive tools are available for parameter estimation or analogue identification for which there may be data available. The most useful toxicity studies are those designed with dose spacing based on pharmacokinetics information in the test species as compared to humans.
Kenny Crump
The common testing procedure is to test at a dose equal to the MTD and then to add one or two doses at fractions of the MTD. Testing at the MTD has the potential advantage that a negative result has about the best assurance available from the bioassay that the health effect is not caused in the test species by this substance and this route of exposure (absent some quirk in the metabolism or distribution of the substance).
Testing at “human relevant” doses (doses similar in some sense to doses in a human population) seldom provides useful information about the risk at doses to which humans are exposed, because each animal tested stands in for perhaps thousands of humans; consequently, an increase in a serious health effect such as death of as much as 1%, although likely considered an extreme risk among humans, would be very unlikely to be detected in a standard bioassay with 50 animals per dose. Testing at very small doses (e.g., ≤ 1/10 of the MTD) would likely have the same effect in a trend test as testing at zero dose, so the results of such testing would be equivalent, in a practical sense, to increasing the size of the control group.
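A quick binomial calculation (stdlib only; the numbers are illustrative) supports this detectability point:

```python
# Back-of-the-envelope check: a true 1% incidence is essentially
# invisible in a 50-animal dose group.
from math import comb

def prob_at_least_k(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 50, 0.01
# Expected number of affected animals at a true 1% incidence:
print(n * p)                                # 0.5
# Chance of seeing even one affected animal:
print(round(prob_at_least_k(n, p, 1), 3))   # ~0.395
# Chance of seeing, say, the 3+ cases that might flag a group:
print(round(prob_at_least_k(n, p, 3), 3))   # ~0.014
```

With a roughly 60% chance of observing zero cases at all, a 1% effect is statistically indistinguishable from background in a group of this size.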
Given this, the best use of multiple doses below the MTD is probably in determining the shape of the dose response, i.e., does the dose response appear to be linear or non-linear? Perhaps the way to address this question is to have several doses (say, two or three), in addition to the MTD, spread out between roughly 30% of the MTD and the MTD.
The NAS Committee on Risk Assessment Methodology (1993) authored the document, “Issues in Risk Assessment”, which discusses this issue at some length, and suggests several options.
David Jacobson-Kram
The low dose in a tox study should ideally deliver an exposure around the expected human exposure. An MTD should be established at some point during product development. Alternatively, the high dose should provide a large (50X) safety margin for exposure. The mid dose should represent an exposure (not dose) in between.
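One way to realize this spacing numerically (a sketch with hypothetical numbers): anchor the low dose at the expected human exposure, the high dose at a 50x margin, and place the mid dose geometrically between them on an exposure basis, which is one reasonable reading of "in between":

```python
# Illustrative spacing: low ~ human exposure, high = 50x margin,
# mid = geometric midpoint of the two (on an exposure basis).
from math import sqrt

human_exposure = 2.0                 # mg/kg/day equivalent, illustrative
low, high = human_exposure, 50 * human_exposure
mid = sqrt(low * high)               # geometric midpoint, ~7x above low
print([low, round(mid, 1), high])    # [2.0, 14.1, 100.0]
```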
Jun Sekizawa
I agree that the MTD is criticized, and it would be better to use "human-relevant" exposure levels for toxicity testing. As regards proper dose spacing: for non-genotoxic chemicals, spacing should allow a NOAEL to be detected or an MOE to be calculated; for genotoxic chemicals, spacing should be appropriate for estimating a slope factor or unit risk.
DDLevy
I don't see how you can get away from some version of the MTD as the presumptive default for the limit dose. There may be exceptions, e.g., human pharmaceuticals, but even there toxicity often becomes evident at doses not much higher than the therapeutic dose. Having said that, there should be room to make that a rebuttable presumption based on ADME, chemical class, or other factors that may be specific to the regulatory paradigm. In short, hypothesis-driven generation of experimental data should always be considered an acceptable alternative to the generic definition of the MTD.
Manuel Dominguez Estevez
Test the estimated human-relevant dose (x), plus 3x and 10x, for pharmaceuticals; add a 100x level for industrial chemicals, pesticides, or food chemicals.
Dosing should be based on TK, MoA, in vitro, and read-across data, where possible.
Avoid the MTD or doses causing severe toxicity, especially in DART studies.
See ECETOC Guidance, 2021 https://www.ecetoc.org/wp-content/uploads/2021/03/ECETOC-TR-138-Guidance-on-Dose-Selection.pdf
Shakil Saghir
The best way to set doses in animal studies is to use the onset of nonlinearity in the systemic dose of a chemical and/or its metabolite(s) (biomarkers). Using this method takes care of the absorption issue, as the systemic doses will be associated with the chosen external (nominal) doses. The highest dose should be set slightly above the point of departure from linearity (the KMD) and below the MTD, as the appearance of systemic-dose nonlinearity is a biological response that occurs at a much lower dose than the MTD; use of the MTD subjects animals to cruelty. The other two doses can be selected 10- and 100-fold, or 3- and 10-fold, lower than the highest dose. Extrapolating from the two lower (or the lowest) doses, where the biology of the animals is not compromised, to human-relevant doses will be more meaningful than using doses that cause some effect in animals. If the profound toxicity of a chemical must be determined, a dose at the MTD can be added to satisfy those regulatory agencies still asking for studies at the MTD; however, I am not sure what beneficial information would be gleaned from an MTD dose. Alternatively, when a better titration of the dose response is needed, one or more doses can be added at the lower end.
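This KMD-anchored scheme can be sketched as a small helper. The KMD value and the 1.2x "slightly above" factor are hypothetical choices for illustration, not fixed guidance:

```python
# Sketch of the KMD-based dose selection described above: top dose set
# slightly above the onset of TK nonlinearity, lower doses at fixed
# fold-reductions (e.g., 3x and 10x, or 10x and 100x, below the top).

def kmd_doses(kmd, above_factor=1.2, fold_reductions=(3, 10)):
    """Top dose = kmd * above_factor (mg/kg/day); remaining doses are
    the top dose divided by each fold reduction."""
    top = kmd * above_factor
    return [top] + [top / f for f in fold_reductions]

# KMD of 100 mg/kg/day (hypothetical):
print([round(d, 1) for d in kmd_doses(100)])   # [120.0, 40.0, 12.0]
```

The wider (10x/100x) reductions would be chosen when the lower end of the dose response needs to approach human-relevant exposures.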
Daniel Lerda