SciPoll 396: SOT/EUROTOX Debate: Is There a Role for Artificial Intelligence (AI) and Machine Learning (ML) in Risk Decisions?
On a scale of 1 to 5 (1 = definitely not feasible; 3 = too uncertain to guess; 5 = highly feasible), do you think it will be feasible to rely solely on AI- and ML-based toxicity prediction models for safety testing of chemicals, pesticides, pharmaceuticals, and food ingredients within the next decade? (Please feel free to explain) Grouped by: How many years of experience do you have (years since obtaining your final degree)?
Results
(130 Answers)
Answer Explanations (49)
We also need more reliable models for evaluating the ML model and to supplement it.
We will always need some biological validation
The toxicity testing speed and quantity are not sufficient to feed enough data to an AI/ML system to be successful. More data is needed to "teach" AI/ML.
The AI- and ML-based toxicity prediction models will be useful tools to support the development of safety tests, but not within the next ten years.
Food safety variables are multiple, and pesticides can have long-term effects that only manifest decades later. If AI and ML are used, the models should rather be based on variables and observations from the literature, NOT just recent literature. Some of the best toxicity research is over 50 years old. Journals only publish NEW research, and researchers are encouraged to use references less than 5 years old. Some facts are definitely older than 5 years...
AI and ML will have a place, but we shouldn't entirely remove human sense
But applicability domain is not sufficiently addressed.
The acceptance of decisions purely based on AI/ML is low because of the burden of uncertainty. Thus, NAM data (in vitro etc.) are always needed to a certain extent. Even then, the process of gaining acceptance is quite full of obstacles...
I feel that AI and ML could be game changers in the future, but their output totally depends upon the quality and depth of input data.
In some use cases, this is feasible, e.g. for chemicals used in industrial use or inertly in consumer products with minimal exposure risk to the general public. For pesticides, food additives, and pharmaceuticals I think some level of safety testing will be needed, but can be prioritized/assisted by AI/ML toxicity predictions.
For pharmaceuticals, regulations will not allow relying solely on AI- and ML-based toxicity prediction models to completely replace animal and human investigations, due to lack of confidence and difficulties around the explainability of those models; however, they could considerably decrease the need for those investigations and save a lot of time and cost.
I hope so
I do believe that cross-validation against live (at least in vitro) experimentation will only reinforce the belief in AI validation.
With properly conducted in vitro assays, for sure.
At this time, except for certain well-defined cases, it is not possible to do this alone for the safety testing of chemicals, pesticides, pharmaceuticals, public health, and food ingredients in a health-protective manner. They can be applied to good effect as supplemental tools, but not as stand-alone tools.
Still need experts
I don't think we have enough QSAR validation for all endpoints to do this in the next decade.
The process needs substantial validation
Not solely. AI and ML are tools for gathering information. Interpretation of this new information is a job for humans. Applying this information to regulatory issues with wisdom is also a job for humans.
The interface between biotic variation and chemical variation will be too great to rely solely on these models.
Ten years is too short a period to build reliable AI models.
The word "solely" is a deal breaker.
Practical aspects also need to be considered.
Again, unless NAMs exist that completely and accurately replace animal testing, I do not see how AI and ML can make this transition in 10 years, particularly when the EPA goal is by 2035.
I do believe it will be highly feasible. Considering the recent rapid growth of AI and ML, I think it will be feasible to rely solely on AI- and ML-based toxicity prediction models for safety testing of chemicals, pesticides, pharmaceuticals, and food ingredients within the next decade, or even sooner.
see #1
In conjunction with my answer to #1, I believe it is important to use all tools available, including human study design, experimentation, and interpretation.
The first step of screening might profit from AI techniques. It would be a good way of selecting the agents worth further experimental testing. It might also save costs, but it would not replace laboratory experiments.
As mentioned above, AI/ML today and in the foreseeable future will not be able to predict any of the complex endpoints like repeated (systemic) tox, reprotox, immunotox, (non-genotoxic) carcinogenicity etc. Thus, AI/ML replacing animal testing in the foreseeable future will not be feasible.
See 1 above
Larger populations are needed to test hypotheses
I don’t think the time has come for deliberate exposure of humans to drugs; exposure is too high, as is the uncertainty of hazard. Foods may be OK depending on the extent of exposure and suitable analog data. I think, depending on the application, pesticides and other chemistries are more amenable, since exposure is often incidental and can be ameliorated with proper PPE.
With the seeming acceleration of AI into all fields, 10 years is a long time and I would be hopeful such applications have been tried and at least partially validated.
As I have expressed concerns in my recent articles and books, unless scientists try to better comprehend the complex electrochemical and highly regulated immune neuroplasticity of the human body in health or disease processes, applications of AI or ML could additionally produce false flags that are based on false foundations. The numerous isolated data that are not integrated or understood on the potential biological harms of drugs, vaccines, pesticides, GMO foods and ingredients, and other potential genotoxins (EMFs, other low-level carcinogens), together with diverse individual health status, make AI or ML subject to unpredictable errors and miscalculations.
Not in the next decade; classical toxicity tests will still be needed.
Again, this is incorrectly describing what AI/ML is. Garbage in, garbage out. Most experiments are not accurate enough and/or do not yet capture all aspects of natural biology (think time resolution of biological effects, circadian rhythm, between-human/sample variation, etc.).
Not initially. We will need a parallel model.
Feasible, but only at lower Tiers.
Not likely; there should not be an expectation that something will “always” hold true.
I don't believe that there are enough data to adequately predict in vivo safety based on AI and ML alone.
Please see explanation for #1.
My results from Leadscope and In Vitro MultiFlow were incorrect when further analyses were performed.
See above.
Yes; with models being improved day by day, it would certainly be feasible to have better predictive models.
AI and ML are at the beginning stage. We must learn how to trust the prediction model. As a result, basic research is still needed.
Perhaps someday, but not in the near future. Currently, AI/ML would best serve screening and prioritizing chemicals for hazard.
Again, same as the response for question 2: it depends on the endpoints; some endpoints are more mature than others.
Interpretation of complex data sets will not be possible with AI/ML, but it will be an important tool for certain approved analyses.
See above.