SciPoll 548: AI: Harm or help in your area of expertise?
How do you see AI being harmful in your area of expertise?
Results
(138 Answers)
-
user-363282
It can do harm in the case of bugs. -
user-460715
It introduces errors when generating text. -
user-732397
Would it replace humans entirely? Doubtful, but..... -
user-836505
We need to recognize the limitations of AI in dealing with real-world high-risk situations. For instance, a poor motion planner can result in the loss of life. -
user-153764
unlikely -
user-887788
It currently generates fictitious responses. It may be used to replace expertise, and quality will drop if it becomes the primary resource. -
user-50697
Giving people confidence without all the necessary knowledge of the subject. Chemistry is a very complex field where many theories are correlated; considering only one AI affirmation about the topic you intend to work on can lead to big problems. -
user-445218
I do not. -
user-156475
Can't say at present. Too early -
user-195977
It is also being used for military and surveillance purposes, but this is always true. Drone attacks, killer humanoids, these things will happen and it is due to military and security concerns that will always be a part of the human experience. It is best to do as much research as possible in an open manner and avoid monopolization of new technology behind military funding. The playing field should stay as even as possible or it will be exploited by someone. -
user-417392
The use of AI without human validation in my area (medication safety) can be dangerous, as it can lead to misinterpretation of results by the general public. -
user-270335
Its use for purely commercial interests without awaiting appropriate validation. -
user-180652
May be depended on by patients instead of getting professional help -
user-234128
Could lead to "automatization" of specific procedures (e.g., assessments, therapy), which would remove the human providers and decision-makers from part of the process. This would be rationalized as a cost-efficient or cost-saving measure. -
user-628816
Depending on it too much. -
user-98823
If such a literature search output is incorrectly biased, too much weight may be given to the wrong papers. -
user-9932
It may greatly reduce the number of activities performed by humans, especially collecting, organizing, and deriving models from data. -
user-99098
Already many published works rely on questionable assumptions and wrong interpretations of data (for example, assuming you can say something from a particular technique when that is not the case). It will only get worse. -
user-326793
Wrong decision -
user-263414
teaching, misinformation -
user-625125
If scientists do not properly check the outputs coming from the algorithms, it could lead to useless results and lost time. -
user-627640
too much dependence on the AI results -
user-947988
Social media fanatics suggesting Franken-plants again. -
user-484966
Not too much. -
user-765513
Students just copying and pasting without proper thinking and hard work. -
user-604521
When facts are not checked and when methods that may not be reproducible are used. -
user-177409
Black box - hard to verify - competing interests. -
user-470071
If AI controls motorized units, it can produce errors that do a lot of harm to a patient. -
user-340576
ND -
user-66641
Failing to verify the results. -
user-970956
it could miss details -
user-747249
Treatment; overdiagnosis -
user-276677
Computers and/or AI should never bear final responsibility, e.g., in diagnostic or treatment decisions. -
user-511217
I don't think so -
user-389881
no way -
user-414626
If the data are personal (less likely in ecotoxicology) I would be concerned that the data would get into the wrong hands. -
user-277089
There is a possibility that scientists will become too dependent on AI without checking the results for validity. -
user-545783
Possibility of false prediction may prevent reaching the correct result, but this possibility also exists in other in vitro and in vivo tests. -
user-313917
My concerns are related to the potential possibility of dual-use or malicious technology regarding high-consequence pathogens, high/maximum biocontainment facilities, biotechnology, and synthetic biology. My concerns are related to biosecurity. -
user-898139
There will be no harm. -
user-123746
Authorship, lack of proper AI citation, copyright cheating, and other research-ethics violations. -
user-883288
I don’t -
user-733609
AI can be dangerous if scientists stay away from it; AI can be pushed by irresponsible people for misleading uses. -
user-469485
Especially the younger generation is adopting AI tools; in due course of time, AI will be driving them. So, in short, technology will rule man. -
user-943896
When it is used without philosophy -
user-637348
Not harmful -
user-292351
It could potentially lead genetic counselling operators to relax their attention and cause diagnostic errors. -
user-184231
Incorrect utilization. Junk in still equals junk out. AI should not be used with no understanding of the underlying data or statistical analysis. -
Sonne72
Fraud. -
user-667012
The tendency to plagiarize from it -
user-300423
If used without care, it can give very wrong results and be difficult to supervise. -
user-156962
Not sure -
user-615872
With optimism. -
user-599118
AI might pose ethical issues especially if applied to human genetics and genomics. -
user-556903
There is always a danger that AI modelling will be used by researchers who are incompetent in AI and apply automatic ML tools "blindly". However, the main danger, in my opinion, is selecting data for modelling too narrowly or too selectively. Failure to collect an adequate amount of data and, most importantly, to create an exhaustive list of factors before "throwing" the data into the model is the cause of wrong predictions and decisions. To avoid this, data and feature engineering become very important. It is also the fuel for various pseudo-theories. Avoiding these effects is the main task of cooperation between so-called domain specialists and AI specialists.
-
user-103828
It may follow protocol and may not be customized. Clinical examinations and differentials need to be streamlined; this may potentially change. -
user-870844
In manuscript development and publishing -
user-97558
It may promote information overload -
user-150822
The results of AI might be misleading. They should be validated. -
user-589379
The tendency and over-optimism that AI will solve all problems better than humans might lead to a situation where human intelligence is replaced by AI, to the point where silliness and simplistic solutions without human judgement take over. In the end, machines might tell humans what to do. Nick Bostrom is not so far off the mark with his warnings. -
user-439415
Wrong information -
user-682252
I could see AI replacing some human jobs, for example that of pathologists, who have classically analyzed human biopsies using microscopy tools, which are time-consuming and prone to human error. AI could help in the diagnosis of many diseases by feeding it images of the biopsy tissue. -
user-663996
By producing misleading results. -
user-653283
There is no harmful effect of AI on my expertise. -
user-994669
none -
user-900806
Data breaches, plagiarism. -
user-944765
I am concerned that AI will exacerbate disparities, as it is generative from what already exists out there: a lot of stereotypes. It may also lead to incorrect diagnoses, as there are nuances to people's use of language when they discuss how they are feeling and their symptoms. -
user-531362
Man is a social being and usually depends on social capital. If replaced by AI, man will become idle and lonely, as there will be less time to interact and discuss issues. -
user-189534
By taking decisions supported exclusively by AI -
user-300516
At the moment I do not see dangers for my area of expertise -
user-902762
[Translated from Arabic] AI, for some users, "can lead to artificial errors affecting patients around the world, especially their health data and its security, in addition to the possibility of access by unauthorized persons". -
user-102176
Decision making and farm management practices based on AI will likely be dictated by a single private company that owns the technology and algorithm. -
user-470717
It may be misleading sometimes. -
user-171296
AI may discourage critical thinking in favour of rapid throughput. This may lead to models being developed that describe data well, but are not predictive, and are unhelpful in enhancing understanding of new drugs and systems. -
user-509498
I believe that many individuals and organizations will rely on pre-trained networks created by larger entities, as smaller groups, institutions, and the like often lack the necessary data or computational resources to develop their own. However, this reliance means that any potential biases embedded in these widely-used networks could have far-reaching and significant impacts. -
user-340804
I see no harm from AI. I know that some people are afraid that AI can substitute for them in the workplace; however, I believe that the development and widespread adoption of AI technologies will offer new types of employment. -
user-297941
Not at all harmful -
user-695643
AI can be incredibly harmful if used in its premature stages for diagnostics and prognostics. If an AI model that is not robust is deployed for diagnostics, we will see far more false positives and false negatives, which will result in wasted resources and danger to life.
Moreover, if a sufficiently advanced AI falls into the wrong hands, it can be used to discover, design, and manufacture harmful materials. -
user-777357
AI might ultimately become smarter than Homo sapiens (See work by Harari) -
user-79617
It may be harmful if the algorithms overdiagnose abnormalities in normal conditions, or underestimate the diagnoses in abnormal conditions. -
user-740
Data privacy -
user-794592
As I noted above, I believe AI can be very helpful in diagnosis and treatment planning our cases. -
user-282806
AI systems are usually a black box; therefore there is no learning effect from an AI algorithm. -
user-533989
In its uncritical application to literature searches. -
user-520983
I think individuals completely accepting what AI spits back to them as a totally correct conclusion can harm the progress of research especially if the AI model does not take certain factors into consideration. -
user-578906
Expecting database -
user-331297
No opinion -
user-180243
Use of poor quality data, which pollute the literature -
user-37487
As far as I understand, AI looks backwards; thus, it is a mere repetition of the knowledge it has acquired. The total replacement of humans who bring new ideas and push barriers is my worst fear. -
user-477483
If used without criteria, it can generate false information. -
user-484050
No -
user-7366
For those that are not in the discipline, a lot of detailed information may induce more fear and anxiety. -
user-310423
If AI produces fake references. I have experienced that, so you must check every article carefully. -
user-671388
For companies and other parties to publish false information that supports their products. -
user-211258
AI can be misused to misinterpret and misreport the research by people who are inexperienced or lacking in integrity. -
user-798662
It can be harmful if it is used to engineer products which are unsafe and harmful to humans. -
user-304247
I don't think it is. -
user-914553
Making mistakes that a human would not make. -
user-250140
Many kinds of junk or fake data or theories may disturb true evidence. -
user-883671
Maybe misused by students -
user-137308
It may stimulate people to just use AI to write their works. -
user-988514
Letting AI work for you completely. -
user-9504
Only insofar as we fail to properly account for intrinsic biases of AI tools -
user-773118
unclear -
user-126526
No, more good than harm. -
user-307869
For the moment I do not see any harm if it is used correctly. -
user-140649
People will rely on it and then make wrong decisions. -
user-913574
Too high expectations lead to AI winters. Let's hope we are not getting into one. -
user-82216
Manipulation of data will be possible/easier. -
user-382369
It would be harmful when the data and its analyses are not properly downloaded or performed. Also, good criteria are needed to discern whether the results of the use of AI are valid or not. -
user-391781
Some cases are complex, and the diagnosis relies heavily on clinical details, physical examination, or even the expertise and experience of the pathologist. I feel that these considerations will not be properly addressed if we rely heavily on AI. -
user-499104
Abuse or misuse in tasks not understood by the person using it: statistics, rephrasing entire articles with little knowledge or expertise on the part of the individual. -
user-237934
AI can analyze vast amounts of data and so can be a substitute for imaging professionals. -
user-390499
Misinformation. -
user-880409
Dependence by physicians will lead to gaps in care. -
user-486614
The ethical use of AI in neurology raises important questions, particularly in terms of patient privacy, consent, and the responsible handling of sensitive medical data. There's a need for clear guidelines and regulations to ensure ethical practices in the development and deployment of AI technologies in healthcare. -
user-954693
Ethical decisions -
user-577966
No harm noted as yet. As its usage increases, we will learn more about the harms it can cause. -
user-534902
While it may not be in the near future, AI could potentially dominate the field of diagnostic radiology in the later stages, possibly replacing doctors. However, it should not be forgotten that Radiology is a multidisciplinary field that requires a great deal of depth. For AI to achieve this, it needs time. -
user-61436
Not so much. -
user-844856
It could be dangerous to try to do without the human part that exists in the specialty -
user-181693
If it is utilized in a way that takes the art of medicine, looking at the entire picture of a patient and making the best decision, away from physicians. -
user-606285
I have several concerns on privacy and security issues, transparency, and quality of responses. -
user-318554
Any -
user-95563
Accuracy of knowledge regarding specificity of subject -
user-719680
Overreach into privacy. The continued issues with racist algorithms -
user-269733
Known information may be incorrect; risk of data fabrication -
user-533285
Social damage -
user-180234
AI that makes up responses is unacceptable and can be harmful to all disciplines. The AI released for use needs to prioritize accuracy and note when information is not available or when it can otherwise not respond to a request. Creativity with responses is very dangerous. For example, I received a reply with what looked like credible literature citations (i.e., the journal was one that would publish articles on the information requested) but, when I checked, they were all fabricated. Luckily I knew that the citations may be "hinky" and checked the citations....some people would not...yikes. -
user-106770
Overestimation. No more patient-centered care; increased levels of social and psychosocial pathologic behavior. -
user-911600
Misses harmful effects -
user-364672
Can't see harm. Misuse is possible if responsibilities are ignored and mysteries expected. Likewise with any other technical advance. -
user-434792
Not at the very moment -
user-856013
Trying to be used in place of humans for psychotherapy -
user-529270
It may lose the ability to sense the peculiarities of individual variations and sensitive human aspects. -
user-289373
inaccurate diagnosis -
user-577045
Plagiarism and false reporting can also be done using AI -
user-788615
Although it helps in finding things, it limits extraordinary thinking on particular subject matters.