Results
(238 Answers)

Answer Explanations

  • Yes
    user-284533
    Yes, AI is not mistake-proof - sometimes it hallucinates, sometimes its math ability is suspect, sometimes the grammar is not amazing. Always review.
  • Yes
    user-964888
    Should have a human review to check the relevance.
  • Yes
    user-637083
    Anything generated by AI should definitely be reviewed by a human with expert knowledge.
    AI should be harnessed to make our lives easier and to remove tedious or time-consuming tasks, but the output should ultimately be checked and verified by a human.
  • Yes
    user-397183
    I don't think AI is yet writing scientific papers or even emails that don't require substantial editing
  • Yes
    user-290459
    It's hard to know whether AI-written things are true.
  • Yes
    user-650602
    Yes.  It is often wrong
  • Yes
    user-574398
    AI can provide interesting insight and perhaps some answers.  It can also create garbage.  Only a human expert would know the difference.
  • Yes
    user-615884
    Always!
  • Yes
    user-762237
    AI-generated content requires human review to ensure accuracy, quality, and integrity in the publishing process. Human review is critical for several reasons:
    1. While AI systems are advanced, they are not infallible. They can misinterpret data, generate errors, or produce results that lack contextual nuance. Human review ensures that the information and analysis provided by AI is accurate and valid. Researchers can review AI-generated results, correct any errors, and ensure that the results are consistent with research goals and existing knowledge.
    2. The quality of scientific publications must meet high standards, and AI tools, despite their capabilities, cannot fully guarantee this on their own. Human reviewers can assess the coherence, readability, and overall presentation of AI-generated content. They can make necessary adjustments to improve clarity, logical flow, and adherence to publication guidelines, ensuring that the final manuscript is of the highest quality.
    3. AI cannot understand the broader context and interpret results with the depth and insight that human researchers possess. Human review allows for the incorporation of expert judgment, contextual knowledge, and nuanced interpretation that AI cannot provide. This is especially important in complex or interdisciplinary research, where contextual understanding is key to accurate interpretation.
    4. Ensuring ethical integrity in research and publication is paramount. Human review is essential to verify that AI use adheres to ethical standards, such as avoiding biased interpretations, ensuring proper attribution, and maintaining transparency about the AI's role in the research process. Researchers can also ensure that AI-generated content does not inadvertently include sensitive or inappropriate material.
    5. Researchers are responsible for the content they publish. Human review ensures that they take full responsibility for the work, including any content generated by AI. This accountability is critical to maintaining trust and credibility within the scientific community. By thoroughly reviewing AI-generated results, researchers can confidently stand behind their publications.
    In summary, while AI can significantly improve the efficiency and scope of research tasks, human review is essential to ensure that AI-generated content meets rigorous standards of scientific accuracy, quality, and ethical integrity. The collaboration between AI and human expertise ensures that the final publication is both innovative and reliable.
  • Yes
    user-138703
    I have no doubt.
  • Yes
    user-276088
    A human check to verify the source data is always needed.
  • Yes
    user-475346
    For the same reasons as mentioned above.  It's good to use to generate initial text, but this needs to be reviewed for scientific accuracy since the machine cannot identify this consistently at the current time.
  • Yes
    user-907425
    Everything requires human review.
  • Yes
    user-740731
    AI is artificial, and it cannot judge or interpret anything based on true evidence
  • Yes
    user-81297
    Yes, I believe that AI supervision is important since you do not know where the AI is "competent". Please remember that AI is just "interpolating" (and cannot extrapolate).
  • Yes
    user-245397
    Absolutely, AI has to have human input to make sure that the meaning is correct.
  • No
    user-404499
    not as long as the use of AI is stated in the methods and the summary sentence
  • Yes
    user-905470
    AI is prone to mistakes. It is important that a human reviews the final work to prevent mistakes getting into the final publication, as these works are used in important fields.
  • Yes
    user-638389
    Absolutely, yes as AI makes errors. 
  • Yes
    user-52862
    If credit is to be given to the overall output of human + AI tool, then the human must take responsibility for the overall quality of the output. Therefore, human review is a must.
  • Yes
    user-840552
    Yes, when AI is used it is essential that human review is carried out to cross-validate the results it generates, which could otherwise compromise the integrity and credibility of the scientific publication.
  • Yes
    user-543438
    AI still makes mistakes, we don't want inaccurate information being published. 
  • Yes
    user-678105
    AI may use incorrect literature.
  • Yes
    user-598239
    of course
  • Yes
    user-14860
    AI is not perfect, so some errors can be present 
  • Yes
    user-173340
    In my view this is absolutely necessary. I have a good example: I sent a scientific paper to a journal, and the editor let the computer look for plagiarism. I had a terrible problem pushing the paper through to publication; the editor had a certain acceptable percentage of plagiarism, and I was unable to meet it. Of course not, because the AI marked as plagiarism, for example, the addresses of all the authors, and in the experimental part almost all descriptions of the standard procedures used, which are very difficult to change without losing exactness, etc. In my view this is an unimportant embroilment, but I would be nervous if AI were to decide about, e.g., a declaration of war, or select a procedure for a complicated surgery.
  • Yes
    user-87331
    Always, AI usually has errors due to "hallucinations"
  • Yes
    user-979715
    Yes, definitely, as stated above.
  • Yes
    user-911600
    Human knowledge is necessary 
  • Yes
    user-287804
    Humans should definitely be involved in the verification of research findings and their publication.
  • Yes
    user-669208
    Absolutely Yes.
  • Yes
    user-684526
    See above responses.
  • Yes
    user-597118
    Human oversight is required since AI cannot be held accountable for what is written; furthermore, it has been proven that AI can be inaccurate.
  • Yes
    user-507408
    Any blind usage of AI should be guarded against.
  • Yes
    user-445218
    Hallucinations
  • Yes
    user-696023
    This is a MUST! AI might give a first draft of an English text for non-native English speakers, but it has to be checked again and again. Even human translators have to be checked; I have seen enough translations which have completely lost the intention of an article, because translation also requires some understanding of the topic, not only of the language!
  • Yes
    user-596010
    Human supervision is critical. 
  • Yes
    user-445202
    All the AIs I have used have flaws.
  • Yes
    user-554477
    Absolutely!!
  • Yes
    user-673903
    Assuming the researcher actually knows what was done... All AI??
  • Yes
    user-544555
    Definitely!
  • Yes
    user-514238
    Absolutely required. 
  • Yes
    user-744008
    Always!
  • Yes
    user-67936
    AI should only be used to gather data, if at all; the human should review the findings and use them, like any other data, to generate ideas and interpretations of the data.
  • Yes
    user-123746
    Sure, it needs verification for falsified and improper content.
  • Yes
    user-689910
    AI should be used both to assist in writing manuscripts (for which authors are ultimately responsible) and to assist editors in peer review.
  • Yes
    user-266855
    Always; it depends on the quality of the sources used for the content, and it may be biased if there is much non-scientific media on the topic.
  • Yes
    user-874889
    The human review makes the paper come alive.
  • Yes
    user-799639
    A mistake in an AI product might affect the human's reputation/credibility/acceptance.
  • Not Sure
    user-41956
    I can see AI helping search QA data sets for specified information. I oppose AI using data to create a de novo report or datum. 
  • Yes
    user-480376
    Of course. The work is intended for human use, so humans should review it.
  • Not Sure
    user-935064
    The power of AI is that it can do things that humans are unlikely to do, at least in a reasonable time frame.  For example, pattern recognition from billions of data entries requires AI.  However, at least with current technology, AI output should be viewed as generating hypotheses to be pursued further, for example, to determine if there are causal relationships rather than correlations.
  • Yes
    user-731405
    Definitely!
  • Yes
    user-902187
    Authors are responsible for the content. AI is not yet reliable for use without review. 
  • Yes
    user-779737
    AI makes lots of mistakes, and the purpose of the peer review process is to make sure the content is correct. So of course humans are still needed to review any content, AI created or otherwise.
  • Yes
    user-383159
    AI responses can be unreliable or shallow
  • Yes
    user-834001
    Of course this needs human review. 
  • Yes
    user-200863
    Oh yes.
  • Yes
    user-802778
    It is a must
  • Yes
    user-642018
    Yes. AI requires human review. AI tools do not replace human insight and prerogative. AI could sometimes misunderstand concepts, so human review is essential.
  • Yes
    user-809947
    Getting errors is common everywhere. It's better to have a review.
  • No
    user-785535
    No, if AI is used for a check on the manuscript in the finalisation step (see point 4).
  • Yes
    user-987162
    Human review is a must, as not all data will be correct or required by the researcher.
  • Yes
    user-146796
    AI makes mistakes, and should be reviewed by humans as well
  • Yes
    user-548892
    Some of the answers generated need to be refined. So someone should review and edit the document. Not everything generated by AI should be used
  • Yes
    user-156666
    The final check should be done by a human.
  • Yes
    user-560974
    Absolutely yes, because AI may generate incorrect or highly inaccurate content.
  • Yes
    user-79668
    Should be reviewed for accuracy 
  • Yes
    user-111454
    This is to find out whether the information generated by AI is true and correct.
  • Yes
    user-478855
    Ultimately, human authors are responsible for the paper, and they must review whatever the AI has conjured
  • Yes
    user-885754
    It's crucial to have a professional expert in the field filter the AI.
  • Yes
    user-583633
    AI brings a new perspective to bear in reviewing and writing about complex subject matter. However, the responses from AI are often incorrect or misinterpretations.
  • Yes
    user-764272
    The facts need checking.
    I actually think peer review should be done by both AI and people: people should just be checking the science, and AI should be checking grammar, spelling, and plagiarism.
  • Yes
    user-887652
    AI output merely resembles authorship.
  • Yes
    user-284769
    Human review is mandatory.
  • Yes
    user-765807
    When AI is used in the process of writing a publication, it should undergo human review. While AI can assist with tasks like summarization, data analysis, and grammar checking, human judgment remains crucial. Human reviewers ensure that the content aligns with scientific rigor, ethical standards, and the intended message.
  • user-232098
    To ensure accuracy, relevance, and adherence to scientific standards, as well as to verify that the AI outputs are free from biases and errors.
  • Yes
    user-15300
    Most importantly 
  • Yes
    user-634057
    Yes, since scientific publications are catered to very technical crowds, it is highly advised to keep a human expert in the loop to vet that the language model does not hallucinate or misinterpret the real results.
  • Yes
    user-952116
    Always, otherwise the author shouldn't be able to claim the work their own.