How might AI-assisted peer review tools reshape scientific publishing, and what safeguards should be implemented to maintain review quality while addressing the increasing volume of submissions?
The peer-review process faces ever-growing challenges as submission volumes continue to rise sharply across all fields. Journal editors struggle to find capable reviewers willing to volunteer their time, and the result is review delays, reviewer exhaustion, and, at worst, compromised quality. Meanwhile, AI technologies are rapidly improving at evaluating scientific manuscripts for methodology, statistical soundness, plagiarism, and even conceptual novelty.
陈平波
1. "Pre-screening" and "tiering" in the review process — Technical screening: AI can apply an initial quality filter to manuscripts before they are sent out for review, checking formatting, statistical errors, image integrity, plagiarism (e.g., Proofig, ImaCheck), and even the soundness of the basic methods. Manuscripts with obvious flaws can be rejected outright, reducing the burden on reviewers.
- Smart matching: by analyzing manuscript content, AI can improve the efficiency and subject fit of assignments, avoid mismatches, and pair manuscripts more accurately with suitable reviewers and editors (a minimal illustration follows after point 1).
- Tiered review: for high submission volumes (e.g., preprint platforms), AI can perform an initial content assessment and priority ranking, alerting editors to the manuscripts that are most innovative or that have potential problems, enabling tiered handling.
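The "smart matching" idea above can be illustrated with a very simple text-similarity baseline. The sketch below is a minimal, hypothetical example, assuming we have plain-text abstracts for the submission and for each candidate reviewer's recent work; the reviewer names and abstracts are invented, and TF-IDF with cosine similarity stands in for whatever more sophisticated matching a real system would use.

```python
# Minimal sketch of content-based reviewer matching (illustrative only).
# Assumes plain-text abstracts for the submission and for each candidate
# reviewer's recent work; all names and texts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manuscript_abstract = "Deep learning models for predicting protein folding ..."

reviewer_profiles = {
    "Reviewer A": "Graph neural networks for molecular property prediction ...",
    "Reviewer B": "Ethnographic methods in science and technology studies ...",
    "Reviewer C": "Transformer architectures applied to structural biology ...",
}

# Vectorize the manuscript together with all reviewer profiles.
texts = [manuscript_abstract] + list(reviewer_profiles.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Similarity of the manuscript (row 0) to each reviewer profile.
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

# Rank candidate reviewers by similarity to the manuscript.
ranking = sorted(zip(reviewer_profiles, scores), key=lambda x: -x[1])
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

A production matcher would of course also have to handle conflicts of interest, reviewer workload, and availability, which this sketch ignores.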
2. Augmenting reviewers rather than replacing them — Assisted analysis: reviewers can use AI tools to quickly verify the relevance and currency of references, check data consistency, and identify contradictions with earlier studies.
- Language and clarity checks: for non-native English speakers, AI can help improve the clarity of the text, but the core arguments still need to be judged by humans.
- Generating review points: AI can generate an initial list of review questions or key points from the manuscript's content, helping reviewers (especially novices) examine the manuscript more systematically and thoroughly. The final review, however, must be written by a human.
3. Transforming the publishing model — Accelerated preprint evaluation: combining AI with rapid public assessment could lead to a "continuous evaluation" model in which, once a manuscript is posted on a preprint platform, AI aggregates community and expert comments into a dynamic, transparent review record.
- A new quality-control model may emerge: a hybrid model combining an initial AI review with a final review by domain experts, greatly shortening the time from submission to first decision.
- Meta-analysis and reproducibility checks: AI can help reviewers quickly assess whether the research methods are sufficient to support the conclusions and flag results that may require additional data or code for replication (a minimal consistency check is sketched below).
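As one concrete example of the reproducibility-check idea in point 3, an assistant could recompute reported p-values from the test statistics and degrees of freedom stated in a manuscript and flag mismatches for the human reviewer. The sketch below is a hypothetical illustration using SciPy; the experiment labels and numbers are invented, and it does not represent any specific tool mentioned above.

```python
# Minimal sketch: flag reported p-values that are inconsistent with the
# reported t-statistic and degrees of freedom (two-tailed test assumed).
# All values below are invented for illustration.
from scipy import stats

reported_results = [
    # (label, t statistic, degrees of freedom, reported p-value)
    ("Experiment 1", 2.31, 28, 0.028),
    ("Experiment 2", 1.45, 40, 0.020),   # suspicious: p looks too small for this t
]

for label, t, df, p_reported in reported_results:
    p_recomputed = 2 * stats.t.sf(abs(t), df)  # two-tailed p from t and df
    if abs(p_recomputed - p_reported) > 0.005:
        print(f"{label}: reported p={p_reported}, recomputed p={p_recomputed:.3f} -> flag for reviewer")
    else:
        print(f"{label}: consistent (recomputed p={p_recomputed:.3f})")
```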
Arvind
Hello Esteemed SciPinion Committee Members,
Whenever we are peer-reviewing, we must look for these important factors:
Does the manuscript’s content fall within the scope of the journal?
Is there any Key Word that is not included in the manuscript title?
Do authors’ affiliations correspond to the content of the manuscript?
Does the Abstract contain the contents of each part of the manuscript (IMRaD)?
Are the Key Words complete?
Is the content of the Introduction adequate?
Is the content of the Materials and Methods complete?
Is the description of the experiments clear and complete?
Are the experimental data presented in the manuscript’s biostatistical analyses reliable?
Are the experimental data of the Results true and reliable?
Are the quality and resolution of the images up to standard?
Do the selection and design of the figures and tables follow the principles of necessity and clarity?
Is there any duplication between various parts of the manuscript and between the main text and the content presented in the figures and tables?
Are the figures and tables numbered consecutively in the order in which they appear in the manuscript?
Is the content of the Discussion reasonable?
Is the Conclusion reasonable?
Are all references necessary and reasonable?
Do authors omit important references?
Are all references related to the topic of the manuscript?
Do authors only cite their own earlier publications?
Is the manuscript’s text correct, concise, and clear?
Will the manuscript’s content be of interest to readers?
Are additional experiments needed for the study?
Does the research scope comply with ethics?
And I add any further comments on how to improve the scientific manuscript. I have reviewed over 1,000 manuscripts but have never used any AI assistance. Unsupervised intelligence can be shallow most of the time. So, in my opinion, AI should be used only for grammatical corrections, and many top publishing houses rightly do not accept AI-based peer review.
Thanks,
Dr. Arvind Kumar MORYA
Alvass
AI may help find relevant information. However, the results and the methodology must be peer-reviewed by humans.
Jeff Erlich
Authors are already using AI to improve their papers (which is a good thing!). So, just as in other fields, we should use AI to improve science, science communication, and the process of going from submission to publication.
As AI gets better, I think journals should "pre-review" papers with AI to assist reviewers. The pre-review can:
- Summarise / list recent related work to evaluate novelty and impact of findings
- Point out potential statistical anomalies (one such check is sketched after this list)
- Create a table linking the main claims of the paper with the relevant figures/sections of the results and sections of the methods.
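One simple example of the kind of statistical anomaly an AI pre-review could surface is a GRIM-style consistency check: when the underlying data are integers (e.g., Likert responses or counts), a reported mean must be expressible as an integer total divided by the sample size. The sketch below is a minimal, hypothetical illustration of that check with invented table entries; it is not part of any journal's actual pipeline.

```python
# Minimal GRIM-style check (illustrative): for integer-valued data, a reported
# mean is only attainable if some integer total divided by n rounds to it.
# The table labels and values below are invented for illustration.
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if `mean`, reported to `decimals` places, is attainable
    from n integer observations."""
    total = mean * n
    # Any valid integer total must lie very close to mean * n.
    for candidate in range(int(total) - 1, int(total) + 2):
        if round(candidate / n, decimals) == round(mean, decimals):
            return True
    return False

reported = [
    ("Table 2, item 3", 3.47, 15),   # 52/15 = 3.47 -> attainable
    ("Table 2, item 4", 3.48, 15),   # no integer total gives 3.48 -> anomaly
]
for label, mean, n in reported:
    status = "consistent" if grim_consistent(mean, n) else "flag for review"
    print(f"{label}: mean={mean}, n={n} -> {status}")
```

A flag from a check like this would not be a verdict, just a pointer for the human reviewer to look more closely at the underlying data.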
These pre-reviews can also be part of the editorial assessment. I think with proper prompt engineering and fine-tuning, there is no good reason not to facilitate peer-review with AI.
Of course, these need to be "air-gapped" or otherwise secured so submitted papers do not become part of training data.
I find that peer review is a highly stochastic process. Sometimes it works well, sometimes not. Public post-publication review (e.g., PubPeer) is the future.
Charles
I would not encourage paying for reviews. However, NO scientist should be allowed to publish a paper without completing several COMPETENT reviews (say, three per submission). As an editor, I have found that prominent scientists are rarely available for a review, if at all; some penalty for that should be included in their scientometric evaluation.