How do you evaluate academic papers? $100
SciPinion is seeking your opinion on how you evaluate publications. We are interested in the criteria you personally use to assess the quality of peer-reviewed academic papers. What factors make a paper trustworthy and credible from your point of view?
We invite you to share your thoughts on the following:
- Evaluation Criteria: What factors do you consider when judging a paper's quality? This can include methodology, data analysis, relevance, novelty, clarity, etc.
- Scoring Systems: Do you prefer any formal scoring systems such as Klimisch Score or other grading frameworks for this purpose? How effective are they in your opinion?
- Personal Recommendations: Any other personal tips or methods you use to determine the reliability of a publication?
Regardless of whether you follow a structured approach or rely on your own set of criteria, we want your opinion.
The author of the response with the most upvotes will receive a $100 reward courtesy of SciPinion!
Answers can be submitted until June 21, 2024, when voting will begin and remain open until June 28, 2024.
Kindest Regards
SciPinion
Bhoj R Singh
The most important criteria in an article review are clarity in methodology and data analysis. The use of appropriate methodology to explore a question or conduct an experiment speaks to the effort and sincerity of the work, and reviewing the analysis reveals the biases accepted by the researchers/authors and the appropriateness of the analytical tests used. Data availability gives an idea of the truthfulness of the article. Other factors like relevance, novelty, and clarity are subjective and leave much scope for improvement, but a flaw in the methodology of a biological experiment is often irreparable.
Scoring systems are not of much use and recommendations must always be ignored for the good of the progress of science.
Dr. Asif Mahmood
I judge paper quality from the Abstract, Results, Discussion, and Conclusions.
Critically, I confirm the background calculations of the presented information, and I offer many supportive comments to improve the quality and clarity of the work under review.
Dr. Akhilesh Prajapati
These are a few points on the basis of which a publication can be assessed.
1. A well-explained abstract.
2. Published in a reputable peer-reviewed journal that uses a double-blind review process.
3. The journal should be indexed in Scopus, Web of Science, and PubMed (SCI indexing).
4. The publication should be clearly written and easy to understand, specifically the abstract, methods, results, and discussion.
5. No or very little self-citation.
6. No plagiarism; data should be reproducible and available to everyone.
Boffer Bings
Title: Does it usefully reflect the findings? Titles that overplay findings are a bad sign. A clever title is a bonus as long as it isn't distracting.
Authors: Too many, too few, or a reasonable number and array of specialties given the topic? Are their contributions clear?
Abstract: Is it really an abstract, giving enough information to support a decision to read further (or not), or is it just a teaser? The latter wastes a reader's time.
Introduction: Does it adequately contextualize the state of the science leading to the present paper, or is it an exercise in virtue signaling and "secret handshakes" meant to identify the authors as disciplinary insiders, whether eminent or aspiring? The latter suggests unhealthy tangential motivations. Are the reported driving question and hypothesis reasonable, or does either appear suspiciously specific enough to suggest that it could have been reverse engineered from the results?
Methods: Can I imagine replicating this work given the information provided? If not, why should I trust it? Have the authors explicitly admitted to any potential concerns? If not, why not? Were they unaware of any possible problems, or are they hiding something? Is there any suggestion that the methods might have been retrospectively revised to support the outcome?
Results: Are the reported results achievable given the reported methods? Is there extraneous material? Does anything that might obviously flow from the hypothesis and methods seem to be missing? Are table and diagram labels self-explanatory? Is graph scaling appropriate?
Conclusions/Discussion: In philosophical terms, are they "begging the question", i.e., did the formulation of the questions or hypotheses render some aspect of the results inevitable? How much of the analysis relies on metaphor? How much on "hand waving"? Did the authors appropriately place their results in context? Did they miss something? Did they claim undue credit for advancement? Did they usefully identify future research directions?
Overall: Did any obvious flaws slip past editors and peer reviewers?
Akif
Firstly, I start reading the whole paper carefully when I feel fresh-minded, especially in the morning hours rather than the afternoon.
I pay attention to every line, table, and reference (each reference is counted). I still see, while reviewing papers, that authors write inattentively, for example mentioning 5 tables when only 3 are provided in the paper. Sometimes the same mistake is repeated, giving the impression that the manuscript was not carefully read by all the authors and that corrections were provided by only one of them. Some manuscripts sent for evaluation also have fairly old references, i.e., no references from within the last 5 years are cited. All this is apart from correct English not being used. When giving the web address of a cited source, authors may forget the access date, or the date given is not current, for example at least a year old. The reference formatting may be inconsistent from entry to entry, with a style that very possibly remains from a previous journal that rejected the manuscript. I always point these things out with examples in a kind manner, since I believe criticism should be motivating rather than discouraging. I sometimes suggest important published material on the topic to be added to the references section.
When already known information is given at too great a length, I suggest it be shortened.
When the methods are not properly described, I ask the authors to explain more.
Abbreviations are sometimes overused without being given in full at first use, including in the abstract.
The journal's impact factor is important; meanwhile, careless or inattentive writing, or declarations full of prejudgments with no acknowledgment of weaknesses, would diminish the power of the study/manuscript.
Hicham T
Evaluation Criteria: What factors do you consider when judging a paper's quality? This can include methodology, data analysis, relevance, novelty, clarity, etc.
Valeria
I have evaluated many manuscripts for different international journals. In general, I complete a form whose structure varies very little from journal to journal. However, the form usually includes a space to explain what I think about the manuscript, with no limit on characters or words, so I often take advantage of this and write a detailed evaluation. I think that the Materials and Methods section is crucial, specifically the explanation of the experimental design and the associated statistical analyses. The way in which the authors present their results is also very important. For example, they might include tables, but attractive figures showing the principal results clearly are indispensable. The Discussion section must be written according to the obtained results, which must be compared with previously published results. The Introduction must clearly explain all the topics the authors address in the Discussion and must include the general question they seek to study, which might be associated with one or more objectives and the corresponding hypotheses. There should be a clear correspondence between each hypothesis and the respective analyses and results, which in turn should be summarized in the discussion. The most representative references should be correctly cited, but the list of references should not be too long. The authors should include supplementary material if it helps in understanding their work. Finally, I think the issue addressed in the manuscript should be relevant to the journal and relatively novel (although I do not think the latter is indispensable), and the manuscript should be as clear and short as possible, including page and line numbers. I have never used any formal scoring system, and I do not think they are really necessary.
Prof. Yonar
First of all, my approach to evaluating an article starts with the harmony between the subject and the content (abstract) of the article. I examine how effectively the results are presented. Then I examine the methodology applied and evaluate its suitability. Finally, I look at how well the discussion and results agree with the literature and make my decision.
I do not apply any scoring system.
Finally, the novelty of the study increases my interest in the article.
Ines
When evaluating research papers, several issues are important: the relevance of the question the paper answers, the methodology used to answer that question, the results obtained (including proper presentation of results and statistical analysis), the conclusions drawn from those results, and the importance of those conclusions for the field.
I think scoring systems are useful, but only to a certain extent; a combination of scores and personal assessment works better in my opinion.
James Bus
This response is offered only for academic toxicology publications in which the papers offer final suggestions/conclusions that the experimental in vivo and/or in vitro data presented in such papers specifically infer potential plausible adverse human health outcomes. My initial read of all such papers is to simply ask the critical question of what should be expected of any toxicology study: Do the author(s) present any analysis of whether the "doses" used in the study reasonably overlap/approach actual measured real-world human exposure "doses"?
For example, if the paper describes an in vivo "effect" of concern at, e.g., a 100 mg/kg dose while available published (but often uncited) data suggest that real-world human exposure scenarios are reasonably in the low ug/kg dose range, there is an immediate concern that the findings likely have very limited, if any, exposure "quality" relevance to the objective/inference of the paper, i.e., to identify endpoint(s) offering a reasonable priority concern for potential adverse human outcomes.
Similarly, for in vitro studies, the "dose" question can be posed as: How much of a dose would have to be reasonably given to humans (also to experimental animals) in order to reach the test concentration producing the effect of concern? Thus, for example, if the "effect" concentration is 100 ug/ml while human exposures reasonably predict steady-state or even Cmax blood/tissue concentrations are very reasonably many orders of magnitude lower, an absence or only superficial consideration of such context immediately reduces the exposure "quality" of the finding as reasonably informing human health concerns.
I continue to be disappointed in how far too often supposed "toxicology" studies, and particularly those of environmental chemicals, fall substantially short of providing any plausible or substantive context of how the experimental doses compare to realistic human exposures. It is not enough to simply state that humans are exposed without attempting to detail as to how much, and particularly so when reasonable human exposures are readily known or have been plausibly estimated. Many papers often state "little is known" about human exposures (thus inferring a simplistic rationale for whatever experimental dose(s) are evaluated), when even a cursory review of the literature indicates this not so and the selected doses are indeed far greater than any reasonably anticipated human doses.
Of course, there are multiple other "dose" considerations that should also factor into a "quality" assessment of whether the study is informative of an experimental "problem formulation" posed as identifying doses and associated effects presenting realistic health concerns. For example, was an in vivo study conducted by a human-relevant exposure route? If a chemical's primary route of exposure is via skin, and the physical-chemical properties suggest poor dermal penetration, toxicity findings reported by ip, iv or sc dosing bypassing the skin barrier suggest an immediate dose "quality" issue. Similarly, if the chemical's structure indicates a high probability of oral first-pass metabolism, and this is the primary route of human exposure, again ip/iv/sc dose-identified effects pose immediate dose "quality" concerns due to bypassing of such first-pass metabolism.
The above examples are offered as high-level but nonetheless critical dose/exposure "quality" considerations in determining if the results of toxicity study findings are indeed "fit for purpose" in supporting any author-supplied conclusions suggesting realistic human health concerns.
RobF
Roughly in order:
1) clearly written and complete report of the study, so it is possible to judge the remaining criteria
2) importance of the research problem and research question(s) to advance the state of the art of theory and practice in the field (includes replications as well as original research questions).
3) use of a study design with sufficient external and internal validity to answer the research question(s)
4) proper execution of the study and analysis of data, with adequate mitigation of threats to validity.
5) claims of findings and conclusions properly qualified according to strength of evidence presented.
6) thorough discussion of implications of the study conclusions for the research problem.
7) application of research ethics and publication ethics (e.g., evidence of plagiarism, treatment of participants, etc.)
There are so many variations of these evaluation factors that I don't think a scoring system could have adequate external validity.
Reviewer's personal tips: whenever I identify an issue to be addressed in resubmission, I try to write a specific suggestion on how to address the issue.
ChiaraG
Personally, my main criteria in evaluating / reviewing a scientific paper are:
1. Methodology: a clear experimental design with full disclosure of procedures is mandatory
2. Novelty and significance of the experimental question
3. Readability of both text and figures
Francesco Grande
The publishing editor is very important, as is whether the journal is open-access only or also has a subscription model, and its trustworthiness and credibility in the field. Who is the editor-in-chief, and who makes up the editorial and review boards?
XMejuto
The evaluation criteria are based on the novelty of the subject of the manuscript, that the methodology is correct and is clearly stated to guarantee the reproducibility of the data, if necessary. Furthermore, another key point is that the data analysis and conclusions are relevant. Of course, it is also appreciated that it is an easy-to-read manuscript, where the key ideas are clearly stated. Apart from that, I do not use scoring systems since I do not consider them effective.
Francisco Wilker Mustafa Gomes Muniz
Usually, I spend 2-3 hours exclusively reading the manuscript and the literature on the topic. Methodological aspects are critically analyzed, especially when there is a prior research protocol for the study.
Marija
Novelty in research, appropriate statistical analysis, good data presentation, and up-to-date references.
Angelica
For me, proper English is the first thing I evaluate, because it generally reflects the quality and effort put into the research. A paper with poor language is really difficult to read, as it takes focus away from the content.
The second thing I evaluate is the tables and figures, as well as their legends. These are crucial for understanding the results and any analyses made on them. If the tables and figures are poorly made, or their legends badly written, so that I cannot understand what is reported, then how should I be able to interpret the results?
Thereafter I check whether the methodology is sound and understandable, before moving on to the results and discussion.
Scoring systems and defined quality criteria can be good to have if you are new to reviewing, so you know what to check for and can be consistent. But I feel that the more experience I have gained, the less I need these aids, and the quicker the reviewing goes, since I now know more about how to evaluate papers.
Tashfeenchem