How do you evaluate academic papers? $100

SciPinion is seeking your opinion on how you evaluate publications. We are interested in the criteria you personally use to assess the quality of peer-reviewed academic papers. What factors make a paper trustworthy and credible from your point of view?
We invite you to share your thoughts on the following:
  • Evaluation Criteria: What factors do you consider when judging a paper's quality? This can include methodology, data analysis, relevance, novelty, clarity, etc.
  • Scoring Systems: Do you prefer any formal scoring systems such as Klimisch Score or other grading frameworks for this purpose? How effective are they in your opinion?
  • Personal Recommendations: Any other personal tips or methods you use to determine the reliability of a publication?

Whether you follow a structured approach or rely on your own set of criteria, we want your opinion.
The author of the response with the most upvotes will receive a $100 reward courtesy of SciPinion!

Answers can be submitted until June 21, 2024, when voting will begin and remain open until June 28, 2024.

Kindest Regards

SciPinion

General
Title: Does it usefully reflect the findings? Titles that overplay findings are a bad sign. A clever title is a bonus as long as it isn't distracting.
Authors: Too many, too few, or a reasonable number and array of specialties given the topic? Are their contributions clear?   
Abstract: Is it really an abstract, giving enough information to support a decision to read further (or not), or is it just a teaser? The latter wastes a reader's time.
Introduction: Does it adequately contextualize the state of the science leading to the present paper, or is it an exercise in virtue signaling and "secret handshakes" meant to identify the authors as disciplinary insiders, whether eminent or aspiring? The latter suggests unhealthy tangential motivations. Are the reported driving question and hypothesis reasonable, or does either appear suspiciously specific enough to suggest that it could have been reverse engineered from the results?
Methods: Can I imagine replicating this work given the information provided? If not, why should I trust it? Have the authors explicitly admitted to any potential concerns? If not, why not? Were they unaware of any possible problems, or are they hiding something? Is there any suggestion that the methods might have been retrospectively revised to support the outcome?
Results: Are the reported results achievable given the reported methods? Is there extraneous material? Does anything that might obviously flow from the hypothesis and methods seem to be missing? Are table and diagram labels self-explanatory? Is graph scaling appropriate?
Conclusions/Discussion: In philosophical terms, are they "begging the question", i.e., did the formulation of the questions or hypotheses render some aspect of the results inevitable? How much of the analysis relies on metaphor? How much on "hand waving"? Did the authors appropriately place their results in context? Did they miss something? Did they claim undue credit for advancement? Did they usefully identify future research directions?
Overall: Did any obvious flaws slip past editors and peer reviewers? 
Greetings!
 
My point of view regarding the questions raised is as follows:
 
Evaluation Criteria: What factors do you consider when judging a paper's quality? This can include methodology, data analysis, relevance, novelty, clarity, etc.
·        A paper should address a clearly focused question and use valid methods to answer the research question.
·        The purpose and objectives should be clearly stated, specific, and relevant, and should be aligned with the research design, methods, and analysis.
·        The problem statement should clearly and logically describe why and how the study was conducted.
·        In the research design and methods, the methods of data collection and analysis should be stated clearly and in detail.
·        The limitations of the study should be stated honestly.
·        It should be stated how the reliability and validity of the study are ensured.
·        Beyond the methodology being free of weaknesses, the process should be stated so well and clearly that another person could easily follow it.
·        How the data are interpreted is important. Is the interpretation evidence-based and logical? Is it related to the purpose and objectives, and to the existing literature and theories? Is the research question or problem answered well?
·        The conclusion of the article should be based on the results and discussion, be realistic, and be in line with the purpose and objectives of the study. Are the results of this study important?
 

Scoring Systems: Do you prefer any formal scoring systems such as Klimisch Score or other grading frameworks for this purpose? How effective are they in your opinion?
 
I usually use these tools, and they are helpful, but they are not 100% effective: each study has its own characteristics, which also require specific evaluation.
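For readers less familiar with the Klimisch Score mentioned in the question, its four reliability categories (from Klimisch et al., 1997) can be sketched as a simple lookup; the helper function below is illustrative only, not part of any formal tool:

```python
# The four Klimisch reliability categories used to grade (eco)toxicology studies.
KLIMISCH_CATEGORIES = {
    1: "Reliable without restriction",
    2: "Reliable with restrictions",
    3: "Not reliable",
    4: "Not assignable",
}

def describe_klimisch(score: int) -> str:
    """Return the label for a Klimisch reliability score (1-4)."""
    return KLIMISCH_CATEGORIES.get(score, "Unknown score")

print(describe_klimisch(2))  # prints "Reliable with restrictions"
```

Even with such a lookup, as noted above, each study still requires its own specific evaluation.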
 
Personal Recommendations: Any other personal tips or methods you use to determine the reliability of a publication?
The methodology is very important to me. However, if the results of a study do not add to existing knowledge, the paper has no value. If the study has no methodological weaknesses and its results have a positive impact on society, its quality can be improved by correcting the other, fixable weaknesses. No single tool can be used to evaluate all articles; the quality of each article should be evaluated specifically, according to its topic and the methodology used.

Thanks,
Best Regards

Evaluating academic papers involves considering several key criteria to assess their quality and credibility. I personally look for the following points when evaluating academic papers.
Methodology: This involves assessing how appropriate and sound the research methods are, including the study design, sampling strategy, and data-collection methods.
Data Analysis: How the data are analyzed and interpreted matters. The statistical methods employed should be appropriate for the type of data obtained, and the results must be well described.
Relevance: The paper should address a research question or problem in the given field and should add something new to the body of knowledge or to practice.
Novelty: The paper should be original, presenting a new angle or approach that distinguishes it from similar work.
Clarity and Structure: A good paper is written coherently, well argued, and easy for readers to follow, progressing through a logical sequence of introduction, development, and conclusion.
Author Credibility: The reputation and the background of the authors can also give the reader some idea of the quality of the work in question.
Citation and References: Examining the quality and appropriateness of the source material cited in the paper can help determine if the work is grounded in prior research.

As for scoring systems, they can be quite helpful when there is a clear set of criteria to compare results against; however, they may be less effective for analyzing the results of a scientific study within a particular field of knowledge. They provide consistency across assessments but can be limiting when it comes to capturing the richness of quality.
I have evaluated many manuscripts for different international journals. In general, I complete a form whose structure varies very little from journal to journal. However, the form usually includes a space to explain what I think about the manuscript, with no limit on characters or words, so I often take advantage of this and write a detailed evaluation. I think the Materials and Methods section is crucial, specifically the explanation of the experimental design and the associated statistical analyses. The way in which the authors present their results is also very important: they might include tables, but attractive figures showing the principal results clearly are indispensable. The Discussion section must be written according to the obtained results, which must be compared with previously published results. The Introduction must clearly explain all the topics the authors address in the Discussion and must include the general question they seek to study, which might be associated with one or more objectives and the corresponding hypotheses. There should be a clear correspondence between each hypothesis and the respective analyses and results, which, in turn, should be summarized in the Discussion. The most representative references should be correctly cited, but the reference list should not be too long. The authors should include supplementary material if it helps readers understand their work. Finally, I think the issue addressed in the manuscript should be relevant to the journal and relatively novel (although I do not consider the latter indispensable), and the manuscript should be as clear and short as possible, including page and line numbers. I have never used any formal scoring system, and I do not think they are really necessary.

Roughly in order:
1) clearly written and complete report of the study, so it is possible to judge the remaining criteria
2) importance of the research problem and research question(s) to advance the state of the art of theory and practice in the field   (includes replications as well as original research questions).
3) use of a study design with sufficient external and internal validity to answer the research question(s)
4) proper execution of the study and analysis of data, with adequate mitigation of threats to validity. 
5) claims of findings and conclusions properly qualified according to strength of evidence presented. 
6) thorough discussion of implications of the study conclusions for the research problem. 
7) adherence to research ethics and publication ethics (e.g., absence of plagiarism, proper treatment of participants, etc.)

There are so many variations of these evaluation factors that I don't think a scoring system could have adequate external validity.

Reviewer's personal tips: whenever I identify an issue to be addressed in resubmission,  I try to write a specific suggestion on how to address the issue. 
These are a few points on the basis of which a publication can be assessed.
1. A well-explained abstract.
2. Published in a reputable peer-reviewed journal that uses a double-blind review process.
3. The journal should be indexed in Scopus, Web of Science, or PubMed (SCI indexing).
4. The publication should be written clearly and be easy to understand, specifically the abstract, methods, results, and discussion.
5. No, or very little, self-citation.
6. No plagiarism; the data should be reproducible and available to everyone.
Evaluation Criteria: What factors do you consider when judging a paper's quality? This can include methodology, data analysis, relevance, novelty, clarity, etc.
- Methodology and data availability: if the authors are not willing to share their raw data and/or their data analysis, I do not trust their data much.
- Quantitative methods: if their main confirmatory data are not the result of a reproducible quantification, but rather qualitative, I am less sure. Of course, this depends on the type of assay and/or the conclusion they have drawn.
- Protocol description: if the protocol is well described, and even published on Protocols.io, the authors have more credibility to me than when their methods just reference a chain of their older publications. Every generation in a lab changes its protocols a little so that they work with new reagents or to improve the measured signal; even the software can change. If the authors are not updating their protocols and publishing their modifications, then the quality of their research is not guaranteed, in my opinion.

Scoring Systems: Do you prefer any formal scoring systems such as Klimisch Score or other grading frameworks for this purpose? How effective are they in your opinion?
- I had never heard of any of them until today.

Personal Recommendations: Any other personal tips or methods you use to determine the reliability of a publication?
- The type of statistics used and the statistical analysis. As I mentioned before, if I cannot access their raw data and/or their data-analysis scripts (R or Python), I do not fully trust the reproducibility/replicability of their research.
- The "show-off" effect: if the authors use fancy words such as "novel" or "unique" to make their research look as if it will have a huge impact, I do not trust their commitment to reproducibility and documentation. The same goes for overuse of specialized jargon; I suspect an "if I cannot convince them, confuse them" mentality, which is not really trustworthy.

The evaluation criteria are based on the novelty of the subject of the manuscript, that the methodology is correct and is clearly stated to guarantee the reproducibility of the data, if necessary. Furthermore, another key point is that the data analysis and conclusions are relevant. Of course, it is also appreciated that it is an easy-to-read manuscript, where the key ideas are clearly stated. Apart from that, I do not use scoring systems since I do not consider them effective.
The most important criteria in an article review are clarity in the methodology and data analysis. The use of an appropriate methodology to explore a question or conduct an experiment speaks to the effort and sincerity of the work, and reviewing the analysis reveals the biases accepted by the researchers/authors and the appropriateness of the analytical tests used. Data availability gives an idea of the truthfulness of the article. Other factors, such as relevance, novelty, and clarity, are subjective and leave plenty of scope for improvement, but a flaw in the methodology of a biological experiment is often irreparable.
Scoring systems are not of much use, and recommendations must always be ignored for the good of the progress of science.
Firstly, I start reading the whole paper carefully when I feel fresh-minded, especially in the morning hours rather than the afternoon.

I pay attention to every line, table, and reference (each reference is counted). Still, while reviewing papers I see that authors write inattentively, for example mentioning 5 tables when only 3 of them are provided in the paper. Sometimes the same mistake is repeated, giving the impression that the manuscript was not carefully read by all the authors and that corrections were made by only one of them. Some manuscripts sent for evaluation also have fairly old references, i.e., no references from within the last 5 years are cited. All this is on top of correct English not being used. When giving the web address of cited material, authors may forget the access date, or the date given is not current, for example at least one year old. The reference formatting may be inconsistent from entry to entry, very possibly left over from a previous journal that rejected the manuscript. I always point these things out to the authors with examples and in a kind manner, since I believe criticism should be motivating rather than discouraging. I sometimes also suggest important published material on the topic to be added to the references section.

When already-known information is presented at too great a length, I suggest it be shortened.

When the methods are not properly described, I ask the authors to explain more.

Abbreviations are sometimes overused without being defined in full at first use, including in the abstract.

The journal's impact factor is important; meanwhile, careless or inattentive writing, or declarations full of prejudgments that lack any acknowledgment of weaknesses, diminishes the strength of the study/manuscript.
First of all, my approach to evaluating an article starts with the harmony between the article's subject and its content (abstract). I examine how effectively the results are presented. Then I examine the methodology applied and evaluate its suitability. Finally, I look at how well the discussion and results fit the literature, and make my decision.
 
I do not apply any scoring system.

Finally, the novelty of the study increases my interest in the article.
This response is offered only for academic toxicology publications in which the papers offer final suggestions/conclusions that the experimental in vivo and/or in vitro data presented in such papers specifically infer potential plausible adverse human health outcomes.  My initial read of all such papers is to simply ask the critical question of what should be expected of any toxicology study: Do the author(s) present any analysis of whether the "doses" used in the study reasonably overlap/approach actual measured real-world human exposure "doses"?   

For example, if the paper describes an in vivo "effect" of concern at, e.g., a 100 mg/kg dose while available published (but often uncited) data suggest that real-world human exposure scenarios are reasonably in the low ug/kg dose range, there is an immediate concern that the findings likely have very limited, if any, exposure "quality" relevance to the objective/inference of the paper, i.e., to identify endpoint(s) offering a reasonable priority concern for potential adverse human outcomes.

Similarly, for in vitro studies, the "dose" question can be posed as: How much of a dose would have to be given to humans (or to experimental animals) in order to reach the test concentration producing the effect of concern? Thus, for example, if the "effect" concentration is 100 ug/ml while human exposures reasonably predict steady-state or even Cmax blood/tissue concentrations many orders of magnitude lower, an absent or only superficial consideration of such context immediately reduces the exposure "quality" of the finding as reasonably informing human health concerns.
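The dose-relevance check in the two examples above boils down to a simple margin comparison. A minimal sketch, using the 100 mg/kg vs. low-ug/kg figures from the in vivo example (the function name and the assumed 5 ug/kg exposure value are illustrative, not from any specific study):

```python
def dose_margin(effect_dose: float, human_exposure: float) -> float:
    """Ratio of the experimental effect dose to a realistic human exposure.

    Both values must be in the same units (e.g., mg/kg/day); a very large
    ratio flags limited real-world relevance of the reported effect.
    """
    return effect_dose / human_exposure

# In vivo example: effect at 100 mg/kg vs. an assumed realistic exposure of 5 ug/kg.
margin = dose_margin(100.0, 0.005)  # 5 ug/kg = 0.005 mg/kg
print(f"Effect dose exceeds realistic human exposure by {margin:,.0f}x")  # 20,000x
```

A margin this large is exactly the "immediate concern" described above: the experimental dose is orders of magnitude beyond any plausible human exposure.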

I continue to be disappointed in how far too often supposed "toxicology" studies, and particularly those of environmental chemicals, fall substantially short of providing any plausible or substantive context for how the experimental doses compare to realistic human exposures. It is not enough to simply state that humans are exposed without attempting to detail how much, particularly when reasonable human exposures are readily known or have been plausibly estimated. Many papers state that "little is known" about human exposures (thus inferring a simplistic rationale for whatever experimental dose(s) are evaluated), when even a cursory review of the literature indicates this is not so and the selected doses are indeed far greater than any reasonably anticipated human doses.

Of course, there are multiple other "dose" considerations that should also factor into a "quality" assessment of whether the study is informative of an experimental "problem formulation" posed as identifying doses and associated effects presenting realistic health concerns. For example, was an in vivo study conducted by a human-relevant exposure route? If a chemical's primary route of exposure is via skin, and the physical-chemical properties suggest poor dermal penetration, toxicity findings reported by ip, iv, or sc dosing bypassing the skin barrier suggest an immediate dose "quality" issue. Similarly, if the chemical's structure indicates a high probability of oral first-pass metabolism, and oral intake is the primary route of human exposure, again ip/iv/sc dose-identified effects pose immediate dose "quality" concerns due to bypassing of such first-pass metabolism.

The above examples are offered as high-level but nonetheless critical dose/exposure "quality" considerations in determining if the results of toxicity study findings are indeed "fit for purpose" in supporting any author-supplied conclusions suggesting realistic human health concerns. 


Personally, my main criteria in evaluating / reviewing a scientific paper are: 
1. Methodology: a clear experimental design with full disclosure of procedures is mandatory 
2. Novelty and significance of the experimental question
3. Readability of both text and figures 

For me, proper English is the first thing I evaluate, because it generally reflects the quality of, and effort put into, the research. A paper with poor language is really difficult to read, as it takes focus away from the content.
The second thing I evaluate is the tables and figures, together with their legends. These are crucial for understanding the results and any analyses performed on them. If I cannot understand what is reported because the tables and figures are poorly made and the legends badly written, how should I be able to interpret the results?
Thereafter I check whether the methodology is sound and understandable, before moving on to the results and discussion.
Scoring systems and defined quality criteria can be good to have if you are new to reviewing, so that you know what to check for and can be consistent. But I feel that the more experience I have gained, the less I need these as an aid, and the quicker the reviewing goes, since I now know more about how to evaluate papers.
Evaluation Criteria: A methodology suitable for the research hypothesis, with a clear description of materials, equipment, and methods. Protocols provided in enough detail for another researcher to follow. Necessary ethical approvals obtained. Appropriate statistical methods, correctly applied. Rationality, novelty, and innovation of the hypothesis. Structurally well organized and written, with appropriate references.

Scoring Systems: Yes, for consistent and transparent evaluation of a manuscript, a grading system plays an important role in effective assessment, e.g., a scale running from poor to excellent.

Personal Recommendations: Through the reviewing process, one can determine the reliability of a publication.

I judge paper quality from the Abstract, Results, Discussion, and Conclusions.
Critically, I confirm the background calculations behind the presented information, and I offer many supportive comments to improve the quality and clarity of the work under review.
The publisher is very important: whether the journal is open-access only or also has a subscription model, and its trustworthiness and credibility in the field. Who is the editor-in-chief, and of which people are the editorial and review boards composed?
Novelty, methodology, and data/results analysis are important. Sometimes, depending on the nature of the subject, the methodology may not be novel, but the results and analysis lead to a significant breakthrough in the field. For example, in synthetic chemistry the synthesis of a molecule may be carried out by a routine method, yet the biological studies or other applications are so important that it can be a breakthrough in the field.
There are many diverse factors! ...
Evaluation Criteria: What factors do you consider when judging a paper's quality? This can include methodology, data analysis, relevance, novelty, clarity, etc.
When evaluating research papers, several issues are important: the relevance of the question the paper answers, the methodology used to answer that question, the results obtained (including proper presentation of the results and statistical analysis), the conclusions drawn from those results, and the importance of those conclusions for the field.
I think scoring systems are useful, but only to a certain extent; a combination of scores and personal assessment works better, in my opinion.
Usually, I spend 2-3 hours exclusively reading the manuscript and the literature on the topic. Methodological aspects are critically analyzed, especially when there is a prior research protocol for the study.
Novelty in research, appropriate statistical analysis, good data presentation, and up-to-date references.