Data fabrication, p-hacking and biased peer review are issues with published papers. In your opinion, how can we fix this problem as a scientific community?

Recently, these issues have come up in many fields, leading to the retraction of important papers and investigations into data produced by prominent scientists. That undermines the trust I used to place in published data. Granted, we are always skeptical in science, but we also move forward by basing new hypotheses on what has been published. I wonder how we can help improve the trustworthiness of published data.
Bioinformatics Biostatistics Psychology Statistics
Faraz Ahmad
Issues of data fabrication may never be fully resolved unless the paper includes visual data such as immunoblots; trust is key here. However, certain steps can help, such as presenting data as a scatter plot of the individual points rather than a bare bar plot.
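For instance, a "scatter over bars" figure can be sketched in a few lines of matplotlib. This is an illustration only: the group values and the output file name are invented, not taken from any real experiment.

```python
# Sketch: show every individual observation instead of hiding them in a bar.
# The data below are made-up illustration values.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

control = np.array([1.0, 1.2, 0.9, 1.1, 1.3])
treated = np.array([1.8, 2.1, 1.6, 2.4, 1.9])

fig, ax = plt.subplots()
rng = np.random.default_rng(0)
for x, values in enumerate([control, treated]):
    # Translucent bar marks the group mean...
    ax.bar(x, values.mean(), width=0.6, alpha=0.3)
    # ...while jittered dots expose every underlying observation.
    ax.scatter(x + rng.uniform(-0.1, 0.1, size=values.size),
               values, color="black", zorder=3)
ax.set_xticks([0, 1], labels=["control", "treated"])
ax.set_ylabel("signal intensity (a.u.)")
fig.savefig("scatter_over_bars.png")
```

With only five points per group, the dots immediately reveal outliers, skew, or implausibly tidy spreads that a mean-only bar would conceal.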
David Joubert
For biased peer review, adopt a two-level review system in which a member of the editorial board looks carefully at each review to ensure its validity and fairness.
Develop agreements between universities so that faculty and scholars are encouraged to be active reviewers. Currently, reviewing is given little weight in applications for tenure and the like, so the suggestion is to provide incentives.
For p-hacking: articles should de-emphasize p-values anyway, since most samples are not probability samples. Analyses should emphasize effect sizes, not alpha levels.
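The effect-size point can be made concrete with a small, self-contained sketch of Cohen's d (the standardized mean difference using a pooled standard deviation). The two groups below are hypothetical numbers chosen purely for illustration.

```python
# Sketch: report an effect size (Cohen's d) alongside, or instead of, a p-value.
import math

def cohens_d(a, b):
    """Standardized mean difference (b minus a) over the pooled SD."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mb - ma) / pooled_sd

control = [4.1, 3.9, 4.3, 4.0, 4.2]  # hypothetical measurements
treated = [4.8, 5.1, 4.6, 5.0, 4.9]
d = cohens_d(control, treated)
```

Unlike a p-value, d does not shrink toward "significance" just because n grows; it answers the scientifically interesting question of how large the difference actually is, in standard-deviation units.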
Data fabrication can perhaps be alleviated by broader use of open-science practices that make data more widely available, but that will not resolve every problem associated with it.

I believe we have a peer review crisis: there are ever more papers and journals, and we are overloaded with review requests. I have seen reviews that were only one or two sentences long, which means the reviewer merely rushed through the manuscript. There should be a database of reviewers, the quality of their reviews should be evaluated, and, most importantly, reviewers should be compensated for their work. It is, unfortunately, a rule of the free market that people expect payment for their labor, especially when they see authors paying a few thousand euros or dollars for each publication!
In my opinion, in the long term, without at least a small compensation for reviewers' work, the quality of peer review will only get worse and worse.
Humayun Kabir, RN, BSN, MPH, MSc
In my experience, the free peer review model does not work; it is dead, and science is now vulnerable because of it. It is often hard to find two peer reviewers even after inviting 30. After waiting two months, I have seen reviewers provide only one or two lines of comments, probably because they did not read the paper at all. As a non-paid editor working on many projects of my own, it is also hard to take on responsibility for articles as a peer reviewer.

More research is needed to improve the peer review system so that experts actually read manuscripts and provide neutral opinions. At present there is little evidence that publishers can act on. Therefore, before taking further action, I would invest more in research on what would make editors and peer reviewers willing to do their jobs honestly.
Dr. Muhammad Zafar-ul-Hye
Addressing issues like data fabrication, p-hacking, and biased peer review requires a multifaceted approach from the scientific community. Here are some potential solutions:

  1. Transparency and Open Data:
    • Encourage researchers to make their data openly available for scrutiny. This allows others to replicate studies and verify results.
    • Journals could require authors to share raw data and analysis scripts as part of the publication process.
  2. Pre-registration of Studies:
    • Encourage or require researchers to pre-register their study protocols before data collection begins. This helps prevent selective reporting and p-hacking.
    • Journals could give more weight to pre-registered studies, regardless of the results.
  3. Replication Studies:
    • Promote and reward replication studies to validate or challenge existing findings. Replication can help identify issues such as p-hacking and data fabrication.
    • Journals could consider publishing well-conducted replication studies.
  4. Training and Education:
    • Provide training to researchers on research ethics, data analysis, and statistical methods. This can enhance awareness about potential pitfalls and misconduct.
    • Include ethics and research integrity courses in graduate and postgraduate programs.
  5. Whistleblower Protection:
    • Establish mechanisms to protect whistleblowers who report misconduct. This encourages individuals to come forward without fear of retaliation.
    • Encourage a culture of responsibility and accountability within research institutions.
  6. Peer Review Reform:
    • Implement double-blind peer review to reduce bias.
    • Journals could consider disclosing the names of reviewers to the authors after the review process is complete, promoting accountability.
  7. Incentive Structure:
    • Reevaluate the current system of incentives for researchers. Emphasize the quality of research over quantity.
    • Recognize and reward transparent and rigorous research practices.
  8. Improving Statistical Literacy:
    • Enhance statistical literacy among researchers and reviewers. This includes a better understanding of statistical methods, the importance of power analysis, and the limitations of p-values.
  9. Independent Oversight:
    • Establish independent bodies or committees to investigate allegations of misconduct. This helps ensure a fair and unbiased assessment of the situation.
  10. Technology Solutions:
    • Leverage technology to detect data anomalies and statistical inconsistencies.
    • Develop tools that can assist researchers in conducting robust statistical analyses.
  11. Collaborative Efforts:
    • Encourage collaboration and communication among researchers, institutions, and journals to collectively address and prevent misconduct.
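On the statistical-literacy point (item 8), an a priori power analysis can be sketched with only the Python standard library. The alpha = 0.05 and power = 0.80 values below are conventional defaults, not fixed rules, and this normal approximation gives a slightly smaller n than the exact t-test calculation.

```python
# Sketch: sample size per group for a two-sided, two-sample comparison,
# using the normal-approximation formula n = 2 * ((z_alpha + z_beta) / d)^2.
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group needed to detect effect size d (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n = n_per_group(0.5)  # a "medium" effect by Cohen's conventions
```

Running this kind of calculation before data collection, and pre-registering it, makes it much harder to rationalize stopping early or adding participants until p dips below 0.05.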
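On the technology point (item 10), one published anomaly check is the GRIM test (Brown and Heathers, 2017): a mean reported for n integer-valued responses must round to some fraction k/n, so a reported mean that cannot be reached by any k is arithmetically impossible. A minimal sketch, with hypothetical numbers:

```python
# Sketch: GRIM-style consistency check for means of integer-valued data.
def grim_consistent(reported_mean, n, decimals=2):
    """Return True if reported_mean could arise from n integer scores."""
    tol = 0.5 / 10 ** decimals  # half of the last reported decimal place
    return any(abs(k / n - reported_mean) <= tol for k in range(0, 100 * n))

# Hypothetical example: a mean of 5.19 from n = 28 integer responses is
# impossible (145/28 rounds to 5.18 and 146/28 to 5.21), so such a
# reported value warrants a closer look.
```

A failed check is not proof of fraud, only a flag; transcription errors and exclusions can also produce inconsistencies, which is exactly why such tools should feed into, not replace, human investigation.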
Where is the evidence base for this? I've been reviewing papers for more than 30 years and have come across only one fraudulent paper.

I rely on the academic integrity of my peer reviewers, and they rely on mine. I sign a COI statement before submission. It's not perfect; we could all do with training.

If you suspect fraud, contact COPE (the Committee on Publication Ethics) for advice.

I disagree that reviewing is given no weight in academic promotions; quite the reverse. Being a reviewer for The Lancet stands out on an application form. In my role, peer review contributes annually to my CPD points with my professional body.
Biased peer review can be efficiently avoided if (1) reviewers are educated about bias-related issues and (2) reviewers are asked to examine specifically the methodology of the papers they review in terms of bias prevention. But as long as judgments of bias remain subjective, reviewer blinding should also be applied.
Regarding biased reviews: I think the solution is an additional level of review. Imagine an additional reviewer assessing not the original manuscript or proposal but the fairness of the reviews themselves. A first-level reviewer who is repeatedly judged unfair could then be flagged in the system.
This problem is very difficult. To solve it, reviews should be obtained from a larger number of reviewers, and the reviewers themselves should have a strong body of work.
