Data fabrication, p-hacking and biased peer review are issues with published papers. In your opinion, how can we fix this problem as a scientific community?
Recently, these issues have been raised in many fields, resulting in the retraction of important papers and investigations into data produced by prominent scientists. That undermines the trust I used to place in published data. Granted, we are always skeptical in science, but we also move forward by basing new hypotheses on what has been published. I wonder how we can help improve the trustworthiness of published data.
Bioinformatics
Biostatistics
Psychology
Statistics
David Joubert
For biased peer review, adopt a two-level review system in which one member of the editorial board carefully checks each review to ensure validity and fairness.
Develop an agreement among universities so that faculty and scholars are encouraged to be active reviewers. Currently, reviewing is not given any weight in applications for tenure and the like, so the suggestion is to provide incentives.
For p-hacking, articles should de-emphasize p-values anyway, since most samples are not probabilistic in nature. Analyses should emphasize effect sizes rather than alpha levels (see the sketch after this answer).
As for data fabrication, that one can perhaps be alleviated by broader adoption of open-science practices, i.e., making the underlying data more broadly available. But that will not resolve all the problems associated with it.
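To make the effect-size point concrete, here is a minimal sketch (assuming NumPy and SciPy are available; the numbers are purely illustrative and not from any of the papers discussed): with a very large sample, a negligible difference still yields a "significant" p-value, whereas a standardized effect size such as Cohen's d makes the practical irrelevance of the difference visible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000                                   # very large sample per group
a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.02, scale=1.0, size=n)   # tiny true difference

# Classic two-sample t-test: returns the test statistic and the p-value
t_stat, p_value = stats.ttest_ind(a, b)

# Cohen's d: standardized mean difference using the pooled standard deviation
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"p-value   = {p_value:.4g}")    # likely "significant" (p < 0.05)
print(f"Cohen's d = {cohens_d:.3f}")   # ~0.02, i.e. a negligible effect
```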
bartlosz
I believe we have a peer-review crisis: there are more papers and journals than ever, and we are overloaded with review requests. I have seen reviews that were only 1-2 sentences long, which means someone merely rushed through the manuscript. I believe there should be a database of reviewers, that the quality of their reviews should be evaluated, and, most importantly, that they should be compensated for their work. It is a basic rule of the free market that people expect some profit from their work, especially when they see that authors have to pay a few thousand euros or dollars for each publication!
In my opinion, in the long term, without at least small compensation for reviewers' work, the quality of peer review will keep getting worse.
Humayun Kabir, RN, BSN, MPH, MSc
In my experience, the free peer-review model does not work; it is dead, and science is now vulnerable because of it. It is always hard to find even two peer reviewers; I have had to invite as many as 30. After waiting two months, I have seen reviewers provide comments of only a line or two, probably because they did not read the paper at all. As an unpaid editor working on many projects of my own, it is also hard to take on the responsibility of peer reviewing articles myself.
More research is needed to improve the peer-review system so that experts actually read the manuscript and provide a neutral opinion. There is not much evidence that publishers can act on. Therefore, before taking further action, I would invest more in research on what would make editors and peer reviewers willing to do their jobs honestly.
asr
Where is the evidence base for this? I've been reviewing papers for more than 30 years and have come across only one fraudulent paper.
I rely on the academic integrity of my peer reviewers, and they on mine. I sign a conflict-of-interest (COI) statement before submission. It's not perfect; we could all do with training.
If you suspect fraud, contact COPE (the Committee on Publication Ethics) for advice.
I disagree that reviewing is not given weight in academic promotions; quite the reverse. Being a reviewer for The Lancet stands out on an application form. In my post, peer review contributes annually to my CPD points with my professional body.
Mikko
Biased peer review can be effectively avoided if (i) reviewers are educated about bias-related issues and (ii) reviewers are asked to specifically examine the research methodology of reviewed papers in terms of bias prevention. But since bias is subjective, blinding of reviewers should also be applied.
A.R.
Regarding biased reviews: I think the solution is to add another level of review. Imagine an additional reviewer who assesses not the original manuscript or proposal, but the fairness of the reviews themselves. A first-layer reviewer who is repeatedly judged unfair could then be flagged in the system.
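A hypothetical sketch of that flagging idea (the names, scores, and thresholds below are my own illustrative assumptions, not an existing system): second-layer reviewers rate the fairness of each first-layer review, and a reviewer whose average rating falls below a threshold is surfaced to the editor.

```python
from collections import defaultdict

FAIRNESS_THRESHOLD = 3.0   # mean fairness score (1-5) below which a reviewer is flagged
MIN_RATINGS = 5            # require several ratings before flagging anyone

ratings = defaultdict(list)  # reviewer id -> fairness scores from the second layer

def record_rating(reviewer_id: str, score: int) -> None:
    """Store one fairness score (1 = clearly unfair, 5 = fair) for a review."""
    ratings[reviewer_id].append(score)

def flagged_reviewers() -> list[str]:
    """Return reviewers repeatedly judged unfair, for the editor's attention."""
    return [
        reviewer
        for reviewer, scores in ratings.items()
        if len(scores) >= MIN_RATINGS
        and sum(scores) / len(scores) < FAIRNESS_THRESHOLD
    ]

# Example: reviewer "R-17" keeps receiving low fairness scores
for s in (2, 3, 2, 1, 2, 3):
    record_rating("R-17", s)
print(flagged_reviewers())   # ['R-17']
```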
Faraz Ahmad