Results
(9 Answers)

Answer Explanations

  • Yes
    Expert 3
 I think that a p-value of 0.05 is as useful as any other value. The real questions are: for what fraction of the population of interest do you want to be certain, and what effect size are you considering?
  • Yes
    Expert 7
It is a suitable default. But one should not compare against ANY fixed number - significant/non-significant is not a useful dichotomy - one should look at how narrow the confidence interval is and how precisely a parameter of interest is estimated.
  • Yes
    Expert 1
I have recently come across this exact problem. A colleague was conducting a logistic regression analysis using a p-value cut-off of 0.1. When I reviewed their models, it was clear that too many factors were being included in the analysis. Given that biomonitoring studies have greater limitations, this approach would not be acceptable; it would only muddy the waters. Using p < 0.05 provides a stricter cut-off and allows the inclusion of more definitive variables.
  • Yes
    Expert 9
This seems like a suitable default.  But "default" suggests that there may be other situations where alternatives are appropriate.  For example, exploring novel hypotheses might encourage a looser p-value of 0.10, so that promising leads are not excluded.  But confirmatory or high-stakes studies might want to use a more stringent p-value such as 0.01 or lower.  That can also be true if there are many associations being tested (multiple comparisons).  Personally and where possible, I prefer using confidence intervals over p-values; I think they convey more information.
  • Yes
    Expert 8
    I would say it is a suitable default (adjusted for multiple testing where needed), but of course, one should look beyond the exact p-value when interpreting results! It's never black or white. Therefore it's crucial to report estimates and measures of variation around effect estimates - these should guide interpretation (together with knowledge of biases in the study, confounding, analyses, etc.), not the p-value. 

    And of course, a p-value alone should also not guide important policy, regulation, or other decisions beyond the interpretation of study results in a scientific paper. 

  • Yes
    Expert 5
    I'm not a fan of these hard cut-points for p-values, but generally agree with this. 
  • Yes
    Expert 4
    It is an arbitrary value and I don't think there exists a "best" p-value.
  • Yes
    Expert 2
    The use of a type 1 error (p-value) of 0.05 (two-sided) as a default in epidemiologic research involving biomarkers of environmental exposure is generally suitable, but it may not be optimal for all situations. The 0.05 threshold is widely accepted because it balances the risks of false positives (type 1 error) and false negatives (type 2 error) in most research contexts. However, it is a somewhat arbitrary standard and might not always reflect the specific requirements of studies involving biomarkers of environmental exposure, where results can have significant public health implications.
    In fields where biomarkers are subject to high variability or where public health decisions are made based on findings, a lower p-value threshold (e.g., 0.01) could be more appropriate to reduce the likelihood of false positives and ensure that associations are robust and reliable. Conversely, in exploratory studies or those with limited sample sizes, where detecting weak signals is more challenging, a p-value threshold of 0.1 might be justifiable to avoid missing potential associations.
    Personally, while 0.05 is generally accepted, I believe a more context-specific approach is often warranted, with stricter thresholds in confirmatory studies and potentially higher thresholds in exploratory settings.
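Several of the answers above recommend adjusting for multiple comparisons and reporting confidence intervals rather than relying on a bare p-value threshold. A minimal sketch of both ideas follows; the p-values, effect estimate, and standard error are hypothetical illustrations, not data from any study discussed here.

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: each p-value is compared to alpha / m,
    where m is the number of associations tested."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def wald_ci(estimate, std_error, z=1.96):
    """Approximate 95% Wald confidence interval for an effect estimate
    (e.g., a log odds ratio from logistic regression)."""
    return (estimate - z * std_error, estimate + z * std_error)

# Hypothetical p-values from testing four exposure-outcome associations:
# with alpha = 0.05 and m = 4, only p <= 0.0125 remains "significant".
p_values = [0.001, 0.02, 0.04, 0.30]
print(bonferroni(p_values))  # [True, False, False, False]

# A hypothetical log odds ratio of 0.40 with standard error 0.15:
# the interval conveys how precisely the parameter is estimated,
# not merely whether p crosses 0.05.
lo, hi = wald_ci(0.40, 0.15)
print(round(lo, 3), round(hi, 3))  # 0.106 0.694
```

Less conservative procedures (e.g., Benjamini-Hochberg false discovery rate control) are often preferred when many biomarkers are screened, since Bonferroni can be overly strict for exploratory work.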