Results
(9 Answers)

  • Expert 3

     I think that 10% is as useful as any other value. I do believe it would be helpful to correct for any bias present in the study, but I am also curious about how the bias correction would be implemented. 
  • Expert 6

    10% is fine, if supplemented by a sensitivity analysis that considers confounding and within-person variability by age, metabolism, and timing of sampling, irrespective of the imprecision of exposure measurement based on single samples.
  • Expert 7

    I do not think a "cookbook" approach is appropriate. Whether to adjust or not depends on how much data are available, how many variables you need to adjust for, and the change in the estimate due to adjustment. Often it is important to present both unadjusted and adjusted values.
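The change-in-estimate criterion referred to above is simply the relative difference between the adjusted and unadjusted estimates. A minimal sketch, using hypothetical numbers (the function name is my own):

```python
def pct_change_in_estimate(unadjusted: float, adjusted: float) -> float:
    """Relative change in the effect estimate after covariate adjustment.

    Note: for ratio measures (OR, RR) the comparison is often done on
    the log scale; the raw estimates are used here for simplicity.
    """
    return abs(adjusted - unadjusted) / abs(unadjusted)

# Hypothetical example: an unadjusted RR of 1.50 moves to 1.38 after adjustment.
change = pct_change_in_estimate(1.50, 1.38)
print(f"{change:.1%}")  # 8.0% -- below a 10% threshold, so one might not adjust
```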
  • Expert 1

    This is a trick question! How long is a piece of string? Bias should always be minimized as much as possible. Similar to the bias assessment used in systematic reviews/meta-analyses, a spectrum approach might be reasonable (i.e., 0-3% excellent; 4-6% good; 7-10% fair; >10% not acceptable).
  • Expert 9

    I think 10% is acceptable. I prefer, where feasible and not overwhelming to the reader, to present results both unadjusted and adjusted for covariates. That way the reader can get a sense of the impact of adjustment for themselves. Again, I like the idea of doing sensitivity analyses using the upper and lower bounds of important covariates, for the same reason.
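One way to sketch the bounding-style sensitivity analysis suggested above is to rescale the observed estimate by a plausible range of confounding bias factors rather than applying a single point correction. The numbers and the bias-factor range below are illustrative assumptions, not recommendations:

```python
# Illustrative sensitivity sketch: bound the "true" RR by dividing the
# observed RR by a plausible range of confounding bias factors.
observed_rr = 1.50                              # hypothetical observed estimate
bias_factor_low, bias_factor_high = 0.95, 1.15  # assumed plausible range

rr_bounds = sorted([observed_rr / bias_factor_low,
                    observed_rr / bias_factor_high])
print(f"RR plausibly between {rr_bounds[0]:.2f} and {rr_bounds[1]:.2f}")
```

Reporting such a range alongside the adjusted and unadjusted estimates lets the reader judge the impact of residual confounding directly.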
  • Expert 8

    First, I think the rule of thumb of a 10% change in estimates for choosing covariates is not appropriate. I understand the wish for such a pragmatic rule, but, for example, not including several confounders that each have a ~9% effect on the estimate would result in significantly biased estimates. I think the choice to include covariates as potential confounders should always be driven by prior knowledge: theory, results from previous studies, biological pathways, etc. DAGs are useful to visualize this and to make decisions.
    I guess the same holds for the size of bias in exposure assessment. One would want to reduce it as much as possible and then take what we know about the potential measurement errors into account when interpreting results, whether the bias is estimated at 1% or 20%. Knowledge of the direction of the bias, and of whether the error is differential or not, is also crucial. Adding a threshold is too simplistic and could lead to less critical interpretation.
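The concern above about several sub-threshold confounders can be made concrete with a back-of-the-envelope calculation (illustrative numbers only): if each of five omitted confounders would individually shift a ratio-scale estimate by about 9%, and the shifts act multiplicatively, the combined distortion is well above 10%:

```python
# Illustrative only: five omitted confounders, each shifting a
# ratio-scale estimate (e.g., an OR) by 9%, acting multiplicatively.
n_confounders = 5
per_confounder_shift = 1.09

combined = per_confounder_shift ** n_confounders
print(f"combined shift: {combined:.2f}x, i.e. {combined - 1:.0%}")  # ~1.54x, i.e. ~54%
```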
  • Expert 5

    I think that this depends somewhat on the scale of measurement, the outcome measurement, and the risk estimate used (OR vs. RR). I think that 10% is reasonable, but these adjustments need to be evaluated with the data at hand (i.e., do the estimates change and is precision increased?).
  • Expert 4

    At most 5%: a bias larger than 5% would already amount to about 50% of the effect of a covariate that changes the effect estimate by 10%.
  • Expert 2

    In my view, when designing epidemiologic studies involving biomarkers of environmental exposure, I consider a bias due to measurement error of around 10% or less to be tolerable before applying a statistical adjustment. This aligns with the common heuristic used for confounding, where a bias exceeding 10% is typically viewed as a threshold for adjustment.
    The rationale behind this threshold is that smaller biases (under 10%) are unlikely to substantially distort effect estimates or lead to misleading conclusions, especially in studies with sufficient statistical power. However, when bias from measurement error exceeds 10%, it can significantly impact the accuracy of the association between exposure and outcomes, potentially masking true effects or introducing false ones. Given the inherent variability in biomarker levels, even moderate biases can weaken the validity of results if left unadjusted.
    While the 10% threshold is somewhat arbitrary, I find it provides a practical balance, ensuring that we address meaningful bias without over-correcting. In cases where biomarker variability is particularly high, or the study's outcomes have critical public health implications, I might argue for an even lower threshold to maintain confidence in the findings.