4.7
What do you think is the acceptable power of epidemiologic design, when type 1 error (p-value) is fixed at 0.05 (two-sided)? Please specify_____ %.
Results
(9 Answers)
Expert 3
Isn't there an "an" missing before "epidemiological design"?
I think the question is ambiguous. Do I not need study design, effect size (d), sample size(n), and standard deviation (sd), too to answer the question? I assume a cross-sectional design and use the sample size equation to calculate power: Zb = sqrt((n*d^2)/(2*sd^2)). If n=100, delta = 0.3, sd = 1, => Zb = 0.161 (hopefully)
=> Zb I am looking up = 56% -
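[Editor's note: a minimal sketch reproducing Expert 3's arithmetic, assuming a two-sided z-test comparing two group means with equal n per group; the values n = 100, d = 0.3, sd = 1 are the expert's.]

    # Reconstruction of Expert 3's calculation (assumption: two-sided
    # z-test comparing two group means, equal n per group, alpha = 0.05).
    from scipy.stats import norm

    n, d, sd, alpha = 100, 0.3, 1.0, 0.05

    # Zb = sqrt(n*d^2 / (2*sd^2)) - Z(1 - alpha/2)
    z_beta = (n * d**2 / (2 * sd**2)) ** 0.5 - norm.ppf(1 - alpha / 2)
    power = norm.cdf(z_beta)

    print(f"Zb = {z_beta:.3f}, power = {power:.0%}")  # Zb = 0.161, power = 56%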
Expert 6
80% is fine, but more important is the effect size to be estimated. 80% is too low for a huge effect (e.g., an RR of 5) but perhaps too high for a small effect (RR = 1.1 or 1.2). Observational studies are too crude and susceptible to bias to infer causal effects of very small magnitude.
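[Editor's note: a hedged illustration of Expert 6's point that, with the design held fixed, power differs enormously between a large and a small relative risk. The baseline risk of 5%, the 500 participants per exposure group, and the two-proportion z-test are assumptions for illustration, not the expert's.]

    # Power for the same design across relative risks (my assumptions:
    # baseline risk 5%, 500 per group, alpha = 0.05, two-sided).
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    p0, n_per_group, alpha = 0.05, 500, 0.05
    analysis = NormalIndPower()

    for rr in (5.0, 1.2, 1.1):
        p1 = min(rr * p0, 1.0)                 # risk among the exposed
        h = proportion_effectsize(p1, p0)      # Cohen's h for two proportions
        power = analysis.power(effect_size=h, nobs1=n_per_group,
                               alpha=alpha, alternative='two-sided')
        print(f"RR = {rr}: power = {power:.0%}")

Under these assumptions, power is essentially 100% for an RR of 5 but only on the order of 5-10% for an RR of 1.1 or 1.2.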
Expert 7
90%
Expert 1
Traditionally, the minimum power of any epidemiological study is 80%. But given other limitations associated with the study design (e.g., costs, sample size), this is not always possible. In reality, I have only ever seen sample-size calculations published for two studies, and those studies achieved the required power. Most power calculations are used primarily for grant applications, to show that the required sample size is feasible; however, the majority of published studies never meet the power required to even undertake the analysis.
Expert 9
Between 80% and 90%. I think this depends on several factors. For example, the context of the study: is it a novel hypothesis, where lower power might be acceptable, or have several similar studies preceded it, and it is trying to answer a question unequivocally, where higher power is needed? What are the clinical or public health consequences of failing to find a true association? Is it feasible (given funding, other resources, or practical constraints) to conduct a study with very high power?
Expert 8
80% is conventional, and I would also say that it is in general the minimal acceptable level. Higher would of course be better but generally comes at a cost (sample size, resources, participant burden, etc.).
Expert 5
I think 80% is the minimum power generally accepted by epidemiologists. However, problematically, they rarely include provisions for the effects of confounding and measurement error in their sample-size calculations during design, or in power calculations post hoc, so many studies in occupational and environmental epidemiology are likely underpowered (i.e., have less than 80% power).
For long-term, costly studies, I believe the power should be higher than 80%, as the costs need to be justified and a 20% chance of being "wrong" is too high, especially for exposures that may affect large segments of the population.
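[Editor's note: a minimal sketch of Expert 5's measurement-error point, under the assumption of classical non-differential error that attenuates the true effect size multiplicatively by a reliability ratio (regression dilution); all numbers are illustrative.]

    # A study sized for 80% power at the TRUE effect is underpowered
    # once measurement error attenuates the effect (my assumption:
    # attenuation by a reliability ratio, i.e., regression dilution).
    from scipy.stats import norm

    alpha, d_true, sd = 0.05, 0.3, 1.0
    z_a = norm.ppf(1 - alpha / 2)

    # n per group giving exactly 80% power for the true effect size
    n = 2 * (z_a + norm.ppf(0.80)) ** 2 * sd**2 / d_true**2

    for reliability in (1.0, 0.8, 0.6):
        d_obs = reliability * d_true           # attenuated (observed) effect
        z_beta = (n * d_obs**2 / (2 * sd**2)) ** 0.5 - z_a
        print(f"reliability {reliability:.1f}: power = {norm.cdf(z_beta):.0%}")

Under these assumptions, the nominal 80% power falls to about 61% at a reliability of 0.8 and to about 39% at 0.6.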
Expert 4
90%, which means we will fail to detect statistical significance in 1 out of 10 analyses in which the assumed effect is truly present.
Expert 2
I would suggest a power between 80% and 90% when the type 1 error (alpha level) is fixed at 0.05 (two-sided). This range strikes a balance between reducing the probability of a false negative (type 2 error) and maintaining a reasonable sample size. However, though this range is widely accepted in practice, it is a subjective choice, just like the type 1 error of 0.05. The acceptable power level depends on several factors, including the expected effect size, the importance of the outcome, the feasibility of recruiting a larger sample size, and the potential consequences of missing a true association. In high-stakes research, such as studies related to public health or clinical interventions, higher power (closer to 90%) may be preferred to ensure the findings are robust, while in exploratory or resource-constrained studies, lower power (around 80%) might be acceptable.
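[Editor's note: a back-of-the-envelope sketch of the trade-off Expert 2 and Expert 8 describe, namely the extra sample size needed to move from 80% to 90% power. It assumes a two-sided two-sample z-test with alpha = 0.05 and reuses Expert 3's illustrative d = 0.3 and sd = 1.]

    # Sample size per group for 80% vs 90% power (two-sided two-sample
    # z-test, alpha = 0.05; d and sd reused from Expert 3's example).
    from math import ceil
    from scipy.stats import norm

    alpha, d, sd = 0.05, 0.3, 1.0
    z_a = norm.ppf(1 - alpha / 2)

    for power in (0.80, 0.90):
        n = ceil(2 * (z_a + norm.ppf(power)) ** 2 * sd**2 / d**2)
        print(f"power {power:.0%}: n = {n} per group")

Under these assumptions, 90% power requires roughly a third more participants than 80% (about 234 versus 175 per group), which is the cost several experts allude to.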