Is there a way of investigating publication bias in a meta-analysis of single case studies?
Publication bias - Wikipedia
Comparative continuous outcomes are commonly measured on an absolute mean difference scale, and it is not uncommon for the magnitude of effect to be related to the response in the control arm. When this is the case, funnel plots can appear highly asymmetric even when publication bias is not present, because the outcome is correlated with both the effect size and its standard error.
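A common formal check for funnel-plot asymmetry is Egger's regression test: regress each study's standardized effect on its precision and test whether the intercept differs from zero. The sketch below runs the test on simulated studies; the study count, true effect size, standard-error range, and seed are all illustrative assumptions, not values from any real meta-analysis.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry,
# on simulated study-level data (all numbers are illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 30
se = rng.uniform(0.05, 0.5, size=n_studies)   # per-study standard errors
effects = rng.normal(0.3, se)                  # true effect 0.3, no bias built in

# Egger's test: regress the standard normal deviate on precision;
# an intercept far from zero suggests asymmetry.
precision = 1.0 / se
snd = effects / se
res = stats.linregress(precision, snd)

t_stat = res.intercept / res.intercept_stderr
p_intercept = 2 * stats.t.sf(abs(t_stat), df=n_studies - 2)
print(f"Egger intercept = {res.intercept:.3f}, p = {p_intercept:.3f}")
```

Note that, as the excerpt warns, a significant Egger intercept is evidence of asymmetry, not of publication bias per se: correlations between the effect size and its standard error can produce the same signature.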
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
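One simple version of the p-hacking test mentioned above compares how many reported significant p-values fall just below 0.05 against an adjacent lower bin: with genuine effects the p-curve is right-skewed (values pile up near zero), so an excess in the bin nearest 0.05 is suspicious. The sketch below uses a one-sided binomial test; the bin counts are invented for illustration.

```python
# Hedged sketch of a binomial "bump" test for p-hacking: count significant
# p-values in two adjacent bins and test for an excess just under 0.05.
from scipy import stats

n_lower_bin = 120   # hypothetical count of p-values in [0.03, 0.04)
n_upper_bin = 95    # hypothetical count of p-values in [0.04, 0.05)

# Under a right-skewed p-curve the upper bin should hold no more than half
# of these values; a significant excess there would suggest p-hacking.
res = stats.binomtest(n_upper_bin, n_lower_bin + n_upper_bin,
                      p=0.5, alternative="greater")
print(f"bump-test p-value = {res.pvalue:.3f}")
```

With these illustrative counts the upper bin is the smaller one, so the test finds no evidence of a bump, consistent with the conclusion that p-hacking, while common, is weak relative to real effects.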