Show Me the Study: How Biased Research Hijacked Evidence-Based Medicine (Part 1)
- Qaisar J Qayyum MD
- Jun 7, 2025
- 16 min read
Updated: Jun 7, 2025
Dr. Qaisar J. Qayyum
Chief Editor, Noor Journal of Complementary and Contemporary Medicine, Clinical Assistant Professor, Oklahoma, USA. Email: chiefeditor@njccm.org

Abstract
Evidence-Based Medicine (EBM) was established to enhance clinical decision-making by anchoring it in methodologically rigorous research. However, its foundational tools, including randomized controlled trials (RCTs), meta-analyses, and statistical inference, have come under increasing scrutiny. The critique lies not in the scientific principles themselves, but in the selective and often distorted ways these tools are applied, interpreted, and communicated.
This multi-part article critically explores the structural biases, linguistic framing, and methodological manipulations that shape contemporary clinical literature. Through case studies on statin use, selective serotonin reuptake inhibitor (SSRI) efficacy, and anticoagulation strategies, it illustrates how modest benefits are overstated, adverse effects minimized, and marginal findings rhetorically elevated to therapeutic significance.
In its concluding section, the article proposes a constructive epistemological model that redefines medical certainty as a dynamic interplay of statistical inference, clinical observation, and lived patient experience. This integrative framework seeks to restore the epistemic integrity of medical evidence while fostering renewed trust in its clinical application.
Introduction: The Crisis Behind “Show Me the Study”
“Show me the study” has become a common rhetorical weapon in medical and public discourse, used to affirm authority, silence skepticism, and end debate with the appearance of objectivity. But beneath this demand lies a deeper, often unexamined question: What qualifies as a valid study? Who defines its credibility, and by what assumptions is it judged?
Evidence-Based Medicine (EBM) was originally designed to elevate clinical care by grounding decisions in rigorous, empirical data. It aimed to replace opinion and anecdote with reproducible results, measurable endpoints, and systematic inquiry. Yet in practice, that noble vision has drifted. As this article will show, EBM today is increasingly shaped by statistical distortions, commercial incentives, and methodological shortcuts that mask weak results behind the illusion of scientific authority.
Algorithms Over Judgment: The Rise of “Cookbook Medicine”
While Evidence-Based Medicine (EBM) was conceived to elevate clinical care through rigorous data, its application in practice has increasingly collapsed nuanced decision-making into rigid, algorithmic routines. Clinical calculators and decision trees, initially developed to assist with complex decisions, are now often treated as mandates rather than guides. This shift has given rise to what critics call “cookbook medicine”: a mechanistic adherence to protocols based on population averages, with inadequate attention to the physician’s clinical expertise or the individual patient’s context and values.
Consequently, interventions validated under idealized study conditions are frequently applied indiscriminately to patients who fall outside trial demographics, such as the elderly, those with multiple comorbidities, or individuals with distinct cultural or personal priorities. For example, rigid application of antihypertensive or anticoagulation guidelines, based solely on numerical thresholds, can result in overtreatment, complications, or disregard for quality-of-life considerations.
EBM, when properly understood, was never intended to replace clinical judgment, but to inform it. It rests on three pillars: the best available evidence, the clinician’s expertise, and the patient’s values and circumstances. Undermining any one of these, particularly the latter two, risks reducing medicine from a healing art to a protocol-driven enterprise, vulnerable to depersonalization and, ultimately, algorithmic automation.
One of the most forceful critiques of modern Evidence-Based Medicine comes from Dr. John P.A. Ioannidis, the renowned epidemiologist and author of Why Most Published Research Findings Are False (29). In a 2016 editorial, he warned that EBM has been “hijacked” by commercial interests, bureaucratic overreach, and conflicted guideline panels.
What began as a scientific reform movement, he argues, now risks devolving into a system shaped more by marketing, convenience, and industry lobbying than by clinical relevance or scientific rigor. Ioannidis contends that many published findings, though presented with statistical sophistication, are often more likely to be false than true, due to flexible trial designs, underpowered studies, and selective reporting. Guidelines, once a reflection of consensus, are increasingly distorted by conflicts of interest and institutional inertia.
This critique echoes our central thesis: that EBM, stripped of clinical judgment and patient context, risks becoming a rigid, protocol-driven system where abstractions like hazard ratios or relative risk reductions dominate, while meaningful outcomes and patient values are marginalized. If this trend continues, evidence-based care may become numerically precise yet clinically hollow. Reclaiming its original spirit requires a renewed commitment to relevance, humility, and transparency.
Consider, for example, initiating antihypertensive therapy solely because blood pressure exceeds 140/90 mmHg, or prescribing anticoagulation the moment a CHA₂DS₂-VASc score reaches 2. While seemingly evidence-based, such actions often bypass what matters most to many patients: functional capacity, autonomy, and quality of life. These priorities are especially critical in outcomes research on multimorbidity and aging, which emphasizes preserving independence and minimizing treatment burden over meeting arbitrary targets (26).
Rather than empowering clinicians, these tools often displace judgment, turning medicine into checklist management. Numbers replace narratives. Protocols overshadow context.

Figure 1: A Patient-Centered Vision for Evidence-Based Medicine. This figure presents a reoriented vision of EBM grounded in real-world relevance, patient priorities, clear communication of results, and the deliberate avoidance of unnecessary scientific-sounding jargon. The goal is evidence that informs, not obscures, clinical decision-making.
The GDMT Paradigm: From Science to Scorecard
One of the most visible examples of this drift is the rise of GDMT (Guideline-Directed Medical Therapy) in quality dashboards, EMR templates, and hospital performance metrics. Though meant to ensure standardized care, GDMT often reduces nuanced medical reasoning to rote prescription of approved drugs.
Physicians are assessed less on judgment than on whether specific medications were ordered, regardless of a patient’s frailty, comorbidities, or treatment preferences. The result is a flattening of individualized care, shaped more by regulatory incentives and pharmaceutical input than by patient-centered wisdom.
When Protocols Replace Patients: Ossification of Evidence-Based Medicine
This shift exemplifies what some have called the “ossification” of Evidence-Based Medicine (EBM), in which the original balance between best evidence, clinical expertise, and patient preference is displaced by protocol compliance shaped more by population averages, regulatory demands, and industry influence than by individualized care. As a practicing clinician, I have observed, alongside many peers, that thoughtful, patient-centered decisions are too often marginalized in favor of institutional metrics.
Greenhalgh et al. warn that EBM is in crisis not due to a lack of data, but because it has lost the very equilibrium that once made it valuable: the integration of science, judgment, and human priorities (30). In this climate, medicine risks devolving into a bureaucratic routine, undermining clinical autonomy, eroding patient trust, and compromising the ethical foundations of care.
A Mirror for a Broken System
This pattern reflects a broader truth: like democracy, which thrives in principle but falters in execution, EBM must be judged not by its ideals but by its real-world outcomes. If it is to be reclaimed, EBM must return to its roots, rebalancing scientific rigor with narrative understanding, population evidence with individual meaning, and statistical significance with clinical significance.

Figure 2: Summary of the deviation of Evidence-Based Medicine from its original intent, highlighting statistical bias, rigid algorithms, and the erosion of patient-centered care.
Case in Point: Antihypertensive Therapy in the Elderly
Consider an 85-year-old patient with multiple comorbidities and a recent history of falls, presenting with a blood pressure of 145/85 mmHg. According to guideline thresholds, this patient qualifies for antihypertensive treatment. But blindly applying this cutoff may do more harm than good. A cohort study in JAMA Internal Medicine found that older adults on antihypertensives, especially those with a prior fall injury, had a significantly higher rate of serious fall-related hospitalizations than non-users (1). For this patient, the most meaningful outcome may not be tighter blood pressure control, but preserving balance, reducing medication burden, and maintaining independence.
Reclaiming EBM: From Rhetoric to Relevance
This article challenges the mythologized status of “the study” by exposing the structural, rhetorical, and philosophical distortions embedded in much of today’s clinical evidence. The issue is not a lack of data, but rather how that data is produced, selected, framed, and disseminated, often in ways that obscure the very people it is meant to help.
In the sections that follow, we examine how modern studies are structured to inflate marginal benefits, downplay harms, and transform weak findings into clinical doctrine. From framing bias and surrogate endpoints to semantic manipulation and exaggerated statistics, these patterns form a system that appears methodologically sound but is too often detached from clinical relevance and patient need.
The goal is not to reject science, but to rescue it from misuse. To reclaim a model of evidence that reflects what matters most, not just what is easiest to measure.
Common Pitfalls in Clinical Trials
The Mechanics of Bias: How Studies Are Built to Deceive
Many of the problems in modern clinical literature stem not from isolated errors, but from design practices that, intentionally or not, produce overly favorable results. Before a single patient is enrolled, the architecture of a trial can be shaped to amplify benefits, suppress harms, and tilt outcomes toward a desired conclusion. While each design choice may seem minor in isolation, their cumulative effect constructs a narrative more favorable than the underlying data justifies.
Behind the polished surface of randomized controlled trials (RCTs) and systematic reviews lies a landscape of methodological shortcuts, semantic distortions, and statistical manipulations. These tactics, while often unnoticed by casual readers, are well-documented and widespread.
The result: publications that are mathematically correct, statistically significant, and peer-reviewed, but conceal crucial gaps in relevance, rigor, or reproducibility in real-world patient populations.
Structural and Methodological Distortions
1. Poor Generalizability
Narrow inclusion criteria and tightly controlled environments produce clean data that often fails to reflect real-world complexity. (2)
2. Ethnic and Demographic Homogeneity
Trials that enroll predominantly white, middle-class participants cannot reliably predict effects across diverse populations. (3)
3. Skewed Sample Selection
Inclusion and exclusion criteria are often manipulated to favor ideal patient profiles—excluding elderly, multimorbid, or medication-intolerant patients. (3, 4)
4. Washout/Run-in Periods
Participants who experience early adverse effects during pre-randomization phases are often excluded from randomized controlled trials through washout or run-in procedures. While this strategy can sometimes be methodologically justified, it can distort the reported safety profile of an intervention. For instance, statin trials report low rates of muscle pain, with one high-profile study claiming that over 90% of reported symptoms were not attributable to the drug itself (28). In contrast, large-scale observational studies and patient surveys tell a different story. One real-world study found a 73.5% prevalence of muscle pain among statin users (95% CI: 68.4–78.1%), with lower limb pain being the most common site (27). This discrepancy highlights how selective trial enrollment can obscure adverse effects that are frequent and clinically relevant in routine practice. (5)

Figure 3. Discrepancy in Reported Statin-Associated Muscle Pain. This visual compares the incidence of muscle pain reported in randomized clinical trials (e.g., 4–10%, with over 90% deemed unrelated to statins) versus real-world observational data, where up to 73.5% of patients reported muscle-related symptoms.
5. Confirmation Bias in Design
Many studies begin with an implicit belief in the intervention’s benefit, influencing design choices that favor positive outcomes. (6)
6. Surrogate and Composite Endpoints
Rather than measuring actual clinical improvements (e.g., survival), many trials rely on indirect proxies like lab values or imaging changes. These may not translate to meaningful patient benefit. (7)
7. Multiple Endpoints and Data Mining
Including numerous exploratory or secondary outcomes increases the odds of chance findings being presented as significant. (8)
8. Underpowered Sample Sizes
Trials with small numbers may lack the statistical power to detect real effects—or may produce false positives due to random variation. (9)
9. Poor Reporting of Key Metrics
Critical data such as Number Needed to Treat (NNT) or Number Needed to Harm (NNH) are missing in over 90% of trials, hindering interpretation. (10)
10. Relative Risk Reporting Without Absolute Context
Relative changes are often highlighted, e.g., “42% reduction in coronary deaths”, without disclosing that the absolute difference was just 3.5%. (11)
11. Mathematical Obfuscation
Clear clinical metrics are replaced with statistical abstractions (e.g., hazard ratios, standardized effect sizes) that obscure clinical meaning. (12)
12. Low Thresholds for Success
Modest improvements (e.g., 25–30% symptom reduction) may be labeled “recovery,” even when patients experience little meaningful benefit. (13)
13. Placebo vs. Active Comparator
Drugs are often tested against placebo rather than existing standard treatments, exaggerating perceived efficacy. (14)
14. Disconnect from Clinical Practice
Randomization, while methodologically sound, often strips away the variability inherent in real-world medicine. (15)
15. Rhetorical Ambiguity and Framing Bias
Strategic language, e.g., “clinically meaningful,” “consistently higher NNT”, and persuasive visuals can inflate impressions of efficacy. (16)
16. Publication Bias
Negative or inconclusive results are far less likely to be published, distorting the perceived weight of evidence. (17)
17. Commercial Influence
Industry-sponsored trials are significantly more likely to report favorable outcomes due to preferential design, analysis, and reporting practices. (18)
18. Guideline Distortion
Many treatment guidelines are shaped more by cost-efficiency, institutional priorities, or industry lobbying than by unbiased, patient-centered data. (19)
19. Weak Evidence as Clinical Doctrine
As demonstrated by multiple examples throughout this article, marginal or biased data is often elevated into official protocols and performance metrics. This enshrinement restricts clinician autonomy, institutionalizes suboptimal practices, and ultimately risks compromising patient care.
These recurring patterns reveal that even when studies are labeled “randomized,” “controlled,” or “peer-reviewed,” they may rest on compromised ground. Recognizing these pitfalls is not cynicism; it is essential to reclaiming medical science as a vehicle for truth, not just technique.
Framing Bias: When Numbers Speak Louder Than Truth
A Closer Look at the Gold Standard
One of the most pervasive tactics used to exaggerate clinical benefit is framing bias, the practice of reporting relative risk reduction (RRR) without simultaneously disclosing absolute risk reduction (ARR) or Number Needed to Treat (NNT). This selective framing inflates perceived efficacy and can mislead both clinicians and patients.
Case in Point: Statins and the Illusion of Impact
A widely cited case illustrating the limitations of relative risk framing involves the use of statins for secondary prevention of cardiovascular events. According to independently evaluated data from TheNNT.com (20), the absolute benefits of statins over a five-year period are modest, while the risks remain clinically significant:
1 in 83 patients (1.2%) will have their life saved
1 in 39 (2.5%) will avoid a non-fatal heart attack
1 in 125 (0.8%) will avoid a stroke
1 in 10 (10%) will experience muscle damage
1 in 50 (2%) will develop diabetes
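The percentage beside each “1 in N” figure is simply its reciprocal (percent = 100 / N). A minimal Python sketch of that conversion, with outcome labels paraphrased from the list above (note that rounding puts 1 in 39 at roughly 2.6%, close to the quoted 2.5%):

```python
# Convert "1 in N" (NNT/NNH) figures into absolute percentages: percent = 100 / N.
# Labels paraphrase the TheNNT.com figures quoted above.
nnt_figures = {
    "life saved": 83,
    "non-fatal heart attack avoided": 39,
    "stroke avoided": 125,
    "muscle damage (harm)": 10,
    "diabetes (harm)": 50,
}
for outcome, n in nnt_figures.items():
    print(f"1 in {n} -> {100 / n:.1f}% absolute ({outcome})")
```

Putting benefits and harms side by side in the same absolute format, as here, is exactly the symmetry the article argues for.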
While these figures offer a numerical summary, how they are presented dramatically alters their perceived impact. In many public-facing materials, statins are promoted using relative risk reductions, with statements like:
“This 42% reduction in the risk of coronary death accounts for the improvement in survival” in the Scandinavian Simvastatin Survival Study (4S). (21)
Relative Risk Reduction vs. Absolute Benefit
Let’s take a closer look at that 42% claim. In the 4S trial, there were 189 coronary deaths in the placebo group versus 111 in the simvastatin group. This yielded a relative risk of 0.58, often publicized as a “42% reduction in coronary deaths.” Yet this figure masks the more relevant absolute risk reduction: a drop from 8.5% to 5.0%, or just 3.5%. This translates to a Number Needed to Treat (NNT) of 29 over six years, meaning 29 people would need to take the drug for that period to prevent one coronary death. This framing bias, emphasizing relative benefit, can make modest outcomes appear impressive, especially when serious harms like diabetes or muscle damage are also possible.
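The arithmetic above can be reproduced in a few lines. The sketch below uses crude event proportions with the published 4S group sizes (2,223 placebo, 2,221 simvastatin); because the publicized 42% figure came from survival analysis, the crude relative reduction lands at roughly 41%, while the absolute reduction and NNT match the text:

```python
# Recompute the 4S coronary-death framing from raw event counts.
def risk_metrics(events_control, n_control, events_treated, n_treated):
    """Return (relative risk reduction, absolute risk reduction, NNT)."""
    risk_c = events_control / n_control   # placebo: 189/2223 ≈ 8.5%
    risk_t = events_treated / n_treated   # simvastatin: 111/2221 ≈ 5.0%
    arr = risk_c - risk_t                 # absolute risk reduction ≈ 3.5%
    return arr / risk_c, arr, 1 / arr     # RRR, ARR, number needed to treat

rrr, arr, nnt = risk_metrics(189, 2223, 111, 2221)
print(f"RRR {rrr:.0%}, ARR {arr:.1%}, NNT {nnt:.0f}")
```

The same five numbers thus support two very different headlines: a ~40% relative reduction, or 29 patients treated for six years to prevent one coronary death.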

Figure 4. Coronary death rates in the Scandinavian Simvastatin Survival Study (4S) over six years. While the relative risk reduction was 42%, the absolute risk dropped from 8.5% in the placebo group to 5.0% in the statin group, a difference of just 3.5%.
When Visuals Deceive: The Power of Presentation
As outlined by the Royal Australian College of General Practitioners (22), a high-risk 65-year-old man who smokes and has hypertension and hyperlipidemia may have a 10-year cardiovascular mortality risk of 38%. Statin therapy could lower this to 34.6%, yielding a relative risk reduction of 9%, but the absolute reduction is just 3.4%. For a low-risk 45-year-old woman with mildly elevated cholesterol, the absolute benefit shrinks even further: from 1.4% to 1.3%, a mere 0.1% reduction. Yet both cases are described under the same “relative reduction” umbrella.

Figure 5. Relative vs. Absolute Risk Comparison, showing how identical relative reductions can mislead when applied across populations with very different baseline risks. While both patients receive the same relative reduction, their actual benefit diverges sharply. In low-risk individuals, up to 99 out of 100 may take the drug without any measurable advantage—a critical consideration often lost in the way data is framed.
This underscores a central issue: quoting relative statistics without absolute context creates a misleading impression of effectiveness. Phrases like “42% reduction in risk” may sound impressive, but in the absence of baseline risk, they distort both patient understanding and physician judgment. As shown in a BMJ study, when physicians were presented with the same data framed in different formats, they made significantly different clinical decisions, despite no change in the underlying evidence. (23)
Presenting benefits in absolute terms, such as “reducing risk from 3 in 100 to 2 in 100”, supports clearer expectations, enhances informed consent, and prevents the exaggeration of modest treatment effects. Framing benefits exclusively in relative terms not only inflates perceived efficacy, but also encourages over-prescription, especially when applied indiscriminately across patient populations.

Figure 6: Statistical vs Clinical Significance. This cartoon highlights how a 44% relative risk reduction in coronary death (from 5.4% to 3%) can mask a modest 2.4% absolute benefit, illustrating the gap between impressive statistics and meaningful patient outcomes.
To ensure transparent and ethical communication, clinicians can present both relative and absolute risk figures to patients. Without this balance, we risk replacing informed consent with marketing rhetoric, and genuine care with numerical illusion.
Why Relative Risk Reduction Misleads—and Misguides
Marketing by Design: The Rhetoric of Relative Risk
Relative Risk Reduction (RRR) remains one of the most abused metrics in clinical communication: statistically valid, yet strategically misleading. While it provides a percentage change between groups, it obscures the actual chance of benefit, especially when absolute effects are small.
Take, for example, a therapy that reduces the risk of an event from 3 in 100 to 2 in 100. The absolute benefit is just 1%, but the RRR is 33%. This inflated figure sounds impressive but masks the reality that 99 out of 100 people will not benefit.
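A minimal sketch of this hypothetical 3-in-100 example shows how the same two numbers yield very different headlines:

```python
# Same data, two framings: risk falls from 3 in 100 to 2 in 100.
baseline, treated = 0.03, 0.02
arr = baseline - treated        # 0.01 -> "1% absolute benefit"
rrr = arr / baseline            # 1/3  -> "33% relative reduction"
print(f"Headline framing: RRR {rrr:.0%}")
print(f"Honest framing:   ARR {arr:.0%} (99 of 100 see no benefit)")
```

The relative figure triples the apparent size of the effect without adding any information the absolute figure does not already contain.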
This distortion is not accidental. RRR is routinely used in advertisements, abstracts, and press releases to exaggerate clinical relevance. When absolute risk is mentioned at all, it is often relegated to fine print or technical appendices, nowhere near the visual prominence or persuasive force of the headline relative reduction. This imbalance is designed to shape perception, not to inform.
The inconsistency worsens when side effects are reported in absolute terms (e.g., “1 in 100 may bleed”), while benefits are reported in relative terms (e.g., “30% reduction in stroke risk”). This selective formatting creates an illusion of high benefit and minimal risk, undermining informed consent.
Toward Honest Medicine: Ethical Communication of Risk
To restore clarity and ethical balance in medical decision-making:
Absolute Risk Reduction (ARR) and Number Needed to Treat (NNT) should be the primary metrics used in both patient and clinician communication.
If RRR is used, Relative Risk Increase (RRI) for harms must be presented with equal visibility and format.
Promotional and guideline materials must avoid asymmetric framing, where benefits are bold and harms are buried.
In essence, RRR is not just misleading; it appears to be marketing by design. Its continued use without context confuses clinicians, misleads patients, and distorts shared decision-making. Honest medicine demands better.
Conclusion to Part 1
This section has highlighted how the foundational instruments of modern Evidence-Based Medicine, once envisioned as safeguards against anecdote and bias, are increasingly deployed in ways that obscure rather than clarify therapeutic value. Through multiple examples, we have shown how flawed trial designs, restrictive inclusion criteria, relative risk framing, and the rhetorical inflation of modest outcomes collectively distort both the scientific literature and clinical decision-making.
These are not isolated deviations; they are systemic patterns. When modest absolute benefits are promoted using relative metrics, when harms are buried beneath surrogate endpoints, and when statistical abstractions displace patient-centered outcomes, the result is not just academic confusion, but compromised patient care.
The findings presented here do not indict science itself, but rather how science is curated, presented, and consumed within a healthcare system shaped by competing incentives. For the practicing clinician, this raises critical questions: Are the metrics I’m using clinically meaningful? Is this recommendation based on real-world outcomes, or statistical proxies? For patients, it demands a renewed insistence on clarity: What is the actual benefit for someone like me?
In Part 2, we follow this trajectory further, examining the structural forces that sustain and normalize these distortions: financial sponsorship, publication bias, regulatory blind spots, and the quiet erosion of physician judgment and patient voice. We also begin outlining a more grounded framework for medical certainty, rooted not only in analytical rigor, but in careful observation, reproducibility in real-world populations, and relevance to lived clinical experience. This foundation will ultimately set the stage for the deeper reforms explored in Part 3.
The question is no longer simply whether a study exists, but whether the evidence it claims to offer is trustworthy, applicable, and aligned with what matters most in the real world.
Acknowledgment
This article was written with AI assistance. All claims are supported by credible, peer-reviewed references, which were validated for accuracy and authenticity, ensuring scientific integrity throughout. In the event of any inadvertent errors, the responsibility lies with the AI, and corrections will be made promptly upon identification. I would like to express my sincere gratitude to Dr Marjorie Renfrow for her thoughtful review and invaluable feedback. Her expertise and guidance have played a pivotal role in refining and enhancing this article.
Author’s Note on Scope and Intent. This article critiques methodological and systemic trends in medical research and does not allege misconduct by any specific individuals, institutions, or companies. All examples and analyses are drawn from publicly available data and peer-reviewed literature.
REFERENCES
Tinetti ME, Han L, Lee DS, et al. Antihypertensive medications and serious fall injuries in a nationally representative sample of older adults. JAMA Intern Med. 2014;174(4):588–595. https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/1832197
U.S. FDA. Enhancing the Diversity of Clinical Trial Populations — Guidance for Industry. 2020. https://www.fda.gov/media/134754/download
Witham MD, Logan P, Brady MC. Assessment of representation of older adults in trials of pharmacologic interventions for ischemic heart disease: systematic review. Br J Clin Pharmacol. 2021. https://bpspubs.onlinelibrary.wiley.com/doi/10.1111/bcp.14539
Tait AR, Voepel-Lewis T, Zikmund-Fisher BJ, Fagerlin A. Optimizing the presentation of research findings: the relative versus absolute risk debate. Cogn Res Princ Implic. 2023;8(1):10. https://doi.org/10.1186/s41235-023-00520-y
Dechartres A, et al. Reporting of harms in randomized controlled trials. PLoS Med. 2019. https://pmc.ncbi.nlm.nih.gov/articles/PMC6377048/
Nickerson RS. Confirmation bias: a ubiquitous phenomenon in many guises. Review of General Psychology. 1998. https://www.researchgate.net/publication/286835865_Considering_confirmation_bias_in_design_and_design_research
Ferreira-González I, et al. Problems with use of composite end points in cardiovascular trials: systematic review. BMC Medicine. 2007;5:6. https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-017-0902-9
U.S. FDA. Multiple Endpoints in Clinical Trials Guidance. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/multiple-endpoints-clinical-trials
Button KS, et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013. https://research-information.bris.ac.uk/en/publications/power-failure-why-small-sample-size-undermines-the-reliability-of
Guyatt G, et al. The need for better reporting of harms. JAMA. 2004;292(3): 264–271. https://jamanetwork.com/journals/jama/fullarticle/194958
Scandinavian Simvastatin Survival Study Group. Randomised trial of cholesterol lowering in 4444 patients. Lancet. 1994. https://www.thelancet.com/pb/assets/raw/Lancet/pdfs/issue-10000/4s-statins.pdf
Manson JE, Bassuk SS. Biomarkers in chronic disease: scientific and ethical implications. Am J Prev Med. 2003. https://pmc.ncbi.nlm.nih.gov/articles/PMC3653612/
Soomro GM, et al. SSRIs vs placebo for OCD. Cochrane Database Syst Rev. https://www.cochrane.org/CD001765/DEPRESSN_selective-serotonin-re-uptake-inhibitors-ssris-versus-placebo-for-obsessive-compulsive-disorder-ocd
Khan A, et al. The effect of inclusion/exclusion criteria in antidepressant trials. Psychiatric Annals. 2008. https://pubmed.ncbi.nlm.nih.gov/18303940/
Fiore LD, et al. A guide to real-world effectiveness and safety studies. Clin Trials. 2011. https://pmc.ncbi.nlm.nih.gov/articles/PMC4632358/
Ntaios G, et al. Apixaban vs warfarin: real-world evidence. Stroke. 2018. https://www.ahajournals.org/doi/full/10.1161/STROKEAHA.117.018395
Dwan K, et al. Systematic review of publication bias. Cochrane Database. https://s4be.cochrane.org/blog/2018/08/07/publication-bias-the-answer-to-your-research-question-may-be-sitting-in-somebodys-file-drawer/
Lexchin J, et al. Pharmaceutical industry sponsorship and research outcome. J Health Econ. 2003. https://www.journals.uchicago.edu/doi/10.1086/730383
Aberegg SK. Medical decision-making and the illusion of evidence. Medicines. 2021;8(7):36. https://www.mdpi.com/2305-6320/8/7/36
TheNNT.com. Statins for Heart Disease Prevention. https://thennt.com/nnt/statins-for-heart-disease-prevention-with-known-heart-disease/
Scandinavian Simvastatin Survival Study Group. Lancet. https://www.thelancet.com/pb/assets/raw/Lancet/pdfs/issue-10000/4s-statins.pdf
Royal Australian College of General Practitioners. Statins and Risk Reduction. https://www1.racgp.org.au/newsgp/clinical/have-the-benefits-of-statins-been-overstated
Covey J. A meta-analysis of risk reduction formats used in decision aids. BMJ. 2003;327(7417):741. https://www.bmj.com/content/327/7417/741
Bloch MH, et al. Meta-analysis of SSRI dose-response in OCD. Cochrane Database. https://www.cochrane.org/CD001765
Ntaios G, et al. Stroke. AHA Journals. https://www.ahajournals.org/doi/full/10.1161/STROKEAHA.117.018395
Tinetti ME, Fried TR, Boyd CM. Designing health care for the most common chronic condition—multimorbidity. JAMA. 2012;307(23):2493–2494. https://pmc.ncbi.nlm.nih.gov/articles/PMC4083627/
Alsheikh R, Alharthi S, Alzahrani A, Babtain F, Almalki A, et al. Prevalence of statin-associated muscle symptoms in Saudi Arabia: A cross-sectional study. Int J Gen Med. 2022;15:7817–26. doi:10.2147/IJGM.S378994. https://pmc.ncbi.nlm.nih.gov/articles/PMC9034880/
University of Oxford. New study shows muscle pain not due to statins in over 90% of those taking treatment. 2022 Aug 30. https://www.ox.ac.uk/news/2022-08-30-new-study-shows-muscle-pain-not-due-statins-over-90-those-taking-treatment
Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005 Aug;2(8):e124. https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis? BMJ. 2014;348:g3725. https://www.bmj.com/content/348/bmj.g3725

