Evidence-Based Medicine, Systematic Reviews, and Guidelines in Interventional Pain Management: Part 3: Systematic Reviews and Meta-Analyses of Randomized Trials
In recent years, progress and innovations in healthcare are measured by evidence-based medicine (EBM), systematic reviews, and meta-analyses. A systematic review is defined as "the application of scientific strategies that limit bias by the systematic assembly, critical appraisal, and synthesis of all relevant studies on a specific topic." In contrast, meta-analysis is the statistical pooling of data across studies to generate pooled estimates of effects. Meta-analysis usually is the final step in a systematic review. Systematic reviews and meta-analyses are labor intensive, requiring expertise in both the subject matter and review methodology, and must also follow the rules of EBM, which suggest that a formal set of rules must complement medical training and common sense for clinicians to interpret the results of clinical research effectively. While expertise in the subject matter is crucial, expertise in review methods is also particularly important.

Despite an explosion of systematic reviews and meta-analyses, empiric research on the quality of systematic reviews has shown that not all systematic reviews are truly systematic, having highly variable quality, deficiencies in methodologic assessment of the quality of the included manuscripts, and bias. Even so, systematic review of the literature is currently the best, least biased, and most rational way to organize, cull, evaluate, and integrate the research evidence from the expanding medical and healthcare literature. However, a dangerous discrepancy between the experts and the evidence continues to persist, in part because multiple instruments are available to assess the quality of systematic reviews or meta-analyses.

Steps in conducting systematic reviews include planning, conducting, reporting, and disseminating the results. The Quality of Reporting of Meta-analysis (QUOROM) statement provides a checklist and a flow diagram.
The checklist describes the preferred way to present the abstract, introduction, methods, results, and discussion sections of the report of an analysis. This review describes various aspects of systematic reviews and meta-analyses of randomized trials with a special focus on interventional pain management.

Key words: Randomized trials, pragmatic trials, evidence-based medicine, systematic reviews, meta-analyses, guidelines, bias, interventional pain management, Quality of Reporting of Meta-analysis (QUOROM), Cochrane reviews

Pain Physician 2009; 12:1:35-72

Evidence-Based Medicine, Systematic Reviews, and Guidelines in Interventional Pain Management: Part 3: Systematic Reviews and Meta-Analyses of Randomized Trials

From: 1Pain Management Center of Paducah, Paducah, KY; 2Millennium Pain Center, Bloomington, IL; 3Pacific Coast Pain Management Center, Laguna Hills, CA; and 4Massachusetts General Hospital and Harvard Medical School, Boston, MA.

Dr. Manchikanti is Medical Director of the Pain Management Center of Paducah, Paducah, KY, and Associate Clinical Professor of Anesthesiology and Perioperative Medicine, University of Louisville, Louisville, KY. Dr. Benyamin is President, Millennium Pain Center, and Clinical Associate Professor, Department of Surgery, College of Medicine, University of Illinois, Urbana-Champaign, IL. Dr. Helm is Medical Director, Pacific Coast Pain Management Center, Laguna Hills, CA. Dr. Hirsch is Chief of Minimally Invasive Spine Surgery, Depts. of Radiology and Neurosurgery, Massachusetts General Hospital, and Assistant Professor of Radiology, Harvard Medical School, Boston, MA.

Address correspondence: Laxmaiah Manchikanti, MD, 2831 Lone Oak Road, Paducah, Kentucky 42003. E-mail: email@example.com

Disclaimer: There was no external funding in the preparation of this manuscript. Conflict of interest: None.
Manuscript received: 12/5/2008. Accepted for publication: 12/19/2008. Free full manuscript: www.painphysicianjournal.com

Laxmaiah Manchikanti, MD1, Ramsin Benyamin, MD2, Standiford Helm II, MD3, and Joshua A. Hirsch, MD4

Pain Physician 2009; 12:35-72. ISSN 1533-3159
Evidence-based medicine (EBM), systematic reviews, meta-analyses, and clinical guidelines in medicine in general and in interventional pain management in particular are signs of progress in the effort to keep pace with health care innovations, which continue to grow and constantly add to broader and more complex health care interventions and systems. EBM is considered a shift in medical paradigms, which acknowledges that intuition, unsystematic clinical experience, and pathophysiologic rationale are insufficient grounds for clinical decision-making (1-3). The hierarchy of strength of evidence for treatment decisions varies from N of 1 randomized controlled trials (RCTs) at the top, followed by systematic reviews of randomized trials, and ranging all the way down to unsystematic clinical observations (3). Consequently, systematic reviews of randomized trials take the highest priority, as N of 1 RCTs are extremely rare. Systematic reviews and meta-analyses are increasingly popular evidence-based tools and are often used to answer complex research questions across many different research domains (4,5).

A systematic review is defined as "the application of scientific strategies that limit bias by the systematic assembly, critical appraisal and synthesis of all relevant studies on a specific topic" (5-7). Systematic reviews are labor intensive and require expertise in both the subject matter and review methods. Systematic reviewers must follow the rules of EBM, which suggest that a formal set of rules must complement medical training and common sense for clinicians to interpret the results of clinical research effectively. Thus, knowing the tools of evidence-based practice is necessary but not sufficient for delivering the highest quality of patient care. Consequently, expertise in one area or another is not enough and may lead to inaccurate conclusions, in turn leading to inappropriate applications of the results (8-10). While expertise in the subject matter is crucial, expertise in review methods is also particularly important.

Meta-analysis, in contrast to a systematic review, is the statistical pooling of data across studies to generate a summary (pooled estimates of effects) (11-13). Generally, a meta-analysis is the final step in a systematic review (11). A meta-analysis should ideally start with an unbiased systematic review that incorporates articles chosen using predetermined inclusion criteria. However, sometimes meta-analyses are done without an initial systematic review. While meta-analysis and systematic review are not synonymous (4,11,12), they have many similarities and represent a continuum. Systematic reviews and meta-analyses are considered to be the best sources of evidence (12-14). Consequently, systematic reviews and meta-analyses provide clinicians, researchers, policy-makers, and patients with a synthesis of an unmanageable and exponentially increasing number of manuscripts by linking and correlating huge amounts of information with identification of beneficial or harmful interventions. Numerous organizations, private and public, for-profit and not-for-profit, have been involved in evidence synthesis (15-37).

Health care providers and other decision-makers depend on systematic reviews and meta-analyses as information resources in which bias has been reduced by the systematic identification, appraisal, synthesis, and, if relevant, statistical aggregation of all relevant studies on a specific topic according to a predetermined and explicit method (8,16,25,26,38-53). However, like any research enterprise, particularly one that is observational, systematic reviews and meta-analyses of evidence can be flawed. In a 1987 survey of 86 English language meta-analyses (38), which assessed each publication on 23 items from 6 content areas considered important in the conduct and reporting of meta-analyses of randomized trials, only 24 (28%) of the 86 meta-analyses reported that all 6 content areas had been addressed. An updated survey, which included subsequently published meta-analyses, showed little improvement in the rigor with which they were reported (39). Moher and Tricco (40) described issues related to the conduct of systematic reviews, with a focus on the nutrition field, and made recommendations for improving systematic review conduct. They found multiple variations in the systematic reviews, for example, among 4 systematic reviews examining the cardiovascular effects of vitamin E supplements (54-57). Surprisingly, all the systematic reviews had similar questions, even though variations were apparent, such as one review focusing on effectiveness (55), another focusing on efficacy (56), one review searching multiple databases (56), and another (54) searching only one database. They also utilized different inclusion and exclusion criteria, had language limitations, and outcome differences were also evident. The number of studies in these systematic reviews varied from 7 to 84, yet consistency was observed in 3 of these systematic reviews, with no association between vitamin E and any cardiovascular endpoint (54-56). However, the fourth review (57) conducted a dose-response analysis, in which high doses of vitamin E were shown to significantly increase the risk of all-cause mortality by 9% to 14%. Ultimately, 2 of the systematic reviews (54,56) concluded that vitamin E had no benefit with respect to cardiovascular events, one systematic review (55) concluded that vitamin E had neither benefit nor harm with respect to cardiovascular events, and the fourth review (57) concluded that there was a dose-response relation between vitamin E and increased risk of all-cause mortality, concluding that vitamin E at high doses is harmful.

Several publications have described the science of reviewing research (44,57), differences among narrative reviews, systematic reviews, and meta-analyses (44), and how to carry out (8,25,27), critically appraise (58-62), and apply (28) systematic reviews and meta-analyses in practice. Due to the expanding nature of systematic reviews and meta-analyses, multiple publications and guidelines have been published (11,43). The purpose of this report is to provide guidance for clinical research for the interventional pain physician by focusing on the methodology of conducting a systematic literature review and meta-analysis.

1.0 What Is the Terminology of Reviews?

The terminology used to describe systematic reviews and meta-analyses has evolved over time. There are multiple types of reviews and analyses available in the medical literature. Other types of reviews, such as narrative reviews, do not use explicit methods. A systematic review consists of a clearly formulated question and explicit methods to identify, select, and critically appraise relevant research, and then collects and analyzes the data from the studies that are included in the review. A meta-analysis is the use of statistical techniques in a systematic review, which integrates the results of included studies.
Thus, a systematic review does not necessarily include a meta-analysis; a topic may be systematically reviewed alone or in combination with meta-analysis. All other types of reviews may be susceptible to bias (63). A properly conducted systematic review or meta-analysis is much more resource and labor intensive than a narrative review (64). Table 1 illustrates differences between a systematic review and the methods of the other types of reviews (11).

Table 1. Comparison of traditional and systematic reviews.

Formulation of the question
  Traditional, narrative reviews: Usually address broad questions.
  Systematic reviews: Usually address focused questions.

Methods section
  Traditional, narrative reviews: Usually not present, or not well described.
  Systematic reviews: Clearly described, with pre-stated criteria about participants, interventions, and outcomes.

Search strategy to identify studies
  Traditional, narrative reviews: Usually not described; mostly limited by reviewers' abilities to retrieve relevant studies; usually not reproducible and prone to selective citation.
  Systematic reviews: Clearly described and usually exhaustive; transparent, reproducible, and less prone to selective citation.

Quality assessment of identified studies
  Traditional, narrative reviews: Usually all identified studies are included without explicit quality assessment.
  Systematic reviews: Only high-quality studies are included using pre-stated criteria; if lower-quality studies are included, the effects of this are tested in subgroup analyses.

Data extraction
  Traditional, narrative reviews: Methods usually not described.
  Systematic reviews: Usually undertaken by more than one reviewer onto pre-tested data forms; attempts often made to obtain missing data from authors of primary studies.

Data synthesis
  Traditional, narrative reviews: Qualitative description employing the vote counting approach, where each included study is given equal weight, irrespective of study size and quality.
  Systematic reviews: Meta-analysis assigns higher weights to effect measures from more precise studies; pooled, weighted effect measures with confidence limits provide power and precision to results.

Heterogeneity
  Traditional, narrative reviews: Usually dealt with in a narrative fashion.
  Systematic reviews: Dealt with by graphical and statistical methods; attempts are often made to identify sources of heterogeneity.

Interpreting results
  Traditional, narrative reviews: Prone to cumulative systematic biases and personal opinion.
  Systematic reviews: Less prone to systematic biases and personal opinion.

Source: Pai M et al. Systematic reviews and meta-analyses: An illustrated, step-by-step guide. Natl Med J India 2004; 17:86-95 (11).
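The weighted pooling that Table 1 attributes to meta-analysis (higher weights to effect measures from more precise studies) is most simply illustrated by a fixed-effect, inverse-variance model. The sketch below is illustrative only; the function name and the three "trial" results are hypothetical, not drawn from the studies discussed in this review:

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance (fixed-effect) pooling of study-level effect estimates.

    effects: per-study effect sizes (e.g., log odds ratios)
    std_errors: their standard errors
    Returns (pooled effect, standard error of the pooled effect).
    """
    # Each study is weighted by its precision (1 / variance),
    # so larger, more precise trials contribute more to the pooled estimate.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios from three small trials of the same intervention
effects = [-0.4, -0.1, -0.3]
std_errors = [0.25, 0.2, 0.4]
estimate, se = fixed_effect_pool(effects, std_errors)
ci_95 = (estimate - 1.96 * se, estimate + 1.96 * se)  # 95% confidence limits
```

Note that the pooled standard error is smaller than that of any single trial, which is precisely the gain in power and precision the table describes.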
2.0 An Introduction to Systematic Reviews

The history of synthesizing research is inextricably bound up in the history of EBM, the global movement to use the best evidence about what does and does not work in health care. James Lind, a Scottish naval surgeon, who is credited with having produced one of the early records of a scientific trial and having written one of the first systematic reviews of evidence, provides modern medicine with the history of systematic reviews (65,66). On board the Salisbury on May 20, 1747, Lind (66) took 12 patients with scurvy, whose cases "were as similar as I could have them." He divided them into 6 groups of 2 and administered different treatments to each pair of sufferers. The 6 treatments were cider, elixir vitriol, vinegar, seawater, a combination of oranges and lemons, and a mixture of garlic, mustard seed, and balsam of Peru. Six days later, Lind's findings were clear. "The result of all my experiments was that oranges and lemons were the most effectual remedies for this distemper at sea" (67). The results were published 6 years later, acknowledging the need to review the existing literature on scurvy systematically and to discard the weaker forms of evidence. Lind (67) wrote, "As it is no easy matter to root out prejudices . . . it became requisite to exhibit a full and impartial view of what had hitherto been published on the scurvy . . . by which the sources of these mistakes may be detected. Indeed, before the subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish." Thus, gathering the published research, getting rid of the "rubbish," and summarizing the best of what remains is essentially the science of systematic reviews.
Through the early decades of the twentieth century, scientists working in diverse areas from environmental air quality to physics and agriculture employed rudimentary techniques of research synthesis. In 1904, Karl Pearson published a landmark review of the evidence about the effects of vaccines against typhoid (68). During the 1960s and 1970s, early systematic review methods were advanced by social scientists (69). Even though the importance of evidence synthesis in medicine was recognized in the 1970s (70), the widespread use of these systematic reviews and meta-analyses did not occur until 2 decades later (71). The stimulating aspect which potentially contributed to this "movement" was evidence that the judgments and opinions of experts were often biased. Thus, the term "systematic review" was coined long before EBM (72). In 1971, Archie Cochrane (70), a British epidemiologist, persuasively advocated the scientific evaluation of commonly used medical therapies through RCTs. By 1979, Cochrane was suggesting that the results of RCTs of the same intervention be systematically summarized. A few years later, in 1984, Richard Light and David Pillemer (73) published the pioneering work in the recent history of research synthesis. Three years later, Cynthia Mulrow (74) delivered her damning assessment of the quality of 50 reviews published in the world's leading medical journals during 1985 and 1986. She concluded that these reviews were often subjective, scientifically unsound, and inefficient, with only one of the 50 reviews clearly specifying methods of identifying, selecting, and validating included information. Subsequently, in 1993, Oxman and Guyatt (75) published their critique of the poor quality of review articles, based on an assessment of 36 published reviews.
In the 1990s, governments in a number of countries started subsidizing systematic reviews, and many health care organizations started producing systematic reviews in the public and private sectors (45,71,76-78). Further, the Agency for Healthcare Research and Quality (AHRQ) and the U.S. Department of Health and Human Services designated research groups in the United States and Canada as evidence-based practice centers (EPCs) (77). These centers conduct "systematic, comprehensive analyses and syntheses of the scientific literature to develop evidence reports and technology assessments on clinical topics that are common, expensive, and present challenges to decision makers" (77). In addition, in 1999, the Health Care Financing Administration (HCFA), now the Centers for Medicare and Medicaid Services (CMS), announced that it would require such systematic reviews of the evidence before making any major national coverage decisions, though this policy, in theory, does not affect the many coverage decisions about therapies made at the regional or state level (78). But in practical terms, regional and state level decisions (local coverage decisions, or LCDs) are based on evidence-based medicine, systematic reviews, and clinical studies.
3.0 An Introduction to Meta-Analyses

Karl Pearson in 1904 summarized and synthesized the results of 11 studies in a landmark review of the effects of vaccine against typhoid, thereby anticipating the development of the meta-analysis, the statistical method used to pool the results of different studies (68). Three years later, in the United States, Joseph Goldberger abstracted and pooled data from 26 of the 44 studies (71). Since its introduction, meta-analysis has been established as an influential branch of health services research, with hundreds of meta-analyses published in the medical literature each year (79). The statistical basis of meta-analysis started in astronomy, with intuition and experience suggesting that the combination of data might be better than attempts to select amongst them (80). In 1976, the psychologist Gene Glass coined the term "meta-analysis" in a paper entitled "Primary, Secondary and Meta-analysis of Research" (69). Meta-analysis is becoming increasingly popular in modern medicine (27,81,82).

Meta-analysis has been credited with overcoming the problem first identified by Pearson (68), that "any of the groups . . . are far too small to allow of any definite opinion being formed at all, having regard to the size of the probable error involved." Even though the size of trials published in medical journals has been increasing ever since 1948, many trials fail to detect, or exclude with certainty, a modest but clinically relevant difference in the effects of 2 therapies. Essentially, small trials may prove contradictory in their conclusions and confuse those seeking guidance. The meta-analytic approach may overcome this problem by combining a number of smaller, but comparable, trials evaluating the same intervention.
Further, meta-analysis may highlight areas where there is a lack of adequate evidence and thus identify where further studies are needed. For example, a period of starvation is common practice after gastrointestinal surgery, but a meta-analysis (83) of RCTs of this practice concluded that giving patients nothing by mouth may do more harm than good, and that a large trial is required to clarify this issue. Meta-analysis offers a sounder basis for subgroup analyses, particularly if they are based on individual participant data (84,85).

4.0 What Is the Quality of Systematic Reviews?

There has been an explosion of systematic reviews and meta-analyses, as shown in Fig. 1. Empiric research on the quality of systematic reviews has shown that not all systematic reviews are truly systematic (16,46), that the quality of systematic reviews is highly variable (44,47), and that the Cochrane reviews, on average, may be more rigorous and better reported than journal reviews (46,48). However, recent studies also have shown deficiencies, with methodological problems, even in Cochrane reviews (49,50). Further, it has been shown that in an evaluation of 240 systematic reviews from journals, only 48% assessed their quality (51); in an evaluation of 480 systematic reviews in DARE, only 52% assessed quality (16); and in an evaluation of 50 systematic reviews on asthma, only 28% reported validity assessment criteria (48). This indicates a lack of evaluation of the quality of primary studies, which is what sets systematic reviews apart from traditional reviews. Further, among meta-analyses, heterogeneity is a common finding (45). Empiric work on meta-analyses also has shown that evaluation of heterogeneity is not universally done and that only approximately 45% to 68% of reviews tested for heterogeneity (16,48,52).
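The heterogeneity testing that so many reviews omit is commonly based on Cochran's Q statistic and the derived I-squared index. A minimal sketch is shown below; the function name and the toy effect sizes are illustrative assumptions, not taken from any of the reviews cited here:

```python
def heterogeneity(effects, std_errors):
    """Cochran's Q statistic and the I^2 index for between-study heterogeneity.

    Q is the precision-weighted sum of squared deviations of each study's
    effect from the fixed-effect pooled estimate; under homogeneity it
    follows a chi-squared distribution with (k - 1) degrees of freedom.
    I^2 expresses the percentage of variation across studies that is
    beyond what chance alone would produce.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

# Two hypothetical trials with sharply conflicting results
q, i2 = heterogeneity([0.0, 1.0], [0.1, 0.1])  # high Q, I^2 near 100%
```

A large Q relative to its degrees of freedom (or a high I-squared) signals that the studies may not be estimating a single common effect, which is exactly the situation the graphical and statistical methods mentioned in Table 1 are meant to expose.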
The finding that results from meta-analyses are not always trustworthy (86-97) has led to research into the numerous ways in which bias may be introduced, and to the development of methods to detect the presence of such bias. Moher et al (86), in an evaluation of the epidemiology and reporting characteristics of systematic reviews, concluded that the quality of their reporting was inconsistent and that readers should not accept systematic reviews uncritically. Delaney et al (87), in a systematic evaluation of the quality of meta-analyses in the critical care literature, concluded that the overall quality of the reports of meta-analyses available to critical care physicians was poor. Consequently, they suggested that physicians should critically evaluate these studies before considering applying their results in clinical practice. McElvenny et al (88), in an evaluation of meta-analyses in occupational epidemiology, concluded that controversy remains over the definition and validity of meta-standardized mortality ratios, heterogeneity in exposure, and multiple other issues. Dixon et al (89) in critical appraisal and assessment of the meth-