The Policy Analysis Process: Evidence-Based Medicine
Evidence-based medicine is not a new concept, but its use is increasingly widespread. The concept has many labels, including evidence-based practice and a more recent offshoot, translational medicine. The term has been attributed to David Sackett and his clinical epidemiology colleagues at McMaster University, who presented it in a series of articles in a Canadian medical journal in the early 1980s. In a 1996 article, they presented a revised definition, stating that evidence-based medicine is “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of the individual patient. It means integrating individual clinical expertise with the best available external clinical evidence from systematic research” (Sackett et al., 1996).
Evidence-based medicine can be implemented as a top-down or bottom-up approach, and this difference can lead to considerable confusion. With the top-down approach, experts form a consensus based on strong empirical evidence, and that consensus is then disseminated as standard guidelines and protocols. With the bottom-up approach, a provider deals directly with the patient, identifying the problem and searching for the best evidence about what to do next. Even when strong, evidence-based protocols are available, the bottom-up process can take into account patient variability, patient preferences and situational factors, and the provider’s expertise. The provider’s expertise may be based on evidence or experience, but usually a combination of both leads to better outcomes.
Resistance to evidence-based medicine often stems from providers’ fears that the top-down approach will deprive them of their autonomy. A further concern is that the top-down approach may be based on statistical analyses involving homogeneous populations and will not be aligned with the needs of the specific patient. For example, randomized clinical trials may select a homogeneous population without secondary diagnoses, whereas many patients typically presenting in a practice will have multiple conditions. Sackett et al. (1996) addressed these concerns by stating:
Evidence-based medicine is not “cookbook” medicine…. External evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all, and if so, how it should be integrated into a clinical decision. (p. 317)
Some providers also are concerned about requiring evidence-based practices for diagnoses, procedures, technologies, and professional settings when there is not an adequate body of research from which to draw conclusions. In medicine, research is often funded by drug and device manufacturers, but it is much more difficult to find funds for research in areas such as mental health, public health, and social services. Policy makers are reluctant to pay for unproven interventions and often are tempted to mandate that programs be evidence based. Such requirements can tie the hands of service providers who must rely on expert opinions unless and until robust research becomes available. Public health, for example, has long believed in the importance of inspecting food establishments, but because foodborne illnesses largely go unreported and public health research is poorly funded, it is difficult to measure the overall effectiveness of restaurant inspections, much less compare the value of different regulatory practices.
9.1 REDUCING VARIATION AND SAVING RESOURCES
Reducing variation in the way clinicians in an organization approach frequently encountered diagnoses clearly has benefits. Standardizing treatment protocols results in less waste of resources and more consistent outcomes. Clinical guidelines are widely available from prestigious organizations as well as local management. However, provider resistance to the top-down approach is common. Farias et al. (2013) reported on locally developed standardized clinical assessment and management plans (SCAMPs) that offer more flexibility when applied to varied and complex patients. With SCAMPs, providers work with clinical guidelines, but the clinicians are free to deviate, provided they report their reasons for doing so. After about 200 patients are treated using a particular SCAMP, provider comments, related cost data, and the most recent literature are reviewed, and the SCAMP is revised. Such a process can be managed locally or applied to a professional network. The authors report that physician compliance is greatly increased by this process.
9.2 CROSSCURRENTS INVOLVED
The rise to prominence of evidence-based medicine is a result of several factors:
• Rapid generation of new and revised scientific information
• Pressures on clinicians to conserve time
• Payer and patient concerns about the rising cost of care
• Concerns of all parties about the relatively high rate of medical errors
• Long lead times needed to adopt new information and change practice behaviors
• Commercial free speech and the viral movement of information and misinformation on the Web, which means more information—good and bad—in the hands of the public
Both biased and unbiased information is available to practitioners and to the public. One sees advertisement after advertisement, especially about newly defined health problems and new treatments. These are almost all paid for by someone who wants to change provider and patient behaviors. Before changing a clinical practice, however, the provider has to vet all these sources of information and revise his or her script for dealing with that clinical situation and with the questions put forward by patients and their families in the face of those pressures. Providers could spend all their time following the literature and do little else. However, both the downward pressure on fees and the second-guessing of providers by payers force the provider to pay attention to secondary information sources to discover the latest consensus and assess what is clinically appropriate for the patient at hand. In some cases, such as central line infections, what was once an acceptable risk is now considered a medical error, and the provider has to bear more of the risk. The individual practitioner’s judgment is less and less likely to be free from scrutiny, and yet that practitioner must operate in an environment where information can seldom be taken at face value.
The plight of the policy analyst is not much different. He or she will have to rely more on medical experts for interpretation and recommendations. However, the information coming in is likely to be transmitted by individuals with distinct points of view, if not distinct financial interests. Policy analysts, too, are visited by lobbyists and consumer advocates. They have access to the same journals and press reports and have to make evaluations with respect to scientific merit and economic impact. Boden and Epstein (2006) warn about the fallacies of “policy-based evidence” where the advocates (political or social) conduct or cite research that begins with a policy solution and generates only arguments supporting that alternative.
9.3 THE PROCESS OF EVIDENCE-BASED ANALYSIS
Evidence-based medicine has its roots in clinical epidemiology. It is an analysis based on:
1. A problem definition, which enables the analyst to focus on clinical questions and clinical information.
2. An effective search of the available, relevant information for evidence concerning the clinical question.
3. Assessing the level of the evidence and its validity and selecting the best available answer for implementation.
4. Trying the approach in one’s clinical practice.
5. Evaluating the performance of the new or revised clinical response, and consciously incorporating positive results into one’s expertise set.
6. Adapting this knowledge to the needs of the specific patient.
Not surprisingly, this parallels the approaches used for continuous quality improvement. Evidence-based medicine rests on continuous personal, professional, and/or organizational learning. Obviously, it is very much dependent on the participation of the provider, although chronic disease patients also can become very good at it in their area of interest.
Clinical Decision Making
When a clinician and a patient are searching for the best treatment for a particular situation in real time, they will want to consider only treatments that are efficacious. They will most likely select those that are sufficiently effective and offer sufficient value to warrant their use. The tragedy of American medicine is the tendency to want to do something, and often the most technologically advanced and most expensive thing (Hadler, 2013). Hadler (2013) noted that much of American medicine incorporates treatments that have proven efficacy, but that do not have a proven significant effect on clinical outcome. They often relate only to risk factors or to a very limited segment of the population with a specific diagnosis, and yet they are widely applied.
Levels of Evidence
The evidence tools used by analysts are often presented as an evidence pyramid or hierarchy of evidence. Anyone who goes on an Internet search engine looking for images related to “levels of evidence” will be deluged with graphical representations, mostly pyramidal, expressing pretty much the same rankings. The top of the pyramid represents the most reliable studies, which are the fewest in number. At the bottom are the least reliable ones, the anecdotes, personal opinions (expert or otherwise), and case reports, which are greatest in number. The pyramid may have anywhere from 4 to 10 levels. What is counterintuitive about these pyramids is that expert opinions are at or near the bottom and randomized controlled trials (often referred to as the “gold standard”) are in the middle. At the top are systematic studies that integrate findings from multiple studies. An example would be a Cochrane Collaboration review. Below that would be meta-analyses involving multiple studies, followed by syntheses of a limited number of studies. Then come randomized controlled trials. Below that would likely be cohort studies, followed by case-control studies. The Agency for Healthcare Research and Quality (AHRQ) has boiled this down to three categories of strength of evidence (Table 9-1).
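The ranking logic of such a pyramid can be captured in a few lines of code. A minimal sketch in Python, using one common six-level ordering; the labels and the number of levels are illustrative, since published pyramids vary:

```python
# Illustrative only: one common ordering of the evidence pyramid,
# from most reliable (top) to least reliable (bottom).
EVIDENCE_PYRAMID = [
    "systematic review",            # e.g., a Cochrane Collaboration review
    "meta-analysis",
    "randomized controlled trial",
    "cohort study",
    "case-control study",
    "case report / expert opinion",
]

def more_reliable(a: str, b: str) -> str:
    """Return whichever of two study types sits higher on the pyramid."""
    return min(a, b, key=EVIDENCE_PYRAMID.index)

print(more_reliable("cohort study", "randomized controlled trial"))
# randomized controlled trial
```

The counterintuitive point in the text shows up directly in the ordering: expert opinion sits below even a single case-control study.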
Some representations include clinical guidelines, which tend to be reliable but vary in their underlying levels of evidence. They have been known to come from reviews and meta-analyses, as well as from a process known as GOBSAT, which stands for “Good Old Boys Sitting Around Talking.” A useful classroom exercise would be to call up the available set of pyramidal images and pick one for use in future class discussions.
Example of the Preventive Services Task Force
The U.S. Preventive Services Task Force (USPSTF) has developed its own methodology for evaluating proposed recommendations. Preventive services have a long-term and substantive impact on the cost of care. They represent an area where guidelines are helpful because most clinicians have to rely on them rather than clinical experience. However, the guidelines can be controversial, as demonstrated by the recent debates over recommended reductions in early breast cancer screenings and the PSA test for prostate cancer. The Task Force recommendations also carry a great deal of weight with payers who are expected to include those specific services in their coverage.
Table 9-1 Three Categories for Rating the Strength of Evidence at AHRQ
Drawing on elements of these established systems, the Innovations Exchange uses three categories to provide meaningful distinctions in assessing the strength of the link between the innovation and the observed results:
Strong: The evidence is based on one or more evaluations using experimental designs based on random allocation of individuals or groups of individuals (e.g., medical practices or hospital units) to comparison groups. The results of the evaluation(s) show consistent direct evidence of the effectiveness of the innovation in improving the targeted health care outcomes and/or processes, or structures in the case of health care policy innovations.
Moderate: While there are no randomized, controlled experiments, the evidence includes at least one systematic evaluation of the impact of the innovation using a quasi-experimental design, which could include the nonrandom assignment of individuals to comparison groups, before-and-after comparisons in one group, and/or comparisons with a historical baseline or control. The results of the evaluation(s) show consistent direct or indirect evidence of the effectiveness of the innovation in improving targeted health care outcomes and/or processes, or structures in the case of health care policy innovations. However, the strength of the evidence is limited by the size, quality, or generalizability of the evaluations, and thus alternative explanations cannot be ruled out.
Suggestive: While there are no systematic experimental or quasi-experimental evaluations, the evidence includes nonexperimental or qualitative support for an association between the innovation and targeted health care outcomes or processes, or structures in the case of health care policy innovations. This evidence may include noncomparative case studies, correlation analysis, or anecdotal reports. As with the category above, alternative explanations for the results achieved cannot be ruled out.
If the available qualitative and quantitative information is insufficient to place the innovation in one of the three categories above, the activity fails to meet the minimum inclusion criterion for evidence, and therefore is not eligible for inclusion as an Innovation Profile in the AHRQ Health Care Innovations Exchange. It may, however, qualify for inclusion as an Innovation Attempt.
Table 9-2 presents an example of the system the Task Force uses to grade recommendations. There are two categories of grades: one for “Suggestions for Practice” and one for “Certainty of Net Benefits.” Note the introductory paragraph about the change in the definition of Grade C.
Biases in Evidence Gathering
The gold standard for gathering new evidence is the randomized, controlled clinical trial, which uses a specifically selected population randomly divided into a control group and a treatment group. The control group, which may receive a placebo or other sham intervention or the current normal treatment, is compared to the treatment group. The design objective parallels the economist’s holy grail of “all other things being equal.” Concerns about bias are likely to be related to the relevance of the sample for clinical decision making or the reporting of the results. Only a very limited number of variables can be controlled directly in such a study.
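The core of the design is the random split itself. A minimal illustration of one simple allocation scheme, a balanced shuffle-and-split (real trials typically use more sophisticated methods such as blocked or stratified randomization):

```python
import random

# Minimal sketch of random allocation in a two-arm trial.
def allocate(participants: list[str], seed: int = 0) -> tuple[list[str], list[str]]:
    """Randomly split a sample into control and treatment arms of equal size."""
    pool = participants[:]                # leave the caller's list untouched
    random.Random(seed).shuffle(pool)     # seeded for reproducibility
    half = len(pool) // 2
    return pool[:half], pool[half:]       # (control, treatment)

control, treatment = allocate([f"p{i}" for i in range(8)])
assert len(control) == len(treatment) == 4
```

Because assignment ignores every characteristic of the participants, measured or not, the two arms differ only by chance, which is what licenses the "all other things being equal" comparison.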
Table 9-2 U.S. Preventive Services Task Force Recommended Grade Definitions
What the Grades Mean and Suggestions for Practice
Describing the strength of a recommendation is an important part of communicating its importance to clinicians and other users. Although most of the grade definitions have evolved since the USPSTF first began, none has changed more noticeably than the definition of a C recommendation, which has undergone three major revisions since 1998. Despite these revisions, the essence of the C recommendation has remained consistent: at the population level, the balance of benefits and harms is very close, and the magnitude of net benefit is small. Given this small net benefit, the USPSTF has either not made a recommendation “for or against routinely” providing the service (1998), recommended “against routinely” providing the service (2007), or recommended “selectively” providing the service (2012). Grade C recommendations are particularly sensitive to patient values and circumstances. Determining whether or not the service should be offered or provided to an individual patient will typically require an informed conversation between the clinician and patient.
Grade A: The USPSTF recommends the service. There is high certainty that the net benefit is substantial.
Suggestion for practice: Offer or provide this service.

Grade B: The USPSTF recommends the service. There is high certainty that the net benefit is moderate or there is moderate certainty that the net benefit is moderate to substantial.
Suggestion for practice: Offer or provide this service.

Grade C: The USPSTF recommends selectively offering or providing this service to individual patients based on professional judgment and patient preferences. There is at least moderate certainty that the net benefit is small.
Suggestion for practice: Offer or provide this service for selected patients depending on individual circumstances.

Grade D: The USPSTF recommends against the service. There is moderate or high certainty that the service has no net benefit or that the harms outweigh the benefits.
Suggestion for practice: Discourage the use of this service.

Grade I: The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of the service. Evidence is lacking, of poor quality, or conflicting, and the balance of benefits and harms cannot be determined.
Suggestion for practice: Read the clinical considerations section of the USPSTF Recommendation Statement. If the service is offered, patients should understand the uncertainty about the balance of benefits and harms.
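Because each USPSTF letter grade (A, B, C, D, and I) maps onto a suggested action, the table lends itself to a simple lookup. A sketch, with the suggestions paraphrased from Table 9-2:

```python
# USPSTF letter grades condensed into a lookup table (paraphrased;
# see Table 9-2 for the full definitions).
GRADE_SUGGESTIONS = {
    "A": "Offer or provide this service.",
    "B": "Offer or provide this service.",
    "C": "Offer or provide this service for selected patients "
         "depending on individual circumstances.",
    "D": "Discourage the use of this service.",
    "I": "If the service is offered, patients should understand the "
         "uncertainty about the balance of benefits and harms.",
}

def suggestion_for(grade: str) -> str:
    """Return the suggested practice for a USPSTF letter grade."""
    return GRADE_SUGGESTIONS[grade.upper()]
```

Note that A and B share the same suggestion; they differ only in the certainty and magnitude of the net benefit.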
Levels of Certainty Regarding Net Benefit
Level of Certainty*

High: The available evidence usually includes consistent results from well-designed, well-conducted studies in representative primary care populations. These studies assess the effects of the preventive service on health outcomes. This conclusion is therefore unlikely to be strongly affected by the results of future studies.

Moderate: The available evidence is sufficient to determine the effects of the preventive service on health outcomes, but confidence in the estimate is constrained by such factors as:
• The number, size, or quality of individual studies.
• Inconsistency of findings across individual studies.
• Limited generalizability of findings to routine primary care practice.
• Lack of coherence in the chain of evidence.
As more information becomes available, the magnitude or direction of the observed effect could change, and this change may be large enough to alter the conclusion.

Low: The available evidence is insufficient to assess effects on health outcomes. Evidence is insufficient because of:
• The limited number or size of studies.
• Important flaws in study design or methods.
• Inconsistency of findings across individual studies.
• Gaps in the chain of evidence.
• Findings not generalizable to routine primary care practice.
• Lack of information on important health outcomes.
More information may allow estimation of effects on health outcomes.
* The USPSTF defines certainty as “likelihood that the USPSTF assessment of the net benefit of a preventive service is correct.” The net benefit is defined as benefit minus harm of the preventive service as implemented in a general, primary care population. The USPSTF assigns a certainty level based on the nature of the overall evidence available to assess the net benefit of a preventive service.
Source: Reproduced from: U.S. Preventive Services Task Force Grade Definitions (2008, May). Retrieved from www.uspreventiveservicestaskforce.org/uspstf/grades.htm
More recently, there has been support, usually governmental, for comparative effectiveness studies, which tend to be observational in nature. They attempt to compare treatments under field conditions. However, the inputs and conditions of such studies are not as tightly controlled as in randomized, controlled clinical trials. They are considered suitable for hypothesis generation, but not as proof of efficacy. In the case of pharmaceuticals that have already been tested for efficacy, they are perhaps more meaningful than they are in the case of new procedures and other interventions. Comparative effectiveness studies have the advantage of being able to use large, relatively available databases, lowering the cost and duration of the study. Potential sources of bias include sample variety; practice variation; and unknown, uncontrolled variables. Because they use data that can be associated with charges, payments, and costs, as well as safety outcomes, they are critical to the measurement of that currently fashionable construct—value.
Clinician experience enters into decision making as well. However, a clinician’s experience has likely been influenced by past training, marketing efforts, event importance, recency effects, payer contracts, and perhaps even personal economic interests. Off-label use is often based on clinician experience, and it often is influenced by both legal and illegal promotion efforts.
Patient observation is important, but it is also subject to some of the same biases as clinician experience. Patients also typically lack observational training and have less experience (often based on a sample of one). Patients living with long-term major chronic problems often become very good observers and tie into observational networks that amplify their limited experience. They may even have more experience than the average clinician. They are certainly an important source of the socioeconomic and psychosocial support information that is crucial to effective community-based care. Marketing efforts can shape patient attitudes and expectations in ways unsupported by effectiveness evidence.
9.4 CONSTRAINTS ON VARIABLES USED IN ANALYSIS OF EVIDENCE
Congress has constrained the use of certain economic and outcome valuations. These exclusions are presented in the case at the end of this chapter. However, the USPSTF continues to specify the following outcome measures as analysis inputs:
• Deaths, where relevant
• Important health outcomes, such as strokes avoided, or cancers caused
• Quality-adjusted life-years, if possible
• Harms (adverse events/states)
The Task Force’s experiences show the ambivalence and contentiousness that surrounds the rigorous use of evidence in the health sector. Remember here that one person’s waste is another person’s income.
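Of the measures listed above, the quality-adjusted life-year is the most directly computational: each period of life is weighted by a utility between 0 (death) and 1 (perfect health), and the weighted years are summed. A small worked example with invented utility weights:

```python
# Worked QALY example; the utility weights below are invented
# for illustration, not drawn from any published valuation study.
def qalys(states: list[tuple[float, float]]) -> float:
    """Sum utility-weighted years over a sequence of (utility, years) states."""
    return sum(utility * years for utility, years in states)

# Four years in good health (utility 0.75), then six years in a
# diminished state (utility 0.5):
total = qalys([(0.75, 4.0), (0.5, 6.0)])
print(total)  # 6.0
```

A treatment is then compared on the QALYs it adds relative to its cost, which is exactly the kind of valuation Congress has constrained, as the chapter-end case discusses.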
9.5 THE EXAMPLE OF NICE
The National Health Service (NHS) in the United Kingdom has long had a process for assessing evidence and developing guidelines through its independent National Institute for Health and Clinical Excellence (NICE).
“We are internationally recognized for the way in which we develop our recommendations, a rigorous process that is centered on using the best available evidence and includes the views of experts, patients and caregivers, and industry,” notes the institute’s website. “We do not decide on the topics for our guidance and appraisals. Instead, topics are referred to us by the Department of Health. Disease burden, resource implications, practice variations, and other factors are considered when determining topics to address. Our guidance is then created by independent and unbiased advisory committees.” (NICE, 2013a)
Other countries have equivalent organizations.
9.6 DECISION AIDS
Evidence-based medicine uses guidelines and protocols, but it also strives to ensure that patients are part of the decision-making process. Current information technology can be used to support the dissemination of decision aids for both patients and caregivers. Sooner or later these applications will become part of the national information technology standard for meaningful use, which is being defined and implemented in stages.
Section 3506 of the Affordable Care Act (ACA) authorizes a program under the new Center for Medicare & Medicaid Innovation to develop decision aids that will:
… facilitate collaborative processes between patients, caregivers or authorized representatives, and clinicians that engages the patient, caregiver or authorized representative in decision-making, provides patients, caregivers or authorized representatives with information about trade-offs among treatment options, and facilitates the incorporation of patient preferences and values into the medical plan.
The term “preference sensitive care” means medical care for which the clinical evidence does not clearly support one treatment option such that the appropriate course of treatment depends on the values of the patient or the preferences of the patient, caregivers or authorized representatives regarding the benefits, harms and scientific evidence for each treatment option, the use of such care should depend on the informed patient choice among clinically appropriate treatment options. (p. 469)
States can support this movement through mandates or incentives for the implementation of decision aids (King & Moulton, 2013).
This movement is driven, in part, by discomfort with the authoritarian nature of the clinician’s typical role and, in part, by the knowledge that when patients are given information and choices the observed outcomes are better and the costs of care are often lower (Hibbard, Greene, & Overton, 2013; Veroff, Marr, & Wennberg, 2013). Cost is a major driver of patient-driven health care.
The February 2013 issue of Health Affairs was partly devoted to the themes of patient engagement and patient activation. The previous quote from the ACA highlights patients’ twin roles as active and informed consumers and as involved clinical decision makers. The patient may be involved as an individual, in collaboration with the payer, or in collaboration with the physician. Table 9-3 provides examples of activities associated with each of the roles.
Table 9-3 Examples of Engaged Patient Activities
Engaged Patient Roles: Informed Consumer and Clinical Decision Maker

Acting as an individual
• Informed consumer: use Hospital COMPARE; use consumer satisfaction databases; do comparison shopping; ask local friends and experts.
• Clinical decision maker: consult guideline databases, advocacy sites, and general information sites; ask others about their experiences.

In collaboration with the payer
• Informed consumer: get data on in-network providers; discuss with case managers; compare on exchanges.
• Clinical decision maker: access Web portals; discuss with case managers; review literature and brochures; make inquiries about coverage.

In collaboration with the physician
• Informed consumer: agree on appropriate entry mode; discuss rates and fees; observe and/or discuss philosophy and attitudes on cost and aggressiveness of treatment.
• Clinical decision maker: study journal literature; review guidelines together; use joint decision aids; talk through behaviors and preferences.
Source: Reproduced from: Patient Outcomes Research Teams (PORTs): Managing Conflict of Interest. (1991). Institute of Medicine. Washington, DC: National Academy Press, p. 21. Courtesy of the National Academies Press, Washington, DC.
Patients acting alone, or family members or other advocates acting on their behalf, can engage in information searches and decision making by consulting any number of sources of information. Comparative databases on quality and cost for hospitals and other providers are available. Patients can talk to their neighbors, professionals, or local experts.
Many websites are available that are devoted to specific symptoms and diagnoses, such as the American Diabetes Association (www.diabetes.org) and Patients Like Me (www.patientslikeme.com), as well as the medical literature and databases of governmental and professional guidelines (domestic and international). These can provide access to reports backed by evidence that falls along the evidence hierarchy, including reports from sources with major potential biases, such as television ads and vendor websites.
Then there are information providers fulfilling the role of honest broker, such as the insurance exchanges authorized under the ACA and administered either by the states or the federal government. Many payers maintain Web portals where enrollees can find tips on certain diagnoses and conditions, and they may also provide case managers for patients with certain chronic diseases or catastrophic illnesses. Again, the validity and level of evidence can vary a great deal.
The collaborative relationship between the provider and the patient envisioned in this movement is relatively new. So far, demonstration efforts have identified major barriers to widespread adoption, especially the drain on provider time, lack of payment for the time used, physician perceptions about patients’ ability to understand evidence, lack of relevant information, and patients’ preferences for a provider who acts as an authority figure (Lin et al., 2013; Yergian et al., 2013). A study of the use of Web-based decision aids in the NHS indicated that “clinicians did not feel the need to refer patients to use decision support tools, web-based or not, and, as a result, felt no requirement to change existing practice routines” (Elwyn et al., 2012). A review of the literature about the advantages and disadvantages of shared decision making and its effects on outcomes in mental illness services is provided in SAMHSA (2011).
An interesting finding in the research on this change in the health care culture is that although physicians have the greatest influence over patient behavior, other clinic staff and off-site personnel can contribute successfully to the support of shared decision making (Veroff et al., 2013; Courneya, Palattao, & Gallagher, 2013).
To support collaboration, the patient and the physician must have an understanding of how, when, and where the patient will receive services, such as by telephone, email, in the physician’s office, or at an alternative service site. They must have a discussion of their philosophies and attitudes toward issues such as aggressiveness of treatment and costs of care. Some patients will want to control costs, whereas others will be uncomfortable when clinicians focus on costs in clinical decision making (Sommers et al., 2013). However, cost will play an increasingly important role in informing consumer behavior in the future.
9.7 DETERMINING VALUE
Most randomized, controlled clinical trials are conducted on proposed prescription drugs for which patent protections provide a potential monopoly. In such trials, U.S. researchers only need to establish a pharmaceutical’s safety and efficacy. By comparison, medical devices can piggyback on the testing of similar devices and procedures that do not require licensing. Manufacturers usually support studies in which a placebo is the control. This implies that the product being tested just has to be better than doing nothing at all. Determining whether a new product has value greater than that of existing products requires a comparative effectiveness study that includes cost comparisons. Because of the reluctance of manufacturers to conduct such studies, the government has had to step in. This is a relatively recent development. In 1989–1990, the predecessor to AHRQ issued a series of contracts for Patient Outcomes Research Team (PORT) studies that were quite controversial. A list of the initial studies is shown in Table 9-4. These studies tended to evaluate high-volume and/or high-cost interventions, and the concept was not popular with the provider community. Some of the issues addressed by these studies are still open to debate. In fact, after the publication of a study showing that it did not matter what type of provider treated acute (without sciatica) low back pain, surgeons almost succeeded in getting Congress to defund the agency. By 2008, however, concerns about value had become so great that the American Recovery and Reinvestment Act (ARRA) contained significant funding for comparative research, and the ACA established the new Patient-Centered Outcomes Research Institute (PCORI).
Table 9-4 Patient Outcomes Research Teams Funded as of October 1990

| Title of Project | Principal Investigator | Institution |
| --- | --- | --- |
| Assessing Therapies for Benign Prostatic Hypertrophy and Localized Prostate Cancer | John E. Wennberg | |
| The Consequences of Variation in Treatment for Acute Myocardial Infarction | Barbara J. McNeil | Harvard Medical School |
| Back Pain Outcome Assessment Team | Richard A. Deyo | University of Washington |
| Variations in Cataract Management: Patient and Economic Choice | Earl P. Steinberg | The Johns Hopkins University |
| Assessing and Improving Outcomes: Total Knee Replacement | Deborah A. Freund | |
| Outcome Assessment Program in Ischemic Heart Disease | David B. Pryor | |
| Outcome Assessment of Patients with Biliary Tract Disease | J. Sanford Schwartz | |
| Analysis of Practices: Hip Fracture Repair and Osteoarthritis | James I. Hudson | University of Maryland |
| Variations in the Management and Outcomes of Diabetes | | New England Medical Center |
| Assessment of the Variation and Outcomes of Pneumonia | Wishwa N. Kapoor | University of Pittsburgh |
9.8 TRANSLATIONAL MEDICINE: ADOPTION, ADAPTATION, AND COMPLIANCE
Many observers are concerned about how slowly the results of basic research make their way into practice and about how long it takes for practice innovations to be evaluated and adopted. In addition, some experts suggest that clinician compliance with published guidelines is only about 50%, although no one knows what the right figure actually is, because many patients do not match the conditions envisioned by the guidelines and protocols. Because adoption rates affect both the quality and cost of health care, there has been increasing interest in linking basic science and clinical practice more effectively. The attempt to build a bridge between the silos of research and practice is called translational research, and medical researchers involved in such multidisciplinary work have defined their new field as translational medicine. Interest in this interface increased sharply after the National Institutes of Health began funding such efforts through the National Center for Advancing Translational Sciences.
The field of translational medicine is still loosely defined. A consensus report from the Evaluation Committee of the Association for Clinical Research Training proposed a definition of translational research. According to the report, translational research seeks to improve the public’s health by supporting the multidirectional integration of three types of research: basic, patient oriented, and population based. The definition specifies three types of translational research:
T1 research expedites the movement between basic research and patient-oriented research that leads to new or improved scientific understanding or standards of care. T2 research facilitates the movement between patient-oriented research and population-based research that leads to better patient outcomes, the implementation of best practices, and improved health status in communities. T3 research promotes interaction between laboratory-based research and population-based research to stimulate a robust scientific understanding of human health and disease. (Rubio et al., 2010)
Current efforts in translational medicine range from analyzing gene sequences associated with treatment outcomes to methods of improving patient compliance. The multidisciplinary teams may involve a wide array of members. It remains to be seen how this type of research will be accepted in academic medicine because it often involves observational studies and critical input from many disciplines. Observational studies involving expanding registries and clinical databases show great promise but face a number of barriers (Lauer & D’Agostino, 2013; Fleurence, Naci, & Jansen, 2010).
The policy analyst will rub elbows with team members with skills in clinical epidemiology and experience with care delivery. The use of evidence-based medicine is increasing, and the analyst must be knowledgeable about its concepts and terms. The trained analyst will recognize that it follows the logical paradigm adopted by systems analysts, industrial engineers, and quality and safety improvement managers—that is, the scientific method. Much of the actual research and analysis will be carried out by health professionals in the burgeoning cottage industry of providing meta-analyses, summary reviews, protocols, and guidelines. It is important to know and understand the hierarchy of evidence and stay current with the rapidly expanding development of decision aids and with studies about how to activate and engage patients in clinical decision making. Although political and economic interests will try to seize the process of policy analysis and warp it in their favor, the antidote is to maintain high standards for validity and quality of evidence throughout. Policy makers in the United States are increasingly demanding such professionalism behind the scenes, if not in their public discourse.
Case 9 Constraints of the ACA on Evidence-Based Medicine
The ACA expanded the emphasis on evidence-based medicine begun in the 2009 stimulus act and established the Patient-Centered Outcomes Research Institute (PCORI) as an independent, nonprofit corporation. However, the same legislation limited the ways the Institute’s research could be used within the Department of Health and Human Services. There clearly was concern among federal lawmakers that these research findings would find their way directly into the workings of the even more controversial Medicare Payment Advisory Commission.
Pearson and Bach (2010) noted:
Under current law and because of years of precedent, Medicare generally covers any treatment that is deemed “reasonable and necessary,” regardless of the evidence on the treatment’s comparative effectiveness or its cost in relation to other treatments. Likewise, with only rare exceptions, Medicare does not use comparative effectiveness information to set payment rates. Instead, it links reimbursement in one way or another to the underlying cost of providing services. (p. 1796)
This is quite different from the way comparative effectiveness research is used in other countries at various regulatory stages, such as new drug approvals and the approval of treatment protocols.
Congress maintained this status quo, in part, by placing a number of constraints on the use of comparative effectiveness research. Section 6301 amended Section 1181 of the Social Security Act to establish the Institute with the following purpose:
The Institute is to assist patients, clinicians, purchasers, and policy-makers in making informed health decisions by advancing the quality and relevance of evidence concerning the manner in which diseases, disorders, and other health conditions can effectively and appropriately be prevented, diagnosed, treated, monitored, and managed through research and evidence synthesis that considers variations in patient subpopulations, and the dissemination of research findings with respect to the relative health outcomes, clinical effectiveness, and appropriateness of the medical treatments, services, and items described in subsection (a)(2)(B).
However, the ACA went on to specify:
SEC. 1182 [42 U.S.C. 1320e–1]. (a) The Secretary may only use evidence and findings from research conducted under section 1181 to make a determination regarding coverage under title XVIII (Medicare) if such use is through an iterative and transparent process which includes public comment and considers the effect on subpopulations.
(b) Nothing in section 1181 shall be construed as—
(1) superseding or modifying the coverage of items or services under title XVIII that the Secretary determines are reasonable and necessary under section 1862(l)(1); or
(2) authorizing the Secretary to deny coverage of items or services under such title solely on the basis of comparative clinical effectiveness research.
(c)(1) The Secretary shall not use evidence or findings from comparative clinical effectiveness research conducted under section 1181 in determining coverage, reimbursement, or incentive programs under title XVIII in a manner that treats extending the life of an elderly, disabled, or terminally ill individual as of lower value than extending the life of an individual who is younger, nondisabled, or not terminally ill.
(2) Paragraph (1) shall not be construed as preventing the Secretary from using evidence or findings from such comparative clinical effectiveness research in determining coverage, reimbursement, or incentive programs under title XVIII based upon a comparison of the difference in the effectiveness of alternative treatments in extending an individual’s life due to the individual’s age.
The law further restricted the use of the findings in other sections:
(d)(1) The Secretary shall not use evidence or findings from comparative clinical effectiveness research conducted under section 1181 in determining coverage, reimbursement, or incentive programs under title XVIII in a manner that precludes, or with the intent to discourage, an individual from choosing a health care treatment based on how the individual values the tradeoff between extending the length of their life and the risk of disability.
(2)(A) Paragraph (1) shall not be construed to—
(i) limit the application of differential copayments under title XVIII based on factors such as cost or type of service; or
(ii) prevent the Secretary from using evidence or findings from such comparative clinical effectiveness research in determining coverage, reimbursement, or incentive programs under such title based upon a comparison of the difference in the effectiveness of alternative health care treatments in extending an individual’s life due to that individual’s age, disability, or terminal illness.
(3) Nothing in the provisions of, or amendments made by the Patient Protection and Affordable Care Act, shall be construed to limit comparative clinical effectiveness research or any other research, evaluation, or dissemination of information concerning the likelihood that a health care treatment will result in disability.
(e) The Patient-Centered Outcomes Research Institute established under section 1181(b)(1) shall not develop or employ a dollars-per-quality adjusted life year (or similar measure that discounts the value of a life because of an individual’s disability) as a threshold to establish what type of health care is cost effective or recommended. The Secretary shall not utilize such an adjusted life year (or such a similar measure) as a threshold to determine coverage, reimbursement, or incentive programs under title XVIII.
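The ban on dollars-per-QALY thresholds in subsection (e) is easier to follow with the arithmetic spelled out. A cost-effectiveness threshold divides the incremental cost of a treatment by the quality-adjusted life-years (QALYs) it adds, where each added year of life is weighted by a health-utility value between 0 (death) and 1 (perfect health). The following is a minimal illustrative sketch, not an analysis from the text; all dollar figures, utility weights, and the threshold value are hypothetical:

```python
# Hypothetical illustration of the dollars-per-QALY threshold that
# Section 1182(e) bars PCORI and the Secretary from using.
# QALYs gained = years of life gained x utility weight (1.0 = perfect health).

def cost_per_qaly(cost, years_gained, utility_weight):
    """Incremental cost divided by quality-adjusted life-years gained."""
    return cost / (years_gained * utility_weight)

THRESHOLD = 100_000  # hypothetical willingness-to-pay per QALY

# Same treatment, same cost, same two added years of life:
nondisabled = cost_per_qaly(150_000, 2.0, utility_weight=1.0)  # $75,000/QALY
disabled = cost_per_qaly(150_000, 2.0, utility_weight=0.5)     # $150,000/QALY

print(nondisabled <= THRESHOLD)  # True  -> deemed "cost effective"
print(disabled <= THRESHOLD)     # False -> deemed "not cost effective"
```

Because a utility weight below 1.0 shrinks the QALY denominator, the identical treatment at the identical cost can fall on opposite sides of the threshold depending only on the patient’s baseline health status. That is the “discounts the value of a life because of an individual’s disability” effect the statute prohibits.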
Additional constraints in the law included:
• Section 6301 (d)(8)(iv), which states: “The Institute shall ensure that the research findings … do not include practice guidelines, coverage recommendations, payment, or policy recommendations.”
• Section 6301(j), which addresses the rule of construction, includes this language concerning coverage: “Nothing in this section shall be construed … to permit the Institute to mandate coverage, reimbursement, or other policies for any public or private payer.”
• Section 6301 adds a Section 937 on Dissemination and Building Capacity for Research to Title IX of the Public Health Service Act, which states: “Materials, forums, and media used to disseminate the findings, informational tools, and resource databases shall … not be construed as mandates, guidelines, or recommendations for payment, coverage, or treatment.”
1. What do you think are the interests that are being protected here?
2. How effective are these constraints likely to be?
3. Why is the United States constraining these analyses while other countries are using them?
4. What will be the impact of these constraints in the long run?