Abstract

Background: Printed educational materials are widely used dissemination strategies intended to improve the quality of healthcare professionals' practice and patient health outcomes. Traditionally they are presented in paper formats such as monographs, publications in peer‐reviewed journals, and clinical guidelines. This is the fourth update of the review.

Objectives: To assess the effect of printed educational materials (PEMs) on the practice of healthcare professionals and patient health outcomes, and to explore the influence of some characteristics of the PEMs (e.g. source, content, format) on those effects.

Search methods: We searched MEDLINE, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), HealthStar, CINAHL, ERIC, CAB Abstracts, Global Health, and the EPOC Register from their inception to 6 February 2019. We checked the reference lists of all included studies and relevant systematic reviews.

Selection criteria: We included randomised trials (RTs), controlled before‐after studies (CBAs) and interrupted time series (ITS) studies that evaluated the impact of PEMs on healthcare professionals' practice or patient health outcomes. We included three types of comparisons: (1) PEM versus no intervention, (2) PEM versus a single other intervention, and (3) a multifaceted intervention including a PEM versus the same multifaceted intervention without the PEM. Any objective measure of professional practice (e.g. prescriptions for a particular drug) or patient health outcome (e.g. blood pressure) was included.

Data collection and analysis: Two reviewers undertook data extraction independently; disagreements were resolved by discussion. For analyses, we grouped the included studies according to study design, type of outcome and type of comparison. For controlled trials, we reported the median effect size for each outcome within each study, the median effect size across outcomes for each study, and the median of these effect sizes across studies. Where data were available, we re‐analysed the ITS studies by converting all data to a monthly basis and estimating the effect size from the change in the slope of the regression line between before and after implementation of the PEM. We reported median changes in slope for each outcome, for each study, and then across studies. We standardised all changes in slope by their standard errors, allowing comparison and combination of different outcomes. We categorised each PEM according to potential effect modifiers related to the source of the PEM, the channel used for its delivery, its content, and its format. We assessed the risk of bias of all included studies.

Main results: We included 84 studies: 32 RTs, two CBAs and 50 ITS studies. Of the 32 RTs, 19 were cluster RTs that used various units of randomisation, such as practices, health centres, towns, or areas. The majority of the included studies (82/84) compared the effectiveness of PEMs to no intervention. Based on the RTs that provided moderate‐certainty evidence, we found that PEMs distributed to healthcare professionals probably improve their practice, as measured with dichotomous variables, compared to no intervention (median absolute risk difference (ARD): 0.04; interquartile range (IQR): 0.01 to 0.09; 3963 healthcare professionals randomised within 3073 units). We could not confirm this finding using the evidence gathered from continuous variables (standardised mean difference (SMD): 0.11; IQR: ‐0.16 to 0.52; 1631 healthcare professionals randomised within 1373 units), from the ITS studies (standardised median change in slope = 0.69; 35 studies), or from the CBA study, because the certainty of this evidence was very low. We also found, based on RTs that provided moderate‐certainty evidence, that PEMs distributed to healthcare professionals probably make little or no difference to patient health as measured using dichotomous variables, compared to no intervention (ARD: 0.02; IQR: ‐0.005 to 0.09; 935,015 patients randomised within 959 units). The evidence gathered from continuous variables (SMD: 0.05; IQR: ‐0.12 to 0.09; 6737 patients randomised within 594 units) or from ITS study results (standardised median change in slope = 1.12; 8 studies) does not strengthen these findings, because the certainty of this evidence was very low. Two studies (a randomised trial and a CBA) compared a paper‐based version to a computerised version of the same PEM. From the RT, which provided low‐certainty evidence, we found that computerised versions of a PEM may make little or no difference to professionals' practice compared to printed versions (ARD: ‐0.02; IQR: ‐0.03 to 0.00; 139 healthcare professionals randomised individually). This finding was not strengthened by the CBA study, which provided very low‐certainty evidence (SMD: 0.44; 32 healthcare professionals). The data gathered did not allow us to conclude which PEM characteristics influenced their effectiveness. The methodological quality of the included studies was variable. Half of the included RTs were at risk of selection bias. Most of the ITS studies were conducted retrospectively, without prespecifying the expected effect of the intervention or acknowledging the presence of a secular trend.

Authors' conclusions: The results of this review suggest that, when used alone and compared to no intervention, PEMs may slightly improve healthcare professionals' practice outcomes and patient health outcomes. The effectiveness of PEMs compared to other interventions, or of PEMs as part of a multifaceted intervention, is uncertain.
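The ITS re‐analysis described in the abstract (converting data to a monthly basis, estimating the change in slope of the regression line between the pre‐ and post‐PEM periods, and standardising that change by its standard error) can be sketched as a segmented regression. The monthly prescribing rates below are simulated placeholders, not data from the review:

```python
import numpy as np

# Hypothetical monthly prescribing rates: 24 months before and 24 months
# after a PEM is distributed (illustrative values, not data from the review).
rng = np.random.default_rng(0)
months = np.arange(48.0)
post = (months >= 24).astype(float)  # 1 once the PEM has been distributed
rate = 50 + 0.1 * months + 0.4 * post * (months - 24) + rng.normal(0, 1, 48)

# Segmented regression: intercept, baseline slope, level change, slope change.
X = np.column_stack([np.ones(48), months, post, post * (months - 24)])
beta = np.linalg.lstsq(X, rate, rcond=None)[0]

# Ordinary-least-squares standard error of the slope-change coefficient.
resid = rate - X @ beta
sigma2 = resid @ resid / (len(rate) - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(cov[3, 3])

# Change in slope divided by its standard error: a unitless effect size.
standardised_change = beta[3] / se
```

Dividing each change in slope by its standard error is what allows outcomes measured on different scales to be compared and combined across studies.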
Plain language summary: Printed educational materials for healthcare professional practice and patient health

What is the aim of this review? The aim of this review was to find out whether printed educational material distributed to healthcare professionals can improve their practice and, in turn, improve patient health.

Key messages: The results of this review indicate that printed educational materials probably improve the practice of healthcare professionals and probably make little or no difference to patient health. The results also suggest that computerised versions may make little or no difference to healthcare professionals' practice compared to printed versions of the same material. Further research with rigorous methodology is likely to have an important impact on our confidence in these estimates of effect, and may change the estimates.

What was studied in the review? Medical journals and clinical practice guidelines are common channels for distributing scientific information to healthcare professionals, as they allow wide distribution at relatively low cost. Delivery of printed educational materials is meant to improve healthcare professionals' awareness, knowledge, attitudes, and skills, and ultimately to improve their practice and patients' health outcomes.

What are the main results of this review? The review authors found 84 studies. Most of these studies compared healthcare professionals who had received printed educational materials to healthcare professionals who had not. The results suggest that printed educational materials probably improve healthcare professionals' practice, and probably make little or no difference to patient health, compared to no intervention. Two studies (a randomised trial and a CBA) compared printed and computerised versions of the same educational material and suggest that computerised versions may make little or no difference to healthcare professionals' practice compared to printed versions.

How up‐to‐date is this review? The review authors searched for studies that had been published up to 6 February 2019.
BACKGROUND: Many people advocate audit and feedback as a strategy for improving professional practice. The main results of an update of a Cochrane review on the effects of audit and feedback are reported.
DATA SOURCES: The Cochrane Effective Practice and Organisation of Care Group's register up to January 2004 was searched. Randomised trials of audit and feedback that reported objectively measured professional practice in a healthcare setting or healthcare outcomes were included.
REVIEW METHODS: Data were independently extracted and the quality of studies was assessed by two reviewers. Quantitative, visual and qualitative analyses were undertaken.
MAIN RESULTS: 118 trials are included in the review. In the primary analysis, 88 comparisons from 72 studies were included that compared any intervention in which audit and feedback was a component to no intervention. For dichotomous outcomes, the median adjusted risk difference of compliance with desired practice was 5% (interquartile range 3% to 11%). For continuous outcomes, the median adjusted percentage change relative to control was 16% (interquartile range 5% to 37%). Low baseline compliance with recommended practice and higher intensity of audit and feedback appeared to predict the effectiveness of audit and feedback.
CONCLUSIONS: Audit and feedback can be effective in improving professional practice. The effects are generally small to moderate. The absolute effects of audit and feedback are likely to be larger when baseline adherence to recommended practice is low and intensity of audit and feedback is high.
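Summarising comparisons by the median effect and its interquartile range, as in the results above, is a straightforward computation; the adjusted risk differences below are made‐up placeholder values, not the review's data:

```python
import numpy as np

# Hypothetical adjusted risk differences (in percentage points) for compliance
# with desired practice across a set of comparisons (placeholder values only).
adjusted_rd = np.array([2, 3, 3, 5, 5, 6, 8, 11, 12])

median_rd = np.median(adjusted_rd)
q1, q3 = np.percentile(adjusted_rd, [25, 75])  # interquartile range
print(median_rd, q1, q3)  # → 5.0 3.0 8.0
```

Medians with interquartile ranges are used rather than pooled means because the comparisons are heterogeneous in setting, outcome, and intervention intensity.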
OBJECTIVE: To identify features of clinical decision support systems critical for improving clinical practice. DESIGN: Systematic review of randomised controlled trials. DATA SOURCES: Literature searches via Medline, CINAHL, and the Cochrane Controlled Trials Register up to 2003; and searches of reference lists of included studies and relevant reviews. STUDY SELECTION: Studies had to evaluate the ability of decision support systems to improve clinical practice. DATA EXTRACTION: Studies were assessed for statistically and clinically significant improvement in clinical practice and for the presence of 15 decision support system features whose importance had been repeatedly suggested in the literature. RESULTS: Seventy studies were included. Decision support systems significantly improved clinical practice in 68% of trials. Univariate analyses revealed that, for five of the system features, interventions possessing the feature were significantly more likely to improve clinical practice than interventions lacking the feature. Multiple logistic regression analysis identified four features as independent predictors of improved clinical practice: automatic provision of decision support as part of clinician workflow (P < 0.00001), provision of recommendations rather than just assessments (P = 0.0187), provision of decision support at the time and location of decision making (P = 0.0263), and computer based decision support (P = 0.0294). Of 32 systems possessing all four features, 30 (94%) significantly improved clinical practice. Furthermore, direct experimental justification was found for providing periodic performance feedback, sharing recommendations with patients, and requesting documentation of reasons for not following recommendations. CONCLUSIONS: Several features were closely correlated with decision support systems' ability to improve patient care significantly. 
Clinicians and other stakeholders should implement clinical decision support systems that incorporate these features whenever feasible and appropriate.
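The multiple logistic regression described above, a trial‐level binary outcome (practice improved or not) regressed on binary indicators for system features, can be sketched in plain numpy. The four features and the data here are simulated stand‐ins, not the 70 trials analysed in the review:

```python
import numpy as np

# Hypothetical trial-level dataset: one row per trial, binary indicators for
# four candidate decision-support features, and whether practice improved.
rng = np.random.default_rng(1)
n = 70
X = np.column_stack([np.ones(n), rng.integers(0, 2, (n, 4))]).astype(float)
true_beta = np.array([-1.0, 1.5, 0.8, 0.7, 0.6])  # assumed effects
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

# Multiple logistic regression fitted by Newton-Raphson (plain-numpy sketch);
# a tiny ridge term keeps the Hessian invertible.
beta = np.zeros(5)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(5)
    beta += np.linalg.solve(hess, grad)
```

With real data one would also report a standard error and P value for each coefficient (e.g. from the inverse of the Hessian) to identify the independent predictors.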
OBJECTIVE: To recommend effective strategies for implementing clinical practice guidelines (CPGs).
DATA SOURCES: The Research and Development Resource Base in Continuing Medical Education, maintained by the University of Toronto, was searched, as was MEDLINE from January 1990 to June 1996, inclusive, with the use of the MeSH heading "practice guidelines" and relevant text words.
STUDY SELECTION: Studies of CPG implementation strategies and reviews of such studies were selected. Randomized controlled trials and trials that objectively measured physicians' performance or health care outcomes were emphasized.
DATA EXTRACTION: Articles were reviewed to determine the effect of various factors on the adoption of guidelines.
DATA SYNTHESIS: The articles showed that CPG dissemination or implementation processes have mixed results. Variables that affect the adoption of guidelines include qualities of the guidelines, characteristics of the health care professional, characteristics of the practice setting, incentives, regulation and patient factors. Specific strategies fell into 2 categories: primary strategies involving mailing or publication of the actual guidelines and secondary interventional strategies to reinforce the guidelines. The interventions were shown to be weak (didactic, traditional continuing medical education and mailings), moderately effective (audit and feedback, especially concurrent, targeted to specific providers and delivered by peers or opinion leaders) and relatively strong (reminder systems, academic detailing and multiple interventions).
CONCLUSIONS: The evidence shows serious deficiencies in the adoption of CPGs in practice. Future implementation strategies must overcome this failure through an understanding of the forces and variables influencing practice and through the use of methods that are practice- and community-based rather than didactic.
OBJECTIVE: To review the literature relating to the effectiveness of education strategies designed to change physician performance and health care outcomes. DATA SOURCES: We searched MEDLINE, ERIC, NTIS, the Research and Development Resource Base in Continuing Medical Education, and other relevant data sources from 1975 to 1994, using continuing medical education (CME) and related terms as keywords. We manually searched journals and the bibliographies of other review articles and called on the opinions of recognized experts. STUDY SELECTION: We reviewed studies that met the following criteria: randomized controlled trials of education strategies or interventions that objectively assessed physician performance and/or health care outcomes. These intervention strategies included (alone and in combination) educational materials, formal CME activities, outreach visits such as academic detailing, opinion leaders, patient-mediated strategies, audit with feedback, and reminders. Studies were selected only if more than 50% of the subjects were either practicing physicians or medical residents. DATA EXTRACTION: We extracted the specialty of the physicians targeted by the interventions and the clinical domain and setting of the trial. We also determined the details of the educational intervention, the extent to which needs or barriers to change had been ascertained prior to the intervention, and the main outcome measure(s). DATA SYNTHESIS: We found 99 trials, containing 160 interventions, that met our criteria. Almost two thirds of the interventions (101 of 160) displayed an improvement in at least one major outcome measure: 70% demonstrated a change in physician performance, and 48% of interventions aimed at health care outcomes produced a positive change. Effective change strategies included reminders, patient-mediated interventions, outreach visits, opinion leaders, and multifaceted activities. 
Audit with feedback and educational materials were less effective, and formal CME conferences or activities, without enabling or practice-reinforcing strategies, had relatively little impact. CONCLUSION: Widely used CME delivery methods such as conferences have little direct impact on improving professional practice. More effective methods such as systematic practice-based interventions and outreach visits are seldom used by CME providers.
OBJECTIVE: To establish what is known about the role of feedback of statistical information in changing clinical practice. DESIGN: Review of 36 studies of interventions entailing the use of statistical information for audit or practice review, which used a formal research design. SUBJECTS: Papers were identified from computer searches of medical and health service management publications; the 36 papers that described studies of interventions designed to influence clinical care and that included information feedback from clinical or administrative data systems were reviewed. MAIN OUTCOME MEASURES: Evidence for an effect of information feedback on change in clinical practice. RESULTS: Information feedback was most likely to influence clinical practice if it was part of a strategy to target decision makers who had already agreed to review their practice. A more direct effect was discernible if the information was presented close to the time of decision making. The questions of the optimum layout and quantity of information were not addressed; the 36 papers were insufficient for defining good formats for information to be used for audit or quality assurance. CONCLUSIONS: Given the cost of information processing and the current emphasis on closing the audit loop in the health services, it is important that the use of information in the audit process should be critically evaluated.