Systematic reviews related to this topic

6 References (6 articles)

Systematic review


Authors Ferguson J , Wakeling J , Bowie P
Journal BMC medical education
Year 2014
BACKGROUND: Multisource feedback (MSF) is currently being introduced in the UK as part of a cycle of performance review for doctors. However, although it is suggested that the provision of feedback can lead to a positive change in performance and learning for medical professionals, the evidence supporting these assumptions is unclear. The aim of this review, therefore, was to identify the key factors that influence the effectiveness of multisource feedback in improving the professional practice of medical doctors. METHOD: Relevant electronic bibliographic databases were searched for studies that aimed to assess the impact of MSF on professional practice. Two reviewers independently selected and quality-assessed the studies and abstracted data regarding study design, setting, MSF instrument, behaviour changes identified and influencing factors, using a standard data extraction form. RESULTS: A total of 16 studies met the inclusion and quality assessment criteria. While seven studies reported only a general change in professional practice, a further seven identified specific changes in behaviour. The professional behaviours most influenced by the feedback were communication, both with colleagues and with patients, and an improvement in clinical competence/skills. The main factors found to influence the acceptance and use of MSF were the format of the feedback, specifically whether it was facilitated and whether narrative comments were included in the review, and whether the feedback came from sources that the physician believed to be knowledgeable and credible. CONCLUSIONS: While there is limited evidence suggesting that MSF can influence professional performance, the quality of this evidence is variable. Further research is necessary to establish how this type of feedback actually influences behaviour and which factors have the greatest influence.

Systematic review


Journal BMC medical education
Year 2012
BACKGROUND: Although recent emphasis has been placed on workplace based assessment (WBA) as a method of formative performance assessment, there is limited evidence in the current literature regarding the role of feedback in improving the effectiveness of WBA. The aim of this systematic review was to elucidate the impact of feedback on the effectiveness of WBA in postgraduate medical training. METHODS: Searches were conducted using the following bibliographic databases to identify original published studies related to WBA and the role of feedback: Medline (1950-December 2010), Embase (1980-December 2010) and Journals@Ovid (English language only, 1996-December 2010). Studies that attempted to evaluate the role of feedback in WBA involving postgraduate doctors were included. RESULTS: Fifteen identified studies met the inclusion criteria and minimum quality threshold; they were heterogeneous in methodological design. Seven studies focused on multisource feedback, three on the mini-clinical evaluation exercise, two on procedure based assessment, one on workplace based assessments in general, and two on a combination of three to six workplace based assessments. Seven studies originated from the United Kingdom; the others were from Canada, the United States and New Zealand. Study populations were doctors at various grades of training from a wide range of specialties, including general practice, general medicine, general surgery, dermatology, paediatrics and anaesthetics. All studies were prospective, non-comparative descriptive or observational studies using a variety of methods, including questionnaires, one-to-one interviews and focus groups. CONCLUSIONS: The evidence base contains few high quality conclusive studies, and more studies are required to provide further evidence for the effect of feedback from workplace based assessment on subsequent performance. There is, however, good evidence that, if well implemented, feedback from workplace based assessments, particularly multisource feedback, leads to a perceived positive effect on practice.

Systematic review


Journal Academic medicine : journal of the Association of American Medical Colleges
Year 2011
PURPOSE: The effect of patient feedback interventions as a method of improving physicians' consultation (i.e., communication, interpersonal) skills is equivocal; research is scarce, and methods and rigor vary. The authors conducted this systematic review to analyze the educational effect of feedback from real patients on physicians' consultation skills at the four Kirkpatrick levels. METHOD: The authors searched five databases (PubMed, EMBASE, Cochrane, PsycInfo, ERIC; April 2010). They included empirical studies of all designs (randomized controlled, quasi-experimental, cross-sectional, and qualitative) if the studies concerned physicians in general health care who received formal feedback regarding their consultation skills from real patients. The authors have briefly described aspects of the included studies, analyzed their quality, and examined their results by Kirkpatrick educational effect level. RESULTS: The authors identified 15 studies (10 studies in primary care; 5 in other specialties) in which physicians received feedback in various ways (e.g., aggregated patient reports or educator-mediated coaching sessions), conducted in the United States, the Netherlands, the United Kingdom, Australia, and Canada. All studies that assessed level 1 (reaction), level 2 (learning), and level 3 (intended behavior) demonstrated positive results; however, only four of the seven studies that assessed level 4 (change in actual performance or results) found a beneficial effect. CONCLUSIONS: Some evidence exists for the effectiveness of using feedback from real patients to improve knowledge and behavior; however, before implementing patient feedback into training programs, educators and policy makers should realize that the evidence for effecting actual improvement in physicians' consultation skills is rather limited.

Systematic review


Authors Miller A , Archer J
Journal BMJ (Clinical research ed.)
Year 2010
Objective: To investigate the literature for evidence that workplace based assessment affects doctors' education and performance. Design: Systematic review. Data sources: The primary data sources were the databases Journals@Ovid, Medline, Embase, CINAHL, PsycINFO, and ERIC. Evidence based reviews (Bandolier, Cochrane Library, DARE, HTA Database, and NHS EED) were accessed and searched via the Health Information Resources website. Reference lists of relevant studies and bibliographies of review articles were also searched. Review methods: Studies of any design that attempted to evaluate either the educational impact of workplace based assessment, or the effect of workplace based assessment on doctors' performance, were included. Studies were excluded if the sampled population was non-medical or the study was performed with medical students. Review articles, commentaries, and letters were also excluded. The final exclusion criterion was the use of simulated patients or models rather than real life clinical encounters. Results: Sixteen studies were included. Fifteen of these were non-comparative descriptive or observational studies; the other was a randomised controlled trial. Study quality was mixed. Eight studies examined multisource feedback with mixed results; most doctors felt that multisource feedback had educational value, although the evidence for practice change was conflicting. Some junior doctors and surgeons displayed little willingness to change in response to multisource feedback, whereas family physicians might be more prepared to initiate change. Performance changes were more likely to occur when feedback was credible and accurate or when coaching was provided to help subjects identify their strengths and weaknesses. Four studies examined the mini-clinical evaluation exercise, one looked at direct observation of procedural skills, and three were concerned with multiple assessment methods: all these studies reported positive results for the educational impact of workplace based assessment tools. However, there was no objective evidence of improved performance with these tools. Conclusions: Considering the emphasis placed on workplace based assessment as a method of formative performance assessment, there are few published articles exploring its impact on doctors' education and performance. This review shows that multisource feedback can lead to performance improvement, although individual factors, the context of the feedback, and the presence of facilitation have a profound effect on the response. There is no evidence that alternative workplace based assessment tools (mini-clinical evaluation exercise, direct observation of procedural skills, and case based discussion) lead to improvement in performance, although subjective reports on their educational impact are positive.

Systematic review


Journal Medical education
Year 2007
CONTEXT: Continuous assessment of the individual performance of doctors is crucial for lifelong learning and quality of care. Policy-makers and health educators should have good insights into the strengths and weaknesses of the methods available. The aim of this study was to systematically evaluate the feasibility of methods, the psychometric properties of instruments that are especially important for summative assessments, and the effectiveness of methods serving formative assessments used in routine practice to assess the performance of individual doctors. METHODS: We searched the MEDLINE (1966-January 2006), PsycINFO (1972-January 2006), CINAHL (1982-January 2006), EMBASE (1980-January 2006) and Cochrane (1966-2006) databases for English-language articles, and supplemented this with a hand-search of the reference lists of relevant studies and the bibliographies of review articles. Studies that aimed to assess the performance of individual doctors in routine practice were included. Two reviewers independently abstracted data regarding study design, setting and findings related to reliability, validity, feasibility and effectiveness, using a standard data abstraction form. RESULTS: A total of 64 articles met our inclusion criteria. We observed six different methods of evaluating performance: simulated patients; video observation; direct observation; peer assessment; audit of medical records; and portfolio or appraisal. Peer assessment is the most feasible method in terms of costs and time. Little psychometric assessment of the instruments has been undertaken so far. The effectiveness of formative assessments is poorly studied. All but two of the systems rely on a single method to assess performance. DISCUSSION: There is substantial potential to assess the performance of doctors in routine practice. The long-term impact and effectiveness of formative performance assessments on education and quality of care remain largely unknown. Future research designs need to pay special attention to unmasking effectiveness in terms of performance improvement.

Systematic review


Journal Medical teacher
Year 2006
BACKGROUND AND CONTEXT: There is a basis for the assumption that feedback can be used to enhance physicians' performance. Nevertheless, the findings of empirical studies of the impact of feedback on clinical performance have been equivocal. OBJECTIVES: To summarize evidence related to the impact of assessment and feedback on physicians' clinical performance. SEARCH STRATEGY: The authors searched the literature from 1966 to 2003 using MEDLINE, HealthSTAR, the Science Citation Index and eight other electronic databases. A total of 3702 citations were identified. INCLUSION AND EXCLUSION CRITERIA: Empirical studies were selected involving the baseline measurement of physicians' performance and follow-up measurement after they received summaries of their performance. DATA EXTRACTION: Data were extracted on research design, sample, and dependent and independent variables using a written protocol. DATA SYNTHESIS: A group of 220 studies involving primary data collection was identified. However, only 41 met all selection criteria and evaluated the independent effect of feedback on physician performance. Of these, 32 (74%) demonstrated a positive impact. Feedback was more likely to be effective when provided by an authoritative source over an extended period of time. Another subset of 132 studies examined the effect of feedback combined with other interventions such as educational programmes, practice guidelines and reminders. Of these, 106 studies (77%) demonstrated a positive impact. Two additional subsets were reviewed: 29 feedback studies involving resident physicians in training, and 18 studies examining proxy measures of physician performance across clinical sites or groups of patients. The majority of studies in these two subsets also reported that feedback had positive effects on performance. HEADLINE RESULTS: Feedback can change physicians' clinical performance when provided systematically over multiple years by an authoritative, credible source. CONCLUSIONS: The effects of formal assessment and feedback on physician performance are influenced by the source and duration of feedback. Other factors, such as physicians' active involvement in the process, the amount of information reported, the timing and amount of feedback, and other concurrent interventions, such as education, guidelines, reminder systems and incentives, also appear to be important. However, the independent contributions of these interventions have not been well documented in controlled studies. It is recommended that the designers of future theoretical as well as practical studies of feedback separate the effects of feedback from other concurrent interventions.