Reliability of the PEDro Scale for rating quality of randomized controlled trials.

Category: Primary study
Journal: Physical Therapy
Year: 2003

This article is included in 2 systematic reviews.


BACKGROUND AND PURPOSE:

Assessment of the quality of randomized controlled trials (RCTs) is common practice in systematic reviews. However, the reliability of data obtained with most quality assessment scales has not been established. This report describes 2 studies designed to investigate the reliability of data obtained with the Physiotherapy Evidence Database (PEDro) scale developed to rate the quality of RCTs evaluating physical therapist interventions.

METHOD:

In the first study, 11 raters independently rated 25 RCTs randomly selected from the PEDro database. In the second study, 2 raters rated 120 RCTs randomly selected from the PEDro database, and disagreements were resolved by a third rater; this generated a set of individual rater and consensus ratings. The process was repeated by independent raters to create a second set of individual and consensus ratings. Reliability of ratings of PEDro scale items was calculated using multirater kappas, and reliability of the total (summed) score was calculated using intraclass correlation coefficients (ICC [1,1]).
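The total-score reliability statistic named above, ICC(1,1), is the one-way random-effects intraclass correlation of Shrout and Fleiss, computed from the between-trials and within-trial mean squares of a trials-by-raters table of total scores. A minimal sketch of that computation (the function name and the toy data are illustrative, not from the study):

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1) (Shrout & Fleiss).

    ratings: 2-D array-like, rows = targets (e.g. trials),
    columns = raters; each cell is one rater's total score.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape                    # n targets, k raters
    grand_mean = ratings.mean()
    target_means = ratings.mean(axis=1)
    # Between-targets mean square
    bms = k * ((target_means - grand_mean) ** 2).sum() / (n - 1)
    # Within-targets mean square
    wms = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)

# Perfect agreement between two raters yields ICC = 1.0
print(icc_1_1([[1, 1], [2, 2], [3, 3]]))
```

Item-level agreement would instead use a multirater (e.g. Fleiss-type) kappa, since each PEDro item is a yes/no judgment rather than a score.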

RESULTS:

The kappa value for each of the 11 items ranged from .36 to .80 for individual assessors and from .50 to .79 for consensus ratings generated by groups of 2 or 3 raters. The ICC for the total score was .56 (95% confidence interval = .47-.65) for ratings by individuals, and the ICC for consensus ratings was .68 (95% confidence interval = .57-.76).

DISCUSSION AND CONCLUSION:

The reliability of ratings of PEDro scale items varied from 'fair' to 'substantial,' and the reliability of the total PEDro score was 'fair' to 'good.'