Patricia A Patrician
University of Alabama, USA
Title: Determining inter-rater reliability of an innovation implementation checklist
Abstract
Inter-rater reliability is an important consideration in instrument development as well as in the ongoing fidelity of measurements that can be somewhat subjective. The Cohen’s kappa statistic takes chance into consideration and thus provides a more robust measure of agreement than simple percent agreement. This analysis was an important step in a program evaluation of an innovative, multi-faceted professional nursing framework that incorporated a newly developed instrument. To evaluate the implementation and diffusion of the innovation, site visits were conducted by a team of two investigators using the instrument, which comprised six unit-level components. The two investigators met separately with nursing staff and leaders on all study units in 50% of the military hospitals included in the program evaluation. Using the “Optimized Performance Checklist,” each rated the implementation as met, not met, or partially met. Each of the 34 units was rated separately on 20 data elements, or items, in the checklist, generating 675 pairs of data elements for the observers. The formula for the kappa statistic, κ = (observed agreement − expected agreement) / (1 − expected agreement), was applied. The observers agreed on 652 of the 675 ratings, resulting in 97% agreement. However, when chance agreements and disagreements were taken into consideration, the Cohen’s kappa statistic was .91, indicating a very high level of agreement even when chance is accounted for. The kappa is an easy-to-calculate statistic that provides a more conservative and realistic estimate of inter-rater reliability, and it should be used when attempting to verify observer fidelity.
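For illustration only, the following is a minimal Python sketch of the kappa calculation described above, applying κ = (observed agreement − expected agreement) / (1 − expected agreement) to two raters’ categorical ratings. The example ratings are hypothetical; the study’s raw “met/partially met/not met” checklist data are not reproduced here.

    from collections import Counter

    def cohen_kappa(ratings_a, ratings_b):
        """Cohen's kappa for two raters scoring the same items.

        kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
        proportion of agreement and p_e is the agreement expected by
        chance from each rater's marginal category frequencies.
        """
        assert len(ratings_a) == len(ratings_b)
        n = len(ratings_a)

        # Observed agreement: proportion of items rated identically.
        p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

        # Expected (chance) agreement: for each category, multiply the
        # two raters' marginal proportions, then sum over categories.
        freq_a = Counter(ratings_a)
        freq_b = Counter(ratings_b)
        categories = freq_a.keys() | freq_b.keys()
        p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

        return (p_o - p_e) / (1 - p_e)

    # Hypothetical ratings from two observers on five checklist items.
    rater1 = ["met", "met", "partially met", "not met", "met"]
    rater2 = ["met", "met", "partially met", "met", "met"]
    print(round(cohen_kappa(rater1, rater2), 2))

In this toy example the raters agree on 4 of 5 items (80% agreement), but the chance-corrected kappa is noticeably lower, which mirrors the abstract’s point that kappa is a more conservative estimate than raw percent agreement.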