Case Study Brief: Promising Practices to Build Evaluator Capacity

Citation

Comstock, M., & Hsieh, T. (2015). *Promising practices to build evaluator capacity* (Case study brief). Menlo Park, CA: SRI International.

Abstract

In 2012, the Massachusetts Department of Elementary and Secondary Education contracted with SRI International and its research partners, Abt Associates, Nancy Brigham Associates, and J Koppich and Associates, to conduct an independent study of the implementation of Massachusetts’ Educator Evaluation Framework. During the 2014-15 school year, the research team administered a statewide survey of principals and school staff and conducted educator interviews and focus groups in seven case study districts. In the case study districts, the team focused on exploring promising practices related to use of evaluation data for human resources decisions, implementation of district-determined measures (DDMs) of student learning, and capacity of evaluators to conduct fair and thorough evaluations. This brief is the third in a three-part series dedicated to sharing these promising practices with other districts in Massachusetts.

This case study brief highlights the efforts of three districts (Revere, West Springfield, and Northbridge Public Schools) to calibrate evaluators' feedback and ratings for school staff and to reduce the workloads evaluators face under the new evaluation system. Although both areas remain a challenge, these districts have implemented promising strategies to promote evaluator consistency within and across schools and to relieve evaluator burden, including analyzing anonymized feedback and teacher videos and revisiting district guidelines on observations. Findings from these case studies may be useful for other districts:

  1. Districts provided varied and ongoing opportunities for evaluators across schools to develop a consistent understanding of the performance rating levels and a consistent manner of providing feedback. They supplemented these efforts with formal and informal collaboration among evaluators at the same schools. This holistic approach helped ensure that all educators received clear communication about evaluator expectations for feedback and the timing of observations, and it ultimately heightened educators' perceptions of the system's overall fairness.
  2. Districts used technology in a variety of ways to increase consistency across evaluators in the feedback and ratings that they provided to educators.
  3. District administrators served as additional evaluators, and the district adjusted guidelines for the quantity of observations and evidence—strategies that helped to ease evaluator burden.

This brief outlines a series of promising strategies from the three districts for increasing consistency in evaluator practices and reducing evaluator workload, two issues with the Educator Evaluation Framework (EEF) that continue to challenge districts across the Commonwealth. In a recent statewide survey of educators conducted as part of this study, 81 percent of staff agreed or strongly agreed that their evaluator's assessment of their practice was fair, but 58 percent disagreed or strongly disagreed that educators were evaluated consistently across grades, subjects, and schools. Furthermore, 72 percent of principals disagreed or strongly disagreed that they had adequate time to evaluate teachers at their school. The brief begins with a description of a holistic approach for achieving evaluator consistency and then describes methods for relieving evaluator burden. It ends with a set of considerations for other districts looking for ways to improve the capacity of their evaluators to conduct fair and consistent evaluations under the new system.
