I am cross-posting something I wrote for VIVA.
Multiple Measures
Perhaps no profession is as endlessly fascinated with evaluation as
teaching. The concepts of transparency and accountability are woven
into the very fabric of our work as educators in a way that is unique
among professions. On the one hand, this is laudable.
On the other hand, it leads to the building of elaborate evaluation
systems, systems that are costly, time-consuming, and whose efficacy is
frequently questioned. Too often these systems become exercises in
bureaucratic hoop-jumping, disconnected from improvements in actual
practice.
The trend during the Race to the Top/NCLB “flexibility” era has been for
states and localities to go down a rabbit hole of “multiple measures,”
where a variety of components are added together to produce a number by
which teachers can supposedly be compared, and which becomes the basis
for high-stakes employment decisions, including hiring, firing,
promotion, tenure, and compensation.
Is this whole less than the sum of the parts?
In many places student test scores (including the dreaded value-added,
or VAM, approach) have become a large (or even the largest) component of
the evaluation score. This has created (at least) two problems:
1) The majority of teachers teach subjects that have no standardized
tests. How do you capture a test-score component for these folks?
2) The use and misuse of student testing has spiraled out of
control. Parents are starting to wake up to the fact that their children
are being tested not diagnostically and for their own benefit, but for
the purpose of sorting and firing their teachers.
Because of the history and culture of our profession, we must be
practical: teacher evaluation is not going away. So how can we build an
evaluation model that is time- and cost-effective, objective, and
connected to improvements in professional practice?
Multiple data points.
In this approach, you put something at the center of the system. In
many cases this would be traditional administrator observation, but it
could easily be a Tripod-style student survey or a National Board portfolio, for example. Then you admit other data into the conversation for confirmation.
The range of these data points, and what each reveals, is described in a number of books and research papers, most notably Everyone at the Table and Getting Teacher Evaluation Right.
We know that no one data point is a silver bullet that provides a
complete, valid, and reliable picture of professional practice.
Professional practice is a complex and sophisticated enterprise that
must be viewed through a variety of lenses. Observations, student
achievement, surveys, artifacts, portfolios, etc., talk to each other in
this scenario and become mutually reinforcing.
There is one other key piece: you need a research-based rubric,
one that everyone accepts and understands, to provide a basis for
professional conversation and a roadmap for improving practice. In our
district we recently agreed to use Danielson’s Framework.
It is important to understand that a rubric is not in and of itself
an evaluation system. Rather, it provides the language to talk about
practice, and you build the evaluation system around that language.
Within the rubric, “anchor components”
are the individual components in each of the four domains that drive the
rest of that domain. These anchor components differ for new and
experienced teachers. Examining practice within an anchor component
provides reasonable assurance that things are OK in the domain’s other
components.
This simple idea has two important implications. First, it provides a
way to differentiate evaluation by the educator’s career stage,
looking first at key areas of practice. Second, it streamlines the
process: by focusing an administrator’s attention, it reduces the amount
of data that needs to be reviewed. One need only examine the full
spectrum of components in a domain if an issue is detected in its anchor
component.
By using multiple data points, a research-based rubric, and anchor
components, it is possible to create a teacher evaluation system that is
streamlined, accurate, and useful for planning professional growth. If
you can take some of the stress out of the experience, educators will
naturally embrace a good rubric and internalize it. Why? Because
teachers spend a huge amount of time with their students, and if they
are more successful in this endeavor, their lives will be better in very
concrete ways. When educators take ownership of the profession, it
reduces the need for elaborate teacher evaluation systems because the
work is embedded in practice. A virtuous cycle ensues.
Then the trick is how to connect this with professional development, but that’s a subject for another blog post!
What improvements in teacher evaluation would help you in your work?