using a standard research-based rubric for professional practice, with at least 30% of the final evaluation rating based on the teacher's students' academic growth. The law also requires that teachers be reduced in force (RIF) not only by seniority but also by their performance ranking.
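The arithmetic of that weighting can be sketched as follows. This is an illustrative sketch only: SB 7 sets the 30% floor for the student-growth component, but each district chooses its own rating scale and cut points, so the 1-to-4 scale and the example scores below are assumptions, not anything prescribed by the law.

```python
def composite_rating(practice_score, growth_score, growth_weight=0.30):
    """Blend a professional-practice score with a student-growth score,
    both on an assumed 1-4 scale, using the statutory minimum 30% weight
    for growth (a district could weight growth more heavily, never less)."""
    return (1 - growth_weight) * practice_score + growth_weight * growth_score

# Hypothetical example: a teacher rated 3.5 on practice but 2.0 on growth.
# 0.70 * 3.5 + 0.30 * 2.0 = 2.45 + 0.60 = 3.05
score = composite_rating(3.5, 2.0)
print(round(score, 2))
```

The point of the sketch is how much the growth component can move a final rating: even at the 30% minimum, a weak growth score pulls an otherwise strong practice rating down by nearly half a point on this scale.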
The Chicago Teachers Union mobilized 90% of its teachers to vote to strike. (You may remember that the reform law required 75% of all teachers to vote to authorize a strike; this was a direct message to Chicago teachers from Stand for Children's Jonah Edelman that they could not muster this type of support.)
The support and organizing ability of the Chicago Teachers Union was overwhelming when its members went on strike. It seemed as if every Chicago teacher manned the picket lines on a daily basis and was passionate about the cause.
In my opinion, the strike was more about PERA and SB 7 than about wages, air conditioning, and leaky roofs. Teachers knew that Mayor Emanuel had plans to close 80 to 120 schools and that teachers were going to lose jobs. Many of these would be veteran teachers with lower performance ratings than less experienced teachers. And not only would they lose their jobs; they would lose their teaching careers. After all, who would hire a veteran teacher with, say, 25 years' experience, two master's degrees, and the knowledge that this teacher had been RIF'd and had a lower performance-based rating than other, less experienced teachers?
The message for the rest of the state is that every school district will be dealing with these same issues as we head into the reality of teacher performance evaluation. My suggestion is to get your teachers involved in the process now. If you have not converted to the Danielson evaluation system, start a pilot process as soon as possible and include your teachers every step of the way. Start looking at methods to measure student academic growth, and talk to teachers about possible assessments and measures for these assessments. 2016 will be here before you know it, so get started now.
There is considerable debate in the assessment research world about the cause-and-effect relationship between the way student growth is measured and the evaluative rating of the teacher. In the November 2012 issue of Educational Leadership, writers Goodwin and Miller state that there are several pitfalls in using value-added models. These include: 1) non-teacher effects may cloud the results; 2) data may be inaccurate; 3) student placement in classrooms is not random; 4) students' previous teachers can create a halo (or pitchfork) effect; and 5) teachers' year-to-year scores vary widely.
In a November 29, 2012, article in The Huffington Post titled "Teacher Evaluation Systems Not Fully Supported In Many States," the Center for American Progress reports: "Still, arguably the biggest challenge posed by these mandated evaluation reforms is that the majority of teachers do not teach in tested subjects or grades, and as a result standardized student achievement data is not available to be used in their ratings." This is another huge obstacle that needs to be solved before districts can use student growth scores for teacher performance evaluation purposes.
Until the research world can agree on a valid and reliable way to measure student growth and attribute it to a particular teacher, administrators should be leery of the amount of weight put on this indicator. After all, a teacher's career is in jeopardy.
Interpretation of what “Distinguished” means
in the Frameworks for Teaching (FFT)
I think one of the most debated and
misunderstood concepts from the Danielson
Frameworks for Teaching (FFT) is the question of how
many teachers will earn the “Distinguished” rating. I
have heard administrators say no one is distinguished
and teachers should be pleased to be rated proficient.
Danielson herself states, both in her book and in her videos, that "…teachers visit 'Distinguished'; they do not live there."
Illinois administrators learned in their training that they are to record evidence in both informal and formal evaluations. Evidence contains no bias, interpretation, or judgment. I often describe observation evidence as the following: Observation
SB 7: Growing pains