For the first time, a pilot program by the Ohio Department of Education is ranking the abilities of individual teachers in what is being called “a landmark measure in the school reform battle.” This new “grade card,” which was recently issued for some of Ohio’s public school teachers, supposedly shows which educators made a measurable difference in the classroom last year.
Currently, the reports cover only about 30% of Ohio’s fourth- through eighth-grade reading and math teachers, meaning that about 7,500 teachers received a report detailing what effect they had on their students’ learning last year. Thirteen Ohio districts and two charter schools participated in this first round of effectiveness reporting. Using the “value-added measure,” these reports link data on student growth to the teachers who worked with those students.
The value-added system, which has been rejected by many analysts who have studied it, determines how much growth students have shown within a school year after first determining where they started. It is considered to be an “equalizer” because it assumes teachers will make progress with each student no matter what their ability level might be.
Although unions have opposed using student data to judge teacher effectiveness, Ohio is required, under the federal Race to the Top initiative, to change the way principals and teachers are evaluated. Additionally, the state budget bill now requires Ohio’s Education Department to devise an evaluation tool that would base half of a teacher’s job evaluation on data regarding student growth.
The Columbus Dispatch pointed out the following implications of this new evaluation system:
* Students could be assigned to classrooms based on teachers’ abilities — by placing low-, middle- or high-performing students with the educators best able to help them learn. Data showing the effect that teachers had with different types of students are included in each report.
* It will distinguish good teachers from great ones, and mediocre ones from good ones.
* Over time, schools will use the effectiveness ratings to weed out teachers who aren’t making the grade.
Matt Cohen, who oversees policy and accountability at the Ohio Department of Education, said, “This will help confirm good teaching. It will help identify in an objective way some of the issues that people are very uncomfortable about in terms of trying to characterize poor teaching from average teaching.”
Interestingly, officials who helped produce the new ratings said that they shouldn’t be used to label teachers either good or bad. Mary Peters, senior director of research and innovation at Battelle for Kids, a Columbus-based nonprofit organization helping the Department of Education develop the evaluation system, said that this year’s rating is nothing more than a statement of a teacher’s effectiveness with his or her students last year.
“We need to be careful about making judgments about one year of data. These measures were intended for diagnostic purposes, to provide information to help teachers reflect on their practice and determine with whom they are being successful,” Peters explained.
And while officials agree that the data should primarily be used to improve schools, they admit that as more years of data become available, teachers consistently earning “least effective” ratings will be scrutinized closely by their administrators.
Cohen admitted as much when he said, “Our hope, anyway, is that what you end up with is a better work force. And when you do have teachers who are really consistently doing poorly with results for kids, that they might not belong there.”
Rhonda Johnson, president of the teachers union in Columbus, one of the districts included in this first wave of evaluations, said that teachers there already use data to help determine how much they are accomplishing with their students (as I think most school systems do), but she stated that value-added data should not be the only tool for judging teacher effectiveness.
“It doesn’t tell the whole picture. This is only a fraction,” Johnson said.
I am not necessarily opposed to long-term analysis of a teacher’s ability to get most students to show a year’s progress each year, with students starting at a variety of levels. I see some inherent problems, however, that will most definitely need to be addressed along the way to avoid misusing this effectiveness system.
First, I do not feel that the same criterion should be used to judge progress for students on IEPs, as they tend to progress at a slower rate. If that is not taken into consideration with this new evaluation system, very few brave souls will volunteer to work with these students, which would be a travesty.
Second, administrators must be open to looking at mitigating circumstances that may have affected student growth on a class-by-class basis in a given year. Anyone who has taught for any length of time knows that there are some years you remember with a shudder, when your classroom seemed to be the dumping ground for so many behavioral and academic issues that instruction was a constant battle. In situations such as these, administrators must look beyond the data to see the reality that the teacher faced in his or her classroom.
Third, I fear that some administrators will misuse these reports to justify firing teachers who may have had one bad year, or who could have become more effective with proper mentoring and guidance.
Finally, I worry that this information, in the hands of the media, will be used to vilify teachers and hold them up to public scrutiny and ridicule. We all remember what happened when the L.A. Times published its article ranking teachers, and 39-year-old Rigoberto Ruelas, Jr. committed suicide shortly after receiving a “less effective” ranking based on his students’ English and math scores.
So, what do you think? Are these “report cards” a good idea, or do you predict problems?