
DOE’s Accountability Albatross

July 29, 2010

By Norm Fruchter

In a July 26 Daily News article, New York City’s Deputy Schools Chancellor Shael Polakow-Suransky objected to criticism that the school system’s accountability program is based almost completely on state testing results. Since experts have found deep flaws in the state’s testing system, it seems very likely that the city’s accountability system has significant problems as well. Yet Suransky insisted, “By any measure, on national assessments and compared to the rest of New York State, our accountability system has led to real, demonstrable progress.”

Curious reasoning. First, accountability systems don’t lead to progress; at best, they can demonstrate progress, or the lack of it. Improving instruction at the classroom, school, and system levels is what drives gains in student achievement; accountability systems, when they’re reliable, simply measure how much progress has been made. Second, because the city’s student achievement results, often celebrated as miraculous by the mayor and the Chancellor, are based entirely on New York State tests, it hardly takes rocket science to see that if the state tests have major flaws, celebrating the city’s progress on the strength of those scores is a highly questionable practice. Today’s test score release confirms that these claims of miraculous growth have been wildly overstated.

Worse, Suransky asserts that national assessment results also demonstrate the school system’s progress. But on the nation’s testing program, the National Assessment of Educational Progress (NAEP), the city has shown limited gains in 4th grade, flat outcomes in 8th grade, and no narrowing of the achievement gap across grades and subjects. It was this lack of significant progress on NAEP that led many critics to question the miraculous increases in the city’s scores on the state tests.

Finally, many critics have questioned the validity of the city’s accountability system because its annual ranking of schools rests on year-to-year variations in state testing outcomes. Testing experts universally caution against drawing conclusions about performance from a single year’s results, because such limited outcomes contain significant variation, or “noise.” Indeed, prominent New York City researchers have demonstrated that random methods, such as rolling dice to assign schools’ grades, would produce outcomes just as valid as the city system’s accountability grades.
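To see why single-year changes are so unreliable, consider a minimal simulation sketch (not the researchers’ actual analysis; the number of schools and the noise levels below are purely illustrative assumptions). If each school’s underlying quality is stable from one year to the next, the year-to-year change in its scores is almost entirely measurement noise, and a grade built on that change tracks real quality about as well as a dice roll:

# Illustrative sketch only: assumed parameters, not real NYC data.
import random

random.seed(42)

N_SCHOOLS = 500
TRUE_SD = 5.0     # spread of real, stable differences between schools (assumed)
NOISE_SD = 10.0   # year-to-year measurement noise (assumed)

# Each school has a stable "true" proficiency; observed scores add yearly noise.
true_quality = [random.gauss(0, TRUE_SD) for _ in range(N_SCHOOLS)]
year1 = [q + random.gauss(0, NOISE_SD) for q in true_quality]
year2 = [q + random.gauss(0, NOISE_SD) for q in true_quality]

# Grade schools on their one-year score change.
change = [y2 - y1 for y1, y2 in zip(year1, year2)]

def pearson(xs, ys):
    # Plain Pearson correlation, no external libraries needed.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# How much does the "progress" measure reflect real school quality?
print("one-year change vs. true quality:", round(pearson(change, true_quality), 3))

# Compare with grades assigned by rolling dice.
dice = [random.randint(1, 6) for _ in range(N_SCHOOLS)]
print("dice roll       vs. true quality:", round(pearson(dice, true_quality), 3))

Under these assumptions, the correlation between the one-year change and true school quality comes out near zero, just like the dice roll, which is precisely the critics’ point: a single year of score movement tells you almost nothing about a school.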

Surely Suransky knows all this. He would do better to focus on designing a more valid, consistent, reliable, and verifiable system, rather than defending the current jury-rigged patchwork. Now that the state has adjusted its scoring cut-offs and revised its testing content and strategies to more accurately reflect student capacity, the city’s accountability system will become an increasingly indefensible albatross that will eventually be discarded. What we need are accountability metrics that accurately assess school-level instruction and the resulting student outcomes, so that we can identify and implement school-level policies that genuinely improve what students know and are able to do.

Norm Fruchter is a senior policy analyst at the Annenberg Institute for School Reform.
