A data-driven argument to reduce testing in schools

The U.S. is off to a bad start when it comes to using data to improve schools, concludes Data-Driven Improvement and Accountability, an October 2013 National Education Policy Center report by Andy Hargreaves and Henry Braun of Boston College. The authors urge U.S. policy makers to reduce the amount of testing in schools and to measure students' growth over the course of the year rather than whether they clear arbitrary proficiency thresholds at year's end.

“One of the objections to increasing the level of sophistication of tests and indicators is the increased cost. But it is counterproductive to control costs by settling for lower test quality that impedes improvement, diminishes authentic accountability, and undermines the system’s credibility. A widely used and successful alternative is to reduce the scope and frequency of testing. This can be achieved by testing at just a few grade levels (as in England, Canada and Singapore), rather than at almost every grade level. Another option is to test a statistically representative sample of students for monitoring purposes (as in Finland), rather than a census of all students. Yet another route is to test different subjects in a rotating cycle (e.g., math is centrally tested and scored once every 3 or 4 years), with moderated teacher scoring of assessments occurring during the intervening years (as in Israel). All these options lower the costs of testing and create opportunities for compensatory improvements in quality. At the same time, not testing all students, every year, reduces the perverse incentives to teach to the test and to concentrate disproportionately on easily ‘passable’ students.”

Posted by Jill Barshay on October 23, 2013
