Ed Data Geek David Stewart says tracking interim assessments isn’t useful.

In a Wired blog, “Inside the Educational Data Revolution,” David Stewart, CEO of Tembo, laments how in the largest US school systems, “millions of dollars are being spent on interim assessment systems, intended to track student performance throughout the year and adapt teaching strategies in advance of the high-stakes year-end tests. The problem is that there’s almost zero correlation between Common Core skills scores on the interim test and the end-of-year test. The difficulty levels are different, and the magnitudes of those discrepancies even differ among subject areas. Just because a student does well on a mid-year test doesn’t mean she will do well at the end of the year, making it impossible to track improvement.”
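To make the “almost zero correlation” claim concrete: a correlation coefficient near zero means interim scores tell you essentially nothing about how a student will score at year’s end. Here is a minimal sketch of how one would check that, using made-up scores for ten hypothetical students (not real assessment data):

```python
# Illustrative only: invented scores for ten hypothetical students.
# Pearson's r measures how well interim scores predict end-of-year
# scores: values near +1 or -1 mean strong prediction, values near
# 0 mean the interim test has no predictive power.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

interim = [62, 75, 58, 90, 70, 85, 66, 79, 55, 72]
year_end = [71, 68, 80, 74, 65, 77, 60, 82, 63, 59]

r = pearson(interim, year_end)
print(f"correlation between interim and year-end scores: {r:.2f}")
```

The scores and the resulting coefficient here are fabricated for illustration; Stewart’s point is that when districts run this calculation on real interim and summative data, r comes out near zero.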

I keep wishing that ed data geeks would use their data analytics to figure out which ways of teaching are more effective. But lower down in this blog post, there’s an explanation of why this may also be foolhardy.

“We really need to bring the individuals that are doing the teaching along for the ride,” says Steve Cartwright, the company’s Director of Analytics. Because, even to the data geeks at Tembo, it’s still ultimately about the classroom, where the rubber hits the road. “There are a lot of smart people all over the country trying to figure out the perfect lesson, the perfect way of instructing, and then replicating it for all students,” explains Stewart. But it’s more personal than that, and education is still struggling with escaping the one-size-fits-all approach. Given the widely differing starting points of each individual – learning style, home environment, motivation level – “you’re never going to solve it with an algorithm.”

Still, if we can do clinical studies for medicine in which each body processes a pill differently, why can’t we do it for teaching algebra?


POSTED BY Jill Barshay ON February 7, 2014

Comments & Trackbacks (1)

Brian Preston

Sorry, my prior post was meant to comment on your “Top NAEP Cities Have Lowest PISA Scores” blog. Hope you catch my error during moderation.

Re: interim assessments. Using them properly is the key here–no commercial interims are yet adequate to match the increased complexity of CCSS assessments. It will be interesting to see if Smarter Balanced consortia interims, which are going to be available, match at all with end of year summative tests. Not sure, logically, if they should, since what a child knows in November should not correlate highly with an end of year assessment. The usefulness of an interim is how the teacher uses the results to alter instruction and fill in the gaps the interim is designed to identify.

Many folks wonder, like you do, why we can’t figure out via a clinical study and consequent model how to teach algebra. I suggest two things here. First, kids are not widgets in which learning can be deposited – they are too individual and bring too many extraneous elements into the classroom. Second, clinical models in medicine aren’t all that much more effective in percent success, and perhaps less so. We have lots of clinical models used to develop medicines, yet we feel really good when a cancer patient survives 5 years, and great about those who survive 10 years, while often during that period the patient is miserable with side effects of drugs, the cost is extreme, and in the end the treatment fails anyway. So when we have 70+% graduation rates in K-12 education, we’re actually doing better than the expensive medical, clinical model.

I’ve oversimplified, of course. We have a lot of research about what works with kids, but nothing works all the time or with every kid. The ‘reformy’ movement, as Bruce Baker calls it, is trying to use that number crunching clinical notion to criticize education, and it’s an oversimplified effort with absolutely no research suggesting it’s effective. At least in medicine, when something doesn’t work, we don’t keep putting it on the market as a cure.
