Three lessons from data on the best ways to give feedback to students

Proponents of computerized instruction often point out that software can give instant feedback to students. And that helps students learn more. That’s why a personal tutor can be so powerful. He or she can immediately react when there’s a misunderstanding and provide an explanation or a hint. But the truth is, educators don’t really understand how a teacher’s feedback leads to learning and exactly what kinds of feedback work best.

A team of researchers led by Fabienne M. Van der Kleij from the Cito Institute for Educational Measurement and the University of Twente in the Netherlands set out to see if the universe of computerized instruction might offer some clues about what kinds of feedback are most effective. Their paper, “Effects of Feedback in a Computer-Based Learning Environment on Students’ Learning Outcomes: A Meta-Analysis,” was published online January 8, 2015 in the Review of Educational Research.

Though the researchers initially found more than 1,600 studies that looked at how students learned from computer responses to their answers, they determined that only 40 of these studies were high-quality ones that directly compared different types of feedback to see which were most effective. Most of the studies were aimed at university students, and the researchers lamented how few looked at how younger students respond to computerized feedback.

But from analyzing the 40 high-quality studies, here’s what they learned.

1) Rethinking “try, try again.”

Many software programs alert a student when an answer is wrong, often asking the student to try again until he gets the right answer before moving on to the next question. (For example, the popular Raz-Kids reading program used in many elementary schools asks students a series of multiple choice comprehension questions about each book. The computer marks incorrect answers with an X). 

You’d think that getting a student to discover his mistake and correct his error would be incredibly effective. But just the opposite is true. Simply marking wrong answers was the worst form of feedback. In some cases, students who were tested after receiving this kind of try-again feedback had lower learning outcomes than students who hadn’t received any feedback at all on the same initial set of questions.

Why doesn’t it work? The authors explain that students typically click on a different answer, without thinking, and keep clicking until the computer marks it right. The lead researcher, Van der Kleij, said that her findings about computerized feedback echo what other researchers have found in an ordinary classroom environment. “Over time research has recognized that a trial-and-error procedure was not very effective in student learning, because it does not inform the learner about how to improve,” she wrote in her paper.

Perhaps teachers should reconsider the common practice of flagging incorrect answers on homework. I’ve often wondered what it does to a student’s motivation to see work marked with red x’s but no insight on how to improve.

Spoon-feeding the correct answer to a student worked better. For example, if a student got “what is 10 x 10?” wrong, telling him that the answer is 100 was helpful, at least on simple learning tasks, such as this type of math drilling or learning foreign vocabulary words.
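
To make the contrast concrete, here is a minimal sketch, in Python, of the two simplest feedback modes the researchers compared: “try again” feedback, which only signals that an answer is wrong, and correct-answer feedback, which reveals the right answer after one attempt. The function names and console format are my own illustration, not code from any of the studies.

```python
# Illustrative sketch only: two simple feedback modes for a drill item.
# Function names and the console format are invented for this example.

def try_again_feedback(question, correct_answer):
    """'Try again' feedback: the program only signals right or wrong."""
    while True:
        answer = input(f"{question} ").strip()
        if answer == correct_answer:
            print("Correct!")
            return
        # The student learns only that the answer was wrong, nothing about why.
        print("Incorrect. Try again.")


def correct_answer_feedback(question, correct_answer):
    """Correct-answer feedback: reveal the right answer after one attempt."""
    answer = input(f"{question} ").strip()
    if answer == correct_answer:
        print("Correct!")
    else:
        print(f"Incorrect. The correct answer is {correct_answer}.")


if __name__ == "__main__":
    correct_answer_feedback("What is 10 x 10?", "100")
```

The first loop is exactly the pattern the researchers warn about: a student can simply keep guessing until the program says “correct” without ever engaging with the mistake.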

2) Explanations are the most effective

Spoon-feeding doesn’t work as well for more complicated things, such as using new vocabulary words in an essay. More learning occurs when the computer system offers some sort of explanation or a hint to help the student understand what he got wrong.

But the boost to student learning varied widely, the Dutch researchers found, perhaps because the quality of the hints or explanations varied widely too. In some of the underlying studies that Van der Kleij looked at, an explanation consisted of working out an entire math problem, step by step. In others, it merely suggested a procedure that could be used. Still other times, the computer gave what educators call “metacognitive” feedback, such as asking the student, “Can you think of any similar tasks you have solved in the past?”

In one of the most successful of the 40 feedback studies reviewed by the authors, Alfred Valdez, a professor at New Mexico State University, taught a basic statistics lesson to university students using instructional software. But before the lesson began, he told the students they had to get 90 percent of the questions right. When students got a question wrong, a hint automatically popped up so that they could try again. (For example, if a student erred on the question, “Would an unusually large number in a data set affect the median or the mean more?”, the computer reminded the student of the definitions of the mean and the median.) Valdez believes the key to his experiment’s success was the goal-setting, an idea he took from the business world.
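
As a rough sketch of that design (this is not Valdez’s actual software; the item wording, the hint text, and the scoring details are my assumptions based on the description above), hint-on-error feedback with an up-front mastery goal might look something like this:

```python
# Illustrative sketch of hint-on-error feedback with an up-front mastery goal.
# The 90 percent threshold and the mean/median hint come from the description
# above; everything else (names, scoring details, console format) is assumed.

MASTERY_GOAL = 0.90  # students are told the target before the lesson begins

QUESTIONS = [
    {
        "prompt": "Would an unusually large number in a data set affect the median or the mean more?",
        "answer": "mean",
        "hint": "The mean is the arithmetic average of all values; the median is the middle value when the data are ordered.",
    },
    # ...more statistics items would go here...
]


def run_lesson(items):
    correct = 0
    for item in items:
        answer = input(item["prompt"] + " ").strip().lower()
        if answer != item["answer"]:
            # On a wrong answer the hint pops up automatically and the student retries.
            print("Not quite. Hint:", item["hint"])
            answer = input(item["prompt"] + " ").strip().lower()
        if answer == item["answer"]:
            correct += 1
            print("Correct!")
        else:
            print("The answer is:", item["answer"])
    score = correct / len(items)
    print(f"Score: {score:.0%} (goal: {MASTERY_GOAL:.0%})")
    return score >= MASTERY_GOAL


if __name__ == "__main__":
    if not run_lesson(QUESTIONS):
        print("Goal not met; review the material and try the lesson again.")
```

The point of the mastery goal is the incentive Valdez describes in the next paragraph: without a target to hit, students have little reason to read the hint at all.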

Hints “are the most difficult. Learners don’t typically like that kind of feedback,” Valdez said in an interview. “They have to work more, so you need to give them an incentive to use the feedback and not just ignore it.” 

A big problem that Valdez had was coming up with a good hint ahead of time. “Humans are much better equipped to get into a student’s head and figure out where the misconception is coming from and guide them,” he said. “The problem with computer-based instruction is that I had to come up with [a] general principle that might be good for everyone, but wasn’t [necessarily] good for each individual student.”

Customizing feedback isn’t easy. Valdez said he once saw an experiment where students were offered a multitude of feedback choices and they could pick the ones they found most useful. Naturally, students picked the explanation that required the least thinking on their part. 

3) Later is sometimes better

When to give feedback depends upon how complicated the material is, the researchers found. When doing simple things like memorizing vocabulary or learning times tables, immediate feedback after each question was best. But when absorbing something more complicated, students learned more when the feedback was delayed a bit, perhaps until after the student had answered all the questions.
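
A small sketch of that timing difference (again illustrative Python, not taken from the studies): immediate feedback responds after every item, while delayed feedback holds all comments until the whole set is finished.

```python
# Illustrative sketch of immediate vs. delayed feedback timing.
# The items and the delayed flag are invented for this example.

def quiz(items, delayed=False):
    held_back = []  # feedback withheld until the end when delayed=True
    for prompt, correct, explanation in items:
        answer = input(prompt + " ").strip()
        message = "Correct." if answer == correct else f"Incorrect. {explanation}"
        if delayed:
            held_back.append(f"{prompt} {message}")
        else:
            print(message)  # immediate: respond right after each question
    if delayed:
        print("\nFeedback on the full set:")
        for line in held_back:
            print(line)


if __name__ == "__main__":
    drill = [("What is 7 x 8?", "56", "7 x 8 = 56.")]
    quiz(drill, delayed=False)  # simple drill: immediate feedback worked best
```

For a more complex task, the same function would be called with delayed=True, so the student works through the full set before seeing any comments.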

In our email exchange, Van der Kleij cautioned against making any computer-to-human leaps of logic and applying these lessons to ordinary classrooms. Students might ignore feedback more on a computer, for example — although there’s also evidence that students ignore much of the feedback that teachers write in the margins of their papers. But she did find it interesting that the research on computerized feedback is confirming what education experts already know about ordinary feedback. What’s interesting to me is why education technology makers aren’t taking more advantage of that research to improve feedback.

POSTED BY Jill Barshay ON January 19, 2015
