By Heather Morton
Senior Editor, MindEdge Learning
For me, one of the pleasures of the science of learning—a feature, not a bug—is the tension between experimental results and real-world experiences.
Most practitioners of the science of learning, a.k.a. “teachers,” experience a gap between established best practices and what they can successfully pull off in the classroom. Teaching is still an art.
Take one of the most robust results in the science of learning: spaced practice. Practice sessions spaced out over time are more effective than cramming the same amount of practice time into a single session. Druckman and Bjork, quoted in Ruth Clark and Richard Mayer’s e-Learning and the Science of Instruction, call spaced learning “one of the most reliable phenomena in human experimental psychology” (282). It’s effective because it strengthens students’ ability to retrieve the new information from their memories. Spaced practice is not only one of the most established principles in the science of learning, it’s also one of the easiest to implement—in theory.
Someone teaching an introductory psychology course might provide a list of key concepts from the first class session and test them through multiple-choice questions at the beginning of the second class. The third class session could begin with a quiz that tests concepts from both earlier classes, and so on. Concepts from the first class could appear less and less frequently as students gain mastery.
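The schedule described above can be sketched in a few lines of code. This is a minimal illustration, not a study tool from the research discussed here: the function name, the decay weights, and the sample syllabus are all invented for the example. The idea is simply that older concepts stay in the question pool but are drawn less and less often.

```python
import random

def build_quiz(concepts_by_session, current_session, questions_per_quiz=6):
    """Assemble a spaced-practice quiz for the start of `current_session`.

    Concepts from earlier sessions are sampled with a weight that decays
    with age, so material from the first class appears less and less
    frequently as students gain mastery. Weights are illustrative only.
    """
    pool = []
    weights = []
    for session, concepts in concepts_by_session.items():
        if session >= current_session:
            continue  # only quiz material that has already been taught
        age = current_session - session
        for concept in concepts:
            pool.append(concept)
            weights.append(1.0 / age)  # older concepts are drawn less often
    k = min(questions_per_quiz, len(pool))
    # Sampled with replacement for simplicity; a real quiz builder would
    # likely deduplicate or sample without replacement.
    return random.choices(pool, weights=weights, k=k)

# Hypothetical intro-psychology syllabus, keyed by class session.
syllabus = {
    1: ["classical conditioning", "operant conditioning"],
    2: ["working memory", "long-term memory"],
    3: ["encoding", "retrieval"],
}
print(build_quiz(syllabus, current_session=3))
```

By session 3, the quiz draws on concepts from sessions 1 and 2, with session 1’s concepts weighted half as heavily; because every quiz mixes concepts from multiple earlier sessions, the schedule also produces the interleaving discussed next.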
Because learning is cumulative, spaced practice leads to covering more than one topic at a time. After all, a teacher is providing spaced practice on all the new concepts, often in the form of daily or weekly quizzes that together might constitute 10% of a student’s final grade. This low-stakes testing on a variety of topics naturally leads to interleaving, which means mixing up several different concepts, rather than grouping like topics together.
The benefits of interleaving are also well established in the science of learning. Although students perform worse on interleaved quizzes than on quizzes where similar questions are grouped together, or massed, they do better on tests of longer-term learning than peers who took the massed quizzes.
The benefits of spaced practice and interleaving are widely known in learning science circles, given currency by Peter Brown, Henry Roediger III, and Mark McDaniel’s best-selling book Make It Stick: The Science of Successful Learning.
It was with great pleasure, therefore, that I came across a conference poster that seemed to—but didn’t really—suggest that neither of these practices works. Presented at the Psychonomic Society in 2011, “Using low-stakes repeated testing can improve student learning: How (some) practice makes perfect” by Sarah Grison, Steven Luke, Aya Shigeto, and Patrick Watson drew on 297 students in 30 sections to study the effects of spaced and interleaved practice. In the first experiment, students were given either four or eight multiple-choice questions to respond to in each class. Students were then tested on the course material two weeks and then 12 weeks after the course ended.
The conference poster caught my attention because it added a real-world twist: the researchers broke out students according to how much of the textbook they had reported reading before class. In this experiment, contrary to well-established learning principles, only “low readers” benefited from the extra in-class practice. This is exactly the kind of split outcome that real teachers encounter when only some students benefit from a new activity.
Perhaps “high readers” in the experiment were not only reading the textbook but also doing additional practice on their own, swamping the benefits of the four extra in-class questions. Perhaps the “low readers” were depending on the teacher to make the material stick. What should a teacher do when, over the long term, the responsible students (“high readers”) who had fewer in-class multiple-choice questions slightly outperformed the high readers who had more of them? Should a teacher tailor a class to the students who don’t do the reading?
A second experiment by the same group led to another quirky real-world outcome: massed practice was more beneficial than interleaved practice for students who got the lowest practice quiz grades. In their discussion of results, the researchers suggested that the experiment ended up testing the value of creating a coherent schema for students rather than the value of retrieving information already lodged in the students’ heads. That is, the value of massed practice was that it was teaching the material to the students who hadn’t bothered to read the textbook (my interpretation) rather than asking students to remember information they had already learned. It’s easier to teach a new concept, I imagine, when you present multiple questions on it all in a row than when you mix up the questions with questions on other topics the students are also clueless about.
These contrarian results, of course, could come from something mundane like a small sample size. But I like to think they mirror the kind of trade-offs teachers deal with all the time: Do I take time in class to review the material students should be reviewing at home? Do I design my course for the students who have done the reading, or for those who haven’t?
At the end of the day, I come back to a very important point made by Daniel Willingham, professor of psychology at the University of Virginia. In the vlog The Daniels on Research, “She Blinded Me with Reading Science,” he points out that the science of reading is robust and well established—it describes what people do when they are reading. But, he cautions, “the science of teaching reading is altogether different.”
Copyright © 2021 MindEdge, Inc.