Is NSD bad for ed-tech research?

When two methods of teaching are compared in an experiment, such as pitting traditional classroom instruction against instruction by television, some test or assessment typically finds little or no difference in effectiveness. This result, called no significant difference (NSD), is not unusual in educational research. The competing methodology is often dismissed, with the implied judgment that it is not worth considering. But a finding of NSD says more about what the original research question actually was than about whether the method should be employed. In addition, assessments are blunt instruments for teasing out effectiveness, and they are notoriously bad at gauging less empirically oriented skills, such as critical thinking.
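To make the idea concrete, here is a minimal sketch of how such a comparison is typically tested statistically. Nothing in it comes from the studies discussed here: the scores are invented, and Python with SciPy’s two-sample t-test merely stands in for whatever analysis a given study actually used.

```python
# A toy illustration of an NSD result: hypothetical assessment scores
# from two instructional conditions, compared with a two-sample t-test.
# The scores below are invented for illustration only.
from scipy import stats

classroom_scores = [78, 85, 72, 90, 81, 76, 88, 79, 83, 74]
television_scores = [80, 82, 75, 87, 79, 78, 85, 77, 84, 73]

t_stat, p_value = stats.ttest_ind(classroom_scores, television_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# p comes out far above the conventional 0.05 threshold, so we fail to
# reject the hypothesis that the two methods are equally effective:
# a "no significant difference" finding.
```

With group means this close, the p-value lands well above 0.05, which is exactly the NSD outcome described above; the test says nothing about cost, convenience, or who was able to take the course at all.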

The vast preponderance of studies on the use of media and technology show no significant gains in student achievement, at least with regard to what the assessments are measuring. As far back as 1928, studies that measured the effectiveness of correspondence schools reported “no differences in test scores of college classroom and correspondence study students enrolled in the same subjects.”[i] But there was a difference, perhaps not in what the students accomplished, but in how they accomplished their learning.

The average correspondence school learner was not likely to be able to attend or afford a traditional residential college education. He was 27 years old, worked in a coal mine, and had a family to support. The student would study at home and “use the kitchen table as a desk, and often rock the cradle with one hand to keep baby quiet, while holding the lesson paper in the other.” The educational goals were finely focused, as one correspondence school executive explained: “the regular technical school or college aims to educate a man broadly; our aim, on the contrary, is to educate him only along some particular line.”[ii]

So perhaps we are looking for differences along the wrong dimension. Obviously, it would be wonderful for researchers to report that their interventions have truly positive impacts on student learning, but there may be other important factors to consider when judging a technology or medium’s worth. Even Richard Clark, who made a career of attacking these kinds of media comparison studies, agrees that “There are benefits to be gained from different media. The benefits are economic. If researchers and practitioners would switch their concerns to the economics of instruction, we would discover all manner of important cost contributions from the media.”[iii]

Another dimension to evaluate is the amount of time that technology- and media-infused instruction takes relative to the traditional classroom. Many studies have shown that intelligent tutoring systems and programmed instruction tools can reduce the time it takes to teach the same content by a factor of two or three. I personally experienced the time savings in middle school, when I used a paper-based programmed instruction tool to learn a semester’s course in geometry in only a couple of months.

In 1999, education researcher Thomas Russell published a slim book examining 355 media comparison studies, all of which yielded no-significant-difference results. Beyond better economics, he identified a number of other potential benefits that technology and media offer: the number of course offerings made available can be substantially increased; courses can be rapidly provided to fulfill specific instructional needs; and media and technology solutions can serve large and small populations with the same basic offering.[iv]

I am currently involved in a research effort to look more deeply into how these kinds of efficacy studies are used, which will culminate in a symposium this coming May. We are talking with the various stakeholders, including educators, entrepreneurs, venture funds, and philanthropies, to see how they define and value efficacy studies and what might be done to make them more useful to all involved. For more information, see http://curry.virginia.edu/symposium.

About Bill Ferster

Bill Ferster is a research professor at the University of Virginia and a technology consultant for organizations using web applications for ed-tech, data visualization, and digital media. He is the author of Sage on the Screen (2016, Johns Hopkins), Teaching Machines (2014, Johns Hopkins), and Interactive Visualization (2012, MIT Press), and has founded a number of high-technology startups in past lives. For more information, see www.stagetools.com/bill.


Excerpted from Sage on the Screen: Education, Media, and How We Learn by Bill Ferster. Johns Hopkins University Press, 2016.

[i] Crump, R. (1928). Correspondence and class extension work in Oklahoma. Doctoral dissertation, Teachers College, Columbia University, New York, NY.
[ii] Clark, J. (1906). The correspondence school: Its relation to technical education and some of its results. Science, 24(611).
[iii] Clark, R. (1999). Bloodletting, media, and learning. In T. Russell (Ed.), The no significant difference phenomenon: A comparative research annotated bibliography on technology for distance education: As reported in 355 research reports, summaries and papers. North Carolina State University.
[iv] Russell, T. (1999). The no significant difference phenomenon: A comparative research annotated bibliography on technology for distance education: As reported in 355 research reports, summaries and papers. North Carolina State University.