Intelligent Tutoring Systems: What happened?

Intelligent Tutoring Systems (ITS) promised the dream of a truly adaptive learning experience almost 30 years ago, but despite millions of dollars spent and promising student learning outcomes, they have not flourished. The more pedestrian use of the computer as a content and assessment deliverer remains the dominant way computers serve education today. At best, most systems simply alert learners to wrong answers and do little to guide the learning process.

Researchers began to look for a “more sensitive” method to diagnose not merely whether answers were wrong, but why they were wrong. Accurate error diagnosis is fundamental to all successful tutoring, and developers began to incorporate this understanding of the nature of errors into a new generation of intelligent teaching tools.

While the definition of an intelligent tool is the subject of much debate, there was some consensus, at least in the context of education, that “a system must behave intelligently, not actually be intelligent, like a human,” according to educational psychologist Valerie Shute [1].

Intelligent tutoring systems

Intelligent Tutoring System designs differ significantly from their historical computer-driven predecessors. Rather than the one-size-fits-all strategy of delivering content to a passive learner, ITS designs customize the learning experience based on factors such as pre-existing knowledge, learning style, and the student’s progress through the content material.

A typical ITS contains a number of conceptual components, or models, that interact with one another. The content model contains a web-like mapping of the content to be learned, defining the prerequisites and dependencies between the content elements. The student model is unique to each learner and works in parallel with the content model to record what the student does and does not yet understand. Finally, there is a method of delivering the instruction to the learner, known as the pedagogical model.

Most ITSs begin the instructional process by determining what the student already knows, typically through an assessment, and then update the student model as instruction occurs. The system compares what needs to be known with what is known (i.e., comparing the student model with the content model) and delivers the pedagogically appropriate unit of instruction to the student.

The instruction is often embedded with assessment and/or highly interactive problem-solving activities so that the student model is dynamically updated to reflect the student’s current knowledge level. Because the content’s granularity is so fine and so well matched to the student model, the ITS can offer just the right amount of remediation, theoretically yielding shorter learning times.
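The selection-and-update loop described above can be sketched in a few lines of Python. This is only an illustrative model, not any actual ITS implementation: the topic names, the `next_unit` and `update` functions, and the representation of the three models are all assumptions made for the sketch.

```python
# Content model: a prerequisite graph mapping each topic to the
# topics it depends on (a tiny, invented arithmetic curriculum).
CONTENT_MODEL = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
    "division": ["multiplication", "subtraction"],
}

def next_unit(student_model):
    """Pedagogical model: compare what is known (the student model,
    here a set of mastered topics) with what needs to be known (the
    content model) and return the first unmastered topic whose
    prerequisites are all mastered."""
    for topic, prereqs in CONTENT_MODEL.items():
        if topic not in student_model and all(p in student_model for p in prereqs):
            return topic
    return None  # everything has been mastered

def update(student_model, topic, passed_assessment):
    """Dynamically update the student model after an embedded assessment."""
    if passed_assessment:
        student_model.add(topic)

# A student who placed into the curriculum knowing only how to count:
student = {"counting"}
unit = next_unit(student)   # "addition" — its lone prerequisite is mastered
update(student, unit, True)
```

Representing the content model as a prerequisite graph is what lets the system deliver remediation at fine granularity: a failed assessment pinpoints exactly which node in the graph needs reteaching.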

One of the most successful efforts, at least in terms of its longevity, is Carnegie Mellon University’s series of mathematics tutors for middle schoolers. Psychology and computer science professor John Anderson was able to marry ITS engineering to a cognitive science theory for simulating and understanding human cognition. His ACT* theory of learning undergirded a number of successful ITS programs in the early 1980s, beginning with one for teaching the Lisp programming language (the Lisp Tutor) and ultimately the successful Geometry and Algebra Tutors, which are sold today by Carnegie Learning Corporation.

Valerie Shute developed a popular series of computer modules in 1994 that used an allusion to the “Church Lady” from the popular Dana Carvey skit on NBC’s Saturday Night Live television show to teach introductory statistics. Portions of Stat Lady’s student model design were influenced by Anderson’s ACT* theory. Stat Lady was innovative beyond its humorous digitally animated host: its student model was very tightly aligned to the content model, was coded into procedural, symbolic, and conceptual elements, and was tracked at a very fine level of granularity in order to deliver appropriate curriculum sequencing and remediation to the student at precisely the most valuable time.

Can intelligent tutoring systems teach?

In spite of the lack of visibility of ITSs in the real world outside the rarefied air of university research labs, there is a modest amount of research suggesting that intelligent tutoring systems can achieve remarkable gains in student learning over traditional classroom instruction.

For example, Shute compared Stat Lady with the same introductory statistics material taught in a traditional classroom and found the much-sought-after two-sigma improvement with the ITS. Sherlock, an ITS designed to teach field maintenance procedures to Air Force ground crew mechanics on F-16 fighters, yielded the same level of competency after 20–25 hours of instruction as traditional training delivered over a four-year span. Carnegie Learning Corporation reported that students using its Algebra I Tutor performed 85% better on assessments of complex problem-solving skills, 14% better on basic mathematics skills, and 30% better on TIMSS assessments.

Why intelligent tutoring systems have not flourished

Intelligent Tutoring Systems have clearly not lived up to their potential, at least when judged by their adoption by the education community, in spite of seeming to have the right combination of features. But it would be unfair to discount some 30 years of research for what appear to be issues of execution. The studies on the efficacy of ITSs suggest that they can be effective in achieving student learning, but a number of factors have aligned to snatch “defeat from the jaws of victory.”

Perhaps the most important hurdle to overcome is the difficulty of authoring the courseware used by ITS programs. Historically, most systems had their content “hard-coded” into the ITS’s software, which had to be done by skilled programmers at great expense. This also meant that instructors and other subject matter experts could not participate directly in developing the content portions of the systems. The problem of diagnosing wrong answers turns out to be exceedingly difficult, time-consuming, and expensive to solve; it requires tediously connecting, by hand, a large number of potential wrong answers with specific remedial instruction.

A better mousetrap?

In thinking about ITS, it is hard to envision a potentially more effective system for instruction. Such systems contain a semantically connected conceptualization of the content to be taught, a way of knowing what the learner does and doesn’t understand, and a delivery method that adapts that instruction accordingly. It would appear that the early systems were not executed well enough to become mainstream; but they should, nonetheless, provide a rich foundation for future teaching machines to draw lessons from, as these systems begin to use the computer’s power for more than simply delivering instruction.

Excerpted From Teaching Machines: Learning from the Intersection of Education and Technology
by Bill Ferster, 2014, Johns Hopkins University Press

About Bill Ferster

Bill Ferster is a research professor at the University of Virginia and a technology consultant for organizations using web-applications for ed-tech, data visualization, and digital media. He is the author of Sage on the Screen (2016, Johns Hopkins), Teaching Machines (2014, Johns Hopkins), and Interactive Visualization (2012, MIT Press), and has founded a number of high-technology startups in past lives. For more information, see

[1] Shute, V. (1994). Regarding the I in ITS: Student Modeling. Proceedings of ED-MEDIA 94. World Conference on Educational Multimedia and Hypermedia, Vancouver, p. 50.