What people get wrong about learning technologies

There is a fascinating pattern of forecasting and product errors that people have been making about the impact of technology on education for over 40 years. I know, because I made the same error myself on two separate occasions, with two separate products.

Here is a tempting but wrong hypothesis: the best way to use technology to improve learning is by building products that try to replicate a great human tutor, i.e. automated tutors. The people who make this error start with two good premises: 
  1. Private tutors are orders of magnitude better at teaching someone than any other method (see e.g. Bloom's 2 Sigma Problem).
  2. Computers in general and AI in particular are getting better, so we should be able to increasingly emulate what a human tutor does.

This idea of building intelligent tutors has been so mesmerizing to generations of researchers and product people that there is even a book detailing the history of the various (largely failed) attempts to do so [1].  However, the same people completely miss two other things that are also true: 

  1. A lot of what a good private tutor does has nothing to do with selecting the right exercise or hint, and everything to do with providing excitement, motivation, self-confidence, and discipline (which computers can do only so much about).
  2. Given the status quo, there is a lot of far lower-hanging fruit in using technology for learning, notably (a) giving people unlimited free/cheap access to high-quality content, and (b) using messaging, video, and social media to put people in touch with others, be they human tutors or peers passionate about the same things.

The result is that high-tech platforms with adaptive learning "AI" engines like Knewton fail, whereas products that succeed look more like Lambda School [2] – which, if necessary, could probably be run over Zoom, Slack, and Google Docs, without any custom software at all. Another great example is VIPKid, which correctly realized there was no need to build an AI tutor: it could build a huge business simply by making it easier to hire a human one.

The failure of Knewton and the success of Lambda School or VIPKid are therefore a reflection of two factors: first, building AI tutors is harder than people think, and second, there is a lot of far lower-hanging fruit to be picked before one moves on to the more challenging technical problems. Of course, this will eventually flip once the low-hanging fruit is gone and AI has progressed enough to make AI tutors more effective. However, even with the accelerating progress in AI, I suspect that for the foreseeable future, any "AI tutors" are likely to be glorified exercise banks, selected and used in teaching by – you guessed it – human tutors on platforms like Lambda or VIPKid.

[1] One notable class of exceptions is domains where Spaced Repetition works reasonably well, which generally includes standardized testing and some aspects of foreign language learning; this has led to the success of products like Duolingo and Quizlet.

[2] See also my earlier post, Why Lambda School Works.