Teaching Machines: The History of Personalized Learning
Author: Audrey Watters
Publisher: MIT Press
Published: August 2021

Review date: 10th October 2021
Posted: 26/12/21

David Longman, TPEA

An important theme to emerge from reading ‘Teaching Machines’ by Audrey Watters (MIT Press, 2021) is that the ‘industrial age’ of mechanised educational technology has not come to an end, as some might believe. Instead, it is thriving.

In her introduction Watters summarises, among others, the argument of Sal Khan (the creator of Khan Academy) that for the first time [my italics] online learning enables a truly personalised approach to learning in or out of school. At last, online learning enables us to break free from the stifling effects of an outmoded education based on regimented, bureaucratic organisations that fail to enable effective learning. It is, however, an all-too-familiar critique of education, one that has long served as a rationale for disruptive innovation and is often taken up by those proponents of learning technology who argue that schooling is somehow broken.

As Watters argues and aims to demonstrate in this book, the ‘end of history’ story on which Khan bases his claims is wrong. She goes further. Not only is he wrong about the unchanging face of schooling; he denies history. His claim reflects a general ‘Silicon Valley’ ideology that the past is irrelevant and that only the future matters. In this way, old ideas can be recast as unprecedented, innovative and disruptive to a moribund educational system:

“What today’s technology-oriented education reformers claim is a new idea – ‘personalised learning’ – that was unattainable if not unimaginable until recent advances in computing and data analysis has actually been the goal of technology-oriented education reformers for almost a century.” (p9)

Teaching Machines is a well-researched and largely well-written illustration of this history, providing critical commentary on the notion that machines can be designed to afford effective learning with minimal intervention from teachers. It also illustrates that implementing machine-based learning at scale has long been a challenging ambition (again, an issue not always acknowledged by contemporary disruptors). Although her focus is American education, the story is relevant to the design and implementation of technology in similar school systems.

The scope of the book is confined to key figures and projects in the quest to develop mechanical tools for school learning and teaching, covering the years from about 1920 to 1970. The technology under discussion consists literally of boxes containing gears, rolls of paper or similar media, occasionally electric lights (and, in an early version, even a dispenser offering chocolate bars), all controlled by simple levers and keys for user input. Although now superseded, many of the main features of these pre-digital devices remain familiar today in much educational software:

    • to present a ‘unit’ of content, usually offering a question or task;
    • to provide a means for a learner to respond;
    • to provide feedback on the response (ideally immediate);
    • to move to a new ‘unit’, or repeat the current one, based on that feedback.

A central figure running through this story is B.F. Skinner, not so much the originator of Behaviourist psychology (its origins trace back to the early 1900s) as its post-war bête noire, doggedly promoting his mechanical teaching machine as a device to change the manner and quality of school-based learning. The story of Skinner’s machines covers a surprisingly short period, from about 1949 to about 1969, with antecedents in the 1930s. By the end of this period, however, there was little tangible effect on education beyond a few successful if not entirely rigorous trials.

Although today mechanical teaching machines tend to be known as ‘Skinner machines’, he was not the inventor of the concept. First World War recruitment revealed low levels of health and education in the population. Then, as now, education was perceived to be poorly managed, with overworked and ineffective teachers. At war’s end a new emphasis was placed on testing for attributes such as intelligence or retention of learning. A key figure here was Sidney Pressey and his ‘automatic teaching machine’, which first appeared in 1923. Pressey foresaw an “industrial revolution” in an education system he regarded as stuck at the “crude handicraft stage”. However, like Skinner after the Second World War, he experienced frustration in his efforts to develop his device as a commercial enterprise (though he was unfortunate to be doing this in the midst of the Great Depression).

Skinner’s own early work at Harvard in the 1930s focused on developing the idea of behavioural conditioning using devices that became famously known as “Skinner boxes”. With these he trained pigeons to obtain food when the correct lever was pressed. His insight that these techniques might be applied to school learning came in 1953 when he visited his daughter’s fourth grade class (the top end of UK Key Stage 2). As Skinner told the story, he was shocked to see that the classroom teaching of basic arithmetic failed to meet what he regarded as the minimum conditions for learning, namely progression matched to ability (i.e. ‘stimuli’ in behaviourist terms) and timely feedback (for the pigeons, an edible reward). Good students were held back, he observed, while those who needed help could not keep up. “The teacher”, he declared, “is out of date” and cannot provide adequate and timely feedback (i.e. reinforcement) to many children at once. Echoing Pressey (whom he met and corresponded with during these post-war years), Skinner also declared that an “industrial revolution in education” was needed.

Interestingly, even as Skinner promoted his approach to Behaviourism as the true science of learning, a paradigm shift was already beginning that would challenge this viewpoint. Cognitive Science began its emergence as the new science of psychology and human learning (an early conference was organised by Jerome Bruner in 1959), and the term ‘Artificial Intelligence’ as a framework for understanding human thinking was adopted by a community of scientists and engineers at the now famous Dartmouth College conference in 1956.

There is much in this book to interest students of the history and origins of contemporary education technology, as well as commentators on the current scene. In 1957 Sputnik stunned American confidence in its cultural and scientific prowess, leading to many reforms. In education, a new focus was brought to the teaching and learning of critical subjects such as mathematics and science. While this gave a boost to the idea of teaching machines, many problems about their design and usefulness remained to be solved. Content was a major issue, and the field of Programmed Instruction (PI) grew in importance as more work was done to enhance the quality of both what was ‘taught’ by machine and how it was taught through sequence and structure.

It is a familiar story to contemporary readers that efforts to verify the value of this new technology through large-scale school-based investigations were largely ineffective, while by contrast publishers’ projects to produce cheaper textbooks designed on PI principles (what today we might call branching texts) were relatively successful. Like Pressey before him, Skinner had chronic difficulties in persuading his manufacturing partners to produce machines to a quality benchmark that satisfied him. Even then it was hard to compete, because his machines still lacked content. He later entered into partnerships with encyclopedia publishers because their established door-to-door sales model helped to forge a strong link with the idea of home-based learning, and ‘free’ teaching machines were offered with every encyclopedia sale.

However, these efforts did little to enlarge the market or make teaching machines more acceptable to educators. In schools, the familiar issues of staff training, machine reliability and the scarcity of curriculum content dogged the enterprise. Moreover, by the end of the 1960s new ideas about pedagogy were emerging, including a backlash against Behaviourism and increased dissatisfaction with post-war efforts to reform the teaching of mathematics and science. But Watters argues that teaching machines did not simply die out; they were absorbed:

“… many of the key figures in the teaching machine movement did not suddenly stop working in teaching or training when the focus turned to computer-based education. Many of the ideas that propelled programmed instruction persisted and spread into new practices and new technologies.” (p249)

Here she makes a case for the idea that behaviourism and its core concept of conditioning did not disappear from the mainstream of educational technology, as its most articulate critics might argue, but continues to inform the design aims of present-day educational technology. Indeed, it is fundamental to the massively successful “industrial” character of our new digital technology culture for it too relies on sophisticated techniques of “behavioural engineering” to actively nudge (or push) our preferences, desires, ideas and opinions towards ends that we often serve unwittingly. This is an important point of view and one that needs careful consideration.

It’s a case of: “Behaviourism is dead! Long Live Behaviourism!”

If there are limitations to this valuable critique of teaching machines, it is perhaps that the idea of personalised learning, the subtitle of this book, does not get quite the detailed critical attention the reader might expect. We are also left unsure about the details of programmed instruction methods using the ‘old’ media of paper rolls, punched card discs, projected slides and so forth, so that we might ponder how much of that early work has been carried forward. There is also a lack of illustrations, leaving the reader to imagine the various machines described in the text, which are nowadays quite unfamiliar. Perhaps MIT Press could have included more. Audrey Watters has curated a good collection of images and technical descriptions on her blog, and more of that material could have been included to enhance this noteworthy book.

The Charisma Machine: The Life, Death, and Legacy of One Laptop per Child

Morgan G. Ames

Publisher: MIT Press

Published: October 2019

Review date: 4th April 2020

David Longman, MirandaNet

Teachers who work with and think about the role of computing in education, either as a tool or as an object worthy of study, will find this an important and interesting book. For some (including the author of this review) the idea that programming a computer could be a novel way to learn about important ideas in mathematics, language, physics or, a little recursively, computing has been a captivating way to think about the educational value of computers. This was particularly so when personal computers were becoming affordable for individuals and, with funding, for schools.

Such was the influence of Logo and the enormously creative work of Seymour Papert, Cynthia Solomon and many others at MIT during the 1960s and 1970s. Embedded in the artificial intelligence culture of the day, programming was viewed as a representational tool through which the mysteries of human thought could be unravelled. Hence Papert’s slogan, one of many, that learning through programming is a process of ‘thinking about thinking’.

We live in a different world today, but at that time when the power of computing was moving beyond its origins in electronics, mathematics and engineering into wider cultural arenas, there seemed no limit to the possibilities opened up by computation. In education, Seymour Papert became something of a prophet for this new tool that could bring a new perspective to ongoing philosophical and political debates about the purpose and control of education, especially at the school level.

The computer, it was said, could enable children to be free of a deterministic style of schooling that reduced learning to a prescribed sequence leading to predetermined outcomes regardless of individual potential or preference. School, in other words, was seen by many as a factory system steering children towards things that others had decided they should know and understand. Computers and programming, on the other hand, offered the potential for self-fulfilment through expressive thought experiments which reduced, if not eliminated, the need for instructional teaching. Papert and his milieu painted the computer as a tool of liberation and intellectual freedom through a philosophy of learning he termed ‘constructionism’.

Needless to say, it is the seductive ideas of liberation and freedom that are a key source of the charismatic power of constructionism. In The Charisma Machine, Morgan G. Ames presents a fascinating case study of a major effort to implement constructionist learning with computers at scale. Starting in 2007 she researched the project, including conducting ethnographic fieldwork in Paraguay, following the experiences of participants and planners in a pioneering implementation of the One Laptop Per Child (OLPC) project. The brainchild of Nicholas Negroponte, a like-minded enthusiast and colleague of Papert and a co-founder of the MIT Media Lab in 1985, the OLPC project was a significant experiment in constructionist learning. The ‘charisma machine’ of the book’s title is the XO laptop, which was created specifically for the OLPC project and drew on the intellectual and cultural background at MIT that informed its seductive rhetoric and ambitious educational goals.

The XO was intended to be a personal machine owned by each participating student in the project, for their use. Each machine included pre-loaded open-source software designed on constructionist principles to exemplify the core philosophy behind OLPC. Thus, it included an early version of Scratch, Turtle Art, and some other useful tools. The XO was also WiFi enabled and included a browser. Alongside the fieldwork looking at the use of the XO in school settings, Ames also explores a ‘historical anthropology’ of the intellectual culture at MIT. While the fieldwork illustrates a significant mismatch between the intentions of OLPC and the everyday uses to which the XO was put, her exploration of the cultural assumptions of the project is equally revealing and helps to explain many of the project’s shortcomings. 

Some of the weaknesses of OLPC in Paraguay are by now familiar because they apply to almost any school trying to integrate laptop use into students’ lives. A lack of supporting infrastructure for battery charging and WiFi, particularly in rural areas, combined with the overstated robustness of the XO, undermined the one-to-one use of the machines. In classroom settings the XO often had to be shared, sometimes among three or more students, limiting the intended usefulness of the device as a personal machine, or teachers would have to design a curriculum that students could follow either on paper or with a laptop. This, combined with a lack of professional development and training, left many teachers poorly prepared to work with a constructionist style of learning that ran counter to the more structured expectations of the school curriculum.

Urban schools were better placed, but even here the expectation of a constructionist model of learning, in which kids would learn about computing through their own tinkering with software, was thwarted by the overwhelming preference of the students for downloading music and videos from the internet. The centrality of the more entertainment-focused aspects of the world wide web as a source of cultural interest seems to have been thoroughly underestimated by the OLPC project designers. This was almost certainly related to the strength of their constructionist conviction that children would naturally indulge their curiosity and explore ways to make the computer do interesting things.

The exploration of the assumptions of the project designers forms a valuable and fascinating aspect of this book, and Ames shows convincingly that an implicit cultural hubris had a deep effect on the OLPC’s design. The notion of constructionism as conceived by the MIT community rested, Ames argues, on a gendered view of the idealised mindset of the constructionist learner, namely the “technically precocious boy”. This image of the socially isolated boy, tinkering away in his bedroom on fascinating projects independently of the formalised schooling he is obliged to endure, undoubtedly pervades the cultural history of Silicon Valley, where the household garage or bedroom den was a frequent site of technological invention and insight. However, the two main facets of this story – the socially isolated boy and the curiosity-crushing nature of the school curriculum – turn out to be almost entirely untrue. Almost without exception, the great minds inhabiting MIT (mostly men) experienced expensive and lengthy formal education. While many also tinkered, none were lone geniuses.

This sexist myth clearly influenced OLPC projects on the ground. In one telling example Ames describes how, as the project developed, the project leaders searched for examples of constructionist learners who were using their XO to unpick the power of computation for learning. Few were found but one boy and one girl, in particular, were identified as concrete examples of the reality of constructionism at work. Yet when it was time to promote this positive effect it was the boy who was selected to travel to MIT and participate in an OLPC conference where he was, perhaps, regarded as a specimen of constructionism’s success.

This timely, well-written book will be of interest to anyone working in the field of educational technology. Even for readers who have never fully subscribed to constructionist learning there is much here to ponder about how various myths and assumptions can influence our actions in the field. The values that shaped the intellectual and social culture of the OLPC project remain influential to this day. On the positive side, constructionism as a model of teaching and learning remains active. However, sexist prejudice continues to be rife throughout the technology industry, and cultural imperialism coupled with billionaire libertarianism continues to taint the force for good that so many global platforms, including OLPC at its inception, see themselves as representing.

Ames’ analysis does not provide a predictive tool. While she shows that the same charismatic stories do tend to recur, we cannot say in advance how charisma will appear and then seduce us nor the particular forms that persuasive rhetoric may take. The importance of this book is to alert us to how ideas about technology are situated in a context of assumptions and prejudices. It is up to us to be more confident about putting forward a critique that helps to balance overstated aspirations if and when they appear. This may be especially relevant in today’s educational landscape where the new curriculum emphasis on acquiring knowledge about computer science fuels expectations as strong as those promoted by Papert and Negroponte.

Should Robots Replace Teachers? (v2)

Should Robots Replace Teachers?

Neil Selwyn

Publisher: Polity

ISBN: 978-1509528967

Published: Sep 2019

Review date: 20th October 2019

David Longman, TPEA

The automation of professional expertise is at a tipping point and education may be the next to succumb. Machine learning (often loosely referred to as AI) lies at the heart of this transformation. It makes possible the automation of judgements that usually rely on teachers’ accumulated wisdom in both knowledge and relationships. Through teachers, students acquire an understanding of subjects and disciplines and they do so with the essential social and personal support that teachers can provide.

Neil Selwyn’s new book is a measured and accessible discussion of how new computational tools might change or diminish a teacher’s professional expertise in both knowledge and relationships. He argues that a wider critical debate about the impact of automation in crucial areas such as assessment, pastoral support, and content teaching is overdue, for “it is worrying that … [it] is not already provoking great consternation and debate throughout education”.

While the traditional boundaries of the management and organisation of higher education have been opened to the influence and investment of many external agencies and entrepreneurs, the pace of change may still be regarded by many as too slow. Change is needed but it can leave the education profession vulnerable to bad ideas as well as good. Unless we pay attention, the automation of teaching could lead to the diminishment of teachers and student learning everywhere.

For example, in higher education ‘intelligent agents’ (aka chatbots) are already making their mark in responding conversationally to student enquiries. Purportedly, chatbots improve student engagement and motivation (including offers of counselling support) while freeing tutors or administrators from supposedly burdensome FAQs about course content and related issues. Time saving is often a key marketing pitch but the effectiveness of these software machines is less well understood and time-saving may be illusory.

The use of Turnitin, a widely used system for detecting plagiarism in student assignments, may seem on the face of it a boon to assessment quality. However, Turnitin also illustrates the potential risks associated with such automation. First, all student writing is treated as a potential fraud and this undermines the crucial bond of trust between teacher and student. Second, Turnitin’s ongoing development is bringing us to a tipping point where, by applying machine learning to the recognition of a student’s writing style, the value of an educator’s expertise in evaluating this critical aspect of learning is diminished. Such a tool may (or may not) lead to improvements in plagiarism detection but it also represents a first step in the automation of academic judgement that can subvert a key element of academic expertise.

Selwyn has little to say about how these technologies find their way into our classrooms, workshops and lecture halls. What kind of policy-making processes drive their implementation? The usual forms seem to have given way to influential organisational and corporate networks such as Apple, Microsoft or Google, all capable of working at scale. These ‘fast policy’ networks are able to influence practice directly with tempting technologies and considerable investment.

Selwyn’s book is timely. The extraordinarily rapid emergence of influential AI-based technologies in higher education should generate significant debate and help us to keep ahead of the machines:

“… debates about AI and education need to move on from concerns over getting AI to work like a human teacher. The question, instead, should be about distinctly non-human forms of AI-driven technologies that could be imagined, planned and created for educational purposes.” (127-8)

While it is not clear what such “distinctly non-human forms” might look like or be capable of, it is an important idea. Teachers need to work together with machines “… on their own terms …” to improve the quality of education. This is not a replacement but a partnership that preserves and amplifies the important qualities of human teachers. Above all, for this partnership to work, educators must ensure that they have a clear and articulate voice guiding the changing landscape of professional practice with technology.

Should Robots Replace Teachers? (v1)

Should Robots Replace Teachers?

Neil Selwyn

Publisher: Polity

ISBN: 978-1509528967

Published: Sep 2019

Reviewed: 10 Oct 2019

David Longman, TPEA

Neil Selwyn is a well-known academic who has written several careful books on education technology. With his new book, “Should Robots Replace Teachers?”, he once again provides a clearly written, accessible text that can lead the time-limited educator and teacher to consider the key issues and questions raised by a new generation of education technologies built around techniques of machine learning and artificial intelligence.

It is a lively time to be involved in teaching and learning. What are effective ways to teach, and for students to learn, in a world that, by all accounts, is changing dramatically and, some observers argue, in an education system that is not well adapted to the times? Intense debate on these broad issues often prevails, as it should, and Selwyn’s book ought to encourage well-informed contributions about many of the significant features of the rapidly evolving education technology environment.

Over the last decade in the UK there has been a noticeable shift away from conventional information technology in education. In the school curriculum there is now a stronger and more central role for computer science as a taught subject (though this development has its critics), and across all phases of education new kinds of computational tools are finding their way into classrooms and lecture halls. This new generation of tools aspires to provide active and rich cognitive support for student learning and to automate at least some aspects of a teacher’s job (usually in the name of easing workload). Education technology is becoming less passive than it used to be, when it did nothing until someone used it for some purpose. These new forms of education technology have the potential to become active agents in the relationship between teaching and learning, guiding, diagnosing and providing feedback on progress to teachers and learners.

And there lies the challenge! This book is for teachers and educators who want to be equipped with a critical understanding of the looming transformation of education that new AI-based digital technologies intend to bring to classrooms, workshops and lecture halls. Selwyn, ever constructive but critical, rightly notes that:

“… it is worrying that the growing presence of AI in classrooms is not already provoking great consternation and debate throughout education.” (25)

“Despite the concerns raised in this book, AI in education is still not seen as a particularly contentious issue amidst broader debates around education.” (119)

There are many ways to explain this lack of concern: topic fatigue (we’re all a bit tired of hearing about education technology); information overload (there’s just too much information to absorb or understand); private sector promotion (maybe it’s all just overblown marketing hype); or expert endorsement (academics and technologists in the know say it’s great so why argue). Perhaps, too, we simply remain wedded to a belief in the essentially human nature of teaching so that, in our minds, teaching lies beyond significant automation and therefore there is minimal risk.

But these are precisely the reasons why educators should be critical, and this book provides a strong foundation for considering the implications of this new generation of education technologies without submerging the reader in technical detail.

Responses to the questions raised by Selwyn are urgently required. The social and political context of education is both dynamic and, in the UK at least, more contested perhaps than it has been for at least three decades. The traditional boundaries of management, ownership and organisation of education have been opened up to the influence and investment of external agencies and entrepreneurs of all kinds. The free-for-all of digital capitalism is seeking new horizons to exploit while the changing status of teachers has left the profession vulnerable to the influences of politically and commercially motivated marketing.

Selwyn can only point us in the right direction, equipping us to ask good questions for which there may be no ‘right’ answers. The important goal is to provoke debate: “… there is plenty of reason to expect the increased AI-driven automation of teaching to lead to the diminishment of teachers, teaching and education.” (121). This book, says Selwyn, “… is best seen as a provocation …” (131), and it takes us to the important stage of helping to make clear what the issues are. It raises “… a host of informed and pointed questions …” and is “… an important first step in achieving meaningful and sustainable change.” (132). These claims for the book’s purpose are justified, and this is exactly what the book achieves.

Education technology is fast becoming a more active agent in the relationship between teachers and learners. It is no longer simply a matter of knowing how to curate or limit one’s ‘digital footprint’ (vital though this is) because data extraction and automated interpretation have become so powerful. Software machines now engage in various forms of predictive modelling, extrapolating from what is already known about teachers or students to recommendations and judgements about where they should be in the future. Of course, this is what teachers do on a daily basis, but that machines might do this automatically should concern every educator: the primary teacher enthusiastically using apps like ClassDojo, the secondary school teacher seeking more effective oversight of pupil behaviour with AS Tracking, the college lecturer aiming to personalise learning with tools like Knewton, or the university lecturer concerned about student plagiarism.

Overall, the book brings order and clarity to the substantial arguments and questions about the purpose, value and efficacy of AI-based education technology. Chapter 2 is the only one that examines actual robots, i.e. devices that in whole or in part resemble a human form and behaviour. These are either programmable (their value for curriculum learning lying in the process of creating behaviours through coding) or they arrive ‘out of the box’ pre-programmed to respond interactively with children and students. While such devices are perhaps not so common in classrooms, the chapter raises profound issues about the role of robot ‘companions’, particularly when these devices purport to offer emotional support to needy individuals and/or guide a teacher’s attention towards those individuals it has identified.

These themes of bot-like behaviour pervade the entire book. Chapter 4 discusses intelligent tutoring systems and personal assistants, which are simply more abstract, less obviously humanoid software systems that nevertheless engage children and students in various forms of emulated dialogue related to a curriculum. Such tools are by no means new, and prototypes began to appear in the early 1960s. Today they have become both more powerful in terms of software and much cheaper to implement, using everyday kit such as PCs, tablets and smartphones connected to the ubiquitous cloud infrastructures of the commercial providers of such systems.

In spite of the lofty claims that are sometimes made for these tools, they are not much more than what was once known as computer-assisted instruction or programmed learning, being built typically on a coached instruction model of one-to-one teaching. Where they differ is in the use they make of the data they absorb from interactions with students based on myriad data signals. These data, used to guide and structure the learning pathways a learner might follow, can include many types of biometric data to infer a range of more subtle personality features such as motivation, attitude or emotional states.

This is entering new territory where algorithms attempt to characterise the state of mind of learners beyond their local position in a sequence of curriculum content. In turn, this can lead to decisions about the attainment, capability or disposition of the student and may invoke actions such as repetition of material, testing and assessment, promotion to the next stage of curriculum content or, in the name of diagnostic assessment, flagging problematic issues that may require alternative interventions.

Thankfully, however, as Selwyn correctly notes, teaching is not simply a matter of directing learners what to do next. It also involves explanation and reasoning which, at their present stage of development, these systems are probably incapable of providing, if indeed they will ever be capable of doing so. And as Selwyn also reminds us, the teacher’s personality and the performative, body-centred character of much good teaching are presently well beyond the capabilities of such technologies.

Moreover, and importantly for those concerned about ethics and privacy, these data are inevitably captured and integrated into cloud-based collective representations of learner behaviour. As readers of this review will be well aware, the social and political issues surrounding the gathering of personal data into private ownership in pursuit of economic gain are among the most toxic in public discourse today. Selwyn acknowledges these issues only indirectly, but they cannot be ignored:

“The suggestion of intelligent tutoring being rolled out across education systems needs to be taken seriously. if we are going to allow ourselves to have a learning companion for life then we need to think carefully about what we are letting ourselves in for.” (75) 

The book ends with many questions but no straightforward answers; just plenty to think about. It provides an agenda that should frame any discussions about how computational technologies might be deployed for teaching and learning. Is the continuous monitoring and ‘nudging’ of learner behaviour the right way to go? How does this change our definitions of teaching? If AI is pragmatically ‘better’ at certain types of activity are we confident that the data it uses to generate its outputs are useful and accurate?

Above all, are we exchanging the technically smart for the socially stupid? Today, these new technologies are not usually transparent in the sense that the basis for judgements about attainment or capability can be explained clearly. Teachers are people who have learned what they know, so they also know something about how to learn it and have empathy with others doing the same. Teachers are, by definition, social beings, and they can use the whole range of social activity, from thinking aloud to bodily performance, to enable and encourage effective learning. They can compromise, negotiate, be spontaneous, or deviate when necessary. None of these important qualities is yet possible for advanced computational tools, and their absence must qualify discussions about when and how to use them.

If there is an omission in Selwyn’s book, it is that he has nothing to say about how these technologies find their way into classrooms, workshops, studios or lecture halls. What kind of policy-making processes drive their implementation? According to more recent work on this topic (e.g. by Ben Williamson or Stephen Ball and his colleagues), we are no longer steered by traditional forms of policy-making at governmental level; instead we have entered an era of policy-making-on-the-go, driven by non-governmental policy networks comprising organisations with strong vested interests. We have only to observe the prominent role of influential corporations such as Apple, Microsoft or Google in fostering the use of powerful cloud-supported devices in schools.

Moreover, these actors are not only extraordinary developers and manufacturers of computational technologies but also extraordinarily powerful lobbyists on behalf of the ‘robotisation’ of education. It ought to be clear to the concerned professional that today the real players in framing, selling and implementing the education technology agenda are those same ‘surveillance capitalists’ that Shoshana Zuboff has discussed at length.

However, Selwyn suggests there could be a way forward in the professional race to keep ahead of the machines. It is his final provocative thought, and one that should keep us busy:

“Public policy and professional debates about AI and education need to move on from concerns over getting AI to work like a human teacher. the question, instead, should be about distinctly non-human forms of AI-driven technologies that could be imagined planned and created for educational purposes.” (127-8)

It is not clear what such “distinctly non-human forms” might look like or be capable of, but it is an important idea. For, as he also writes, “… it is crucial that teachers work together with machines on their own terms … in ways that …improve the quality of and the nature of the education that results.”(126)

These two provocations work together. They speak of partnership rather than replacement, a partnership that both preserves and amplifies the important qualities of the human being without attempting to replace or automate them. Above all, for this partnership to work, educational professionals must strive to ensure that they have a clear and loud voice in the changed landscape of policy-making.