
Machine unlearning: AI, neoliberalism and universities in crisis

Could Artificial Intelligence render the university obsolete? Katy Hayward explores what is lost when human thought is made subordinate to the machine


An illustration of a hand writing out binary code on a paper with a pen, with a robotic hand controlling the pen from above

Technology has replaced humans in certain functions (eg manufacturing), enhanced their capabilities in others (eg microsurgery) and enabled them to do things that are entirely new (eg to fly). What purportedly makes Artificial Intelligence (AI) different in technological terms is that it does all the above – replaces, enhances and enables what we can do – plus it can ‘learn’. AI centres on the ability of a machine to categorise and to calculate, ie to recognise patterns and make predictions based on the data it has in its corpus. Because machines can now learn by themselves – for example, through a large language model (LLM) that finds patterns and generates content without direct human supervision – the capacities of AI can increase without human intervention.

The apocalyptic scenarios are all too readily imagined, but what really matters is not so much the apparent cognitive powers gained by the machine as how those powers change the way we behave. In a world in which AI is integrated into every sphere of human productivity and social activity, the ones learning the most and the fastest are the machines. This will constitute a social revolution on an extraordinary scale not because AI is ‘thinking’ (it is categorising data, identifying patterns and making reasonable guesses) but because we are beginning to treat it as if it were.

As I will explain below, this is becoming common practice in higher education (HE) in the UK. Such behaviour poses a direct challenge to the university as the public institution best placed to protect and nurture independent, informed and free human thought. Unfortunately, the inculcation of neoliberalism in UK HE, and the subsequent rationalisation of university education and expertise in terms of market functionality, mean that not only is the university less able to meet the challenge, but it has also hastened its own demise.

Acquiring not inquiring

The predominant response of the HE sector in the UK to the rapid expansion of AI has been to embrace its capacity to replace, enhance and enable various academic functions. This is not due to a lack of imagination among university leaders but more substantively a consequence of the way the modern university understands knowledge itself. A hundred and twenty years ago, the philosopher and educationalist John Dewey bemoaned the educational practices in which ‘acquiring takes the place of inquiring’. Today, this same problem manifests in the neoliberal university, in which students’ learning is primarily about accumulating information and optimising it for application in the workplace.

This is reflected in a passive conception of learning, in which the lecturer provides information for the student, who then reproduces it for assessments, which then give them credit towards the qualification necessary for employment. An upshot of this is that, if students can access information themselves (and in greater quantities than the lecturer could possibly offer), the role of the lecturer qua teacher is apparently redundant. Nevertheless, the role of the academic as an assessor remains – so far – valued by the university. Although there is an expectation that AI will increasingly be used to assess student assignments, for the moment the HE edifice rests on being able to measure the academic capability of each student in relation to others. Measurability is a core tenet of neoliberalism because it enables comparison and competition. This is why, to date, most debate in HE around AI has focused on its impact on assessment.

Academic assessment, by and large, seeks to test the ability of an individual to identify relevant material, summarise it and apply it to answer a specific question. But these are precisely the tasks that generative AI can now perform for us, at least in elementary form. Evidence of independent, ‘critical thinking’ and well-founded analysis will see a student climb into higher grade bands. But for the majority, being able to demonstrate skills equal to those of AI will get them a passable degree. This raises the question: why wouldn’t they use AI?

If the working assumption of the university is that graduates need skills to use AI more than to expand and apply their own intelligence, it raises the question of how the powers of critical and creative thought might be nurtured among the future body politic

Generative (and free-to-access) models such as ChatGPT (GPT-3.5) can respond to a submitted prompt and return a plausible-seeming output. It will contain the essentials of what academic assessors are looking for: a structured response with key points, main schools of thought, some critique and a list of sources (though these are rarely actually used). As AI systems improve and ‘hallucinate’ less, it is becoming increasingly difficult to distinguish between the work of an AI ‘assistant’ and that of a moderately competent student. On the face of it, this development destabilises the very foundations of HE.

Rather than considering why the model of learning and teaching it has mindlessly perpetuated for so long is inadequate and limited, the university has responded by welcoming the virus that exposed the model’s shortcomings as if it were an antidote. Thus, the HE sector now actively discourages academics from using assessment methods in which ‘the use of generative AI tools are not part of the assessment brief’. Instead, the Quality Assurance Agency for Higher Education advises having coursework assignments ‘that integrate generative AI by design’, eg with use of the tools being part of the assessment brief. The implication, of course, is that graduates need skills to use AI more than they need to hone their skills in expanding and applying their own intelligence. If this is the working assumption of the university, it raises the question of how the powers of critical and creative thought might possibly be nurtured among the future body politic.

The university’s response to AI

Aware of the significance of the AI challenge to my role as an academic, I look for guidance from my employer and am directed to peruse its online ‘AI hub’. The principal staff guidance offered here is a short pamphlet that claims to set out ‘the principles we commit to uphold’. These are: ‘responsible use’, ‘AI best practice’, ‘integrity’, ‘support’ and ‘equitable access’. Even if we were to accept these headings as ‘principles’, they do not seem particularly well considered for an institution for which information, accuracy and veracity should be all-important. Throughout the guidance, the reader is reminded that ‘care must be taken’ and ‘consideration must be given’ when using AI, but no detail is offered as to what such care or consideration would entail or be guided by. The same is true of the other main pamphlet for staff on the AI hub, The Trailblazer’s Guide: Surviving and Thriving in an AI Era.

Turning over a front cover featuring AI-generated images of a shapely Lara-Croftesque academic and a giant humanoid robot roaming a deserted moonscape under the glow of nuclear apocalypse, I read how the university would like me to ‘adopt AI’. The document is thin on text and substance. However, closer analysis reveals that academic standards, as we would understand them, are being quietly set aside. Bearing in mind that the purpose of assessment is purportedly to measure students’ learning, the guidance on it is telling:

‘Obviously, it is essential we use assessment that is robust and trustworthy. Alongside this, it is equally essential that we use assessment that is authentic and meaningful – assessment that validates skills that are unneeded or unimportant is of little value in itself. This is particularly important in a world where AI will replace certain job functions.’

Authenticity, meaning and value, in this instance, are determined solely by functionality in the job market. And this is of equal importance to trustworthiness and robustness – qualities that would typically have been associated with academic rigour and standards. Furthermore, the academic is not to assess skills that are ‘unneeded or unimportant’ in the workplace. To do so, it is implied, would be to make graduates all the more likely to be redundant in a world of AI-everywhere, where the function of the human is determined by that of the machine. No consideration is given to what human intelligence can do that AI cannot – let alone to the fact that such human talents may not be ‘employability skills’, and are thus very unlikely to be assessed or taught in the present-day university anyway.

The responsibility should not be on individual students to deal with the use of AI as a moral dilemma but on universities to collaborate to establish explicit rules on its ethical use in education and assessment

If academic rigour and freedom (for example, to decide what students should be taught) are being subtly undermined (if not openly curtailed), it is unsurprising that the same is true of academic integrity. This ethical practice entails ‘commitment to the fundamental values of honesty, trust, fairness, respect, responsibility and courage’ in the investigation and completion of academic tasks. The topic is addressed in another staff guide on the AI hub, though not in a reassuring way:

‘To effectively prevent cheating, it is essential to address its root causes. Research shows this can be achieved by promoting a strong culture of academic integrity that emphasises the importance of honesty and integrity. Our ultimate objective is to prepare students for the dynamic and evolving employment contexts of the future.’

Here the university authorities tell us that the root causes of cheating should be addressed by promoting a ‘culture of academic integrity’. However, that is not addressing a root cause but proposing a counterbalance. To say, as it does, that ‘research shows this’ (without any reference or elaboration) demonstrates that, even for the university, the word ‘research’ is merely a token of quality and evidence. There is no definition of what academic integrity is, merely the use of rhetorical devices to indicate we should take it seriously.

Another sentence, eerily reminiscent of the style of AI-generated text, offers the tautological statement that a culture of integrity emphasises… ‘integrity’. But before we get too concerned about academic integrity, the third and final sentence on the topic sets it in perspective. The ‘ultimate objective’ of the university is nothing more than to prepare students for the labour market. The use of the adjectives ‘dynamic and evolving’ to describe the ‘employment context’ that the university is endeavouring to serve underscores the point: we can no longer do things the way we used to. The assumption is that the AI tools will get better (in contrast to human intelligence) and that resistance is worse than futile: it is self-defeating.

Human intelligence in the advance of humankind

While universities rush to advise on how to measure the performance of students (all-important for the neoliberal model), little thought has been given to what all this means for academics themselves. What function do we perform in the post-AI marketplace? It seems the university believes that its employees’ intelligence as authors, researchers, teachers and assessors can be convincingly mimicked or replaced, and that only full-scale adoption of AI can prevent our professional redundancy. HE thus becomes about training human intelligence to serve Artificial Intelligence. The implications are grave. The production of work by staff and students, and the means by which we judge its quality and value, are to have an increasingly remote relationship to human thought and judgement.

AI is not necessarily or inevitably a direct threat to the university and its purpose. What makes it dangerous is the disjunction between how we think of the university and what it is actually doing. Writing as neoliberalism began to take hold in the US academy, the sociologist Alan Wolfe urged social scientists to remind people of their role as ‘moral agents’ and that they should ‘work actively and deliberately at protecting what is social about themselves’. Such exhortations seem particularly profound in the digital age. AI machines are neither moral nor social. Their integration with human intelligence does not make them so either but instead raises new questions about our own responsibility as moral and social beings.

Academic integrity is a good test for this. Research shows that students’ own sense of integrity makes them much less likely to resort to using ChatGPT, despite all the reasons why they might (eg performance expectancy, social influence, technology self-efficacy). The responsibility, however, should not be on individual students to deal with this as a moral dilemma but on universities to collaborate to establish explicit rules on the ethical use of AI in education and assessment. Something along these lines, the AI in Education initiative, is already underway for schools and colleges in the UK.

Transformation as a result of learning and knowledge is what enables us to be agents of history, as well as moral and social actors, as only humans can be. As Thomas Docherty has brilliantly argued, to advance humankind we need to transcend the boundaries of the present as well as the boundaries of the self. After all, to think, according to Dewey, is to ‘consider the bearing of the occurrence upon what may be but is not yet’.

If we continue to view AI primarily in terms of the relative redundancy of human intelligence, we are refusing to think – and the battle is lost already. The university, with its privileges of age, autonomy and authority, holds the key to liberating human intelligence for the AI challenges ahead. Whether it does so depends on its willingness to foster renewed freedom of thought among staff and students alike – not for the marketplace but for society itself.

This article first appeared in Issue #245 Beyond the Ballots.

Katy Hayward is professor of political sociology at Queen’s University Belfast and 2023/24 Europe’s Futures fellow (IWM/ERSTE Stiftung)
