
Excitement about AI is matched in equal measure by scepticism, uncertainty, even fear. Seemingly before anyone knew what was happening, students were experimenting with free online AI tools, and concerns about cheating and plagiarism emerged. Meanwhile, AI is prevalent in the workplace, and enterprises are embracing its potential to increase productivity and innovation. Higher education has a central role to play in preparing students for a world of work with AI. Yet universities must adopt AI safely, understand its limitations and implement it responsibly.
AI gains momentum in the workplace
Technology moves quickly, and AI moves quickest of all. In Q4 2024, KPMG reported that 68% of leaders planned to invest $50 million to $250 million in GenAI over the following 12 months, up from 45% less than a year earlier.
They anticipate transformational change from such a substantial investment; in Q2 2025, more than eight out of ten (82%) leaders agreed AI will change their industry’s competitive landscape.
They expect productivity, profitability, performance and quality improvements as a result, but they have a skills hurdle to overcome first. Over half (59%) say they have technical skills gaps, and 47% say the workforce is resistant to change.
Pivoting to new technologies is never easy; workers must learn new skills and operate in different ways. But enterprises may reasonably expect that tomorrow’s graduates – their future employees – will bring new abilities to the workplace that they can unite with the established workforce’s expertise to create a formidable team.
AI pressure on education
This expectation puts pressure on higher education establishments to grasp AI’s potential. Indeed, graduates think they should. In one survey, 70% said basic training in GenAI should be part of their courses, yet more than half (55%) said their degrees didn’t prepare them for AI in the workplace. Another survey found that only 36% of students received AI skills training from their university.
The VP for Information Technology at the University of Michigan has called for tuition that builds AI literacy, saying no student should graduate without at least one core course in AI or substantial exposure to AI tools.
Yet AI has had a rocky start in education. Institutions that feared students would pass off AI-generated work as their own may feel vindicated by news of almost 7,000 cases of AI-based cheating at UK universities in 2023/24.
For higher education to take on this risk, it must stand to gain significantly from AI. Of course, it doesn’t have much of a choice. As already explored, AI is out there, in the world – in our workplaces and our home lives. Universities take their responsibility to equip students for the future extremely seriously, as well they might. To neglect AI is to neglect employability and widen a skills gap that can only damage the UK economically on the world stage.
It isn’t all doom and gloom, though. Far from it. Whilst shouty headlines play up AI’s downside, evidence quietly mounts for its upside, one that includes helping students to learn effectively, so they are more likely to perform well and achieve good learning outcomes.
Evidence that AI can benefit students and educators
A Harvard study, currently under peer review, compared students learning with AI against students learning without it. It found higher levels of engagement, motivation and efficiency, alongside enhanced learning gains, in the AI group.
A college in Indiana explored using AI to identify patterns of online behaviour that indicated students at risk of failing. The college identified 16,000 individuals and quickly put support in place for them. As a result, 3,000 of the contacted students averted failure and achieved a grade C or above. Since the initial pilot, the college estimates the project has helped over 34,000 students.
Using AI positively in this way helps notoriously time-poor educators. Demands on them extend beyond classroom teaching to lesson planning, grading, one-to-one student interaction and a host of administrative tasks – all before they’ve given any attention to maintaining their own subject knowledge and development.
AI can be a real time-saver here, lightening the educator’s load of marking, administration and lesson planning. That gives tutors more time for tuition and for supporting students, aided by the kind of advanced data insights AI can provide, such as those the Indiana college drew on.
Research has shown that teachers who use AI for grading can save an average of three and a half hours a week, and those who use it to assist with lesson planning can save another three hours a week.
Anything that streamlines activities, automates repetitive tasks or cuts the time it takes to achieve something is a bonus. Time freed from inputting data and sending out emails can be spent instead on personalised tutoring that benefits students.
In the right hands, AI can enrich rather than corrupt the learning experience. It can help develop dynamic content, such as simulations and interactive quizzes, to capture and keep students’ attention.
It can help personalise learning by tailoring materials according to individual learning styles and needs, for optimised outcomes. Importantly, it can also assist in adapting learning content for accessibility.
How to integrate AI responsibly
These positive aspects of AI in teaching and learning present a compelling counterargument to the risks AI brings, but it must be integrated responsibly.
It’s no good producing AI-literate educators and students if they don’t know how to assess AI with a critical eye and use it ethically. As universities evolve their strategies to equip graduates for jobs that will use AI, they must safeguard their academic standards.
Institutions must challenge AI-related misconduct, but to do so they need unambiguous policies and guidelines that make it clear when misconduct has occurred.
Students and tutors need training to recognise bias that can be present in AI-generated responses and not to implicitly trust that content. AI can assist in brainstorming and gathering information – it is ideally placed to scan, retrieve and analyse large data sets – but its output shouldn’t be presented as original work. There are issues of plagiarism, copyright infringement and inaccuracy to understand and protect against.
‘Conversations’ with AI tools shouldn’t be mistaken for private interactions. Confidential and personal data should be protected, which means keeping it out of large language models.
Higher education, like all organisations and indeed individuals, must recognise AI’s limitations, without dismissing its potential. All online tools have their purpose; it is a case of recognising AI’s and enforcing appropriate boundaries for its safe and beneficial use.
AI exploded onto the scene and early adopters were experimenting with it whilst rules and procedures were catching up. As AI now becomes widespread across universities, educators and decision makers have a vital role to play in adopting it responsibly and helping tutors and students unlock its full potential. Transparent, principled implementation is a must for education to achieve an important objective – a next generation of AI-literate professionals, equipped to use this exciting technology effectively and ethically.
