Earlier this year I started delivering a series of webinars on AI for AHEP – the Association of Higher Education Professionals. The attendance has been strong. Consistently strong. And the questions coming in have told me something I think is worth sharing publicly, because it shapes why I’ve built the Responsible AI Toolkit the way I have.
The people attending these sessions are not novices. They’re experienced professionals working in higher education – in registry, in quality, in student services, in data and governance roles. They are, by any reasonable measure, exactly the kind of people who should feel equipped to navigate the AI landscape.
And yet the questions keep coming back to the same places.
Can I use this tool for that task? What happens to my data when I type something into ChatGPT? Is it safe to use AI at home, even for personal things? What if I make a mistake – who is responsible?
What this tells us
The first thing it tells us is that demand for clarity hasn’t peaked. If anything, it’s growing. As AI tools become more embedded in everyday working life – and in everyday personal life – the number of people who need to understand what responsible use looks like is expanding, not contracting.
The second thing it tells us is that uncertainty is not a knowledge problem alone. The people asking these questions have access to information. They can read the ICO’s AI guidance. They can find policy documents. What they’re lacking is something more specific: a confident sense of what the rules mean for them, in their role, in their institution, doing their actual work.
That distinction matters, because it means that publishing a policy document – even a good one – is not sufficient on its own.
The gap between policy and practice
Most institutions are aware that they need an AI policy. Many have one, or are in the process of drafting one. What’s much less common is the supporting infrastructure that turns that policy from a document into actual governance.
Think about what a member of staff needs to use AI responsibly at work. They need to know what the policy says, yes. But they also need practical guidance on what that means in day-to-day scenarios. They need procurement teams who know what questions to ask before a new AI tool is signed off. They need senior leaders who understand the risk landscape well enough to provide meaningful oversight. They need a process for assessing new use cases before they’re deployed. And they need a way of raising concerns if something doesn’t feel right.
A policy document doesn’t provide any of those things on its own. It’s the foundation – but a foundation isn’t a building.
Why the questions about personal use matter
One of the things that struck me most across these webinars was how often the uncertainty extended beyond the workplace. People asked whether it was safe to use AI tools at home. Whether their personal data was at risk. Whether they could trust the outputs for personal decisions.
This isn’t a distraction from the professional governance question – it’s directly connected to it. If people don’t have a confident mental model for how AI tools work and what the risks are, they can’t apply that model in a professional context either. Governance structures sit on top of individual understanding, not instead of it.
It’s partly why I developed the ICE/BERG framework alongside the governance toolkit. ICE/BERG is designed to give individuals a practical structure for every AI interaction – not as a compliance exercise, but as a way of developing genuine confidence and judgement over time. The two things work together: institutional governance sets the boundaries and structures, individual literacy is what makes those structures real in practice.
What the toolkit is designed to do
The Responsible AI Toolkit was built from the ground up for higher education – not adapted from a generic template, and not written for compliance specialists alone. It comprises six documents, each addressing a different part of the governance gap.
- The Responsible AI Policy gives institutions the foundational framework for board or SLT approval.
- The AI Use-Case Risk Assessment Framework gives DPOs and operational leads a structured process for evaluating new tools before deployment.
- The Staff Guidance and Acceptable Use document gives all staff the plain-English clarity that a policy document alone doesn’t provide – including answers to the kind of questions coming up in the AHEP webinars.
- The AI Procurement Checklist equips procurement teams with the specific questions to ask AI suppliers, many of which standard due diligence processes miss entirely.
- The AI Governance and Oversight Checklist gives senior leaders a self-assessment they can take to committee, with an action plan attached.
- The Senior Leaders Briefing gives governing bodies the overview they need to provide meaningful scrutiny rather than nodding through an executive report they don’t have the context to interrogate.
Six documents, each doing a different job, designed to work together as a coherent system rather than a folder of separate files.
Who it’s for – and when
The institutions that need this most urgently are those where AI use is already happening without adequate governance in place. That’s most of them. Staff are using tools, departments are adopting platforms with AI features baked in, and the policy – if it exists at all – hasn’t kept pace.
It’s also particularly relevant for small and specialist providers, where there’s rarely a dedicated AI governance function, where the DPO is often doing three other jobs simultaneously, and where the idea of building a comprehensive governance framework from scratch feels overwhelming. The toolkit is designed to make that achievable without a large consultancy budget or months of internal development time.
And for institutions that already have a policy: the honest question is whether the rest of the infrastructure is in place. A policy without a risk assessment process, without staff guidance, without procurement controls, without committee-level oversight is a policy that exists on paper rather than in practice.
The sector is asking for this
The AHEP webinar attendance isn’t an anomaly. It reflects something real: the sector wants to get this right, people are actively looking for help, and the uncertainty is genuine rather than performative. What’s missing for most institutions isn’t awareness that AI governance matters. It’s a practical, proportionate, sector-specific way of doing something about it.
That’s what the Responsible AI Toolkit is designed to be.