
Using AI Safely with Student Data
Artificial intelligence is already part of everyday professional life in higher and further education. Staff are using it to draft communications, summarise documents, analyse data, and support decisions. Most of the time, that happens without any formal review of whether it is appropriate.
That gap matters. Student data is sensitive, regulated, and consequential. Decisions influenced by AI can affect progression, support, outcomes and reputation. The question is not whether to use AI — it is whether you are using it in a way you could confidently defend.
This page brings together two resources to help with that.
Should I Use AI for This Data?
The quickest way to assess whether a specific AI task is appropriate is to use the interactive tool below. It takes around three minutes, asks thirteen questions about the data, the tool, and the intended use, and gives you a risk rating with practical guidance on what to do next.
This tool provides practical guidance only, not legal or compliance advice. Always follow your organisation’s own policies and approval processes.
Download the Checklist
The PDF version of this resource sets out the full scoring checklist, for use in team meetings and training sessions, or as a reference when developing institutional AI guidance.
The Principles Behind Safe AI Use
If you want to go deeper, share this thinking with your team, or use it as a foundation for policy or guidance, the principles below set out the key questions to ask before using any AI tool with organisational data.
These are not a substitute for your institution’s own governance frameworks. They are a way of building the habit of asking the right questions before you start.
1. What data are you actually using? Before anything else, be precise about what you are sharing. Many risks arise not from deliberate decisions but from underestimating how identifiable data can become — particularly in small cohorts, or when contextual detail is combined with partial identifiers.
2. Is this use lawful and transparent? Personal data can only be processed where there is a clear lawful basis, and where that use is compatible with the purpose for which the data was originally collected. Students should reasonably expect their data to be used in the way you are proposing.
3. Do you trust the tool? Institutionally approved AI systems are subject to contractual and security review. Public or free tools may process, store, or reuse your data in ways that are not transparent. Understanding where your data goes — and under what safeguards — is not optional.
4. Have you minimised the data? Data minimisation is one of the most effective risk controls available. Before sharing anything, ask whether you could reduce, aggregate, anonymise, or replace real data with synthetic examples. Often you can.
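To make minimisation concrete, the sketch below shows two of the techniques named above applied in code: pseudonymising an identifier and suppressing small-cohort counts. The field names, the salt, and the threshold of five are illustrative assumptions, not a real schema or institutional policy; treat this as a minimal sketch of the habit, not a compliance tool.

```python
import hashlib

def minimise_record(record, salt="replace-with-institution-secret"):
    """Strip direct identifiers and pseudonymise the student ID.

    Only the fields the task actually needs are kept; name, email,
    and date of birth are deliberately dropped.
    """
    pseudonym = hashlib.sha256(
        (salt + record["student_id"]).encode()
    ).hexdigest()[:12]
    return {
        "pseudonym": pseudonym,  # stable across records, but not directly identifying
        "year": record["year"],
    }

def suppress_small_cohorts(counts, threshold=5):
    """Replace counts below the threshold so small groups stay unidentifiable."""
    return {group: (n if n >= threshold else "<5") for group, n in counts.items()}

record = {
    "student_id": "s1234567",
    "name": "A. Student",
    "email": "a.student@example.ac.uk",
    "year": 2,
    "dob": "2004-01-01",
}
print(minimise_record(record))
print(suppress_small_cohorts({"Course A": 120, "Course B": 3}))
```

Note that pseudonymised data is still personal data where the key (here, the salt) exists; the point is that what you share with an AI tool carries far less re-identification risk than the raw record.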
5. Could this output affect a person? The higher the stakes of the decision being informed by AI, the greater the scrutiny required. AI should support professional judgement, not replace it. If the output could materially affect a student’s progression, support, or outcomes, human review is not a nice-to-have — it is essential.
