Your HESA return knowledge is probably your institution’s biggest single point of failure

There is a scenario that plays out somewhere in the HE sector almost every year. A small institution is approaching the Student Record return cycle. The person who has managed the return for the last several years is unavailable – on long-term sick leave, has left, is managing a concurrent crisis, or simply cannot remember the reasoning behind a set of decisions made eighteen months ago under time pressure.

What follows is rarely a disaster. But it’s rarely comfortable either. Decisions get made without the context that should inform them. Judgement calls that ought to be straightforward take far longer than necessary. And somewhere in the background, there’s a quiet awareness that the return being submitted reflects a best guess in places where institutional knowledge should have provided certainty.

This scenario is more common than most institutions would like to admit. And it’s almost entirely preventable.


Why HESA return knowledge concentrates

The Student Record is not straightforward. The technical specification is detailed, the field-level guidance requires careful interpretation, and the relationship between the spec and an institution’s actual data model involves a series of decisions – about scope, about population, about derived fields, about how edge cases are handled – that are made over time and that accumulate into a body of institutional knowledge.

That knowledge doesn’t distribute itself evenly. It concentrates. Usually in one person, occasionally in two, rarely in more. The reasons are structural: HESA data work is specialist enough that most institutions don’t need more than one or two people with deep expertise, and specialist knowledge in small teams naturally accrues to whoever does the work most consistently.

The concentration isn’t a problem while the person is available and the context is fresh. It becomes a problem the moment either of those things changes.

And in small providers in particular, the margin for error is thin. There’s rarely a second specialist to step in, rarely a knowledge base that captures the reasoning behind past decisions, and rarely enough time in the return cycle to reconstruct the institutional context from scratch.


What makes it worse

The Student Record doesn’t stand still. The specification evolves. Guidance is updated. Regulatory priorities shift – what OfS is looking at most closely changes, which affects how certain fields and populations need to be handled. An institution whose HESA knowledge was built two or three years ago and hasn’t been actively maintained may not realise how much has moved in the interim.

This is compounded by the fact that the consequences of errors in the return aren’t always immediately visible. Some issues are caught in validation. Others are flagged by Jisc during the collection. But some only become apparent later – in B3 metrics that don’t reflect reality, in access and participation data that can’t be reconciled, or in regulatory conversations where the institution can’t explain its own numbers.

By the time the consequences are visible, the person who would have understood the decisions that led there may be long gone.


The difference external support makes

Working with an external specialist on the HESA return isn’t just about filling a gap when your internal resource is stretched. Done well, it does something more durable: it introduces a second body of knowledge into the process, one that isn’t tied to a single individual’s availability, and one that brings perspective from working across multiple institutions and return cycles.

That perspective matters because it brings a calibrated sense of where risk sits. Not every decision in a HESA return carries the same consequence. An experienced specialist knows which field-level judgements warrant careful attention and why, which edge cases have been contentious in recent collections, and where the relationship between the specification and OfS regulatory priorities makes precision particularly important.

It also provides something that internal teams – however competent – often can’t provide for themselves: an independent review of whether the approach being taken is actually right, rather than just consistent with what was done last year.


Building resilience rather than dependency

The goal isn’t to create a different kind of dependency – external rather than internal. It’s to build a situation where the institution’s HESA return capability is resilient: where the knowledge is documented, where the reasoning behind key decisions is recorded, and where no single person’s unavailability creates an institutional crisis.

That looks different for different institutions. For some it’s hands-on return support during the collection cycle. For others it’s a review of the current approach and its documentation. For others still it’s building internal capability so that the knowledge is distributed rather than concentrated – with external expertise available as a sense-check rather than a dependency.

Whatever the starting point, the right moment to address key person dependency is before it becomes a problem, not during the return cycle when it already is one.

Find out more about HESA Return Support →
