Once you’ve started making your data more usable and understandable, the next step is knowing whether you’re making progress. That’s where metrics come in.
We’re used to measuring performance in higher education. But we rarely measure the quality of the data behind those performance measures – and that’s often where things go wrong.
Good data isn’t just accurate. It’s ready for use, understood by others, and reliable across contexts. But how do you know if your data is fit for purpose?
You Don’t Need a Complex Scorecard
There’s no one-size-fits-all list of data KPIs. What matters is that your metrics are:
- Relevant to how the data will be used
- Simple enough to explain
- Easy to track over time
It’s not about assigning a grade to every dataset. It’s about spotting weak areas and making them stronger.
Choosing the Right Metrics for Your Context
Here are a few common “fitness indicators” I’ve seen work well in higher education settings. You won’t need all of them – just pick the ones that spark the right kind of conversations in your team.
1. Completeness
How much of your data is missing? Are certain fields regularly blank?
→ Example: A dashboard run ahead of the HESA submission that checks for blank disability codes or unexpected gaps in tariff data.
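If you work with Python, a completeness check can be a few lines of pandas. Here's a minimal sketch; the column names and sample records are invented for illustration, so substitute whatever your student extract actually uses.

```python
import pandas as pd

# Hypothetical student extract; column names are invented for illustration
students = pd.DataFrame({
    "student_id": ["S001", "S002", "S003", "S004"],
    "disability_code": ["00", None, "53", None],
    "tariff_points": [112, 96, None, 128],
})

# Proportion of non-missing values per field: 1.0 means fully complete
completeness = students.notna().mean()
print(completeness.round(2))
# student_id         1.00
# disability_code    0.50
# tariff_points      0.75

# The records you'd want to chase up at source before the return goes in
print(students.loc[students["disability_code"].isna(), "student_id"].tolist())
# ['S002', 'S004']
```

The point isn't the code, it's the timing: the check runs while there's still time to fix things at source.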
2. Consistency
Are categories used in a consistent way over time or across departments?
→ Example: Course titles or level-of-study codes can drift when they're entered manually. Tracking the number of unique values over time makes that drift visible.
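Here's what that might look like as a pandas sketch, again with made-up data and column names:

```python
import pandas as pd

# Illustrative enrolment records; in reality this comes from your student records system
records = pd.DataFrame({
    "academic_year": ["2022/23"] * 3 + ["2023/24"] * 4,
    "course_title": [
        "BSc Computer Science", "BSc Computer Science", "BA History",
        "BSc Computer Science", "BSc Comp Sci", "BA History", "BA history",
    ],
})

# Unique spellings per year: a growing count hints at manual-entry drift
unique_titles = records.groupby("academic_year")["course_title"].nunique()
print(unique_titles)
# 2022/23    2
# 2023/24    4
```

Two spellings of the same course in one year is exactly the kind of quiet drift this surfaces.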
3. Time to Insight
How long does it take to turn raw data into something that can be used for decision-making?
→ Example: Track how many manual fixes are needed before producing your standard reports.
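Even a crude tally is enough to start. A minimal sketch, assuming the team keeps a simple log of manual interventions (the entries here are invented; a shared spreadsheet works just as well):

```python
from collections import Counter

# Hypothetical log: each entry is (reporting cycle, what was fixed by hand)
fix_log = [
    ("2024-25 Q1", "re-keyed missing fee status"),
    ("2024-25 Q1", "de-duplicated student records"),
    ("2024-25 Q1", "corrected course code by hand"),
    ("2024-25 Q2", "re-keyed missing fee status"),
]

# Manual fixes per cycle: a falling count means raw data is arriving
# closer to report-ready
fixes_per_cycle = Counter(cycle for cycle, _ in fix_log)
for cycle, count in sorted(fixes_per_cycle.items()):
    print(cycle, count)
# 2024-25 Q1 3
# 2024-25 Q2 1
```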
4. Trust Score (Subjective but useful)
How confident are users in the data? Do they question the results, or rely on them?
→ Tip: Ask stakeholders periodically – would you base a decision on this dashboard?
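If you turn that question into a short pulse survey, the "score" can be as simple as an average per period. A toy sketch, with an assumed 1-to-5 scale and invented responses:

```python
# Hypothetical responses to "Would you base a decision on this dashboard?",
# scored 1 (never) to 5 (without hesitation)
responses = {
    "2024 Q1": [2, 3, 3, 4],
    "2024 Q2": [3, 4, 4, 5],
}

for period, scores in sorted(responses.items()):
    print(period, round(sum(scores) / len(scores), 1))
# 2024 Q1 3.0
# 2024 Q2 4.0
```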
5. Reuse Rate
How often is the same dataset used across multiple reports, teams or systems?
→ A sign that you’ve made something genuinely useful and understandable.
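One rough way to measure it, assuming your report definitions live somewhere you can search (the folder and dataset names below are hypothetical):

```python
from pathlib import Path

# Hypothetical setup: report definitions are SQL files in one folder, and we
# count how many of them mention each curated dataset
datasets = ["student_enrolments", "module_results", "tariff_lookup"]
report_dir = Path("reports")

reuse = {name: 0 for name in datasets}
for report in report_dir.glob("*.sql"):
    text = report.read_text()
    for name in datasets:
        if name in text:
            reuse[name] += 1

for name, count in sorted(reuse.items(), key=lambda item: -item[1]):
    print(f"{name}: referenced by {count} reports")
```

A dataset nothing references can be just as telling as one that everything depends on.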
A Real-World Example
I’ve used these kinds of metrics when building reports based on HESA data. Rather than waiting until the final submission, I created dashboards that flagged issues ahead of time – missing entries, coding inconsistencies, and values that looked out of place.
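The "values that looked out of place" checks can be as simple as a range rule. An illustrative sketch; the 0 to 300 window is my assumption for the example, not an official threshold:

```python
import pandas as pd

# Illustrative range check on tariff points
students = pd.DataFrame({
    "student_id": ["S001", "S002", "S003"],
    "tariff_points": [112, 960, 104],  # 960 looks like a keying slip
})

suspect = students[~students["tariff_points"].between(0, 300)]
print(suspect)
#   student_id  tariff_points
# 1       S002            960
```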
It wasn’t about blaming anyone. It was about starting conversations: “What does this field actually represent?” “Why is it different here?” “Who can help fix this at source?”
That’s where the value lies. Not in the metric itself, but in the awareness it creates.
Start Small, Talk Often
You don’t need a master plan to begin measuring data fitness. Just pick one or two aspects that matter to your team and start tracking them. Share what you’re seeing. Ask others what’s useful. Iterate.
In the next post, we’ll talk about governance – not the heavyweight kind, but the everyday policies and practices that help teams keep data on track once it’s cleaned up.