My predictions for 2026: AI, data, and the gap between ambition and reality

At the start of 2026, it feels like the conversation around AI in higher education is shifting. The early excitement has not disappeared, but it has settled into something more serious and, in some cases, more uncomfortable. The question is no longer whether institutions will use AI, but whether they are ready to use it well.

Based on the work I am doing with higher education professional services teams, here are six predictions for the year ahead.


1. Data quality, not AI capability, will be the main blocker to meaningful AI adoption

By 2026, the tools themselves will rarely be the limiting factor. Most organisations already have access to AI that is powerful enough for everyday professional services work. The real constraint will be the data that feeds those tools.

Inconsistent definitions, missing context, poor documentation, and unclear ownership will limit what AI can do safely or usefully. In many cases, AI will simply surface problems that have existed for years, only now they are harder to ignore.

AI does not fix broken data. It exposes it.
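The kinds of problems described above can usually be surfaced with very little code, long before any AI is involved. As a minimal sketch (the record structure and field names here are invented for illustration), a simple audit can flag missing values and inconsistent category labels:

```python
# Minimal data-quality audit: flags missing required fields and
# category labels that appear under more than one spelling.
# Field names ("student_id", "department") are illustrative only.

def audit_records(records, required_fields):
    """Report missing fields and inconsistent label spellings."""
    issues = {"missing": [], "inconsistent_labels": {}}
    seen_labels = {}  # normalised label -> set of spellings actually used
    for i, rec in enumerate(records):
        for field in required_fields:
            if not rec.get(field):
                issues["missing"].append((i, field))
        dept = rec.get("department")
        if dept:
            seen_labels.setdefault(dept.strip().lower(), set()).add(dept)
    # Any normalised label with more than one raw spelling is inconsistent
    issues["inconsistent_labels"] = {
        key: sorted(spellings)
        for key, spellings in seen_labels.items()
        if len(spellings) > 1
    }
    return issues

records = [
    {"student_id": "S1", "department": "Physics"},
    {"student_id": "S2", "department": "physics "},
    {"student_id": "", "department": "History"},
]
print(audit_records(records, ["student_id", "department"]))
```

Nothing here is sophisticated, which is rather the point: these checks expose exactly the definitional and ownership problems that stop AI from being used safely at scale.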


2. AI and data literacy will matter more than advanced technical skill, especially for managers

Very few people in higher education need to become AI specialists. What far more people need is the ability to understand how AI works with data, what good inputs look like, and how to judge outputs with confidence.

For managers, this combined AI and data literacy is becoming essential. It underpins sensible decision-making, realistic expectations, and the ability to challenge outputs rather than accept them at face value. Without it, there is a real risk of false confidence, where decisions feel evidence-based but are built on weak foundations.

Competence in asking good questions will matter more than technical sophistication.


3. The gap between institutions that invest in data foundations and those that do not will widen sharply

Institutions that have invested time in data standards, governance, documentation, and ownership will move faster with AI and with less risk. Those that have not will struggle to scale beyond isolated use cases.

This gap will not always be visible from the outside. It will show up in quieter ways: in staff confidence, in decision quality, and in how often work has to be redone. By 2026, this difference will be difficult to ignore.

Data foundations are becoming a form of institutional infrastructure.


4. Dashboards will be questioned more, not less

This is not a rejection of dashboards or data-driven decision-making. It is a demand for trust.

Users are increasingly asking where numbers come from, how often they are updated, and what assumptions sit behind them. Dashboards that cannot answer these questions clearly will lose credibility, regardless of how well they are designed visually.

The most valuable dashboards in 2026 will not just show metrics; they will explain them.
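One way to make that concrete is to treat provenance as part of the metric itself rather than a footnote. The sketch below assumes a hypothetical `Metric` record; in practice these fields would come from an institutional metadata catalogue rather than being hand-written:

```python
# Hypothetical sketch: a metric that carries its own provenance, so a
# dashboard can answer "where does this number come from?" directly.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    value: float
    source: str                 # system the number is drawn from
    refreshed: str              # how often it is updated
    assumptions: list[str] = field(default_factory=list)

    def explain(self) -> str:
        notes = "; ".join(self.assumptions) or "none stated"
        return (f"{self.name} = {self.value} "
                f"(source: {self.source}, refreshed: {self.refreshed}, "
                f"assumptions: {notes})")

m = Metric("continuation_rate", 92.4, "student records system",
           "nightly", ["excludes interruptions of study"])
print(m.explain())
```

A dashboard built on records like this can surface the explanation alongside the figure, which is what earns the trust users are now demanding.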


5. “AI strategy” will quietly become “data strategy with AI attached”

Many institutions will move away from standalone AI strategies, not because AI is unimportant, but because it cannot be separated from data, systems, and processes.

The most effective work will happen under less fashionable labels. Data quality projects, process redesign, governance reviews, and staff capability building will do more to enable AI than any high-level strategy document.

The best AI work will often not be called AI at all.


6. Shadow AI use will continue to grow

Staff will keep using AI tools to draft documents, analyse information, and think through problems, often without formal approval or guidance. This will not be driven by bad behaviour, but by workload pressure and a desire to work more effectively.

Institutions that treat shadow AI as a disciplinary issue will struggle. Those that acknowledge it as a systems and support issue will be better placed to manage risk, build trust, and support safe use.

Ignoring shadow AI will not make it disappear.


Looking ahead

Those are my predictions. If 2026 proves anything, it will be that AI is not primarily a technology challenge. It is a data challenge, a capability challenge, and a leadership challenge.

The institutions that make progress will not be the ones chasing the newest tools. They will be the ones doing the quieter, less glamorous work of improving data, building literacy, and supporting staff to exercise judgement.

That work may not always look innovative, but it is what will make AI genuinely useful.
