What the New York Times v OpenAI Case Means for UK Universities

If you’re in higher education and not following US court cases about AI, you’re not alone. But one lawsuit in particular – The New York Times Co. v OpenAI & Microsoft – is shaping up to be a landmark. And while it’s happening in the US, it could have a real impact on how we in the UK use generative AI in teaching, research, and day-to-day university life.

Here’s what’s happening – and why it matters to us.

What’s the Case About?

The New York Times filed a lawsuit in December 2023 (case no. 23-cv-11195, now part of MDL 25-md-3143) accusing OpenAI and Microsoft of using its copyrighted articles to train and power tools like ChatGPT and Copilot – without permission or payment.

Crucially, the Times claims that when ChatGPT was given certain prompts, it reproduced near-identical passages of articles from behind the paper’s paywall. The case is now part of a wider bundle of US lawsuits covering everything from books to coding documentation to journalism.

The court hasn’t yet ruled on whether OpenAI actually broke the law – but it has found the Times’ claims strong enough to proceed to the next stage.


Why Should UK Universities Care?

Even though the case is playing out in New York, its ripple effects reach across the Atlantic. Here are ten reasons UK HE providers should pay attention:

1. AI outputs are now under serious legal scrutiny.

ChatGPT’s reproduction of full paragraphs from the New York Times isn’t just bad PR – the judge has found the copyright infringement claims plausible enough to let them go forward.

➤ In the UK, this raises concerns for teaching staff using AI to generate slides, reading lists or assessments. Reproducing GenAI output without checking its provenance could amount to secondary copyright infringement under UK law.

2. All copyright lawsuits against OpenAI are now being handled in one place.

In April 2025, the US Judicial Panel on Multidistrict Litigation consolidated more than a dozen similar lawsuits into a single multi-district litigation (MDL).

➤ A single ruling – or settlement – could shape international practice. If courts order licensing deals or block certain outputs, universities worldwide may have to respond quickly.

3. Chat logs are now legal evidence.

A US judge has ordered OpenAI to retain all user chat data, even deleted messages.

➤ That clashes with UK GDPR principles of data minimisation and storage limitation. Institutions should warn staff and students that their prompts may be retained and legally scrutinised.

4. AI tools are likely to get more expensive.

Major publishers are pushing for licensing fees from AI companies. Some have already signed deals (e.g., Axel Springer).

➤ If costs rise, free educational access to tools like ChatGPT or Copilot may end. Contract managers should plan for price changes or licence restrictions.

5. UK text-and-data-mining rights are limited.

The text-and-data-mining (TDM) exception in the UK Copyright, Designs and Patents Act 1988 (section 29A) only permits copying for non-commercial research, and only where the researcher has lawful access to the source material.

➤ That means student projects or academic research may be fine, but commercial spin-outs or paid short courses could fall outside the exception and breach the law.

6. National guidance already flags IP risks.

The Department for Education’s June 2025 guidance lists copyright infringement as a key hazard when using generative AI in education.

➤ If a university doesn’t have clear AI policies and training in place, it risks being seen as non-compliant.

7. Sector bodies are warning about legal boundaries.

Jisc advises institutions to set “clear and specific” limits on GenAI use, including around copyright.

➤ That means having internal guidance and model prompts – not leaving it to individual staff to guess what’s OK.

8. University policies are tightening.

Institutions such as the University of York now require full disclosure of GenAI use in postgraduate research – including confirmation of compliance with copyright law.

➤ Expect similar expectations for staff using AI in research, publishing or assessment design.

9. Risk is shifting to end-users.

The terms of service of many AI tools place the responsibility (and liability) for outputs on the user.

➤ Universities need to review licence agreements and ensure indemnity or insurance is in place – especially for staff uploading third-party content into AI tools.

10. The case may take years – but policy shifts faster.

The original complaint in December 2023 triggered lobbying, policy drafts, and deals – all before any ruling.

➤ HE providers should monitor developments now and prepare to adapt quickly.

What Should Universities Be Doing?

If you’re responsible for digital strategy, academic governance, or data protection, here’s a practical checklist:

  • Audit GenAI use across teaching, research, and admin.
  • Update policies to reflect national guidance and local risk appetite.
  • Train staff and students to understand copyright risks.
  • Review contracts for AI services, including liability and data handling.
  • Monitor MDL 25-md-3143 developments and be ready to adjust.


Bottom Line: AI legal battles are no longer abstract. They’re shaping the future of what’s allowed – and affordable – in UK education. It’s time to pay attention.
