When people talk about artificial intelligence today, it often sounds as though it appeared fully formed in the last few years. In reality, AI has a much longer and messier history. Long before chatbots and large language models, researchers were already trying to work out whether machines could think, learn, or reason.
In the week I am delivering my first AI course for AHEP, it feels like a good moment to go back to the beginning. Not to the present day, but to the early decades when many of the ideas we still rely on were first explored.
The question that started it all
In 1950, mathematician and logician Alan Turing published a paper called “Computing Machinery and Intelligence”. Rather than asking whether machines could think, he reframed the problem. Could a machine behave in a way that was indistinguishable from a human in conversation?
This became known as the Turing Test. It did not define intelligence, but it set a practical challenge. From the very beginning, AI was as much about behaviour and perception as it was about internal mechanisms.
That tension has never really gone away.
The birth of artificial intelligence as a field
The term “artificial intelligence” was coined in 1956 at a summer workshop at Dartmouth College in the United States. The organisers believed that every aspect of learning or intelligence could, in principle, be described precisely enough for a machine to simulate.
This early optimism is striking in hindsight. Researchers assumed that human reasoning could be broken down into rules, symbols, and logic, and that computers would simply follow those rules faster and more reliably than people.
This assumption shaped the next two decades of AI research.

Early successes and symbolic AI
In the late 1950s and 1960s, AI researchers focused heavily on symbolic reasoning. Systems were built to solve logic problems, prove mathematical theorems, and play games like chess.
Some of these systems worked surprisingly well in narrow contexts. They reinforced the belief that intelligence was primarily about rule-following and formal reasoning.
However, these systems struggled outside tightly controlled environments. They lacked common sense, struggled with ambiguity, and could not easily adapt to new situations.
What looked like intelligence was often brittle.
Expert systems and the promise of knowledge
By the 1970s and early 1980s, attention shifted towards expert systems. These were programs designed to replicate the decision-making of human experts in specific domains, such as medical diagnosis (MYCIN) or chemical analysis (DENDRAL).
Expert systems relied on large sets of hand-coded rules provided by specialists. In some contexts, they delivered real value and were deployed in organisations.
But they came with significant costs. Capturing expert knowledge was slow and expensive. Maintaining rules as knowledge changed was difficult. Most importantly, these systems still lacked understanding. They followed rules without knowing why those rules existed.
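The mechanics behind these systems were often simpler than their reputation suggested. A minimal sketch of the core idea, forward chaining over hand-coded if-then rules, might look like this (the rules and facts below are invented for illustration, not taken from any real expert system):

```python
# Toy forward-chaining rule engine, in the spirit of 1970s-80s expert
# systems. Each rule says: if all its conditions are in the fact base,
# add its conclusion as a new fact.
RULES = [
    ({"fever", "cough"}, "possible_flu"),          # hypothetical rule
    ({"possible_flu", "short_duration"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "short_duration"}, RULES)
print(derived)  # includes "possible_flu" and "recommend_rest"
```

The brittleness described above is visible even here: the system can only ever conclude what a specialist has explicitly encoded, and a fact phrased slightly differently ("high_temperature" instead of "fever") silently matches nothing.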
This was a turning point.
Early limits and the first AI winters
Expectations outpaced reality more than once. In the mid-1970s, critical assessments such as the UK's Lighthill Report led to sharp funding cuts; by the late 1980s, the expert systems boom had collapsed as systems that worked well in narrow domains failed to generalise. Funding dried up, interest waned, and these periods became known as AI winters.
These cycles of hype and disappointment are not new. They are part of the history of AI, and they are worth remembering when enthusiasm runs high.
Crucially, the early decades showed that intelligence could not be reduced to rules alone. Human judgement, context, learning, and uncertainty were harder to formalise than expected.
Why these early decades still matter
Although the technology has changed dramatically since those early years, many of the core questions remain the same. What does it mean to understand? How do we deal with ambiguity? Where does judgement sit in a system built on data and rules?
The first 30 to 40 years of AI research did not give us the answers, but they shaped the questions. They also remind us that AI has always been a socio-technical endeavour, not just a technical one.
Understanding this history helps temper both fear and overconfidence. It shows that progress is real, but rarely straightforward.
Looking ahead, with perspective
As AI becomes more embedded in everyday work, especially in higher education, it is tempting to focus only on what is new. But there is value in remembering how long we have been trying to build intelligent systems, and how often simple solutions have failed.
AI did not arrive overnight, and it will not settle neatly either.
That perspective matters.