History of AI

Modern AI has its roots in the 1950s, when the emergence of digital computing machinery made it possible to express information processing as computation. It builds, however, on an age-old dream of understanding human thinking that dates back to antiquity, when philosophers first tried to capture the patterns of logical thought, and on centuries of advances in philosophy, mathematics, linguistics, and psychology.

Defining AI precisely is notoriously hard, as our expectations of what behaviour counts as “intelligent” in machines keep shifting: one hundred years ago, pocket calculators and programmable alarm clocks would have qualified; even a couple of decades ago, asking a handheld device to recommend songs using spoken language would have been considered almost impossible.

In general terms, however, AI research tries to emulate aspects of intelligence by developing methods inspired by our intuitions and observations of human and, in some cases, animal behaviour. Neural networks, for example, use a highly simplified model of biological neurons that adapt their behaviour to patterns in the inputs and outputs fed to them; planning algorithms use rules of thumb to find a route from one location to another on a map quickly, for example by always moving to a point that is closer to the destination, just as we would expect a human to do.
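
As a minimal sketch of such a rule of thumb (the map, place names, and coordinates below are invented for illustration), the following Python code implements greedy best-first search, which always explores the location whose straight-line distance to the destination is smallest:

    import heapq
    import math

    def greedy_best_first(graph, coords, start, goal):
        """Follow the rule of thumb: always expand the frontier location
        that straight-line distance suggests is closest to the goal."""
        def h(node):  # heuristic: Euclidean distance from node to goal
            (x1, y1), (x2, y2) = coords[node], coords[goal]
            return math.hypot(x2 - x1, y2 - y1)

        frontier = [(h(start), start, [start])]  # priority queue keyed on h
        visited = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for neighbour in graph[node]:
                if neighbour not in visited:
                    heapq.heappush(frontier, (h(neighbour), neighbour, path + [neighbour]))
        return None  # no route exists

    # A toy map: (x, y) positions of four places and the roads between them.
    coords = {"A": (0, 0), "B": (1, 1), "C": (2, 0), "D": (2, 2)}
    graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
    print(greedy_best_first(graph, coords, "A", "D"))  # -> ['A', 'B', 'D']

Like the human rule of thumb it mimics, this strategy is fast but not guaranteed to find the shortest route; algorithms such as A* refine it by also accounting for the distance already travelled.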

AI is about both understanding intelligence and engineering solutions, and despite its recent successes in real-world applications, much of AI research still focuses on investigating fundamental problems:

  • How do we best capture human knowledge and intuition in such a way that computers can process it?
  • How can we enable machines to interpret observations correctly, taking contextual information into account?
  • How should they learn from observation and transfer their knowledge from one problem to another?  

AI Today

Much of the current interest in AI has been fuelled by the success of Machine Learning, an area that develops algorithms to detect patterns and extract knowledge and insights from data for analysis and prediction purposes. While many different techniques are used in machine learning, they all share a core methodology: they try to derive a “model” from real-world data that best captures the regularities in this data, so that the model can be used to make reliable predictions when presented with new situations.
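
As a minimal sketch of this learn-then-predict methodology (the numbers below are made up rather than drawn from any real data set), the following Python snippet derives a simple linear model from observed examples and then applies it to a new, unseen input:

    import numpy as np

    # Made-up training data: inputs x and observed outcomes y (roughly y = 2x).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

    # "Learning": derive a model (slope a, intercept b) that best captures
    # the regularity in the data, here by least-squares fitting.
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

    # "Prediction": apply the learned model to a new, unseen input.
    x_new = 6.0
    print(f"model: y = {a:.2f}x + {b:.2f}; prediction for x = {x_new}: {a * x_new + b:.2f}")

Far more sophisticated models, including deep neural networks, follow the same pattern; what changes is the form of the model and the procedure used to fit it.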

The success of machine learning has been fuelled by the exponential growth of data available in virtually every application domain since the emergence of the Internet, mobile devices, and sensing technology in the 1990s, and by an equally impressive growth in computing power that allows massive amounts of data to be processed by these algorithms. At the same time, researchers have greatly improved techniques such as deep learning, and now understand how to train increasingly complex models effectively from structured and unstructured (sensor, audio/video, speech, text) data.

Machine learning has become so ubiquitous that it is now commonly used across many other areas of AI, and of computing more widely – often as part of a larger application that also involves other techniques, or to solve pattern recognition, analysis, and prediction tasks in many application domains (e.g. in medicine or finance). It is also regarded as a key component of data science, which is, however, broader than AI in that it does not necessarily focus on capabilities that enable machines to make decisions and act autonomously.

Driven by the increasingly widespread use of machine learning, AI methods are finding their way into many types of applications. However, the range of technologies mature enough for widespread deployment is still rather limited. Widely deployed, everyday applications are largely found in natural language processing (e.g. voice-controlled assistants), image processing (e.g. tracking objects in photos and videos), and robotics and autonomous/semi-autonomous systems (e.g. autonomous vehicles, manufacturing). AI also often operates behind the scenes, e.g. driving personalisation in recommendation systems, internet search, social media, video games, and educational software – where it is used to perform very specific, narrow functions.

AI - A Broad Church

Despite the rapid rise of Machine Learning, it is important to highlight that AI involves a range of subfields, each with its own specific methods and contribution.

Some of these focus on very specific types of capabilities, for example natural language processing, which is concerned with understanding, processing, and generating language.

For others, the focus is on developing techniques that can be applied across many types of problems, for example knowledge representation and reasoning, or planning and problem-solving algorithms. Naturally, there are strong connections between these overlapping areas. AI research comes in many flavours, ranging from the speculative to the very application-specific, where large communities have emerged that develop AI for use in areas such as medicine, engineering, education, and the humanities.

Beyond these ever-increasing connections to other disciplines, AI naturally also has close ties to most other areas of computing, such as databases, cybersecurity, hardware design, and software engineering.

Looking Ahead

Throughout its history, AI has undergone several “hype cycles”, in which impressive successes led to inflated expectations that were later followed by disillusionment. Despite these swings in public opinion, continued research efforts have often led to success in the long term.

Neural networks, for example, which were first proposed in the 1950s, fell out of fashion in the 1990s, but are now enabling major advances in many AI-driven applications. Robust recognition of human speech seemed almost unattainable thirty years ago, but is now used daily by billions of people around the world on their mobile phones. Only recently, mathematicians reported major advances toward solving long-standing fundamental problems in their field using AI proof assistants, a technology developed by a relatively small, specialised community.

The University of Edinburgh has shaped the evolution of AI since the 1960s, and has exceptional strengths across all its subfields. We are committed to supporting the whole breadth of AI research, and advancing the state of the art by integrating different techniques.
