
A brief history of artificial intelligence

Alan Turing is one of the most well-known names in the history of computing. During the Second World War, Turing worked on cracking the Enigma code, which the German military used to send encrypted messages, and his ideas about machines and reasoning went on to lay the groundwork for artificial intelligence and machine learning.

Turing suggested that machines, like people, could use reasoning to solve problems or make decisions. In 1950, he described a way of testing whether a machine can be declared intelligent, which he called the Imitation Game. Commonly referred to as the Turing Test, the method involves a human, a machine and a human judge who converses with both and tries to determine which is which. If the judge cannot reliably tell the machine from the human, the machine passes the test.

However, Turing and the rest of the field were held back by the limitations of the computers of the time: they had very little memory and storage and were extremely expensive, so access was largely limited to big companies and top universities.

The Dartmouth Conference

In 1956, John McCarthy, an American computer scientist, organised the Dartmouth Conference, an event where top minds from leading universities came together to brainstorm. It was here that the term artificial intelligence was coined, drawing together strands of work previously known as cybernetics, automata theory and information processing.

In the years following the conference, AI development went from strength to strength. One promising development came in 1966 with the first chatbot, known as ELIZA and created by Joseph Weizenbaum at the Massachusetts Institute of Technology (MIT). By communicating via text in human language rather than in computer code, ELIZA was an early example of natural language processing and an early ancestor of today's chatbots, such as Alexa and Siri, which can now communicate using speech as well as text.

Thanks to the progress made between 1956 and 1973, this period became known as the first AI summer. Researchers made optimistic predictions about the future of AI, and computers were performing more and more tasks, from conversing in English to solving algebraic equations.

Based on these early successes, research and funding were channelled into furthering AI, but at the time computers still could not process the amount of data required for successful applications. For example, one program for analysing the English language could only handle a vocabulary of 20 words. The period from 1974 to 1980 became known as the first AI winter: funders realised that research was under-delivering on its goals and withdrew their support.

Summer comes back around

In 1981, a valuable commercial purpose for AI was found, and this attracted investment back into the field. Ed Feigenbaum and others developed a new type of AI, called expert systems. Instead of aiming at general intelligence, expert systems used a series of rules to automate specific tasks and decisions in the real world. The first successful implementation, known as R1, was introduced by the Digital Equipment Corporation to configure the company's orders and improve accuracy. Japan also invested heavily in computers designed to apply AI, and America, the UK and the rest of Europe followed suit.

Unfortunately, the excitement ended in disappointment. Apple and IBM introduced general-purpose computers more powerful than the specialised machines built for AI, collapsing the market for dedicated AI hardware. Funding in America dried up, as it did in Japan following the failure of a flagship project.

A change of approach

In 1988, researchers at IBM published a paper that applied principles of probability to automated translation between French and English. The approach then shifted towards estimating the probability of outcomes from data, rather than hand-coding rules for machines to follow. This is closer to the cognitive processes of humans and has formed the basis of today's machine learning.
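To make the idea concrete, here is a minimal, hypothetical sketch in Python, not IBM's actual model, showing how translation probabilities can be estimated simply by counting examples in data rather than by writing rules by hand. The tiny "corpus" and the word pairs in it are invented purely for illustration.

from collections import Counter, defaultdict

# A toy, invented French-English "parallel corpus": each pair records a French
# word and the English word a human translator chose for it in some sentence.
observed_pairs = [
    ("chien", "dog"), ("chien", "dog"), ("chien", "hound"),
    ("maison", "house"), ("maison", "house"), ("maison", "home"),
]

# Count how often each English word was used for each French word.
counts = defaultdict(Counter)
for french, english in observed_pairs:
    counts[french][english] += 1

def translation_probabilities(french_word):
    """Turn raw counts into estimated probabilities P(english | french)."""
    total = sum(counts[french_word].values())
    return {english: n / total for english, n in counts[french_word].items()}

if __name__ == "__main__":
    # No rule for translating "chien" is written by hand; the program simply
    # reports whichever translations the data makes most probable.
    print(translation_probabilities("chien"))   # {'dog': 0.67, 'hound': 0.33}
    print(translation_probabilities("maison"))  # {'house': 0.67, 'home': 0.33}

The point of the sketch is the shift in mindset the paragraph above describes: with more data, the estimates improve on their own, whereas a rule-based system would need someone to write and maintain every rule.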

AI developed radically during the 1990s, largely due to increasing computational power. One highlight came in 1997, when IBM's chess computer Deep Blue beat the world chess champion, Garry Kasparov. Another AI-versus-expert milestone came in 2016, when DeepMind's AlphaGo beat the 18-time world champion Go player Lee Sedol.

The future of AI

The development of new technologies, such as autonomous vehicles, provides high-profile use cases for AI. In 2018, Waymo commercialised the first driverless taxi service in Phoenix. AI has become a common day-to-day technology for smartphone users, people searching the web and manufacturing professionals.

The history of AI has featured peaks and troughs, with both interest and funding fluctuating. It hasn’t been easy to get to where we are today, but AI is only going to get bigger.
