Artificial intelligence: between myth and reality

PARIS. KAZINFORM - Are machines likely to become smarter than humans? No, says Jean-Gabriel Ganascia: this is a myth inspired by science fiction. The computer scientist walks us through the major milestones in artificial intelligence (AI), reviews the most recent technical advances, and discusses the ethical questions that require increasingly urgent answers.

As a scientific discipline, AI officially began in 1956, during a summer workshop organized by four American researchers - John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon - at Dartmouth College in New Hampshire, United States. Since then, the term "artificial intelligence", probably coined originally to create a striking impact, has become so popular that today everyone has heard of it. This branch of computer science has continued to expand over the years, and the technologies it has spawned have contributed greatly to changing the world over the past sixty years, according to the official UNESCO website.

However, the success of the term AI sometimes rests on a misunderstanding: it is used to refer to an artificial entity endowed with intelligence which, as a result, would compete with human beings. This idea, which harks back to ancient myths and legends, like that of the golem [from Jewish folklore, an image endowed with life], has recently been revived by contemporary personalities including the British physicist Stephen Hawking (1942-2018), American entrepreneur Elon Musk, American futurist Ray Kurzweil, and proponents of what we now call Strong AI or Artificial General Intelligence (AGI). We will not discuss this second meaning here because, at least for now, it can only be ascribed to a fertile imagination, inspired more by science fiction than by any tangible scientific reality confirmed by experiments and empirical observations.

For McCarthy, Minsky, and the other researchers of the Dartmouth Summer Research Project on Artificial Intelligence, AI was initially intended to simulate each of the different faculties of intelligence - human, animal, plant, social or phylogenetic - using machines. More precisely, this scientific discipline was based on the conjecture that all cognitive functions - especially learning, reasoning, computation, perception, memorization, and even scientific discovery or artistic creativity - can be described with such precision that it would be possible to programme a computer to reproduce them. In the more than sixty years that AI has existed, nothing has either disproved or irrefutably proved this conjecture, which remains both open and full of potential.

Uneven progress
In the course of its short existence, AI has undergone many changes. These can be summarized in six stages.

◼️ The time of the prophets

First of all, in the euphoria of AI's origins and early successes, researchers gave free rein to their imagination, indulging in certain reckless pronouncements for which they were heavily criticized later. For instance, in 1958, American political scientist and economist Herbert A. Simon - who received the Nobel Prize in Economic Sciences in 1978 - declared that, within ten years, machines would become world chess champions if they were not barred from international competitions.

◼️ The dark years

By the mid-1960s, progress seemed to be slow in coming. A 10-year-old child beat a computer at a chess game in 1965, and a report commissioned by the US Senate in 1966 described the intrinsic limitations of machine translation. AI got bad press for about a decade.

◼️ Semantic AI

The work went on nevertheless, but the research was given a new direction. It focused on the psychology of memory and the mechanisms of understanding - with attempts to simulate these on computers - and on the role of knowledge in reasoning. This gave rise to techniques for the semantic representation of knowledge, which developed considerably in the mid-1970s, and also led to the development of expert systems, so called because they use the knowledge of skilled specialists to reproduce their thought processes. Expert systems raised enormous hopes in the early 1980s, with a whole range of applications including medical diagnosis.
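To give a rough idea of how such systems worked, here is a minimal sketch in Python of a rule-based inference engine. The rules and symptom names are invented purely for illustration; they do not come from any real expert system mentioned in this article.

```python
# Minimal sketch of an expert-system-style inference engine.
# The rules and facts below are hypothetical examples; they only
# illustrate forward chaining over if-then rules supplied by specialists.

RULES = [
    ({"fever", "cough"}, "possible_flu"),           # if fever and cough -> possible flu
    ({"possible_flu", "fatigue"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are satisfied by the known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}, RULES))
# -> {'fever', 'cough', 'fatigue', 'possible_flu', 'recommend_rest'}
```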

◼️ Neo-connectionism and machine learning

Technical improvements led to the development of machine learning algorithms, which allowed computers to accumulate knowledge and automatically reprogramme themselves, drawing on their own experience.

This led to the development of industrial applications (fingerprint identification, speech recognition, etc.), where techniques from AI, computer science, artificial life and other disciplines were combined to produce hybrid systems.
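To illustrate the idea of a program that adjusts itself from experience, here is a minimal Python sketch of a perceptron trained on an invented toy dataset. It stands in for the far more elaborate learning algorithms behind the industrial applications described above.

```python
# Minimal sketch of "learning from experience": a perceptron that adjusts its
# own parameters from labelled examples instead of being explicitly programmed.
# The tiny dataset below is hypothetical (two numeric features, binary label).

examples = [((2.0, 1.0), 1), ((1.5, 2.5), 1), ((-1.0, -0.5), 0), ((-2.0, 1.0), 0)]

w = [0.0, 0.0]   # weights, updated from experience
b = 0.0          # bias

for _ in range(20):                      # several passes over the data
    for (x1, x2), label in examples:
        prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = label - prediction       # 0 when correct, +/-1 when wrong
        w[0] += 0.1 * error * x1         # nudge the parameters toward the answer
        w[1] += 0.1 * error * x2
        b += 0.1 * error

print(w, b)  # learned parameters after training
```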

◼️ From AI to human-machine interfaces

Starting in the late 1990s, AI was coupled with robotics and human-machine interfaces to produce intelligent agents that suggested the presence of feelings and emotions. This gave rise, among other things, to affective computing, which evaluates the reactions of a subject experiencing emotions and reproduces them on a machine, and especially to the development of conversational agents (chatbots).

◼️ Renaissance of AI

Since 2010, the power of machines has made it possible to exploit enormous quantities of data (big data) with deep learning techniques, based on the use of formal (artificial) neural networks. A whole range of very successful applications in several areas - including speech and image recognition, natural language comprehension and autonomous cars - is leading to an AI renaissance.
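As a schematic illustration of the formal neurons that deep learning stacks into many layers, here is a minimal Python sketch (using NumPy) of a small feed-forward pass. The weights are random and untrained, purely for illustration; real systems learn these weights from large datasets, typically by gradient descent.

```python
# Minimal sketch of stacked layers of "formal neurons".
# Weights are random and untrained; this only shows the structure,
# not any particular deep learning system discussed in the article.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer of formal neurons with a ReLU activation."""
    w = rng.normal(size=(x.shape[0], n_out))
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)    # weighted sum followed by a non-linearity

x = rng.normal(size=8)                   # a toy 8-dimensional input (e.g. pixel features)
h1 = layer(x, 16)                        # first hidden layer
h2 = layer(h1, 16)                       # second hidden layer ("deep" = many such layers)
out = h2 @ rng.normal(size=(16, 1))      # final score
print(out)
```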

