This is a book about embodiment -- the idea that intelligence requires a body -- and how having a body shapes the way we think. The idea that the body is required for intelligence has been around for nearly three decades, but an awful lot has changed since then. Research labs and leading technology companies around the world have produced a host of sometimes science fiction-like creations: unbelievably realistic humanoids, robot musicians, wearable technology, robots controlled by biological brains, robots that can walk without a brain, real-life cyborgs, robots in homes for the elderly, robots that literally put themselves together, and artificial cells grown automatically. This new breed of technology is the direct result of the embodied approach to intelligence. Along the way, many of the initially vague ideas have been elaborated, the arguments sharpened, and the whole is beginning to form a coherent structure. This popular science book, aimed at a broad audience, provides a clear and up-to-date overview of the progress being made. At the heart of the book is a set of abstract design principles that can be applied in designing intelligent systems of any kind: in short, a theory of intelligence. But science and technology are no longer isolated fields: they closely interact with the corporate, political, and social aspects of our society -- so this book not only provides a novel perspective on artificial intelligence, but also aims to change how we view ourselves and the world around us.
Credits:
Front Cover Design by Hakam El Essawy
Featuring the humanoid robot ‘EDS’
Photo: Patrick Knab
Robot construction: The Robot Studio (TRS)
ECCE Robot project: EU's 7th Framework Programme, ICT Challenge 2,
'Cognitive Systems and Robotics'
Motors: maxon motor, Switzerland
Know-How Partner: Starmind.com
Contents
Preface
1. Intelligence, Thinking, and Cognition
In this chapter we will introduce the concepts of intelligence, thinking, and cognition, discuss why intelligence has fascinated people from all walks of life throughout history, and introduce the field of artificial intelligence
2. Prerequisites for a Theory of Intelligence
This chapter outlines what type of theory we are looking for and introduces a number of important notions such as diversity-compliance, frames of reference, the synthetic methodology, time perspectives, emergence, and real-world agents
3. The Design Principles
In this chapter we sketch a set of design heuristics – what we call the design principles for intelligent systems – that can be used to guide us in building new agents and understanding biological ones
4. Development: From Locomotion to Cognition
This chapter explores design and analysis issues from a developmental perspective, and asks how high-level cognition can emerge as an agent matures
5. Evolution: Cognition from Scratch
This chapter looks at how we can harness ideas from biological evolution in order to design agents from scratch
6. Collective Intelligence
This chapter discusses phenomena that come about when agents interact in groups
7. Ubiquitous Computing and Everyday Robotics
In this chapter we discuss ubiquitous computing, a rapidly expanding discipline whose goal is to ‘put computers everywhere’, and explore the development of robots that could enter into and participate in our everyday lives
8. Where is Human Memory?
This chapter presents a case study on human memory that illustrates how embodiment provides a new perspective on challenging research problems old and new
9. Building Intelligent Companies
This chapter, written with management expert Simon Grand, applies the perspective of embodied intelligence to the business world, and in particular to the design and construction of new products and businesses
10. Conclusion: Principles and Insights
Lastly we will summarize the main points of our theory, and present a collection of examples illustrating how things can always be seen differently
Notes
Preface
This is a book about how having a physical body contributes towards being intelligent. The idea that the body is required for intelligence has been around for nearly three decades, but an awful lot has changed since then. Research labs and leading technology companies around the world have produced a host of sometimes science fiction-like creations: unbelievably realistic humanoids, robot musicians, wearable technology, robots controlled by biological brains, robots that can walk without a brain, real-life cyborgs, robots in homes for the elderly, robots that literally put themselves together, and artificial cells grown automatically. This new breed of technology is the direct result of the embodied approach to intelligence. Along the way, many of the initially vague ideas have been elaborated, the arguments sharpened, and the whole is beginning to form a coherent structure. Thus, several years ago it seemed a good time to work out the first steps toward a theory of intelligence, which we (Rolf and Josh) did with our previous book, ‘How the Body Shapes the Way We Think’.
Since then we have realized that the ideas contained in the previous book could be helpful to a wider audience: our last book was around 450 pages long and contained lots of technical vocabulary and detail. So we recruited Don Berry, a young philosophy research student at University College London, to write a much shorter, popular version, recycling a lot of material from the previous publication. Science and technology are no longer isolated fields: they closely interact with the corporate, political, and social aspects of our society, and in this book we will not only provide a novel perspective on artificial intelligence but also aim to change how we view ourselves and the world around us.
From a personal perspective, I (Rolf) have given many seminars and lectures to non-specialized audiences, and many of them were able to relate in very direct and natural ways to the ideas presented: they seemed to hold relevance for their own interests and specialties. What most people found intriguing was that this research demonstrates how things can always be seen differently. We all have our strong prejudices and often think, ‘It’s got to be like that, there is no other way!’ For example, if you want to build a fast-running robot you must have fast electronics; an object-collecting robot must have a means for recognizing the objects it is supposed to gather; or an insect with six legs needs a centralized control program somewhere in its brain to coordinate all its legs while walking. Surprisingly, it turns out that none of these are true, as we will see later.
Aims and Scope
The goal of this book is twofold: on the one hand it is to explore the implications of embodiment (how having a body affects intelligence), to work out the first steps toward a theory of intelligence, and finally to demonstrate the wide applicability of these ideas. On the other, we will try to show that things can always be seen differently. So, the book is conceptual, and is geared toward a broad audience in education, business, information technology, engineering, entertainment, and the media, as well as academics from virtually all disciplines and levels, including psychology, neuroscience, philosophy, linguistics, and biology. And last but not least, this book is also intended for anyone interested in technology, its future, and its implications for society. No special training or education is required for understanding the ideas presented: we have tried to provide background information, examples, and further reading suggestions for the more difficult concepts.
The core of the theory consists of a set of design principles for intelligent systems. The reason for choosing the form of design principles for our theory is that they are a compact way of describing insights about intelligent systems in general, while at the same time providing convenient heuristics for actually building artificial systems such as robots. And actually building systems is crucial: we want to design and construct intelligent artificial systems in order to understand intelligent systems in general. This is the synthetic methodology – the basic methodology of artificial intelligence, which can be characterized as understanding by building. As we will show with many examples, by building artificial systems we can learn about biology, as well as about intelligence in general. An exciting prospect is that this enables us not only to study natural forms of intelligence, but to create new forms of intelligence that do not yet exist; ‘intelligence as it could be’, to adapt a quote by one of the founders of the field of artificial life, Chris Langton. So, the book is not so much about the intricacies of engineering or the details of how to build robots, but rather about the insights that arise as a result of these processes.
We have tried to show that the ideas developed in this book have broad applicability beyond the field of artificial intelligence proper by providing illustrations from the fields of ubiquitous computing and everyday robotics, human memory, and strategic management (chapters 7, 8 and 9). We hope that the reader will enjoy these case studies and will feel encouraged to apply the ideas to areas of his or her own interest. Also, the website for the book contains many links to videos and other supporting material, as well as a discussion forum. We have engaged an artist and computer scientist, Shun Iwasawa of the University of Tokyo, who, with great talent, technical skill, and understanding, created Japanese Manga–style illustrations that, we hope, will stimulate the reader’s interest and communicate the fun, forward-thinking style of this field of study.
Acknowledgments
We would like to thank all the members of the Artificial Intelligence Laboratory of the University of Zurich for continued discussions, excellent research, and the many ideas that finally found their way into this book. Big thanks go to our friends Yasuo Kuniyoshi, Olaf Sporns, Akio Ishiguro, Hiroshi Yokoi, Koh Hosoda, Fumio Hara, and Hiroshi Kobayashi who kept the project moving. We would also like to express our thanks to the many funding agencies that have made the research described in this book possible, in particular the Swiss National Science Foundation and the IST Program of the European Union. Moreover, I (Rolf) would like to express my very personal thanks to Yasuo Kuniyoshi, Tomomasa Sato, Hirochika Inoue, Yoshi Nakamura, and all the other members of the Department of Mechano-Informatics, for inviting me to the University of Tokyo to be a twenty-first-century COE (Center of Excellence) professor of Information Science and Technology. Their perspective on intelligent agents as complex dynamical systems has strongly influenced the contents of the book.
Our thanks go also to Gabriel Gomez, who has researched many issues concerning the project. Also, we highly appreciate the contributions of Max Lungarella, who, in particular with his PhD thesis but also with many personal discussions, is largely responsible for the quality of chapter 4. Also, the ideas of Fumiya Iida, to whom we owe the title of chapter 4, ‘From Locomotion to Cognition’, have been instrumental. We are also extremely grateful to Shun Iwasawa for his outstanding, inspiring, and instructive illustrations. Thanks go also to all the researchers around the world from many different disciplines for their ideas that have provided inspiration for our arguments. And, of course, to Rodney Brooks, for having started this exciting research field in the first place.
There are many, many others – faculty, staff, students, friends, and family – who have provided support one way or another to all three authors: we are deeply indebted to all of you. I (Josh) would like to thank my family – Toby, Carol, and Ralph – for understanding and helping me in my long journey to get to this point. I (Rolf) would like to thank in particular my two sons, Serge and Mischa, who have always encouraged me to continue when times were hard. I (Don) would like to thank the other authors for the opportunity to become involved in this fascinating field, Pascal for making the introductions, and those of my friends and family who have encouraged me along the way.
1. Intelligence, Thinking, and Cognition
illustration not visible in this excerpt
Figure 2
Two ways of approaching intelligence. (a) The classical approach. The focus is on the brain and central processing. (b) The modern approach. The focus is on the interaction with the environment, as we will see throughout the book.
The idea that the mind controls the body is central to the way we like to think about ourselves. For example, I can decide in my mind to pick up a cup and drink a sip of coffee, and subsequently my arm and hand begin to perform the action. It implies that we are in control of our behaviour, and therefore our lives: this is the ‘Cartesian heritage’ of Western culture. As individuals we make a decision about something – a goal that we want to achieve, such as becoming a doctor, or catching a Frisbee – and then we make plans and go about doing it. Or when at a party, we decide that we would like to meet someone, so we start talking to that person. This picture of things comes very naturally: individualism – the importance of the individual – and a sense of control are two of our most cherished values. But is it correct?
As you may have guessed, our answer is for the most part a resounding ‘no’. While there may be some truth to this way of viewing ourselves, it actually turns out that to a surprising extent it is our bodies that determine our actions and thoughts. Although clearly of great importance, the brain is not the sole and central seat of intelligence: it is instead distributed throughout the organism. This is a major theme that we will explore throughout the book: how the body shapes the way we think. We are convinced that exploration of this relationship between body and thinking – the interaction between the body and the brain – will clarify core aspects of intelligence; indeed, we hope that it will lead to an entirely new view of intelligence itself.
1.1 Thinking, Consciousness and Cognition
Before we turn toward elucidating the mystery of intelligence, we must introduce some terminology. First off we will briefly examine the term ‘thinking’. Intuitively, thinking is associated with conscious or deliberate thought, with something high-level or abstract. This conception suggests that a process either is or is not thinking: but perhaps matters are not as clear as they might seem at first sight. For example: do newborns think? We cannot be sure, but perhaps they don’t. What about after a few days or weeks? Clearly after a few months or years, and certainly as adults, we do think, but this raises the difficult question of at what age children actually start thinking. Rather than looking for a black-and-white cutoff, we can view thinking as occurring in varying degrees: it is clear that babies’ skills gradually improve as they grow older; likewise, their ability to think also improves gradually as they grow and mature. From this perspective, the question is not whether someone (or something) is thinking or not, but how much thinking is going on. This immediately leads to the question of how we can tell how much thinking is going on: this problem will keep us busy throughout the book.
Consciousness is a peculiar, fascinating, but highly elusive phenomenon, and again we can imagine that it registers on a continuum rather than being an all-or-nothing property. We would suspect that, for example, insects, birds, rats, dogs, chimpanzees, and humans are conscious to an increasingly large extent, rather than being either conscious or not. Because it is tied to subjective experience, consciousness is hard to investigate scientifically. In this book we will not go much further into the subject: we hope to acquire a deep understanding of intelligence simply by pursuing the ideas of embodiment. However, because we discuss the issue of how cognition can emerge from a physically embodied system – and consciousness is intimately related to cognition – we feel that in this way we will contribute to the understanding of consciousness too.
‘Cognition’, closely related to intelligence, is often used to designate those processes of an agent that are not directly related to sensory or motor (relating to muscles) mechanisms: neither sensing nor perceiving things, nor moving about in any way. Examples of cognitive tasks are abstract problem solving, reasoning, memory, attention, and language. Later we will see that cognitive and sensory-motor processes often cannot easily be separated: perception, which is obviously directly related to sensory processes, is an important subfield of cognitive psychology, and even simple activities such as walking or grasping a cup have cognitive aspects. ‘Cognition’ is a more general term than ‘thinking’ because it does not necessarily imply consciousness. However, despite the more abstract connotations of ‘thinking’ as compared to ‘cognition’, we will see that thinking is not a disembodied process: it is also directly linked to sensory-motor and other physiological (i.e., bodily) processes.
We will use the term ‘agent’ to indicate that a claim applies not only to humans, but to other animals or robots as well: in this book much of what we have to say is about intelligence in general. Agents differ from other kinds of objects such as a rock or a cup, which are only subject to external forces and cannot act on their own. We are primarily interested in embodied agents: that is, those that have a physical body and can therefore interact with their environments.
Finally, we use the term ‘robot’ in quite a broad sense. The original sense of the word derives from the Czech ‘robota’, meaning something like ‘work’ or ‘forced labor’, and implies that robots were initially meant to do work for humans. However, the term ‘robot’ as used here refers to machines that have at least some agent characteristics in the sense discussed above, whether they do useful work for humans or not.
1.2 The Mystery of Intelligence
Intelligence is highly mysterious, and we all wonder what it is: how is it possible that something so sophisticated could have been produced by evolution? How does it develop as a baby grows to become an adult? How can we walk, talk, or solve problems? And how can we, without effort, recognize a face in a crowd, or play a piece of music? Intelligence is obviously of great importance: the fact that there is an enormous literature on the topic is not really surprising. Throughout human history, philosophers, psychologists, artists, teachers, and more recently neuroscientists and artificial intelligence researchers have been fascinated by it, and have devoted much of their lives to its investigation. And many of them have written books about it. Still, we feel it makes sense to write yet another book about this topic, because we believe it presents some novel points that have previously not even been considered part of the study of intelligence. These novel points all relate, one way or another, to the notion of embodiment, the simple idea that intelligence requires a body.
Intelligence is a highly sensitive topic, and we tend to believe that it is what distinguishes us from the other animals. In our societies, Western or Eastern, an enormously high value is attached to intelligence: ‘you are very intelligent’ is one of the highest compliments one can give or receive, and we are constantly reminded that intelligence is positive and desirable. There has been a recent surge of interest in emotional intelligence, a view which holds that rationality is limited and that we should also take emotions into account. According to this view, outlined for example in American neuroscientist Antonio Damasio’s book ‘Descartes’ Error’, intuition and the ability to judge a situation emotionally are considered just as important as the ability to pass high school exams or achieve high scores on intelligence tests. However, regardless of these developments, rational, logical intelligence is still considered to be one of the most enviable characteristics a human being can possess.
There is also the question of why some people are more intelligent than others, which is related to the problem of whether intelligence is ‘in the genes’ or can be acquired – by going to the right schools, for example. This hotly debated topic is called the nature-nurture debate. Part of the reason this debate is so emotionally charged is that it is about intelligence: whether a person has an honest character or high moral standards, and how these traits are acquired, is not discussed as much, even though honesty and moral integrity are still considered desirable qualities. For our purposes, the interesting scientific question in this seemingly endless debate is not whether intelligence is inherited or acquired during the lifetime of an individual, but how evolution and development interact such that intelligence arises in an agent.
1.3 Defining Intelligence
So intelligence is important, sensitive, and mysterious. But what is it really? The concept is closely related to thinking and cognition, but the term is typically used in an even more general way. Everyone has some intuition of what intelligence is all about: it has to do with consciousness, thinking, and memory (as already mentioned), along with problem solving, intuition, creativity, language, and learning, but also perception and sensory-motor skills, the ability to predict the environment (including the actions of others), the capacity to deal with a complex world (which may result from a combination of other abilities), and performance in school and on IQ tests and the like.
Let us consider some different cases. Are ants intelligent? Perhaps only to a limited extent, although they can orient well in their environments and have interesting learning abilities. What about an entire ant colony? Is the intelligence of an ant colony comparable to the intelligence of a human? Ant colonies cannot speak, so if we consider language to be an important part of intelligence then we might conclude that humans are more intelligent. What about rats or dogs? These animals seem more intelligent than ants because they can do things that ants cannot, such as learning to navigate a maze or catching a Frisbee while running.
It seems obvious that humans are more intelligent than dogs, but perhaps dogs are actually more intelligent in certain respects. They cannot do long division or build cars, but when it comes to finding survivors at disaster sites or drugs in luggage at airports they are far superior to humans, which is why they are employed for these tasks. As alluded to earlier, some humans seem to be more intelligent than others – but what do we really mean by this? Is it that they do some things better than others, such as perform better on intelligence tests? Are they more successful in their careers? Or perhaps it is because they can do maths? But then what about those who can sing, or survive alone in the wild?
It is very hard to come up with a good definition of intelligence, and many people have tried their luck at it: there is a website collecting around 70 attempted definitions, all of them plausible. But looking for this kind of formal definition is actually not very helpful. It is not very productive to discuss whether to consider ants intelligent, because we can find good reasons why they should be considered so (sophisticated orientation behaviour, complex social structures, impressive learning abilities) and good reasons why they should not (they can’t talk, play chess, build cars, or compose music). The more productive question is: given a behaviour that we find interesting, how does it come about? What are the underlying mechanisms?
To conclude this initial discussion, let us introduce the ‘diversity-compliance model’: not as a definition, but to characterize in a very general way what we intuitively expect from an intelligent agent. In the world around us, all humans, animals, and robots have to comply with certain rules, such as the fact that there is gravity and friction, and that locomotion requires energy. There is simply no way around it. But adapting to these constraints and exploiting them in particular ways opens up the possibility of producing diverse behaviour, such as walking, running, drinking from a cup, putting dishes on a table, or riding a bicycle. Diversity means that the agent can do many different things so that he – or she or it – can react appropriately to a given situation. An agent that only walks, or only plays chess, is naturally considered less intelligent than one that can also build toy cars out of a Lego kit, pour beer into a glass, and give a lecture in front of a critical audience. Learning, which is mentioned in many definitions of intelligence, is a powerful means for increasing behavioural diversity over time.
In this book we will study these kinds of behaviours by using the method of artificial intelligence, which is especially productive for this purpose. So, let us now get acquainted with it.
1.4 Artificial Intelligence
By ‘Artificial Intelligence’ we mean the interdisciplinary research field that has, in essence, three goals: (1) understanding biological systems (i.e., the mechanisms that bring about intelligent behaviour in humans and other animals); (2) the abstraction of general principles of intelligent behaviour; and (3) the application of these principles to the design of useful artefacts. The field was officially launched at the famous Dartmouth conference, held in 1956 in the small town of Hanover, New Hampshire, USA. It was here that the ‘fathers of AI’ – Marvin Minsky, John McCarthy, Allen Newell, Herbert Simon, and Claude Shannon – first discussed the question of whether or how human thinking could take place in a computer. They were convinced that, by using the notion of computation or abstract symbol manipulation, it would soon become possible to reproduce interesting abilities normally ascribed only to humans, such as playing chess, solving abstract problems, and proving mathematical theorems. What originated from this meeting, and what came to be the guiding principles until the mid-1980s, was what is now known as the classical, symbol-processing paradigm.
According to the classical approach to AI, when we study intelligent systems we should focus only on the programming or software. We can characterize this approach with the slogan ‘cognition as computation’. Thus intelligence can arise not only in wet, biological brains, but also in artificial systems and on computers. This is a powerful idea and is one of the main reasons why computing has been so successful: all that matters are the programs that run on your computer; the hardware is irrelevant. Under this classical perspective, human intelligence was placed at centre stage: as a consequence, the favourite areas of investigation were natural language, reasoning, proving mathematical theorems, and playing formal games like checkers or chess. In the 1980s programs called ‘expert systems’ became popular and were intended to replace human experts in tasks such as medical diagnosis and loan assessment.
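To give a concrete feel for what ‘cognition as computation’ looks like in practice, here is a minimal sketch in Python (our own illustration, not any historical system; the medical facts and rules are invented) of the kind of rule-based inference that expert systems performed:

```python
# A toy forward-chaining rule engine in the spirit of classical,
# symbol-processing AI: knowledge is explicit symbols, and reasoning
# is repeated rule application. Facts and rules are invented examples.

facts = {"has_fever", "has_rash"}
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_doctor_visit"),
]

changed = True
while changed:  # keep applying rules until no new fact can be derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # derive a new symbolic fact
            changed = True

print(sorted(facts))  # now includes 'recommend_doctor_visit'
```

Notice that nothing here touches the physical world: all the ‘intelligence’ is in the manipulation of symbols, which is precisely the point of the classical paradigm.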
By the mid-1980s, the classical approach had grown into a large discipline that could claim many successes: whenever you switch on your laptop computer or use a search engine on the internet you are starting up programs that have their origin in artificial intelligence, and computer games, home electronics, elevators, cars, and trains all abound with AI technology. However, this approach has not worked well when it comes to creating more natural forms of intelligence, such as perception, manipulation of objects, walking and running over rough terrain, and other tasks that require direct interaction with the real world. Animals also move around with an astonishing flexibility and elegance: watching a cheetah running at great speed is an aesthetic pleasure; monkeys climb, swing, and run through the rainforest with uncanny talent; and humans can walk up and down stairs whilst looking around and smoking a cigarette. No robot can even come close to these feats of agility, and building a running robot that can move on uneven ground is still considered one of the great challenges in robotics.
Another issue that has attracted a lot of attention is common sense, because it is fundamental to mastering our everyday lives and crucial for understanding natural language. In the classical approach, common sense has often been viewed as ‘propositional’. This means that the building blocks of common-sense knowledge are thought to be statements – propositions – such as ‘cars cannot become pregnant’ or ‘if you drop a glass it will normally break’. But common sense is also about the implicit knowledge of folk physics that enables even small children to drink from a cup, to walk, or to throw a rock – and this crucial type of common-sense knowledge is clearly not propositional in nature.
In this book we will argue that the notion of intelligence as computation is in many ways misleading. Classical artificial intelligence has been successful at those tasks that humans normally consider difficult – playing chess, proving mathematical theorems, or solving abstract problems. However, many actions we experience as very natural and effortless, such as seeing, walking, riding a bicycle, drinking from a glass, or brushing our teeth, have proved notoriously hard. In conclusion, the classical approach is simply of no help in deepening our understanding of many aspects of intelligence.
1.5 The Embodied Turn
After these issues became clear, it seemed that the field was in dire need of a paradigm shift. In the mid-1980s Rodney Brooks, director of the MIT Computer Science and Artificial Intelligence Laboratory, suggested that all of this focus on logic, problem solving, and reasoning was misguided. Brooks argued in a series of provocative papers, including ‘Intelligence Without Representation’ and ‘Intelligence Without Reason’, that intelligence always requires a body: we can only ascribe intelligence to real physical systems whose behaviour can be observed as they interact with the environment. Unlike the classical view of intelligence, which is algorithm-based, the embodied approach envisions the intelligent artefact as more than just a computer program: it must perform tasks and behave intelligently in the real world. This perspective is known as embodiment. ‘The world is its own best model’ was another of Brooks’s slogans at the time: why build sophisticated models of the world when you can simply look at it?
With this change in focus, the nature of the research questions also started to shift. In the second half of the 1980s Brooks started studying insect-like locomotion, and built, for example, the famous six-legged walking robot ‘Genghis’. He chose to investigate insects because it took evolution so much longer (about 3 billion years) to move from inorganic matter to insects than it took to get from insects to humans (less than 0.5 billion years). He argued that once we understood insect-level intelligence, it would be much easier and faster to understand and build human-level intelligence. Because of this interest in insects, walking and locomotion in general became important research topics. Other research topics included orientation, searching for food, bringing food back to the nest, generally exploring an environment, moving toward a light source, and obstacle avoidance. One might wonder what these simple behaviours have to do with intelligence: we hope to establish the link later in this book. We believe that embodiment is the approach that holds the most promise for our future understanding of intelligence.
The perspective of embodiment requires working with real-world physical systems such as robots. This is the synthetic methodology, which is characterized by the slogan ‘understanding by building’. For example, if we are trying to understand human walking, we should build an actual walking robot, as this approach yields the most new insights: if you don’t get it completely right, it simply won’t work. A simulation may be employed, but only if it accurately replicates the actual physical processes of walking. It is easy to ‘cheat’ with simulation: a real-world walking agent has to somehow deal with bumps in the ground, whilst in a simulation this problem can easily be ignored. The synthetic methodology thus contrasts with the more analytical ways of proceeding in biology, psychology, or neuroscience, where an animal or human is analyzed in detail by performing experiments on it.
Robots differ from computers in that they often have complex sensors that provide rich, continuously changing information about the world, rather than just a keyboard or mouse: they learn about the environment through their own sensory systems as they are interacting with the real world. When designing a robot we must deal with many new issues such as deciding which environments it must function in, the kinds of sensors and actuators to use, the energy supply (a notoriously hard problem), and the materials for construction. The physics of the interaction – the forces that the robot will experience – must also be considered. Most of these considerations are normally not associated with the notion of intelligence, and so the nature of the field changed dramatically when embodiment entered the picture. Researchers following the embodied approach began to move away from computer science and into robotics, engineering, and biology labs.
1.6 The Role of Neuroscience
In the early 1980s a new discipline arose called connectionism, which tries to model cognitive phenomena with artificial neural networks – collections of virtual models of neurons that are connected to each other in large networks, functioning in a massively parallel fashion. Although they had been around since the 1950s, neural networks only started to really take off in the 1980s, when artificial intelligence was in a deep crisis and desperately looking for a way out. Because they are inspired by the brain, researchers were hoping that neural networks would be better at describing mental phenomena. Although neural networks relate to brain activity only at a very abstract level and neglect many essential properties of biological neurons, they demonstrate impressive performance and can accomplish, for example, difficult classification and pattern-recognition tasks, such as deciding from an X-ray image whether some tissue contains a cancerous tumor or not, or distinguishing bags containing plastic explosives from innocuous ones at airports.
In the field of cognitive psychology, artificial neural networks became very popular for modelling a variety of abilities such as perception (e.g. recognizing different types of objects), language learning, and memory. An exciting new discipline called connectionist psychology emerged as a result. Massively parallel neural networks have highly desirable properties: just like natural brains they can adapt and learn, they are noise- and fault-tolerant, i.e. they continue to function even when partially damaged (such as after drinking too much), and they can generalize, meaning they continue to work in similar but different situations.
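To give a flavour of how such a network learns, here is a minimal sketch (our own toy example, not a connectionist model from the literature) of a single artificial neuron adjusting its connection weights from examples:

```python
import random

# A single artificial 'neuron' (a perceptron) learning a toy task:
# answer 1 if x + y > 1, else 0. Data, rates, and task are invented.
random.seed(0)
data = []
for _ in range(100):
    x, y = random.random(), random.random()
    data.append(((x, y), 1 if x + y > 1.0 else 0))

w1 = w2 = b = 0.0
for _ in range(20):  # twenty passes over the training examples
    for (x, y), target in data:
        output = 1 if w1 * x + w2 * y + b > 0 else 0
        error = target - output  # the classic perceptron learning rule
        w1 += 0.1 * error * x
        w2 += 0.1 * error * y
        b += 0.1 * error

print(w1, w2, b)  # the weights now approximate the boundary x + y = 1
```

After training, the neuron also classifies points it has never seen before – a tiny instance of the generalization we just described.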
The main problem with this approach, however, was that the networks were not connected to the outside world: they did not collect their own data from the environment. Autonomous agents such as humans or robots, by contrast, have to acquire data in their physical and social interaction with the environment. We can draw inspiration from nature here: biological systems have evolved to cope precisely with these kinds of interactions because during evolution, there was always a complete organism that had to interact and survive in the environment.
Because of this connection to the real world, researchers in embodied artificial intelligence started to cooperate much more closely with neurobiologists. Around the same time a new breed of scientist started to appear: the computational neuroscientists, who developed detailed models of neurons, neural circuits, or specialized collections of neurons in the brain, such as the cerebellum, which plays a key role in motor control. Although their research topics strongly overlap, most computational neuroscientists would not yet consider themselves to be doing research in AI, and computational neuroscience is only just beginning to investigate embodiment. Finally, engineers have started cooperating with neuroscientists to connect electro-mechanical devices directly to neural tissue, as we shall see when we discuss cyborgs in chapter 7.
1.7 Diversification
AI has always been an interdisciplinary field, combining computer science, psychology, linguistics and philosophy. With the embodied approach, it has become even more so: now engineering, robotics, biology, biomechanics (the study of how humans and other animals move), material science, neuroscience, and sport science (a discipline aimed at using scientific methods to improve sporting performance in humans) have become part of the game. As we shall see, there has been a shift of interest from high-level processes – as studied in psychology and linguistics – to more low-level sensory-motor processes.
As the disciplines participating in AI have changed, the terms for describing the research areas have also shifted: researchers using the embodied approach no longer refer to themselves as doing artificial intelligence but rather robotics, engineering of adaptive systems, dynamic locomotion, or bio-inspired technology. Likewise, scientists who have their origins in other fields have started to play an important role in the study of intelligence: computational neuroscience is a case in point. So on the one hand the field of artificial intelligence has significantly expanded, while on the other its boundaries have become even fuzzier than they were before. Diversification has resulted in a number of interesting developments, including the appearance of the fields of biorobotics, developmental robotics, evolutionary robotics, artificial life, brain-machine interface technology, and multi-agent systems: we will look into all these areas briefly throughout this book.
1.8 New Approaches in Robotics
Biorobotics is the branch of robotics dedicated to building robots that mimic the behaviours of specific biological organisms. There are a great many examples of successful biorobotics projects that have contributed to our understanding of locomotion and orientation behaviour. A good illustration is the work done by the mathematician and engineer Dimitri Lambrinos while he was working at the AI Lab at the University of Zurich. In cooperation with the world leader in ant navigation research, Ruediger Wehner, also of the University of Zurich, he built a series of robots called the Sahabots (short for Sahara robot) that mimic behaviours of the desert ant Cataglyphis, which lives in a flat, sandy saltpan in southern Tunisia.
The challenge they faced was to create mechanisms that could, in principle, reproduce the navigation behaviour of these desert ants on a robot. One such mechanism, the ‘snapshot model’, was originally postulated by the British insect biologist Tom Collett of Sussex University. In this model, when the ant leaves the nest it takes a snapshot – a picture of the horizon as seen from the nest – which is stored in the ant’s brain. The ant then goes out searching for food, travelling sometimes up to 200 meters away from the nest. It later returns to the vicinity of the nest using another navigation system, which estimates the distance and direction to the nest using polarized sunlight. Because this long-range navigation system is inaccurate, when the ant gets near the nest the snapshot method takes over and guides it very precisely to the nest entrance. This model, which has been verified in many experiments with real ants, has also been tested, with impressive success, on robots in the Sahara desert where the ants live. This shows that such a mechanism could work in principle, and hence that, contrary to widespread assumption, agents do not need maps or internal models of the environment in order to navigate successfully.
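The long-range part of this system, known as path integration, is simple enough to sketch in a few lines of Python (our own illustration; the outbound path and units are invented): the agent keeps a running sum of its own movements, and the negated sum always points home.

```python
import math

# Path integration: sum every step the agent takes; the negated sum is
# a 'home vector' pointing straight back to the nest. The outbound path
# below is invented; a real ant updates this estimate continuously.
steps = [(0.0, 2.0), (1.5, 0.5), (2.0, -1.0), (0.5, 3.0)]  # (dx, dy) in metres

hx = hy = 0.0
for dx, dy in steps:
    hx += dx
    hy += dy

distance_home = math.hypot(hx, hy)
bearing_home = math.degrees(math.atan2(-hy, -hx))  # direction back to the nest
print(f"nest is {distance_home:.1f} m away at bearing {bearing_home:.0f} degrees")
```

Note that there is no map anywhere in this computation: just a single accumulated vector, which is why the method drifts over long journeys and needs the snapshot model to take over near the nest.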
Toward the mid-1990s Brooks argued that we had achieved insect-level intelligence in robots and should move ahead toward new frontiers. However, insects can do things like manipulating objects with their legs and mouths, navigating different kinds of environments (even in the desert!), building complex housing, jointly carrying large heavy objects, developing highly organized social structures, and reproducing and caring for their offspring: many of these abilities are far from being realized in robotic systems. Nevertheless, in the early 1990s Brooks started the ‘Cog’ project for the development of a humanoid robot, with the goal of eventually reaching high-level cognition. This was, in a sense, a return to the goals of traditional artificial intelligence, keeping the lessons learned from biorobotics in mind. The term ‘humanoid robot’ is used for robots that look like humans: they typically have two arms and legs, a torso, a movable head with a vision system, and sometimes hearing and a sense of touch. Because of this, people often project humanlike properties onto them: David McFarland described this as ‘Anthropomorphization, the incurable disease!’ This phrase expresses how humans have a strong compulsion to attribute human-like properties to animals (e.g., Disney characters) and also to robots, even though these agents are often too simple for this to be justified.
1.9 Summary of key points
- The field of artificial intelligence is about understanding biological systems, abstracting principles of intelligent behaviour, and designing and building intelligent artefacts
- The classical approach, based on abstract symbol processing, fails to explain many things about intelligence
- Embodied systems are those that have a physical form and can thus be observed interacting with their environment
- We believe that embodiment is the key to understanding intelligence
- The body is not controlled by the brain: instead, behaviour arises by interaction between the two systems
- It is very important to actually build physical systems (which will usually be robots) to derive and test our ideas: this approach is known as the synthetic methodology
- In recent years AI has been transformed from a computational to a multidisciplinary field, involving biology, neuroscience, engineering and robotics
2. Prerequisites for a Theory of Intelligence
illustration not visible in this excerpt
Figure 2
Time scales and emergence. Three time scales must be taken into account when designing or analyzing an agent: ‘here and now’, ontogenetic (developmental), and evolutionary. Bodies, minds, and behaviours emerge across these time scales, as shown. The principles governing behaviour are different for the different time scales.
In the following chapters we will take the first steps toward a new theory of intelligence. This chapter will outline what type of theory we are looking for, as well as introducing some key concepts we will need later on, and the next chapter will present the theory itself. Developing a theory of intelligence is a massive endeavor, and many great minds have tried their luck at it – such as the American psychologist and philosopher William James, the Austrian psychologist Sigmund Freud, the British psychologist Charles Spearman, the American psychologists Robert Sternberg (‘Triarchic Theory of Intelligence’), Howard Gardner (‘Theory of Multiple Intelligences’), and John Anderson (‘The Architecture of Cognition’), the artificial intelligence researchers Marvin Minsky (‘The Society of Mind’) and Allen Newell (‘Unified Theories of Cognition’), and the linguist and psychologist Steven Pinker (‘How the Mind Works’). Despite all of these distinguished efforts, we feel that we can make a valuable contribution because our ideas have grown from a perspective of embodiment and the synthetic methodology, which is mostly absent from previous attempts.
2.1 Form of the Theory
A theory of intelligence should capture our understanding of the field in a compact way, so that the insights can be applied to many different problems and be widely communicated. We want our theory to characterize not only ants and rats, but also humans, robots, and perhaps other kinds of artifacts such as mobile phones, intelligent cars, and wired T-shirts. Because our theory will be so broad we cannot expect to derive specific models or designs from it, but it should provide us with general guidelines on how to proceed. One aspect of these guidelines will be their use of the theory of ‘non-linear dynamical systems’ as a metaphor. A dynamical system in the real world is one that changes over time according to certain laws: examples include the stock market, the weather, a swinging pendulum, or a society of monkeys. To say that a dynamical system is non-linear means that effects cannot simply be added: take your two favourite songs – by playing them both simultaneously, you don’t get double the listening pleasure! Many non-linear systems are also chaotic, so that their future is only predictable on a very short-term basis (weather forecasts are chronically error prone).
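A tiny numerical experiment makes this unpredictability vivid. The following Python sketch uses the logistic map, a standard textbook example of a chaotic non-linear system (our own choice of illustration, not one from the robotics literature):

```python
# The logistic map x' = r * x * (1 - x), a textbook non-linear system.
# With r = 4 it is chaotic: two starting values that differ by only one
# part in a million diverge completely within a few dozen steps.
r = 4.0
a, b = 0.300000, 0.300001
for step in range(25):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
print(abs(a - b))  # no longer tiny: long-term prediction has broken down
```

A measurement error of one part in a million sounds harmless, yet after a few dozen steps the two trajectories have nothing to do with each other – which is essentially why weather forecasts degrade so quickly.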
Dynamical systems have generated a lot of hype in the artificial intelligence community as a potential solution for escaping the weaknesses of the cognitivistic approach: at scientific conferences around the world, it has been loudly declaimed from lecterns (if not from the rooftops!) that theories of intelligence (or cognition) will have to be phrased in terms of dynamical systems. A related concept is chaos theory, which is sometimes used synonymously: this discipline achieved cult status in the 1980s and 1990s when professionals from all areas – managers, teachers, journalists, and even politicians – started using its terminology. Although this hype has largely faded away, the basic idea is still appealing, and we will make use of concepts from dynamical systems theory as a highly intuitive set of metaphors for thinking about physically embodied agents and groups of agents.
We are looking for a scaffold or structure that will somehow get us the best of all these worlds: the analytic component for understanding natural and artificial agents, the synthetic one for designing and building systems, and the dynamical systems metaphor for developing ideas and getting inspiration about intelligent behaviour in general. We feel that we can combine them all if we formulate the theory as a set of design principles. On the one hand these will represent fundamental ingredients for a general theory of intelligence, and on the other they will provide powerful engineering heuristics for the design of intelligent artifacts. As well as typifying the synthetic methodology of understanding by building and following the engineering flavor of the field, they will also serve to characterize existing natural and artificial systems.
In addition to the design principles, our theory also includes a set of more general concepts and considerations that provide the framework within which design and analysis can take place: these concepts are described in this chapter. The design principles will also be supplemented with further principles relating to evolution, development and collective intelligence in chapters 4, 5 and 6.
2.2 Diversity and Compliance
As mentioned in chapter 1, agents we intuitively consider to be functioning intelligently within an ecological niche usually comply with and exploit the rules of their ecological niche in order to produce diverse behaviours. Art is a good illustration of this idea: you can make random scribbles on a piece of paper, but even though these scribbles are likely to be unique (nobody has ever seen precisely these scribbles), people will not find this very interesting unless there is at least some compliance with the – admittedly vague – rules of aesthetics. But if you master these rules, you can exploit them to produce pieces that people might consider to be works of art. We have the option to follow these rules or not, which is why they are called ‘soft’. In contrast, there are ‘hard’ rules that cannot be modified, such as the laws of physics. There is no choice between complying with them or not: only a choice as to which laws to exploit in order to achieve a particular purpose. In walking, for example, we exploit gravity and friction, but we do not exploit electromagnetic waves, whereas for seeing we do. We can ignore the rules of art by scribbling at random, but we cannot stop gravity from exerting a downward force on our bodies.
Intuitively, we are more likely to call a system ‘intelligent’ if it is equipped with sensory and motor abilities, because then it can use its sensors, its body, and its arms, legs, or fins to exploit the environment for different purposes. For example, a swimming robot equipped with light sensors might swim toward a light source, even if that entails swimming against the prevailing current: it is not merely dragged along with the flow of water. Unlike computers, robots can exploit physical laws in many different ways: friction and gravity for walking, drinking, and writing; fluid dynamics for swimming; sound propagation and vibration for talking and listening, and so on.
2.3 Frame of Reference
In order to exploit the givens of a particular environment you need not be consciously aware of what you are doing. It is we, as observers or scientists, who say that fish exploit turbulences in the water: the fish know nothing about it. Even we humans might not realize that we are exploiting some aspect of a physical law: for example, have you ever noticed that when you walk, you hardly use your muscles to extend your lower leg before placing your foot on the ground, but that gravity does the job for you? We exploit the laws of physics for walking whether we are aware of it or not. Intelligence, in this sense, is not so much a property of an agent or its brain, but resides in the eye of the beholder. This is the frame-of-reference issue, which we will encounter many times throughout the book.
The initial inspiration for this line of thought comes from Herb Simon’s seminal book ‘The Sciences of the Artificial’, in which he introduced the anecdote of an ant walking along a beach. From our point of view the ant describes a complex path, walking around puddles, rocks, twigs, and pebbles. However, from the point of view of the ant, the mechanisms that bring about this behaviour might in fact be quite simple, such as ‘if obstacle on right then turn left’, ‘if obstacle on left then turn right’, and ‘go straight’. The path of the ant emerges from its interaction with the environment: it knows nothing about puddles, pebbles, and twigs.
Let us now consider another classic example: the ‘Swiss robots’. The Swiss robots are a set of simple robots that were built in the Artificial Intelligence Laboratory at the University of Zurich in the 1990s by the biologist Rene te Boekhorst and the engineer Marinus Maris. Each robot was equipped with two motors, one on the left and one on the right, and two infrared sensors, one front left and one front right. Infrared sensors provide a rough measure of distance to an object by sending out a signal and measuring the intensity of its reflection: the closer the object, the stronger the intensity of the reflection. If you now set three or four Swiss robots loose in an arena with randomly placed Styrofoam cubes, they will eventually shuffle most of the cubes into two or three clusters, with a few pushed against the walls.
To an observer it seems like the Swiss robots are ‘cleaning up’; but let us look at the world from their perspective. The robots are programmed in such a way that when the sensor on one side is stimulated – such as by an object, a wall, or another robot – they will move in the other direction. If there is no stimulation on either side, the robots will simply go straight. Clusters come about because the sensors are placed so that if a robot encounters a cube head-on, neither sensor fires, and the robot simply drives forward, pushing the cube along. When another cube then appears to one side, the robot turns away, leaving the cube it was pushing next to the first one. This process, when repeated, leads to clustering.
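In fact, the robots’ entire control strategy can be sketched in a few lines of Python (the sensor names, threshold, and wheel speeds are our invention; the real robots were programmed at a lower level):

```python
def swiss_robot_step(left_ir: float, right_ir: float) -> tuple:
    """The complete control strategy of a Swiss robot, sketched.

    A higher infrared reading means a closer object. The whole rule:
    turn away from stimulation, otherwise drive straight. Note that
    cubes, walls, and 'cleaning up' appear nowhere in this code - the
    clustering emerges from this rule, the placement of the sensors,
    and the arena itself.
    """
    THRESHOLD = 0.5
    if left_ir > THRESHOLD:
        return (1.0, 0.2)   # stimulation on the left: veer right
    if right_ir > THRESHOLD:
        return (0.2, 1.0)   # stimulation on the right: veer left
    return (1.0, 1.0)       # no stimulation: straight ahead (pushing any cube in front)
```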
The robots know nothing about cleaning up, avoiding Styrofoam cubes, or what a Styrofoam cube is; they simply react to levels of sensory stimulation: the world, as seen by a Swiss robot, only consists of sensory stimulation on the right and on the left, and the only actions they really perform are moving their right and left wheels accordingly. Here is what the outspoken American philosopher of mind Dan Dennett had to say about the Swiss robots: ‘These robots are cleaning up, but that’s not what they think they are doing!’ We have to be careful not to project our own perceptions of the world onto the agent we are observing or constructing. Recall McFarland’s warning: ‘Anthropomorphization, the incurable disease.’
Let us now look at a more recent robot: the four-legged robot ‘Puppy’. It was developed by the young and gifted engineer Fumiya Iida, working in the same Zurich laboratory at the time. Puppy, like the Swiss robots, has a very simple design, but its behaviour is amazingly lifelike and stable. It has a total of 12 joints: four at the shoulders and hips, one at each knee, and one at each ankle. In addition, there are a number of springs connecting the lower and upper parts of each leg. There are also pressure sensors on the bottom of the feet that indicate when a foot is touching the ground. The control is very simple: the ‘shoulder’ and ‘hip’ joints are periodically moved back and forth.
When the robot is placed on the ground and begins moving, it scrabbles at the floor, but soon settles into a smooth running gait. This is the result of the interaction between its oscillatory movements, the robot’s morphology (its shape and the springs attached), the friction on the feet, and gravity. Puppy only knows about the pressure patterns on its feet, which are its sole means of connecting to the outside world: the ‘running’ is entirely in the head of the observer. Like the behaviour of the Swiss robots, the running of Puppy cannot be reduced to or explained by its control mechanisms alone: the simple oscillatory movements programmed into the robot lead to running only through the right interaction of its embodiment with the environment; the springs, for instance, are essential for this to work.
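To emphasize just how little ‘brain’ is involved, here is a sketch of Puppy-style control in Python (the amplitude, frequency, and phase values are placeholders of our own, not the robot’s actual parameters):

```python
import math

# Puppy's entire 'brain', sketched: each actuated joint simply follows
# a sine wave. Amplitude, frequency, and phases here are invented
# placeholders, not the real robot's parameters.
AMPLITUDE = 0.4   # radians
FREQUENCY = 2.0   # oscillations per second
PHASE = {"front_left": 0.0, "front_right": math.pi,
         "hind_left": math.pi, "hind_right": 0.0}

def joint_angles(t):
    """Target angles for the four actuated hip/shoulder joints at time t (seconds)."""
    return {leg: AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * t + p)
            for leg, p in PHASE.items()}

print(joint_angles(0.1))
# The running gait itself is nowhere in this program: it emerges from
# these commands interacting with the springs, the feet, and gravity.
```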
2.4 The Synthetic Methodology
As we have already mentioned, artificial intelligence is concerned not only with analyzing natural phenomena, but also with building artificial systems (hence its name). The synthetic methodology, which can be understood as understanding by building, lies at the heart of the field. Many researchers have been inspired by the neuroscientist Valentino Braitenberg’s delightful book ‘Vehicles’, which carries the telling subtitle ‘Experiments in Synthetic Psychology’. In this book he presents a series of robot vehicles of increasing complexity, starting with very simple ones. Even though they have ‘brains’ composed of only a few wires, they exhibit seemingly sophisticated behaviour. The synthetic approach can be traced back even further, though, to the British neuroscientist, engineer, and showman Sir W. Grey Walter, who claimed to have built true ‘artificial life’ in the form of two turtle robots, whimsically named Elmer and Elsie. There was no software on board Elmer or Elsie (or the Swiss robots either, for that matter): the view of cognition as computation had yet to arise.
Grey Walter’s turtles, like Braitenberg’s vehicles, illustrate a very important and at first sight quite surprising result: very simple ‘brains’, in the right context, can produce rather complex behaviour that we are tempted to call intelligent. For example, one type of Braitenberg vehicle contains two wires connecting the sensor on one side of its body to the motor on the other side, with the result that the vehicle moves toward and follows a light source. If two such vehicles are placed near each other – and each has a light source attached on top – they perform complex movements, which Walter described as reminiscent of mating dances or territorial aggression.
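The ‘brain’ of such a light-seeking vehicle is easy to sketch (a minimal illustration of the crossed wiring; the scaling is our own simplification):

```python
def braitenberg_step(left_light: float, right_light: float) -> tuple:
    """The two 'wires' of a light-seeking Braitenberg vehicle, sketched.

    Each sensor drives the motor on the OPPOSITE side. If the light is
    to the left, the left sensor reads more, so the right motor spins
    faster and the vehicle turns left - toward the source. That
    crossing is the entire 'brain'.
    """
    left_motor = right_light
    right_motor = left_light
    return (left_motor, right_motor)
```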
Since then, a long line of robots with simple controllers that nonetheless display complex behaviours has been created: Craig Reynolds’s ‘boids’, simulated birds endowed with only three simple rules that allow them to fly in a ‘flock’ (an algorithm since used in movies such as Jurassic Park, The Lion King, and the Lord of the Rings trilogy); the tag-playing, cubic-inch-sized swarmbots built by the young innovator James McLurkin at MIT; and Kismet, a humanoid robot – about which we will say more later – that engages users in simple social interactions using a set of reflexes; and many others. Because behaviours that look complex and sophisticated can emerge from simple rules, as the sketch below illustrates, we suspect that correspondingly simple neural circuits often produce the behaviour of natural organisms too.
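Here is a compact sketch of Reynolds-style flocking under the three rules – separation, alignment, cohesion. The weights, neighbourhood radius, and speed limit are illustrative assumptions, not Reynolds’s original values.

```python
import numpy as np

# A minimal boids sketch: each simulated bird obeys only three local rules.
N, RADIUS, MAX_SPEED = 50, 2.0, 0.5       # assumed flock size and parameters
pos = np.random.rand(N, 2) * 10           # random initial positions
vel = np.random.rand(N, 2) - 0.5          # random initial velocities

def step():
    for i in range(N):
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1)
        near = (dist > 0) & (dist < RADIUS)
        if near.any():
            cohesion = pos[near].mean(axis=0) - pos[i]    # head for neighbours' centre
            alignment = vel[near].mean(axis=0) - vel[i]   # match neighbours' heading
            separation = -(diff[near] / dist[near, None] ** 2).sum(axis=0)  # avoid crowding
            vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
        speed = np.linalg.norm(vel[i])
        if speed > MAX_SPEED:
            vel[i] *= MAX_SPEED / speed
    pos += vel
```

Nothing in these few lines mentions a ‘flock’, yet running them produces coherent group motion: the flock exists only at the collective level.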
Although building artificial yet lifelike agents is certainly very challenging, we can gain interesting insights from even very simple experiments. For example, building artificial muscles with characteristics different from natural ones, rather than just constructing precise replicas, may help us learn more about the dynamics of walking in general. Furthermore, perhaps we can eventually produce artificial muscles superior to natural ones: biological muscle tires easily, and it is relatively weak when fully stretched or contracted – but we don’t need to mimic animals or humans; we are free to build whatever we want. The general recipe is this: given an interesting behaviour – how we recognize a face in a crowd, how ants find their way back to their nest, or how birds manage to fly in flocks – we aim to build a system, typically a robot or a simulation, that is able to perform it.
2.5 Time Perspectives
The next component of our framework is time scales. A comprehensive explanation of the behaviour of any system must incorporate three perspectives, spanning successively longer periods of time. The first is the ‘here and now’ perspective, which relates to what is currently happening and the mechanisms by which things work. The second is learning and development – the ‘ontogenetic’ view – which spans the lifetime of an individual. The third is the evolutionary or phylogenetic perspective, which can extend over many generations of a population of agents. These distinctions have their origin in biology.
As an example, let us ask why drivers stop their cars at red traffic lights. A here-and-now answer is that a specific visual stimulus, the red light, leads to a certain behaviour like applying the brakes. A different answer could describe how individual drivers learn this rule from books, television, and driving instructors: this is an explanation in terms of learning and development. Lastly, an evolutionary explanation might deal with the historical process whereby a red light came to be used as a way of regulating traffic at road junctions. Explanations at different time scales are often linked: for example, hitting your thumb with a hammer teaches you how to better handle it in the future, and so the ‘here and now’ affects development. Likewise, learning affects what you will do in future situations: development affects the ‘here and now.’ The evolution of hand morphology changes what an organism can do with its hand: evolution affects the ‘here and now’ too. One important reason for separating the three time scales is that the mechanisms and principles that hold for each of them are different: evolutionary processes based on mutation and selection are very different from ontogenetic ones of growth and learning. But more about this later!
The time scales also help us clarify the kinds of design decisions that we as engineers can make. We can choose to build all aspects of the robot – its ‘brain’ and body – ourselves and watch what it does (the ‘here-and-now’ view, which requires detailed understanding of the actual mechanisms). Alternatively, we can take a step back: we build a starting ‘baby’ robot and define the rules by which that simpler agent can learn from the environment and develop into a more complex ‘adult’ robot (the ontogenetic view). Finally, we can step back even further and design an artificial evolutionary system that produces agents on its own (the phylogenetic view). The time perspectives thus allow us to influence our designs either very directly, as in the ‘here-and-now’ position, or by letting emergence play an increasing role, as in the ontogenetic and phylogenetic positions.
Assume that we want to understand how the desert ant Cataglyphis finds its way back to the nest. We can build a robot that implements the snapshot model, in which the ant takes a photographic image of its environment as it leaves the nest, stores it, and uses it to orient itself when it returns to the area near the nest. This approach models the mechanism of navigation itself, and thus adopts the ‘here-and-now’ perspective. The biorobotics researcher Verena Hafner and her colleagues have asked whether the snapshot model is learned as an agent interacts with its environment, or is a behaviour that is hard-wired from birth. The antlike robots in her experiments learn something that is indeed very similar to the snapshot model, so the ‘here-and-now’ mechanism of the snapshot model is learnt as the robots interact with the environment. These experiments – like most learning experiments at the ontogenetic time scale – assume a fixed morphology and allow only the control architecture to change. However, as we shall see later, there are studies where morphological change is exploited.
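To make the ‘here-and-now’ mechanism concrete, here is a minimal sketch of the snapshot idea: store a panoramic view at the nest, then steer so that the current view becomes more similar to it. Real snapshot models typically match landmark bearings rather than raw pixels, and `panorama_at` is a hypothetical helper that would return the panoramic image visible from a given position; both are simplifying assumptions.

```python
import numpy as np

# A minimal sketch of snapshot-based homing (assumptions noted above).

def view_difference(snapshot, current):
    """Root-mean-square difference between two 1-D panoramic images."""
    return np.sqrt(np.mean((snapshot - current) ** 2))

def pick_heading(snapshot, panorama_at, position, step=0.1):
    """Try a few candidate headings and return the direction whose
    predicted view best matches the stored snapshot."""
    candidates = [np.array([np.cos(a), np.sin(a)])
                  for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
    return min(candidates,
               key=lambda d: view_difference(snapshot,
                                             panorama_at(position + step * d)))
```

In Hafner’s experiments, a mechanism of roughly this kind was not programmed in but acquired by the robots as they interacted with their environment.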
2.6 Emergence
‘Emergence’ designates behaviour that has not been explicitly programmed into a system or agent, but that arises from the operation of simple behavioural rules. We can distinguish three types of emergence: (1) global phenomena arising from collective behaviour, (2) individual behaviour resulting from an agent’s interaction with the environment, and (3) the emergence of behaviour at one time scale from processes at another. The formation of ant trails is an example of the first type: the individual ants know nothing about the fact that they are forming a trail that will develop into the shortest connection to a food source. In Braitenberg’s vehicles, light-following behaviour emerges because the robot has two wires connecting its sensors to its motors in a particular way, and there is a light source in its environment. This is behaviour emerging from an interaction with the environment: the Swiss robots and the quadruped Puppy are further cases in point. Finally, there is emergence with respect to the time scales, when a phenomenon at one time scale emerges from another, longer time scale, as depicted in figure 2 – for example, the snapshot model emerges from a developmental process.
For researchers working in artificial intelligence or artificial life, emergence is the thing to strive for. If a phenomenon can be explained as emergent from simpler processes, this constitutes both an explanation and a deeper level of understanding: for example, we can understand the mating dance of Grey Walter’s robot turtles much better once we know how they are wired to react to light. By watching how one process or characteristic of an agent emerges from processes acting over a longer time scale, we can learn not only about the process itself but also about how and in what situations it arises. For example, when using evolutionary algorithms to automatically design agents for locomotion, we can study which morphologies and neural systems develop depending on the environment (e.g., land or water). If all agents evolved for rapid locomotion have a symmetric morphology, we can conclude that – most likely – symmetric morphologies are conducive to fast movement. Or, if it can be shown that robots evolved for light-seeking behaviour end up with a Braitenberg-like architecture, we can say that this architecture is emergent from an evolutionary process. Finally, if we can show that evolved agents follow the design principles in chapter 5, this will add validity to the principles themselves.
Once we have the behavioural rules of a system – for example, how ants drop chemical signals and tend to move in the direction of high pheromone concentration, or how the infrared sensors on the Swiss robots are positioned and how the robots react to sensory stimulation – it is straightforward to find out about the emergent behaviours by simply running the system (see the sketch below). There is no mystery about emergence, and we can give a perfectly rational explanation of how the Swiss robots form their clusters. However, given a certain desired behaviour we want to achieve, devising the rules that will lead to this behaviour is much more difficult, and it is still an open question how this can be done systematically. At the moment, design for emergence is an art rather than a hard-core engineering discipline, but we will try, throughout the course of this book, to provide evidence that our design principles for intelligent systems can help.
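Here is what ‘simply running it’ looks like for the ant case: a minimal grid simulation in which ants deposit pheromone and prefer to step toward higher concentrations. The grid size, deposit amount, evaporation rate, and noise term are illustrative assumptions.

```python
import random

# A minimal sketch of pheromone-trail emergence (assumed parameters).
SIZE, EVAPORATION, DEPOSIT = 50, 0.99, 1.0
pheromone = [[0.0] * SIZE for _ in range(SIZE)]
ants = [(SIZE // 2, SIZE // 2) for _ in range(20)]   # all start at the 'nest'

def step():
    global ants
    new_ants = []
    for x, y in ants:
        neighbours = [((x + dx) % SIZE, (y + dy) % SIZE)
                      for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
        # Weight each move by local pheromone (plus noise so trails can start).
        weights = [pheromone[nx][ny] + 0.1 for nx, ny in neighbours]
        nx, ny = random.choices(neighbours, weights=weights)[0]
        pheromone[nx][ny] += DEPOSIT   # deposit while walking
        new_ants.append((nx, ny))
    for row in pheromone:              # evaporation everywhere
        for j in range(SIZE):
            row[j] *= EVAPORATION
    ants = new_ants
```

Run `step()` repeatedly and trails of high pheromone concentration emerge, even though no ant ‘knows’ it is building one.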
One area in which design for emergence has turned out to be astonishingly simple is artificial evolution. In some experiments, fantastically complex behaviours and structures have emerged from relatively simple evolutionary systems. Using the concept of emergence we may be able to automatically design systems that are more complex than those we design by hand. We have to keep in mind, however, that using evolution for design often makes it difficult for us to understand the results: we might end up with complex intelligent artifacts whose behaviour we cannot figure out, which is particularly inconvenient if the device needs to be repaired. But the fact that the mechanisms are not obvious does not imply that the behaviour is mystical – there will be a rational explanation even if we cannot currently see it.
2.7 Real Worlds and Complete Agents
In traditional artificial intelligence, agents were designed for very simple environments such as computerized virtual worlds or simplified laboratory settings: they did not have to deal with all of the difficulties of the outside world. But if we are to learn something relevant about intelligence – something that holds true in real-world behaviour – we need to study systems that can act in the real world. The real world has characteristics that challenge agents in different ways. Acquisition of information about the real world is limited and always takes time: if I want to know who is in the room next door, I have to go there and look, or call them, or install a camera. Physical devices such as sensors are always subject to disturbances and malfunctions, and the information acquired through them will always contain errors. There is also always time pressure: things happen even if we don’t do anything ourselves. Moreover, agents in the real world often have several things to do simultaneously: animals have to eat and drink, but they also have to take care that they are not eaten by predators, clean themselves, breathe, fight off infection, reproduce, and care for their offspring. Similarly, robots which have to function in the real world always have many tasks to perform in parallel.
Agents that can function autonomously in the real world will be referred to as ‘complete agents’: they must fend for themselves, deal with unforeseen situations, create their own objectives, and search for food or energy. Complete creatures must be endowed with everything needed to behave in the real world, which means they have to be embodied and situated (they can acquire information about the world through their own sensory systems), self-sufficient (they can keep functioning without external help for extended periods of time) and autonomous (independent of external control). Autonomy, like intelligence, is not an all-or-none property: even robots that are not directly controlled by a human may rely on us for their energy supply and for maintenance.
There are certain properties that all complete agents share because they are simply unavoidable consequences of embodiment. All complete agents are subject to the laws of physics, for example: if an agent jumps up in the air, gravity will eventually pull it back to the ground, and energy will inevitably dissipate over time. Complete agents all generate sensory stimulation through interaction with the real world: when we walk we produce pressure patterns on our feet, and can feel the regular flexing and relaxing of our muscles as our legs move. Complete agents also affect their environment: when we walk across a lawn, grass is crushed underfoot; when we burn energy we heat the environment.
Last but not least, complete agents have ‘attractor states’: behaviours that agents are naturally drawn towards. For example, certain speeds are more comfortable to run at. Horses can walk, trot, canter, and gallop, and an expert can easily identify when the horse is in one of these gaits; likewise, when you are walking and speeding up, at some point you will automatically start running: a transition into a new gait pattern. The attractor states into which an agent settles are always the result of the interaction of three systems: the agent’s body, its brain (or control system), and its environment.
Recall the four-legged robot Puppy, and let us consider it as a dynamical system. Puppy’s legs are designed to move back and forth in a periodic manner, and when it is put on the ground it will, after a few steps, settle into a natural running rhythm. The speed at which Puppy runs cannot be varied arbitrarily, even though the speed of the motors can: within certain ranges stable gaits emerge, while within others the robot moves erratically or falls over. The stable gaits are attractor states that the robot will resettle into after being slightly perturbed; if the perturbation is too big, the robot will change behaviour and settle into a new attractor: it may fall over and come to rest, or fall on its side and kick itself around in a circle, mimicking the infamous stage antics of Angus Young, lead guitarist of the rock band AC/DC. The region of states around an attractor that lead back to it is called its basin of attraction. Some gaits have a larger basin of attraction than others: they are more stable. Falling back into a natural gait – or falling into a new one – is not controlled by a microprocessor but arises naturally from the robot’s morphology and environment.
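The mathematics behind this can be seen in miniature in a one-dimensional system, dx/dt = x − x³ – an illustrative textbook equation, not a model of Puppy. It has two attractors, at x = −1 and x = +1, and which one the state settles into depends only on which side of x = 0 it starts: the two half-lines are the basins of attraction.

```python
# A minimal sketch of attractors and basins in dx/dt = x - x**3
# (an assumed toy system chosen for illustration).

def settle(x, dt=0.01, steps=10_000):
    """Integrate the dynamics forward and return the final state."""
    for _ in range(steps):
        x += dt * (x - x ** 3)
    return round(x, 3)

print(settle(0.2))    # ->  1.0 : small positive start, right-hand basin
print(settle(-0.2))   # -> -1.0 : small negative start, left-hand basin
print(settle(0.9))    # ->  1.0 : a perturbation within the basin returns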
2.8 Summary of key points
- Our theory of intelligence will be formulated as a set of design principles, to be used analytically, for understanding natural agents, and synthetically, for building artificial ones
- The frame of reference issue reminds us to distinguish between processes really going on in the agent and interpretations that only exist in our heads
- There are three time scales we must take into account: the ‘here and now’, the developmental, and the evolutionary
- There are three types of emergence: from groups of agents, from interaction with the environment, and from the time scales
- The real world has characteristics that challenge agents: it is continuously changing, has its own dynamics, and is knowable only to a limited extent
- Complete agents are bound by the laws of physics, generate sensory information through their actions, and affect the world around them
- Attractor states are discrete stable states in a continuous physical system that the agent tends to return to when perturbed
[...]