Search for notes by fellow students, in your own course and all over the country.

Browse our notes for titles which look like what you need, you can preview any of the notes via a sample of the contents. After you're happy these are the notes you're after simply pop them into your shopping cart.

My Basket

You have nothing in your shopping cart yet.

Title: Robotics
Description: Robotics

Document Preview

Extracts from the notes are below, to see the PDF you'll receive please use the links above


SIEGWART
Illah R
...
Arkin, editor
Robot Shaping: An Experiment in Behavior Engineering,
Marco Dorigo and Marco Colombetti, 1997
Behavior-Based Robotics,
Ronald C
...
Murphy, 2000
Strategic Negotiation in Multiagent Environments,
Sarit Kraus, 2001
Mechanics of Robotic Manipulation,
Matthew T
...
Breazeal, 2002
Introduction to Autonomous Mobile Robots,
Roland Siegwart and Illah R
...
Nourbakhsh

A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England

© 2004 Massachusetts Institute of Technology

All rights reserved
...


This book was set in Times Roman by the authors using Adobe FrameMaker 7
...

Printed and bound in the United States of America
...

Introduction to autonomous mobile robots / Roland Siegwart and Illah Nourbakhsh
...
cm
...

Includes bibliographical references and index
...
paper)
1
...
2
...
I
...
II
...
III
...


TJ211
...
S54 2004
629
...
mobilerobots
...
1 Introduction
1
...
1 Introduction
2
...
1 Key issues for locomotion
2
...
2
...
2
...
3 Wheeled Mobile Robots
2
...
1 Wheeled locomotion: the design space
2
...
2 Wheeled locomotion: case studies

13
13
16
17
18
21
30
31
38

3

Mobile Robot Kinematics
3
...
2 Kinematic Models and Constraints
3
...
1 Representing robot position
3
...
2 Forward kinematic models
3
...
3 Wheel kinematic constraints
3
...
4 Robot kinematic constraints
3
...
5 Examples: robot kinematic models and constraints
3
...
3
...
3
...
3
...
4

3
...
6

Mobile Robot Workspace
3
...
1 Degrees of freedom
3
...
2 Holonomic robots
3
...
3 Path and trajectory considerations
Beyond Basic Kinematics
Motion Control (Kinematic Control)
3
...
1 Open loop control (trajectory-following)
3
...
2 Feedback control

74
74
75
77
80
81
81
82

4

Perception
89
4
...
1
...
1
...
1
...
1
...
1
...
1
...
1
...
1
...
2 Representing Uncertainty
145
4
...
1 Statistical representation
145
4
...
2 Error propagation: combining uncertain measurements
149
4
...
3
...
3
...
1 Introduction
5
...
2
...
2
...
2
...
2
...
3 To Localize or Not to Localize: Localization-Based Navigation versus
Programmed Solutions
5
...
4
...
4
...
5

5
...
7

5
...
5
...
5
...
5
...
6
...
6
...
6
...
7
...
7
...
7
...
7
...
8
...
8
...
1 Introduction
6
...
2
...
2
...
3 Navigation Architectures
6
...
1 Modularity for code reuse and sharing
6
...
2 Control localization
6
...
3 Techniques for decomposition
6
...
4 Case studies: tiered robot architectures

200
200
203
210
212
212
214
227
244
245
246
248
249
250
250
253
257
257
258
259
272
291
291
291
292
298

Bibliography
Books
Papers
Referenced Webpages
Interesting Internet Links to Mobile Robots

305
305
306
314
314

Index

317

Acknowledgments

This book is the result of inspirations and contributions from many researchers and students
at the Swiss Federal Institute of Technology Lausanne (EPFL), Carnegie Mellon University’s Robotics Institute, Pittsburgh (CMU), and many others around the globe
...
It is their work that
enables us to collect the material for this book
...
We would like to thank: Kai Arras for his contribution to uncertainty representation, feature extraction and Kalman filter localization;
Matt Mason for his input on kinematics; Nicola Tomatis and Remy Blank for their support
and assistance for the section on vision-based sensing; Al Rizzi for his guidance on feedback control; Roland Philippsen and Jan Persson for their contribution to obstacle avoidance; Gilles Caprari and Yves Piguet for their input and suggestions on motion control;
Agostino Martinelli for his careful checking of some of the equations and Marco Lauria for
offering his talent for some of the figures
...

This book was also inspired by other courses, especially by the lecture notes on mobile
robotics at the Swiss Federal Institute of Technology, Zurich (ETHZ)
...
At the Robotics Institute special
thanks go to Emily Hamner and Jean Harpley for collecting and organizing photo publication permissions
...
Thanks go to all the many hundreds of students that followed the lecture and
contributed thought their corrections and comments
...
Thanks to Ronald
C
...

Special thanks also to Marie-Jo Pellaud at EPFL for carefully correcting the text files
and to our colleagues at the Swiss Federal Institute of Technology Lausanne and Carnegie
Mellon University
...
Its roots include many engineering and science disciplines, from mechanical, electrical and electronics engineering to computer, cognitive and
social sciences
...
Our objective in writing this textbook is to provide mobile robotics with such
a preparatory guide
...
A collection of workshop proceedings and journal publications could present the
new student with a snapshot of the state of the art in all aspects of mobile robotics
...
The formalism and
analysis herein will prove useful even as the frontier of the state of the art advances due to
the rapid progress in all of mobile robotics' sub-disciplines
...
This textbook is
suitable as a whole for introductory mobile robotics coursework at both the undergraduate
and graduate level
...

The origins of the this book bridge the Atlantic Ocean
...
Their combined set of curriculum details and lecture notes formed the earliest versions of this text
...

For an overview of the organization of the book and summaries of individual chapters,
refer to Section 1
...

Finally, for the teacher and the student: we hope that this textbook proves to be a fruitful
launching point for many careers in mobile robotics
...


1

1
...

Robot arms, or manipulators, comprise a 2 billion dollar industry
...
1)
...

Yet, for all of their successes, these commercial robots suffer from a fundamental disadvantage: lack of mobility
...


© SIG Demaurex SA

Figure 1
...


2

Chapter 1

on where it is bolted down
...

This book focuses on the technology of mobility: how can a mobile robot move unsupervised through real-world environments to fulfill its tasks? The first challenge is locomotion itself
...
2)
...
3, 1
...
5, 1
...
In these cases, the low-level
complexities of the robot often make it impossible for a human operator to directly control
its motions
...

For example, Plustech’s walking robot provides automatic leg coordination while the
human operator chooses an overall direction of travel (figure 1
...
Figure 1
...

Other commercial robots operate not where humans cannot go but rather share space
with humans in human environments (figure 1
...
These robots are compelling not for reasons of mobility but because of their autonomy, and so their ability to maintain a sense of
position and to navigate without human intervention is paramount
...
2
The mobile robot Sojourner was used during the Pathfinder mission to explore Mars in summer 1997
...
However, some on-board sensors allowed for
obstacle detection
...
oact
...
nasa
...
shtm)
...
3
Plustech developed the first application-driven walking robot
...
The leg coordination is automated, but navigation is still done by the human operator on the
robot
...
plustech
...
© Plustech
...
4
Airduct inspection robot featuring a pan-tilt camera with zoom and sensors for automatic inclination
control, wall following, and intersection detection (http://asl
...
ch)
...


4

Chapter 1

Figure 1
...
© Wide World Photos
...
6
Picture of recovering MBARI’s ALTEX AUV (autonomous underwater vehicle) onto the Icebreaker
Healy following a dive beneath the Arctic ice
...


Introduction

5

Figure 1
...
Ten Roboxes have operated during 5 months at the Swiss exhibition EXPO
...
They were developed by EPFL [132] (http://robotics
...
ch) and commercialized by BlueBotics (http://www
...
ch)
...
8
Newest generation of the autonomous guided vehicle (AGV) of SWISSLOG used to transport motor
blocks from one assembly station to another
...

There are thousands of AGVs transporting products in industry, warehouses, and even hospitals
...


6

Chapter 1

front

back

Figure 1
...
It has various on-board sensors for autonomous navigation in the corridors
...
It can detect the lamps on the ceiling as references, or landmarks (http://
www
...
com)
...


Figure 1
...
, Germany
...
The RoboCleaner RC 3000 covers badly soiled areas with a
special driving strategy until it is really clean
...
karcher
...
© Alfred Kärcher GmbH & Co
...
11
PIONEER is a modular mobile robot offering various options like a gripper or an on-board camera
...
MobileRobots
...


Figure 1
...
It
has a large variety of sensors for high-performance navigation tasks (http://www
...
com/rwi/)
...


8

Chapter 1

Figure 1
...
It is only about 60 mm in diameter
...
More then 700 units had
already been sold by the end of 1998
...
k-team
...
© K-Team SA
...
8) autonomously
deliver parts between various assembly stations by following special electrical guidewires
using a custom sensor
...
9)
...
10)
...
Other specialized cleaning robots take advantage of the regular geometric pattern of aisles in supermarkets to facilitate the localization and navigation tasks
...
This is one of the largest current markets for mobile robots
...

The most popular research robots are those of ActivMedia Robotics, K-Team SA, and IRobot (figures 1
...
12, 1
...
14)
...
No mean feat, this makes
mobile robotics as interdisciplinary a field as there can be
...
To create robust perceptual systems, the mobile roboticist must leverage the fields
of signal analysis and specialized bodies of knowledge such as computer vision to properly

Introduction

9

employ a multitude of sensor technologies
...

Figure 1
...
This figure identifies many of the main bodies of knowledge associated with mobile robotics
...
The
intended audience is broad, including both undergraduate and graduate students in introductory mobile robotics courses, as well as individuals fascinated by the field
...

Mobile robotics is a large field, and this book focuses not on robotics in general, nor on
mobile robot applications, but rather on mobility itself
...

Clearly, a useful, commercially viable mobile robot does more than just move
...
The aspiring mobile roboticist will
start with this book, but quickly graduate to course work and research specific to the desired
application, integrating techniques from fields as disparate as human-robot interaction,
computer vision, and speech understanding
...
14
Alice is one of the smallest fully autonomous robots
...


10

Chapter 1

Knowledge,
Data Base

Localization
Map Building

Mission
Commands

“Position”
Global Map

Cognition
Path Planing

Path
Execution

Raw data

Actuator Commands

Sensing

Acting

Motion Control

Path

Information
Extraction and
Interpretation
Perception

Environment Model
Local Map

Real World
Environment

Figure 1
...


1
...
15
...

Chapter 4 presents an in-depth view of perception
...
Each chapter builds upon previous chapters, and so the reader
is encouraged to start at the beginning, even if their interest is primarily at the high level
...

Chapter 2, “Locomotion”, begins with a survey of the most popular mechanisms that
enable locomotion: wheels and legs
...
But designing a robot’s locomotive system properly
requires the ability to evaluate its overall motion capabilities quantitatively
...

The greatest single shortcoming in conventional mobile robotics is, without doubt, perception: mobile robots can travel across much of earth’s man-made surfaces, but they
cannot perceive the world nearly as well as humans and other animals
...
With this language in hand, chapter 4
goes on to present many of the off-the-shelf sensors available to the mobile roboticist,
describing their basic principles of operation as well as their performance limitations
...

But perception is more than sensing
...
The second half of chapter 4 describes strategies for feature extraction
that have been most useful in mobile robotics applications, including extraction of geometric shapes from range-based sensing data, as well as landmark and whole-image analysis
using vision-based sensing
...
The first point at which mobility and sensing must meet is localization: mobile robots often need to maintain a sense of
position
...
Case studies demonstrate
various localization schemes, including both Markov localization and Kalman filter localization
...

Mobile robotics is so young a discipline that it lacks a standardized architecture
...
But the question of architecture is of paramount importance when one chooses to address the higher-level competences of a mobile
robot: how does a mobile robot navigate robustly from place to place, interpreting data,
localizing and controlling its motion all the while? For this highest level of robot competence, which we term navigation competence, there are numerous mobile robots that showcase particular architectural strategies
...
But first, chapter 6 addresses two skills that a competent, navigating robot usually must
demonstrate: obstacle avoidance and path planning
...
We hope, though, that this broad introduction will place the
reader in the context of mobile robotics’ collective wisdom
...


2

2
...
But there are a large variety of possible ways to move, and so the selection of a robot’s approach to locomotion is an important aspect of mobile robot design
...
Most of these locomotion mechanisms have been inspired by their biological counterparts (see figure 2
...

There is, however, one exception: the actively powered wheel is a human invention that
achieves extremely high efficiency on flat ground
...
Our bipedal walking system can be approximated by a rolling
polygon, with sides equal in length d to the span of the step (figure 2
...
As the step size
decreases, the polygon approaches a circle or wheel
...

Biological systems succeed in moving through a wide variety of harsh environments
...
However,
replicating nature in this regard is extremely difficult for several reasons
...
Cell division, in combination with specialization, can readily produce a millipede with
several hundred legs and several tens of thousands of individually sensed cilia
...
Additionally, the cell is a microscopic building block that enables extreme miniaturization
...
Finally, the biological
energy storage system and the muscular and hydraulic activation systems used by large animals and insects achieve torque, response time, and conversion efficiencies that far exceed
similarly scaled man-made systems
...
2)

Running

Jumping

Walking

Figure 2
...


Owing to these limitations, mobile robots generally locomote either using wheeled
mechanisms, a well-known human technology for vehicles, or using a small number of
articulated legs, the simplest of the biological approaches to locomotion (see figure 2
...

In general, legged locomotion requires higher degrees of freedom and therefore greater
mechanical complexity than wheeled locomotion
...
As figure 2
...
The railway is ideally engineered for wheeled locomotion because rolling friction is minimized on
a hard and flat steel surface
...
This is demonstrated in figure
2
...


Locomotion

15

h

O

l

α

α
d

Figure 2
...
As the step size decreases, the polygon approaches a circle or wheel with the radius l
...
1

1

10
speed (miles/hour)

100

Figure 2
...


16

Chapter 2

Figure 2
...


In effect, the efficiency of wheeled locomotion depends greatly on environmental qualities, particularly the flatness and hardness of the ground, while the efficiency of legged
locomotion depends on the leg mass and body mass, both of which the robot must support
at various points in a legged gait
...
For example, in the case
of insects in a forest the vertical variation in ground height is often an order of magnitude
greater than the total height of the insect
...
Therefore, it
is also understandable that virtually all industrial applications of mobile robotics utilize
some form of wheeled locomotion
...
4
...
1
...
Following this, in sections 2
...
3, we present overviews of legged
locomotion and wheeled locomotion techniques for mobile robots
...
1
...
In manipulation, the robot arm is fixed but
moves objects in the workspace by imparting force to them
...
In both cases, the
scientific basis is the study of actuators that generate interaction forces, and mechanisms

Locomotion

17

that implement desired kinematic and dynamic properties
...
g
...
From this starting point, we can formally define and analyze all manner of mobile robot locomotion systems
...
Thus we will not delve deeply into the
physical basis of locomotion
...
Then,
chapter 3 presents a more detailed analysis of the kinematics and control of wheeled mobile
robots
...
2

Legged Mobile Robots

Legged locomotion is characterized by a series of point contacts between the robot and the
ground
...

Because only a set of point contacts is required, the quality of the ground between those
points does not matter so long as the robot can maintain adequate ground clearance
...
A final advantage of legged locomotion is the potential to manipulate
objects in the environment with great skill
...

The main disadvantages of legged locomotion include power and mechanical complexity
...
Additionally, high maneuverability will only be achieved if the legs have a sufficient number of degrees of freedom to impart forces in a number of different directions
...
5
Arrangement of the legs of various animals
...
2
...
A number of different leg configurations have been successful
in a variety of organisms (figure 2
...
Large animals, such as mammals and reptiles, have
four legs, whereas insects have six or more legs
...
Especially in the case of humans, balance has progressed
to the point that we can even jump with one leg1
...

In contrast, a creature with three legs can exhibit a static, stable pose provided that it can
ensure that its center of gravity is within the tripod of ground contact
...
A small deviation from stability (e
...
, gently pushing the stool) is passively corrected toward the stable pose when the upsetting force stops
...
In order to achieve static walking, a robot must have at least six legs
...
8)
...
For them, the problem of
balance during walking is relatively simple
...
Fauns, for example, spend several minutes
attempting to stand before they are able to do so, then spend several more minutes learning
to walk without falling
...
Infants require months to stand and walk, and even longer to learn to jump, run,
and stand on one leg
...
In child development, one of the tests used to determine if the child is acquiring advanced locomotion skills is the ability to jump on one leg
...
6
Two examples of legs with three degrees of freedom
...

Once again, the biological world provides ample examples at both extremes
...
Each leg has only a single degree of freedom, which is oriented longitudinally along the leg
...
The caterpillar leg is therefore mechanically very simple, using a minimal number of extrinsic muscles to achieve complex overall
locomotion
...
More than fifteen muscle groups actuate eight
complex joints
...
More common is
the addition of a third degree of freedom for more complex maneuvers, resulting in legs
such as those shown in figure 2
...
Recent successes in the creation of bipedal walking
robots have added a fourth degree of freedom at the ankle joint
...

In general, adding degrees of freedom to a robot leg increases the maneuverability of the
robot, both augmenting the range of terrains on which it can travel and the ability of the
robot to travel with a variety of gaits
...
Additional actuators require energy and
control, and they also add to leg mass, further increasing power and load requirements on
existing actuators
...
7
Two gaits with four legs
...


In the case of a multilegged mobile robot, there is the issue of leg coordination for locomotion, or gait control
...

The gait is a sequence of lift and release events for the individual legs
...
1)

For a biped walker k = 2 legs, the number of possible events N is
N = ( 2k – 1 )! = 3! = 3 ⋅ 2 ⋅ 1 = 6

(2
...
lift right leg;
2
...
release right leg;
4
...
lift both legs together;
6
...

Of course, this quickly grows quite large
...
3)

Figures 2
...
8 depict several four-legged gaits and the static six-legged tripod gait
...
2
...
Several interesting designs are presented below,
beginning with the one-legged robot and finishing with six-legged robots
...
uwe
...
uk/clawar/
...
2
...
1 One leg
The minimum number of legs a legged robot can have is, of course, one
...
Body mass is particularly important to
walking machines, and the single leg minimizes cumulative leg mass
...

Perhaps most importantly, the one-legged robot maximizes the basic advantage of legged
locomotion: legs have single points of contact with the ground in lieu of an entire track, as
with wheels
...
Furthermore, a hopping robot can dynamically cross a gap
that is larger than its stride by taking a running start, whereas a multilegged walking robot
that cannot run is limited to crossing gaps that are as large as its reach
...
For a robot with one
leg, static walking is not only impossible but static stability when stationary is also impossible
...
Thus, the successful single-legged robot must be dynamically
stable
...
8
Static walking with six legs
...


Figure 2
...
This robot makes continuous corrections to body attitude
and to robot velocity by adjusting the leg angle with respect to the body
...
Although powerful, these actuators require a large, off-board hydraulic pump
to be connected to the robot at all times
...
10 shows a more energy-efficient design developed more recently [46]
...
This spring returns approximately 85% of the energy, meaning that stable hopping
requires only the addition of 15% of the required energy on each hop
...
As with the Raibert hopper, the
bow leg hopper controls velocity by changing the angle of the leg to the body at the hip
joint
...
9
The Raibert hopper [28, 124]
...
© 1983
...
10
The 2D single bow leg hopper [46]
...
Benjamin Brown and Garth Zeglin, CMU
...
11
The Sony SDR-4X II, © 2003 Sony Corporation
...
Often clever mechanical design
can perform the same operations as complex active control circuitry
...
This robot is dynamically stable, and is furthermore passive
...

2
...
2
...
Two
legged robots have been shown to run, jump, travel up and down stairways, and even do
aerial tricks such as somersaults
...
Both companies designed small, powered joints that achieve power-to-weight performance unheard of in commercially available servomotors
...


Locomotion

25

Specifications:
Maximum speed:
Autonomy:
Weight:
Height:
Leg DOF:
Arm DOF:

2 km/h
15 min
210 kg
1
...
12
The humanoid robot P2 from Honda, Japan
...


The Sony Dream Robot, model SDR-4X II, is shown in figure 2
...
This current model
is the result of research begun in 1997 with the basic objective of motion entertainment and
communication entertainment (i
...
, dancing and singing)
...
Given the goal of fluid and entertaining motion, Sony spent considerable effort designing a motion prototyping application system to enable their engineers to
script dances in a straightforward manner
...
5 kg
...
Figure 2
...
Note from this picture that the Honda humanoid is much larger than the SDR4X at 120 cm tall and 52 kg
...
Perhaps the first robot to
famously demonstrate biomimetic bipedal stair climbing and descending, these Honda
humanoid series robots are being designed not for entertainment purposes but as human
aids throughout society
...


26

Chapter 2

Specifications:
Weight:
Height:

131 [kg]
1
...
13
The humanoid robot WABIAN-RIII at Waseda University in Japan [75]
...


An important feature of bipedal robots is their anthropomorphic shape
...
WABIAN is a robot built at Waseda Universities Japan (figure 2
...
WABIAN is designed to emulate
human motion, and is even designed to dance like a human
...
Furthermore, each leg must have sufficient capacity to support the full weight
of the robot
...
An elegant design of a biped robot is the Spring Flamingo of MIT (figure 2
...
This robot inserts springs in series with the leg actuators to
achieve a more elastic gait
...

2
...
2
...
14
The Spring Flamingo developed at MIT [123]
...


gait
...
15)
...
In addition to developing custom motors and
software, Sony incorporated a color vision system that enables AIBO to chase a brightly
colored ball
...

Early sales of the robot have been very strong, with more than 60,000 units sold in the first
year
...

Four-legged robots have the potential to serve as effective artifacts for research in
human-robot interaction (figure 2
...
Humans can treat the Sony robot, for example, as a
pet and might develop an emotional relationship similar to that between man and dog
...
As the challenges of high energy storage and
motor technology are solved, it is likely that quadruped robots much more capable than
AIBO will become common throughout the human environment
...
2
...
4 Six legs (hexapod)
Six-legged configurations have been extremely popular in mobile robotics because of their
static stability during walking, thus reducing the control complexity (figures 2
...
3)
...

Head sensor: Senses when a person taps or
pets AIBO on the head
...

Eye lights: These light up in blue-green or
red to indicate AIBO’s emotional state
...

Speaker: Emits various musical tones and
sound effects
...

Pause button: Press to activate AIBO or to
pause AIBO
...

Paw sensors: Located on the bottom of each
paw
...

Back sensor: Senses when a person touches
AIBO on the back
...
15
AIBO, the artificial dog from Sony, Japan
...
6)
...
18)
...
Because it consists of a
straightforward arrangement of servomotors and straight legs, such robots can be readily
built by a robot hobbyist
...
Currently, the gap between
the capabilities of six-legged insects and artificial six-legged robots is still quite large
...
Rather, insects combine a small number of active degrees of freedom with passive

Locomotion

29

Specifications:
Weight:1
Height:
DOF:

9 kg
0
...
16
Titan VIII, a quadruped robot developed at Tokyo Institute of Technology
...
mes
...
ac
...
© Tokyo Institute of Technology
...
5 m/s
Weight:1
6 kg
Height:
0
...
7 m
No
...
17
Lauron II, a hexapod platform developed at the University of Karlsruhe, Germany
...


30

Chapter 2

Figure 2
...
ai
...
edu/projects/genghis)
...


structures, such as microscopic barbs and textured pads, that increase the gripping strength
of each leg significantly
...
For example, a research group is attempting to re-create the complete mechanical
function of the cockroach leg [65]
...
Nevertheless, significant
gains have been realized recently, primarily due to advances in motor design
...

2
...
It can achieve very good efficiencies, as demonstrated in
figure 2
...

In addition, balance is not usually a research problem in wheeled robot designs, because
wheeled robots are almost always designed so that all wheels are in ground contact at all
times
...
When more than three wheels are used,
a suspension system is required to allow all wheels to maintain ground contact when the
robot encounters uneven terrain
...
19
The four basic wheel types
...
(b) castor wheel: two degrees of freedom; rotation around an
offset steering joint
...
(d) Ball or spherical wheel: realization
technically difficult
...
3
...
We begin by discussing the wheel
in detail, as there are a number of different wheel types with specific strengths and weaknesses
...

2
...
1
...
19
...
The standard wheel and the castor wheel have a primary axis of
rotation and are thus highly directional
...
The key difference between these two wheels is that the
standard wheel can accomplish this steering motion with no side effects, as the center of
rotation passes through the contact patch with the ground, whereas the castor wheel rotates
around an offset axis, causing a force to be imparted to the robot chassis during steering
...
20
Navlab I, the first autonomous highway vehicle that steers and controls the throttle using vision and
radar sensors [61]
...


The Swedish wheel and the spherical wheel are both designs that are less constrained by
directionality than the conventional standard wheel
...
The small rollers attached around the circumference of the
wheel are passive and the wheel’s primary axis serves as the only actively powered joint
...

The spherical wheel is a truly omnidirectional wheel, often designed so that it may be
actively powered to spin along any direction
...

Regardless of what wheel is used, in robots designed for all-terrain environments and in
robots with more than three wheels, a suspension system is normally required to maintain
wheel contact with the ground
...
For instance, in the case of some four-wheeled indoor robots
that use castor wheels, manufacturers have applied a deformable tire of soft rubber to the
wheel to create a primitive suspension
...


Locomotion

33

2
...
1
...
The mobile robot designer must consider these two issues
simultaneously when designing the locomoting mechanism of a wheeled robot
...

Unlike automobiles, which are largely designed for a highly standardized environment
(the road network), mobile robots are designed for applications in a wide variety of situations
...
However, there is no single wheel configuration that
maximizes these qualities for the variety of environments faced by different mobile robots
...
In fact, few
robots use the Ackerman wheel configuration of the automobile because of its poor maneuverability, with the exception of mobile robots designed for the road system (figure 2
...

Table 2
...

This table shows both the selection of particular wheel types and their geometric configuration on the robot chassis
...
For instance, the two-wheeled bicycle arrangement has moderate maneuverability and poor controllability
...
Nevertheless, this table provides an indication of the large variety of wheel
configurations that are possible in mobile robot design
...
1 is quite large
...
Below, we identify some of the key trade-offs in terms of the three issues we
identified earlier: stability, maneuverability, and controllability
...
3
...
3 Stability
Surprisingly, the minimum number of wheels required for static stability is two
...
Cye is a commercial mobile robot that uses this wheel configuration (figure 2
...

However, under ordinary circumstances such a solution requires wheel diameters that
are impractically large
...
Conventionally, static stability requires a minimum of three wheels, with the additional caveat that the center of gravity must be contained within the triangle formed by the
ground contact points of the wheels
...


34

Chapter 2

Table 2
...
1
Wheel configurations for rolling vehicles
# of
wheels
4

Arrangement

Description

Typical examples

Two motorized wheels in the
rear, 2 steered wheels in the
front; steering has to be different for the 2 wheels to avoid
slipping/skidding
...

Four steered and motorized
wheels

Four-wheel drive, fourwheel steering Hyperion
(CMU)

Two traction wheels (differential) in rear/front, 2 omnidirectional wheels in the front/rear

Charlie (DMT-EPFL)

Four omnidirectional wheels

Carnegie Mellon Uranus

Two-wheel differential drive
EPFL Khepera, Hyperbot
with 2 additional points of con- Chip
tact

Four motorized and steered
castor wheels

Nomad XR4000

36

Chapter 2

Table 2
...


2
...
1
...
This level of maneuverability requires wheels that can move in more than just one
direction, and so omnidirectional robots usually employ Swedish or spherical wheels that
are powered
...
24
...


Locomotion

37

Figure 2
...
(http://www
...
com)
...


In general, the ground clearance of robots with Swedish and spherical wheels is somewhat limited due to the mechanical constraints of constructing omnidirectional wheels
...
In this configuration, the robot is truly
omnidirectional because, even if the castor wheels are facing a direction perpendicular to
the desired direction of travel, the robot can still move in the desired direction by steering
these wheels
...

In the research community, other classes of mobile robots are popular which achieve
high maneuverability, only slightly inferior to that of the omnidirectional configurations
...

With a circular chassis and an axis of rotation at the center of the robot, such a robot can
spin without changing its ground footprint
...

One or two additional ground contact points may be used for stability, based on the application specifics
...
Such a vehicle typically has a turning diameter that is larger than
the car
...
Nevertheless, Ackerman
steering geometries have been especially popular in the hobby robotics market, where a
robot can be built by starting with a remote control racecar kit and adding sensing and
autonomy to the existing mechanism
...

2
...
1
...
For
example, the omnidirectional designs such as the four-castor wheel configuration require
significant processing to convert desired rotational and translational velocities to individual
wheel commands
...
For instance, the Swedish wheel has a set of free rollers along the
wheel perimeter
...

Controlling an omnidirectional robot for a specific direction of travel is also more difficult and often less accurate when compared to less maneuverable designs
...
In a differential-drive vehicle, the two motors attached to the two
wheels must be driven along exactly the same velocity profile, which can be challenging
considering variations between wheels, motors, and environmental differences
...

In summary, there is no “ideal” drive configuration that simultaneously maximizes stability, maneuverability, and controllability
...

2
...
2 Wheeled locomotion: case studies
Below we describe four specific wheel configurations, in order to demonstrate concrete
applications of the concepts discussed above to mobile robots built for real-world activities
...
3
...
1 Synchro drive
The synchro drive configuration (figure 2
...
It is an interesting configuration because, although there are
three driven and steered wheels, only two motors are used in total
...
But note that the
wheels are being steered with respect to the robot chassis, and therefore there is no direct
way of reorienting the robot chassis
...


Locomotion

39

steering pulley

driving pulley
wheel

be

ive
dr

steering
motor

i ng
er
ste

be
lt

wheel steering axis

lt

drive motor

rolling axis

Figure 2
...


Synchro drive is particularly advantageous in cases where omnidirectionality is sought
...
Of course, if the robot chassis has directionality and the designers intend to reorient
the chassis purposefully, then synchro drive is only appropriate when combined with an
independently rotating turret that attaches to the wheel chassis
...
12)
...

There are two main reasons for this
...
Because of to slop and backlash in the drive
train, whenever the drive motor engages, the closest wheel begins spinning before the furthest wheel, causing a small change in the orientation of the chassis
...
Second, the mobile robot has no direct control over the orientation of the chassis
...
22
...


40

Chapter 2

spheric bearing

motor

Figure 2
...
Left:
arrangement of spheric bearings and motors (bottom view)
...


2
...
2
...
4
...
Omnidirectional robots that are able to move in any direction
( x, y, θ ) at any time are also holonomic (see section 3
...
2)
...
Three examples of such holonomic robots are
presented below
...
The omnidirectional robot
depicted in figure 2
...
In
this design, the spherical wheels are suspended by three contact points, two given by spherical bearings and one by a wheel connected to the motor axle
...
However, it is limited to flat surfaces and
small loads, and it is quite difficult to find round wheels with high friction coefficients
...
The omnidirectional arrangement depicted in figure 2
...
This configuration consists of four Swedish 45-degree
wheels, each driven by a separate motor
...


Locomotion

41

Figure 2
...


For example, when all four wheels spin “forward” or “backward” the robot as a whole
moves in a straight line forward or backward, respectively
...

This four-wheel arrangement of Swedish wheels is not minimal in terms of control
motors
...
1
...

One application for which such omnidirectional designs are particularly amenable is
mobile manipulation
...
As with humans, it would be ideal if the base could move omnidirectionally without greatly impacting the position of the manipulator tip, and a base such as Uranus can
afford precisely such capabilities
...
Another solution for omnidirectionality is to use castor wheels
...
2
...
Unfortunately, Nomadic has ceased production of mobile robots
...
1, but this is not an exhaustive list of
all wheeled locomotion techniques
...

Below are two unique designs created for specialized applications
...
25
The Nomad XR4000 from Nomadic Technologies had an arrangement of four castor wheels for holonomic motion
...


2
...
2
...
An alternative form of steering, termed slip/skid,
may be used to reorient the robot by spinning wheels that are facing the same direction at
different speeds or in opposite directions
...
26) is an example of a mobile robot based on the same concept
...
However, due to this large ground contact patch, changing the orientation
of the robot usually requires a skidding turn, wherein a large portion of the track must slide
against the terrain
...
Because of
the large amount of skidding during a turn, the exact center of rotation of the robot is hard
to predict and the exact change in position and orientation is also subject to variations
depending on the ground friction
...
This is the trade-off that is made in return for extremely good maneuverability and
traction over rough and loose terrain
...
In terms of
power efficiency, this approach is reasonably efficient on loose terrain but extremely inefficient otherwise
...
26
The microrover Nanokhod, developed by von Hoerner & Sulger GmbH and the Max Planck Institute,
Mainz, for the European Space Agency (ESA), will probably go to Mars [138, 154]
...
3
...
4 Walking wheels
Walking robots might offer the best maneuverability in rough terrain
...
Hybrid solutions, combining the
adaptability of legs with the efficiency of wheels, offer an interesting compromise
...

The Sojourner robot of NASA/JPL (see figure 1
...
A more recent mobile robot design for
similar applications has recently been produced by EPFL (figure 2
...
This robot, called
Shrimp, has six motorized wheels and is capable of climbing objects up to two times its
wheel diameter [97, 133]
...
Using a rhombus configuration, the Shrimp has a steering wheel
in the front and the rear, and two wheels arranged on a bogie on each side
...
The
steering of the rover is realized by synchronizing the steering of the front and rear wheels
and the speed difference of the bogie wheels
...
The use of parallel
articulations for the front wheel and the bogies creates a virtual center of rotation at the
level of the wheel axis
...


44

Chapter 2

Figure 2
...


The climbing ability of the Shrimp is extraordinary in comparison to most robots of similar mechanical complexity, owing much to the specific geometry and thereby the manner
in which the center of mass (COM) of the robot shifts with respect to the wheels over time
...
28
...
A dedicated motor drives the boom to change the front/rear weight distribution in order to facilitate step-climbing
...
In this case the Personal Rover accomplished this closed-loop control by inferring terrain based on measurements of current flowing to each independently driven wheel [66]
...
At the same time, the control problems of inverse kinematics and
dynamics are now so readily conquered that these complex mechanics can in general be
controlled
...
They will each be technologically
impressive, and each will be designed as the expert robot for its particular environmental
niche
...
28
The Personal Rover, demonstrating ledge climbing using active center-of-mass shifting
...
1

Mobile Robot Kinematics

Introduction

Kinematics is the most basic study of how mechanical systems behave
...

Of course, mobile robots are not the first complex mechanical systems to require such
analysis
...
In some ways, manipulator robots are much more complex than early mobile robots:
a standard welding robot may have five or more joints, whereas early mobile robots were
simple differential-drive machines
...
e
...

The mobile robotics community poses many of the same kinematic questions as the
robot manipulator community
...
A mobile robot’s workspace is equally important because it
defines the range of possible poses that the mobile robot can achieve in its environment
...
Similarly, a mobile robot’s controllability defines possible paths and trajectories in its workspace
...

The mobile robot is also limited by dynamics; for instance, a high center of gravity limits
the practical turning radius of a fast, car-like robot because of the danger of rolling
...
A manipulator has one end fixed to the environment
...
The
manipulator’s position is thus always computable by looking at current sensor data
...
There is no direct way to measure a mobile robot’s position instantaneously
...
Add to this the inaccuracies
of motion estimation due to slippage and it is clear that measuring a mobile robot’s position
precisely is an extremely challenging task
...
Each wheel has a role in enabling the
whole robot to move
...
In the following section, we introduce notation that allows expression of robot motion in a global reference frame as well as
the robot’s local reference frame
...
Next, we formally
describe the kinematic constraints of individual wheels, and then combine these kinematic
constraints to express the whole robot’s kinematic constraints
...

3
...
Each individual
wheel contributes to the robot’s motion and, at the same time, imposes constraints on robot
motion
...
But the
forces and constraints of each wheel must be expressed with respect to a clear and consistent reference frame
...
We begin by defining these reference frames formally, then using the resulting
formalism to annotate the kinematics of individual wheels and whole robots
...

3
...
1 Representing robot position
Throughout this analysis we model the robot as a rigid body on wheels, operating on a horizontal plane
...
Of course, there are additional degrees of freedom and flexibility due to the
wheel axles, wheel steering joints, and wheel castor joints
...


Mobile Robot Kinematics

49

YI

YR

XR
θ
P

XI
Figure 3
...


In order to specify the position of the robot on the plane we establish a relationship
between the global reference frame of the plane and the local reference frame of the robot,
as in figure 3
...
The axes X I and Y I define an arbitrary inertial basis on the plane as the
global reference frame from some origin O: { X I, Y I }
...
The basis { X R, Y R }
defines two axes relative to P on the robot chassis and is thus the robot’s local reference
frame
...
We
can describe the pose of the robot as a vector with these three elements
...
1)

To describe robot motion in terms of component motions, it will be necessary to map
motion along the axes of the global reference frame to motion along the axes of the robot’s
local reference frame
...

This mapping is accomplished using the orthogonal rotation matrix:

50

Chapter 3

YI

XR

θ
YR

XI
Figure 3
...


cos θ sin θ 0
R(θ) = – sin θ cos θ 0
0
0 1

(3
...
This operation is denoted by R(θ)ξ I
because the computation of this operation depends on the value of θ :
·
π ·
ξ R = R(-- )ξ I
2

(3
...
2
...
4)

Mobile Robot Kinematics

51

YI
castor wheel

v(t)

θ
ω(t)
XI
Figure 3
...


· · ·
Given some velocity ( x, y, θ ) in the global reference frame we can compute the
components of motion along this robot’s local axes X R and Y R
...
5)

3
...
2 Forward kinematic models
In the simplest cases, the mapping described by equation (3
...
3
...
Given a point P centered between the two drive wheels, each wheel is a distance l from P
...
6)

From equation (3
...
Therefore, the strategy will be to first compute the contribution of each of the two wheels in the local reference

52

Chapter 3

·
frame, ξ R
...

Suppose that the robot’s local reference frame is aligned such that the robot moves forward along +X R , as shown in figure 3
...
First consider the contribution of each wheel’s
spinning speed to the translation speed at P in the direction of +X R
...
In a differential drive robot, these two contributions can simply be added
x r2 = ( 1 ⁄ 2 )rϕ 2
·
·
to calculate the x R component of ξ R
...
The result is a stationary,
·
·
spinning robot
...
The value of y R is even simpler
to calculate
...
Finally, we must compute the rotational component θ R of
·
ξ R
...
Consider the right wheel (we will call this wheel 1)
...
Recall that if wheel 1 spins alone, the robot
pivots around wheel 2
...
7)

The same calculation applies to the left wheel, with the exception that forward spin
results in clockwise rotation at point P :
·
–r ϕ2
ω 2 = ----------2l

(3
...
9)

Mobile Robot Kinematics

53

We can now use this kinematic model in an example
...
In general, calculating the inverse of a matrix may be challenging
...
10)

Suppose that the robot is positioned such that θ = π ⁄ 2 , r = 1 , and l = 1
...
11)

So this robot will move instantaneously along the y-axis of the global reference frame
with speed 3 while rotating with speed 1
...
However, we wish to determine the space of possible motions for each robot
chassis design
...
Section 3
...
3 begins this process by describing constraints
for various wheel types; the rest of this chapter provides tools for analyzing the characteristics and workspace of a robot given these constraints
...
2
...
Just as shown in section 3
...
2, the motions of individual wheels can later
be combined to compute the motion of the robot as a whole
...
Therefore, we begin
by presenting sets of constraints specific to each wheel type
...
We assume that
the plane of the wheel always remains vertical and that there is in all cases one single point
of contact between the wheel and the ground plane
...
That is, the wheel undergoes motion only under
conditions of pure rolling and rotation about the vertical axis through the contact point
...


54

Chapter 3

YR

β
Robot chassis

ϕ, r
A

l

v

α
XR

P
Figure 3
...


Under these assumptions, we present two constraints for every wheel type
...
The second constraint enforces the concept of no lateral
slippage – that the wheel must not slide orthogonal to the wheel plane
...
2
...
1 Fixed standard wheel
The fixed standard wheel has no vertical axis of rotation for steering
...
Figure 3
...
The
position of A is expressed in polar coordinates by distance l and angle α
...
The wheel, which has radius r , can spin over time, and so its rotational position around its horizontal axle is a function of time t : ϕ(t)
...
12)

Mobile Robot Kinematics

55

The first term of the sum denotes the total motion along the wheel plane
...
Note that the R(θ)ξ I term is used to transform
·
the motion parameters ξ I that are in the global reference frame { X I, Y I } into motion
parameters in the local reference frame { X R, Y R } as shown in example equation (3
...
This
is necessary because all other parameters in the equation, α, β, l , are in terms of the robot’s
local reference frame
...

The sliding constraint for this wheel enforces that the component of the wheel’s motion
orthogonal to the wheel plane must be zero:
·
cos ( α + β ) sin ( α + β ) l sin β R(θ)ξ I = 0

(3
...
This
would place the contact point of the wheel on X I with the plane of the wheel oriented parallel to Y I
...
13)] reduces to
·
·
x
1 0 0 x
·
·
1 0 0 0 1 0 y = 1 0 0 y = 0
·
·
0 0 1 θ
θ

(3
...

3
...
3
...
The equations of position for the
steered standard wheel (figure 3
...
4 with one exception
...
The rolling
and sliding constraints are
·
·
sin ( α + β ) – cos ( α + β ) ( – l ) cos β R(θ)ξ I – rϕ = 0

(3
...
16)

56

Chapter 3

YR

β(t)
Robot chassis

l

ϕ, r
A

v

α
P

XR

Figure 3
...


·
These constraints are identical to those of the fixed standard wheel because, unlike ϕ ,
·
β does not have a direct impact on the instantaneous motion constraints of a robot
...
This may seem subtle, but is a very important distinction between change in steer·
·
ing position, β , and change in wheel spin, ϕ
...
2
...
3 Castor wheel
Castor wheels are able to steer around a vertical axis
...
Figure 3
...

The wheel contact point is now at position B , which is connected by a rigid rod AB of
fixed length d to point A fixes the location of the vertical axis about which B steers, and
this point A has a position specified in the robot’s reference frame, as in figure 3
...
We
assume that the plane of the wheel is aligned with AB at all times
...
ϕ(t)
represents the wheel spin over time as before
...

For the castor wheel, the rolling constraint is identical to equation (3
...
6
A castor wheel and its parameters
...
17)

The castor geometry does, however, have significant impact on the sliding constraint
...
Because of the offset ground contact point relative to A , the constraint that there be zero lateral movement would be wrong
...
18)

In equation (3
...
This result is critical to the suc·
cess of castor wheels because by setting the value of β any arbitrary lateral motion can be
acceptable
...
But in a castor wheel the steering action itself moves the robot
chassis because of the offset between the ground contact point and the vertical axis of rotation
...
7
Office chair with five castor wheels
...
17) and (3
...
Therefore, a robot with only castor wheels can move with
any velocity in the space of possible robot motions
...

A real-world example of such a system is the five-castor wheel office chair shown in
figure 3
...
Assuming that all joints are able to move freely, you may select any motion
vector on the plane for the chair and push it by hand
...
By the same token, if each
of the chair’s castor wheels housed two motors, one for spinning and one for steering, then
a control system would be able to move the chair along any trajectory in the plane
...

3
...
3
...
This is possible by adding a degree of freedom to the fixed standard
wheel
...
The
exact angle γ between the roller axes and the main axis can vary, as shown in figure 3
...

For example, given a Swedish 45-degree wheel, the motion vectors of the principal axis
and the roller axes can be drawn as in figure 3
...
Since each axis can spin clockwise or
counterclockwise, one can combine any vector along one axis with any vector along the
other axis
...


Mobile Robot Kinematics

59

YR

β
γ

ϕ, r

Robot chassis

α

l

A
XR

P
Figure 3
...


The pose of a Swedish wheel is expressed exactly as in a fixed standard wheel, with the
addition of a term, γ , representing the angle between the main wheel plane and the axis of
rotation of the small circumferential rollers
...
8 within the robot’s
reference frame
...
The instantaneous constraint is due to the specific orientation of the small rollers
...
That is, moving
in that direction without spinning the main axis is not possible without sliding
...
12) except that the formula is modified by adding γ such that the
effective direction along which the rolling constraint holds is along this zero component
rather than along the wheel plane:
·
·
sin ( α + β + γ ) – cos ( α + β + γ ) ( – l ) cos ( β + γ ) R(θ)ξ I – rϕ cos γ = 0

(3
...

·
·
·
cos ( α + β + γ ) sin ( α + β + γ ) l sin ( β + γ ) R(θ)ξ I – rϕ sin γ – r sw ϕ sw = 0 (3
...
9
A spherical wheel and its parameters
...
Consider γ = 0
...
In this
case, the zero component of velocity is in line with the wheel plane and so equation (3
...
12), the fixed standard wheel rolling constraint
...
20)]
...
19) and therefore the wheel is omnidirectional
...

At the other extreme, consider γ = π ⁄ 2
...
Interestingly, if this value is substituted
for γ in equation (3
...
13)
...
However, in this case the main wheel
never needs to spin and therefore the rolling constraint disappears
...

3
...
3
...
9)
...
As with castor wheels and Swedish wheels, the spherical

Mobile Robot Kinematics

61

wheel is clearly omnidirectional and places no constraints on the robot chassis kinematics
...
21) simply describes the roll rate of the ball in the direction of motion
v A of point A of the robot
...
21)

By definition the wheel rotation orthogonal to this direction is zero
...
22)

As can be seen, the equations for the spherical wheel are exactly the same as for the fixed
standard wheel
...
22) is different
...
22)
...
Then equation (3
...

3
...
4 Robot kinematic constraints
Given a mobile robot with M wheels we can now compute the kinematic constraints of the
robot chassis
...

We have categorized all wheels into five categories: (1) fixed and (2)steerable standard
wheels, (3) castor wheels, (4) Swedish wheels, and (5) spherical wheels
...
17), (3
...
19) that the castor wheel,
Swedish wheel, and spherical wheel impose no kinematic constraints on the robot chassis,
·
since ξ I can range freely in all of these cases owing to the internal wheel degrees of freedom
...
Suppose that the robot has a total of N standard wheels, comprising
N f fixed standard wheels and N s steerable standard wheels
...
In contrast, β f refers to the
orientation of the N f fixed standard wheels as depicted in figure 3
...
In the case of wheel
spin, both the fixed and steerable wheels have rotational positions around the horizontal
axle that vary as a function of time
...
23)

The rolling constraints of all wheels can now be collected in a single expression:
·
·
J 1(β s)R(θ)ξ I – J 2 ϕ = 0

(3
...
J 2 is a
constant diagonal N × N matrix whose entries are radii r of all standard wheels
...
25)

Note that J 1(β s) is only a function of β s and not β f
...
J 1f is therefore a constant matrix of projections for all fixed standard wheels
...
12) for each fixed standard wheel
...
15) for each steerable standard wheel
...
24) represents the constraint that all standard wheels must spin
around their horizontal axis an appropriate amount based on their motions along the wheel
plane so that rolling occurs at the ground contact point
...
13) and (3
...
26)

(3
...
13) and (3
...
Thus

Mobile Robot Kinematics

63

equation (3
...
This sliding constraint over all standard
wheels has the most significant impact on defining the overall maneuverability of the robot
chassis, as explained in the next section
...
2
...
2
...
We can
now use the tools presented above to construct the same kinematic expression by direct
application of the rolling constraints for every wheel type
...
2
...
Then we proceed to the case of the three-wheeled omnidirectional robot
...
2
...
1 A differential-drive robot example
First, refer to equations (3
...
26)
...
Fusing these two equations yields the following expression:
J1 ( βs )

J ϕ
·
R ( θ )ξ I = 2
C1 ( βs )
0

(3
...
3
...
The castor is unpowered and
is free to move in any direction, so we ignore this third point of contact altogether
...
To employ the fixed standard wheel’s rolling constraint formula,
equation (3
...
Suppose that the
robot’s local reference frame is aligned such that the robot moves forward along +X R , as
shown in figure 3
...
In this case, for the right wheel α = – π ⁄ 2 , β = π , and for the left
wheel, α = π ⁄ 2 , β = 0
...
4)
...
12) and (3
...
Because
the two fixed standard wheels are parallel, equation (3
...
28) gives

64

Chapter 3

YI
v(t)
θ

ω(t)

XI
Figure 3
...
cs
...
edu/~pprk)
...
29)

Inverting equation (3
...
30)

This demonstrates that, for the simple differential-drive case, the combination of wheel
rolling and sliding constraints describes the kinematic behavior, based on our manual calculation in section 3
...
2
...
2
...
2 An omnidirectional robot example
Consider the omniwheel robot shown in figure 3
...
This robot has three Swedish 90degree wheels, arranged radially symmetrically, with the rollers perpendicular to each main
wheel
...
We do so by
choosing point P at the center of the robot, then aligning the robot with the local reference

Mobile Robot Kinematics

65

YR

ω1

v y1
1

v x1

XR

2

3

r ⋅ ϕ1

ICR
Figure 3
...


frame such that X R is colinear with the axis of wheel 2
...
11 shows the robot and its
local reference frame arranged in this manner
...
Once again, the value of ξ I can be computed as a combination of
the rolling constraints of the robot’s three omnidirectional wheels, as in equation (3
...
As
with the differential- drive robot, since this robot has no steerable wheels, J 1(β s) simplifies
to J 1f :
·
–1 –1
·
ξ I = R(θ) J 1f J 2 ϕ

(3
...
19)
...
Referring to figure (3
...
Note that this immediately simplifies equation (3
...
12), the
rolling constraints of a fixed standard wheel
...
Furthermore, β = 0 for all wheels because the
wheels are tangent to the robot’s circular body
...
12) yields

66

Chapter 3

J 1f =

3
π
π
-----sin -- – cos -- – l
2
3
3
0
0
– cos π – l =
π
π
3
sin – -- – cos – -- – l
– -----3
3
2

1
– -- – l
2
1 –l

(3
...
31)
...
A second approach would be to compute the contribution of each
Swedish wheel to chassis motion, as shown in section 3
...
2
...
Once the inverse is obtained, ξ I can be isolated:

·
–1
ξ I = R(θ)

1
-----3
1
– -3

1
0 – -----3

1
2
·
-- – -- J 2 ϕ
3
3
1
1 1
- – ---- – ---- – ---3l 3l 3l

(3
...
The robot’s
local reference frame and global reference frame are aligned, so that θ = 0
...
34)

So this robot will move instantaneously along the x -axis with positive speed and along
the y axis with negative speed while rotating clockwise
...


Mobile Robot Kinematics

67

The sliding constraints comprising C 1(βs) can be used to go even further, enabling us
to evaluate the maneuverability and workspace of the robot rather than just its predicted
motion
...

3
...

The basic constraint limiting mobility is the rule that every wheel must satisfy its sliding
constraint
...
26)
...
As we will see in section 3
...
3,
the overall maneuverability of a robot is thus a combination of the mobility available based
on the kinematic sliding constraints of the standard wheels, plus the additional freedom
contributed by steering and spinning the steerable standard wheels
...
3
...
26) imposes the constraint that every wheel must avoid any lateral slip
...
35)

·
C 1s(β s)R(θ)ξ I = 0

(3
...
Mathematically, the null space of C 1(β s) is the space N such that for any vector n in
N, C 1(β s)n = 0
...
The kinematic constraints [equations (3
...
36)] can also be demonstrated geometrically using the concept of a robot’s instantaneous
center of rotation ( ICR )
...
It is forced by the sliding constraint to have zero lateral motion
...
12)
...
In other words, the wheel must be moving

68

Chapter 3

w1

a)

ICR

w2

b)

ICR

Figure 3
...
(b) bicycle
...
This center point, called the instantaneous center of rotation, may
lie anywhere along the zero motion line
...

A robot such as the Ackerman vehicle in figure 3
...
Because all of its zero motion lines meet at a single point, there
is a single solution for robot motion, placing the ICR at this meet point
...
In figure 3
...
Each wheel contributes a constraint, or a zero
motion line
...
This is because the two constraints are independent, and thus each
further constrains overall robot motion
...
13a, the two wheels are aligned
along the same horizontal axis
...
In fact, the second wheel imposes no additional kinematic constraints on
robot motion since its zero motion line is identical to that of the first wheel
...


Mobile Robot Kinematics

69

b)

a)

βs ( t )

Figure 3
...
g
...
(b) Tricycle with two fixed standard wheels and one steered standard wheel,
e
...
Piaggio minitransporter
...
12a demonstrates another way in which a wheel may
be unable to contribute an independent constraint to the robot kinematics
...
Given the instantaneous position of just one of these steerable wheels and the position of the fixed rear wheels, there is only a single solution for the
ICR
...

Therefore, it offers no independent constraints to robot motion
...
The mathematical interpretation of independence is
related to the rank of a matrix
...
Equation (3
...
Therefore rank C 1(β s) is the number of independent constraints
...
For example, consider a robot
with a single fixed standard wheel
...
This
robot may be a unicycle or it may have several Swedish wheels; however, it has exactly one
fixed standard wheel
...
C 1(β s) is comprised of C 1f and C 1s
...
Because there is one fixed standard wheel, this matrix has a rank of one and therefore
this robot has a single independent constrain on mobility:
C 1(β s) = C 1f =

cos ( α + β ) sin ( α + β ) l sin β

(3
...
Without loss of generality, we can place point P at the midpoint between the centers
of the two wheels
...
Therefore, in this
case, the matrix C 1(β s) has two constraints but a rank of one:
C 1(β s) = C 1f =

cos ( α 1 )

sin ( α 1 )

0

(3
...
We
again place point P between the two wheel centers, and orient the wheels such that they lie
on axis x 1
...
39)

In general, if rank C 1f > 1 then the vehicle can, at best, only travel along a circle or
along a straight line
...
Because such configurations have only a degenerate form of mobility in the plane, we
do not consider them in the remainder of this chapter
...

Not surprisingly, the price that must be paid for such violations of the sliding constraints is
that dead reckoning based on odometry becomes less accurate and power efficiency is
reduced dramatically
...
We can therefore identify the possible range of rank values for any
robot: 0 ≤ r ank C 1(βs) ≤ 3
...
This is only possible
if there are zero independent kinematic constraints in C 1(β s)
...

Consider the other extreme, rank C 1(β s) = 3
...
e
...
Therefore, there cannot be more than three indepen-

Mobile Robot Kinematics

71

dent constraints
...

Now we are ready to formally define a robot’s degree of mobility δ m :
δ m = dimN C 1(β s) = 3 – rank C 1(β s)

(3
...
It is logical therefore that δ m must range between 0
and 3
...
On such a robot there are two fixed standard wheels sharing a common horizontal axis
...
Therefore, rank C 1(β s) = 1 and
δ m = 2
...
In other words, its ICR is constrained to lie on the infinite line extending from its
wheels’ horizontal axles
...
This configuration consists of one fixed standard
wheel and one steerable standard wheel
...
Therefore, δ m = 1
...
Yet it has one less degree of mobility
...
A bicycle only has control over its forward/reverse speed by direct manipulation of wheel velocities
...

As expected, based on equation (3
...

Such a robot can directly manipulate all three degrees of freedom
...
3
...
Steering can also have an eventual impact on a robot chassis
pose ξ , although the impact is indirect because after changing the angle of a steerable standard wheel, the robot must move for the change in steering angle to have impact on pose
...
41)

72

Chapter 3

Recall that in the case of mobility, an increase in the rank of C 1(βs) implied more kinematic constraints and thus a less mobile system
...
Since C 1(β s) includes C 1s(β s) , this means that a steered standard wheel
can both decrease mobility and increase steerability: its particular orientation at any instant
imposes a kinematic constraint, but its ability to change that orientation can lead to additional trajectories
...
The case δ s = 0 implies that the robot
has no steerable standard wheels, N s = 0
...

For example, consider an ordinary automobile
...
But
the fixed wheels share a common axle and so rank C 1f = 1
...
Therefore, the second steerable wheel cannot impose any independent kinematic constraint and so rank C 1s(βs) = 1
...

The case δ s = 2 is only possible in robots with no fixed standard wheels: N f = 0
...

Then, orienting one wheel constrains the ICR to a line while the second wheel can constrain the ICR to any point along that line
...

3
...
3 Robot maneuverability
The overall degrees of freedom that a robot can manipulate, called the degree of maneuverability δ M , can be readily defined in terms of mobility and steerability:
δM = δm + δs

(3
...
Based on the investigations of the
previous sections, one can draw the basic types of wheel configurations
...
14
Note that two robots with the same δ M are not necessarily equivalent
...
13) have equal maneuverability δ M = 2
...
In the case of a tricycle the maneuverability results from steering also: δ m = 1
and δ s = 1
...
In both cases, the ICR must lie on a predefined line with respect to the robot refer-

Mobile Robot Kinematics

Omnidirectional
δ M =3
δ m =3
δs
=0

Differential
δ M =2
δ m =2
δs
=0

73

Omni-Steer
δ M =3
δ m =2
δs
=1

Tricycle
δ M =2
δ m =1
δs
=1

Two-Steer
δ M =3
δ m =1
δs
=2

Figure 3
...
The spherical wheels can be replaced by castor or
Swedish wheels without influencing maneuverability
...


ence frame
...
In a tricycle, this line extends from the shared common axle of the fixed wheels,
with the steerable wheel setting the ICR point along this line
...

One final example will demonstrate the use of the tools we have developed above
...
22)
...
One motor provides power for spinning all three wheels while the second motor
provides power for steering all three wheels
...
Therefore rank C 1s(β s) can be used to determine both δ m and
δ s
...
The third must be dependent on these two constraints for
motion to be possible
...
This is intuitively correct
...

However an interesting complication occurs when considering δ s
...
41) the robot should have δ s = 2
...
However, we have
additional information: in a synchro drive configuration a single motor steers all three
wheels using a belt drive
...
Finally, we can compute maneuverability based on these values: δ M = 2 for a synchro drive robot
...
In fact, if the reader reflects on the wheel configuration of a synchro drive robot
it will become apparent that there is no way for the chassis orientation to change
...

3
...
But the robot
is situated in some environment, and the next question is to situate our analysis in the environment
...
For instance, consider the Ackerman vehicle, or automobile
...
But what is the total degrees
of freedom of the vehicle in its environment? In fact it is three: the car can position itself
on the plane at any x, y point and with any angle θ
...
In addition to workspace, we care about how the robot is able to
move between various configurations: what are the types of paths that it can follow and,
furthermore, what are its possible trajectories through this configuration space? In the
remainder of this discussion, we move away from inner kinematic details such as wheels
and focus instead on the robot chassis pose and the chassis degrees of freedom
...

3
...
1 Degrees of freedom
In defining the workspace of a robot, it is useful to first examine its admissible velocity
space
...
For example, the velocity
space of a unicycle can be represented with two axes, one representing the instantaneous
forward speed of the unicycle and the second representing the instantaneous change in ori·
entation, θ , of the unicycle
...
This is also called the differentiable degrees of freedom
( DDOF )
...
For example,
a bicycle has the following degree of maneuverability: δ M = δ m + δ s = 1 + 1 = 2
...


Mobile Robot Kinematics

75

In contrast to a bicycle, consider an omnibot, a robot with three Swedish wheels
...
So, the omnibot has three differential degrees of freedom
...

Given the difference in DDOF between a bicycle and an omnibot, consider the overall
degrees of freedom in the workspace of each configuration
...
Clearly, it has a workspace with
DOF = 3
...
For example, if a bicycle configuration must move laterally 1 m, the simplest successful maneuver
would involve either a spiral or a back-and-forth motion similar to parallel parking of automobiles
...

Clearly, there is an inequality relation at work: DDOF ≤ δ M ≤ D OF
...
Just as workspace DOF
governs the robot’s ability to achieve various poses, so the robot’s DDOF governs its ability to achieve various paths
...
4
...
The term holonomic has broad applicability to several mathematical areas, including differential equations, functions and constraint expressions
...
A
holonomic robot is a robot that has zero nonholonomic kinematic constraints
...

A holonomic kinematic constraint can be expressed as an explicit function of position
variables only
...
Such a constraint may not use derivatives of these values, such as ϕ or ξ
...
Furthermore, it cannot be integrated to provide a constraint in
terms of the position variables only
...


76

Chapter 3

Consider the fixed standard wheel sliding constraint:
·
cos ( α + β ) sin ( α + β ) l sin β R(θ)ξ I = 0

(3
...
The constraint is nonintegrable, depending explicitly on robot motion
...
Consider a bicycle configuration, with one fixed standard wheel
and one steerable standard wheel
...

But suppose that one locks the bicycle steering system, so that it becomes two fixed standard wheels with separate but parallel axes
...
Is it nonholonomic? Although it may not appear so because of the sliding and rolling
constraints, the locked bicycle is actually holonomic
...
It consists of a single infinite line along which the bicycle can move (assuming the steering was frozen straight ahead)
...
In this case the sliding constraints of both wheels can be
replaced with an equally complete set of constraints on the robot pose: { y = 0, θ = 0 }
...

The only remaining nonholonomic kinematic constraints are the rolling constraints for
each wheel:
·
·
– sin ( α + β ) cos ( α + β ) l cos β R(θ)ξ I + rϕ = 0

(3
...
But in the case of our locked bicycle, given the
initial rotational position of a wheel at the origin, ϕ o , we can replace this constraint with
one that directly relates position on the line, x, with wheel rotation angle, ϕ :
ϕ = (x ⁄ r ) + ϕo
...
This is the case for all holonomic
robots with δ M < 3
...
Since there are no kinematic constraints, there are
also no nonholonomic kinematic constraints and so such a robot is always holonomic
...


Mobile Robot Kinematics

77

An alternative way to describe a holonomic robot is based on the relationship between
the differential degrees of freedom of a robot and the degrees of freedom of its workspace:
a robot is holonomic if and only if DDOF = DOF
...
Examples include differential drive and bicycle/tricycle configurations
...
But the “holonomic”
abilities to maneuver around obstacles without affecting orientation and to track at a target
while following an arbitrary path are important additional considerations
...
We define this class of robot configurations as omnidirectional: an
omnidirectional robot is a holonomic robot with DDOF = 3
...
4
...
Consider the issue of a robot’s ability to follow
paths: in the best case, a robot should be able to trace any path through its workspace of
poses
...
Unfortunately, omnidirectional robots must use unconstrained
wheels, limiting the choice of wheels to Swedish wheels, castor wheels, and spherical
wheels
...
Although powerful from a path space point
of view, they are thus much less common than fixed and steerable standard wheels, mainly
because their design and fabrication are somewhat complex and expensive
...
Consider an omnidirectional vehicle driving at high speed on a curve with constant
diameter
...
This lateral force pushing the vehicle out of the curve has to be counteracted by
the motor torque of the omnidirectional wheels
...
However, for a car-like robot with kinematic constraints, the lateral forces are passively counteracted through the sliding constraints,
mitigating the demands on motor torque
...
This vehicle achieves a
degree of steerability of 2, resulting in a high degree of maneuverability:
δ M = δ m + δ s = 1 + 2 = 3
...


78

Chapter 3

YI

x, y, θ
y(t)
x(t)
θ(t)
XI
1

2

3

t / [s]

Figure 3
...


The maneuverability result, δ M = 3 , means that the two-steer can select any ICR by
appropriately steering its two wheels
...
More generally, any robot with δ M = 3 can follow any path in
its workspace from its initial pose to its final pose
...

But there is still a difference between a degree of freedom granted by steering versus by
direct control of wheel velocity
...
A trajectory is like a path, except that it occupies an additional dimension: time
...

For example, consider a goal trajectory in which the robot moves along axis X I at a constant speed of 1 m/s for 1 second, then changes orientation counterclockwise 90 degrees
also in 1 second, then moves parallel to axis Y I for 1 final second
...
15, using plots of x, y and θ in relation to time
...
16
Example of robot trajectory similar to figure 3
...


80

Chapter 3

Can the omnidirectional robot accomplish this trajectory? We assume that the robot can
achieve some arbitrary, finite velocity at each wheel
...
Under these
assumptions, the omnidirectional robot can indeed follow the trajectory of figure 3
...
The
transition between the motion of second 1 and second 2, for example, involves only
changes to the wheel velocities
...
However, it cannot follow this 4D
trajectory
...
In short, the two-steer requires changes to internal degrees of freedom and
because these changes take time, arbitrary trajectories are not attainable
...
16
depicts the most similar trajectory that a two-steer can achieve
...

3
...
When speed and force are also considered, as is particularly necessary in the case of
high-speed mobile robots, dynamic constraints must be expressed in addition to kinematic
constraints
...
When analyzing such systems, it is
often necessary to explicitly model the dynamics of viscous friction between the robot and
the ground plane
...
However to effectively move in
this workspace a mobile robot must have appropriate actuation of its degrees of freedom
...

In addition to motorization, there is the question of controllability: under what conditions can a mobile robot travel from the initial pose to the goal pose in bounded time?
Answering this question requires knowledge – both knowledge of the robot kinematics and
knowledge of the control systems that can be used to actuate the mobile robot
...


Mobile Robot Kinematics

81

YR

θ

v(t)

XR

ω(t)
start

goal
Figure 3
...
6

Motion Control (Kinematic Control)

As seen above, motion control might not be an easy task for nonholonomic systems
...

3
...
1 Open loop control (trajectory-following)
The objective of a kinematic controller is to follow a trajectory described by its position or
velocity profile as a function of time
...
The control problem is thus to precompute a smooth trajectory based on line and circle
segments which drives the robot from the initial position to the final position (figure 3
...

This approach can be regarded as open-loop motion control, because the measured robot
position is not fed back for velocity or position control
...

• The robot will not automatically adapt or correct the trajectory if dynamic changes of
the environment occur
...
18
Open-loop control of a mobile robot based on straight lines and circular trajectory segments
...
g
...
This means there is a discontinuity in the robot’s acceleration
...
6
...
With such a controller the robot’s path-planning task is reduced to setting
intermediate positions (subgoals) lying on the requested path
...
6
...
1
...
Others can be found in [8, 52, 53,
137]
...
6
...
1 Problem statement
Consider the situation shown in figure 3
...
The actual pose error vector given
R
T
in the robot reference frame { X R, Y R, θ } is e = [ x, y, θ ] with x, y , and θ being the goal
coordinates of the robot
...
19
Robot kinematics and its frames of interests
...
45)

such that the control of v ( t ) and ω ( t )
R

x
v(t)
= K⋅e = K y
ω(t)
θ

(3
...
2
lim e ( t ) = 0

t→∞

(3
...
Remember that v(t) is always heading in the XR direction of the robot’s reference frame due to the
nonholonomic constraint
...
6
...
2 Kinematic model
We assume, without loss of generality, that the goal is at the origin of the inertial frame (figT
ure 3
...
In the following the position vector [ x, y, θ ] is always represented in the inertial
frame
...
48)

·
·
where x and y are the linear velocities in the direction of the X I and Y I of the inertial
frame
...
If α ∈ I1 , where
π π
- I 1 =  – -- , - 2 2

(3
...

ρ =

2

∆x + ∆y

2

(3
...
51)

β = –θ–α

(3
...
53)

where ρ is the distance between the center of the robot’s wheel axle and the goal position,

Mobile Robot Kinematics

85

θ denotes the angle between the X R axis of the robot reference frame, and the X I axis associated with the final position v and ω are the tangent and the angular velocity respectively
...
54)

redefining the forward direction of the robot by setting v = – v , we obtain a system
described by a matrix equation of the form
cos α 0
·
ρ
sin α
v
·
α = – ----------- 1
ρ
ω
·
β
sin α
----------- 0
ρ

(3
...
6
...
3 Remarks on the kinematic model in polar coordinates [eq
...
53) and (3
...

• For α ∈ I 1 the forward direction of the robot points toward the goal, for α ∈ I 2 it is the
reverse direction
...
However, this does not mean that α remains
in I I for all time t
...
The same applies for the reverse direction (see stability
issues below)
...
6
...
4 The control law
The control signals v and ω must now be designed to drive the robot from its actual configuration, say ( ρ 0, α 0, β 0 ) , to the goal position
...
53) presents
a discontinuity at ρ = 0 ; thus the theorem of Brockett does not obstruct smooth stabilizability
...
56)

ω = kα α + kβ β

(3
...
53) a closed-loop system described by
·
– k ρ ρ cos α
ρ
·
α = k ρ sin α – k α α – k β β
·
– k ρ sin α
β

(3
...
Thus it will drive the robot to this point, which is the goal position
...
57)] leads to equations
which are not defined at x = y = 0
...

• Observe that the control signal v has always a constant sign, that is, it is positive whenever α ( 0 ) ∈ I1 and it is always negative otherwise
...

In figure 3
...
All movements have smooth trajectories toward the goal in the center
...
5 )
...
59)

3
...
2
...
58)] is locally
exponentially stable if
kρ > 0 ;

kβ < 0 ;

kα – kρ > 0

(3
...
58) can
be written as
·
–kρ
0
0 ρ
ρ
·
α = 0 –( kα –kρ ) –kβ α ,
·
β
0 β
0
–kρ

(3
...
20
Resulting paths when the robot is initially on the unit circle in the x,y plane
...
62)

0

all have a negative real part
...
63)

and all roots have negative real part if
kρ > 0 ;

–k β > 0 ;

kα – kρ > 0

(3
...

For robust position control, it might be advisable to apply the strong stability condition,
which ensures that the robot does not change direction during its approach to the goal:
kρ > 0 ;

kβ < 0 ;

5
2
k α + -- k β – -- k > 0
3
π ρ

(3
...
This strong stability condition has also been verified in applications
...
This is done by taking measurements using various sensors and
then extracting meaningful information from those measurements
...
For more detailed information
about many of the sensors used on mobile robots, refer to the comprehensive book Sensors
for Mobile Robots by H
...
Everett [15]
...
1

Sensors for Mobile Robots

There are a wide variety of sensors used in mobile robots (figure 4
...
Some sensors are
used to measure simple values like the internal temperature of a robot’s electronics or the
rotational speed of the motors
...
In this chapter we focus primarily on sensors used to extract information about the
robot’s environment
...

We begin with a functional classification of sensors
...

4
...
1 Sensor classification
We classify sensors using two important functional axes: proprioceptive/exteroceptive and
passive/active
...

Exteroceptive sensors acquire information from the robot’s environment; for example,
distance measurements, light intensity, sound amplitude
...


90

Chapter 4

a)

b)

Pan-Tilt
Stereo Camera

Sonar Sensors

IR Sensors

c)
IMU
Inertial
Measurement
Unit

Emergency
Stop
Button

Wheel
Encoders

Omnidirectional
Camera
Pan-Tilt
Camera

Sonar
Sensors

Laser
Range
Scanner
Bumper

Figure 4
...


Passive sensors measure ambient environmental energy entering the sensor
...

Active sensors emit energy into the environment, then measure the environmental reaction
...
However, active sensing introduces several
risks: the outbound energy may affect the very characteristics that the sensor is attempting
to measure
...
For example, signals emitted by other nearby robots, or similar sensors on the same robot, may influence the resulting measurements
...

Table 4
...
The most interesting sensors are discussed in this chapter
...
1
Classification of sensors used in mobile robotics applications
General classification
(typical use)

Sensor
Sensor System

PC or
EC

A or P

Tactile sensors
(detection of physical contact or
closeness; security switches)

Contact switches, bumpers
Optical barriers
Noncontact proximity sensors

EC
EC
EC

P
A
A

Wheel/motor sensors
(wheel/motor speed and position)

Brush encoders
Potentiometers
Synchros, resolvers
Optical encoders
Magnetic encoders
Inductive encoders
Capacitive encoders

PC
PC
PC
PC
PC
PC
PC

P
P
A
A
A
A
A

Heading sensors
(orientation of the robot in relation to
a fixed reference frame)

Compass
Gyroscopes
Inclinometers

EC
PC
EC

P
P
A/P

Ground-based beacons
(localization in a fixed reference
frame)

GPS
Active optical or RF beacons
Active ultrasonic beacons
Reflective beacons

EC
EC
EC
EC

A
A
A
A

Active ranging
(reflectivity, time-of-flight, and geometric triangulation)

Reflectivity sensors
Ultrasonic sensor
Laser rangefinder
Optical triangulation (1D)
Structured light (2D)

EC
EC
EC
EC
EC

A
A
A
A
A

Motion/speed sensors
(speed relative to fixed or moving
objects)

Doppler radar
Doppler sound

EC
EC

A
A

Vision-based sensors
(visual ranging, whole-image analysis, segmentation, object recognition)

CCD/CMOS camera(s)
Visual ranging packages
Object tracking packages

EC

P

A, active; P, passive; P/A, passive/active; PC, proprioceptive; EC, exteroceptive
...
1 are arranged in ascending order of complexity and
descending order of technological maturity
...

Commercial quadrature encoders, for example, may be purchased as part of a gear-motor
assembly used in a mobile robot
...
However, commercially
available sensor units that provide visual functionalities are only now beginning to emerge
[90, 160]
...
1
...

Some sensors provide extreme accuracy in well-controlled laboratory settings, but are
overcome with error when subjected to real-world environmental variations
...
In order to quantify such
performance characteristics, first we formally define the sensor performance terminology
that will be valuable throughout the rest of this chapter
...
1
...
1 Basic sensor response ratings
A number of sensor characteristics can be rated quantitatively in a laboratory setting
...

Dynamic range is used to measure the spread between the lower and upper limits of
input values to the sensor while maintaining normal sensor operation
...
Because this raw ratio can be unwieldy, it is usually measured in decibels, which are
computed as ten times the common logarithm of the dynamic range
...
Suppose your sensor measures motor current
and can register values from a minimum of 1 mA to 20 Amps
...
001

(4
...
Voltage is not a unit of power, but the square
of voltage is proportional to power
...
001

93

(4
...
In such cases, it is critical to understand how the sensor will respond
...

Resolution is the minimum difference between two values that can be detected by a sensor
...

However, in the case of digital sensors, this is not necessarily so
...
If this sensor is truly linear, then it has 2 – 1 total output values, or a resolution of
5 V ( 255 ) = 20 mV
...
A linear response indicates that if two inputs x and y result in the
two outputs f ( x ) and f ( y ) , then for any values a and b , f ( ax + by ) = af ( x ) + bf ( y )
...

Bandwidth or frequency is used to measure the speed with which a sensor can provide a
stream of readings
...
Because of the dynamics of moving through their environment,
mobile robots often are limited in maximum speed by the bandwidth of their obstacle detection sensors
...

4
...
2
...
However, a number
of important measures cannot be reliably acquired without deep understanding of the complex interaction between all environmental characteristics and the sensors in question
...

Sensitivity itself is a desirable trait
...
Formally, sensitivity is
the ratio of output change to input change
...


94

Chapter 4

Cross-sensitivity is the technical term for sensitivity to environmental parameters that
are orthogonal to the target parameters for the sensor
...
However, the compass will also demonstrate high sensitivity to ferrous building
materials, so much so that its cross-sensitivity often makes the sensor useless in some
indoor environments
...

Error of a sensor is defined as the difference between the sensor’s output measurements
and the true values being measured, within some specific operating context
...

Accuracy is defined as the degree of conformity between the sensor’s measurement and
the true value, and is often expressed as a proportion of the true value (e
...
, 97
...
Thus small error corresponds to high accuracy and vice versa:
error
 accuracy = 1 – ----------------

v 

(4
...
Further, it is important to distinguish between two different sources of error:
Systematic errors are caused by factors or processes that can in theory be modeled
...
e
...
Poor calibration of a laser
rangefinder, an unmodeled slope of a hallway floor, and a bent stereo camera head due to
an earlier collision are all possible causes of systematic sensor errors
...
These errors can only be described in probabilistic
terms (i
...
, stochastically)
...

Precision is often confused with accuracy, and now we have the tools to clearly distinguish these two terms
...
For example, one sensor taking multiple readings of the same environmental state
has high precision if it produces the same output
...
Precision does not, however, have any bearing on the accuracy of the sensor’s
output with respect to the true value being measured
...
The formal definition of precision is the ratio of the sensor’s output range to the standard deviation:

Perception

range
precision = -------------σ

95

(4
...
In contrast, mean error µ is
directly proportional to overall sensor error and inversely proportional to sensor accuracy
...
1
...
3 Characterizing error: the challenges in mobile robotics
Mobile robots depend heavily on exteroceptive sensors
...
Of course, these “objects” surrounding the robot are all detected from the viewpoint of its local reference frame
...
In this section, empowered with the terminology of the earlier discussions, we describe how dramatically the sensor error of a mobile
robot disagrees with the ideal picture drawn in the previous section
...
Active ranging sensors tend to have failure
modes that are triggered largely by specific relative positions of the sensor and environment
targets
...
During
motion of the robot, such relative angles occur at stochastic intervals
...
The chances of one sonar entering
this error mode during robot motion is high
...
Yet, if the robot were to stop,
becoming motionless, then a very different error modality is possible
...
Once the
robot is motionless, the error appears to be systematic and of high precision
...
The models for such cross-sensitivity
are not, in an underlying sense, truly random
...

Sonar is not the only sensor subject to this blurring of systematic and random error
modality
...
g
...
The important point is to realize that, while systematic error and random error are well-defined in a controlled setting, the mobile robot can
exhibit error characteristics that bridge the gap between deterministic and stochastic error
mechanisms
...
It is common to characterize the behavior of a sensor’s
random error in terms of a probability distribution over various output values
...
For example, we can assume that the error is zero-mean,
in that it symmetrically generates both positive and negative measurement error
...
Although we discuss the mathematics of this in detail in section 4
...
This means
that measuring the correct value is most probable, and any measurement that is further
away from the correct value is less likely than any measurement that is closer to the correct
value
...

Consider, for example, the sonar sensor once again
...
This portion of its sensor behavior will
exhibit error characteristics that are fairly symmetric and unimodal
...
In such cases, the error will be
biased toward positive measurement error and will be far from the correct value
...
So the sonar sensor has two separate types of operational modes, one in
which the signal does return and some random error is possible, and the second in which
the signal returns after a multipath reflection, and gross overestimation error occurs
...

As a second example, consider ranging via stereo vision
...
If the stereo vision system correctly correlates two images, then
the resulting random error will be caused by camera noise and will limit the measurement
accuracy
...
In such a case
stereo vision will exhibit gross measurement error, and one can easily imagine such behavior violating both the unimodal and the symmetric assumptions
...
Nonetheless, as we shall see, many successful mobile robot systems make use of these simplifying assumptions and the resulting mathematical techniques
with great empirical success
...
In the following sections,
we do the same for a sampling of the most commonly used mobile robot sensors today
...
1
...
These sensors have vast applications outside of mobile robotics and, as a
result, mobile robotics has enjoyed the benefits of high-quality, low-cost wheel and motor
sensors that offer excellent resolution
...

4
...
3
...

In mobile robotics, encoders are used to control the position or speed of wheels and other
motor-driven joints
...

An optical encoder is basically a mechanical light chopper that produces a certain
number of sine or square wave pulses for each shaft revolution
...
As the rotor moves, the amount of light
striking the optical detectors varies based on the alignment of the fixed and moving gratings
...
Resolution is measured in cycles per revolution (CPR)
...
A typical encoder in mobile robotics may have 2000 CPR, while the
optical encoder industry can readily manufacture encoders with 10000 CPR
...
Industrial optical encoders present no bandwidth
limitation to mobile robot applications
...
In this case, a second illumination and detector pair is placed 90 degrees shifted with respect to the original in terms of
the rotor disc
...
2, provide significantly
more information
...
Furthermore, the four detectably different states improve the resolution by a factor of four with no change to the rotor disc
...
Further improvement is possible by retaining the sinusoidal

98

Chapter 4

State

Ch A

Ch B

s1

high

low

s2

high

high

s3

low

high

s4

low

low

Figure 4
...
A single slot in the outer track generates a
reference (index) pulse per revolution
...
Such
methods, although rare in mobile robotics, can yield 1000-fold improvements in resolution
...
The accuracy of optical encoders is often assumed to be 100% and,
although this may not be entirely correct, any errors at the level of an optical encoder are
dwarfed by errors downstream of the motor shaft
...
1
...
They are used to determine the robot’s orientation and inclination
...
This procedure, which has its roots in vessel and ship navigation, is called dead reckoning
...
1
...
1 Compasses
The two most common modern sensors for measuring the direction of a magnetic field are
the Hall effect and flux gate compasses
...

The Hall effect describes the behavior of electric potential in a semiconductor when in
the presence of a magnetic field
...
In addition, the sign of the voltage potential identifies the direction of the magnetic field
...
Hall effect digital compasses are popular in mobile robotics, and con-

Perception

99

Figure 4
...
com/dico), enable inexpensive (< $ 15) sensing of magnetic fields
...
The
instruments are inexpensive but also suffer from a range of disadvantages
...
Internal sources of error include the nonlinearity of the
basic sensor and systematic bias errors at the semiconductor level
...
For example, the Hall effect compass pictured
in figure 4
...
5 seconds to settle after a 90 degree spin
...
Two small coils are wound on
ferrite cores and are fixed perpendicular to one another
...
By measuring both phase shifts, the direction of the magnetic
field in two dimensions can be computed
...

Regardless of the type of compass used, a major drawback concerning the use of the
Earth’s magnetic field for mobile robot applications involves disturbance of that magnetic
field by other magnetic objects and man-made structures, as well as the bandwidth limitations of electronic compasses and their susceptibility to vibration
...

4
...
4
...
Thus they provide an absolute measure for the heading of a mobile system
...
4
Two-axis mechanical gyroscope
...

Mechanical gyroscopes
...
The property of interest is known as the gyroscopic precession
...
This is due to the angular momentum associated with
a spinning wheel and will keep the axis of the gyroscope inertially stable
...

τ = IωΩ

(4
...
4, no torque can be transmitted from
the outer pivot to the wheel axis
...
e
...
Nevertheless, the remaining friction in the bearings of the
gyro axis introduce small torques, thus limiting the long-term space stability and introducing small errors over time
...
1 degrees in 6 hours
...
If the spinning axis is
aligned with the north-south meridian, the earth’s rotation has no effect on the gyro’s horizontal axis
...


Perception

101

Rate gyros have the same basic arrangement as shown in figure 4
...
The gimbals are restrained by a torsional spring with additional viscous
damping
...

Optical gyroscopes
...
Commercial use
began in the early 1980s when they were first installed in aircraft
...
They work on the principle that the
speed of light remains unchanged and, therefore, geometric change can cause light to take
a varying amount of time to reach its destination
...
Because the laser traveling in the
direction of rotation has a slightly shorter path, it will have a higher frequency
...
New solid-state optical gyroscopes based on the same principle are build using
microfabrication technology, thereby providing heading information with resolution and
bandwidth far beyond the needs of mobile robotic applications
...
0001 degrees/hr
...
1
...
Using the interaction of on-board sensors and the environmental beacons, the robot can identify its position precisely
...

In the following section, we describe one such beacon system, the global positioning
system (GPS), which is extremely effective for outdoor ground-based and flying robots
...
The
expense of environmental modification in an indoor setting is not amortized over an
extremely large useful area, as it is, for example, in the case of the GPS
...
A laser-based indoor beacon system, for example, must disambiguate the one true laser signal from possibly tens of other powerful signals that have
reflected off of walls, smooth floors, and doors
...
In commercial applications, such as manufacturing plants, the
environment can be carefully controlled to ensure success
...


102

Chapter 4

GPS
satellites

monitor
stations
users
uploading
station

master
stations

Figure 4
...


4
...
5
...
There are at least twenty-four operational GPS satellites at all times
...
190 km
...
5)
...

Therefore, GPS receivers are completely passive but exteroceptive sensors
...
When
a GPS receiver reads the transmission of two or more satellites, the arrival time differences
inform the receiver as to its relative distance to each satellite
...
In theory, such triangulation requires only three data points
...
It is, of course, mandatory that the satellites be well synchronized
...


Perception

103

The GPS receiver clock is also important so that the travel time of each satellite’s transmission can be accurately measured
...
So,
although three satellites would ideally provide position in three axes, the GPS receiver
requires four satellites, using the additional information to solve for four variables: three
position axes plus a time correction
...
GPS satellite transmissions are extremely low-power,
and reading them successfully requires direct line-of-sight communication with the satellite
...
Of course, most indoor spaces will also fail to
provide sufficient visibility of the sky for a GPS receiver to function
...

A number of factors affect the performance of a localization sensor that makes use of
the GPS
...
Specifically, at the North and South Poles, the satellites
are very close to the horizon and, thus, while resolution in the latitude and longitude directions is good, resolution of altitude is relatively poor as compared to more equatorial locations
...
They can be
employed with various strategies in order to achieve dramatically different levels of localization resolution
...
An extension of this method is differential GPS (DGPS), which makes use of a second receiver that is static and at a known exact
position
...
A disadvantage of this technique is that the stationary
receiver must be installed, its location must be measured very carefully, and of course the
moving robot must be within kilometers of this static unit in order to benefit from the DGPS
technique
...
There are two carriers, at 19 cm and 24 cm, and therefore significant improvements in precision are possible when the phase difference between
multiple satellites is measured successfully
...

A final consideration for mobile robot applications is bandwidth
...
On a fast-moving mobile robot or flying robot, this can mean that local motion
integration will be required for proper control due to GPS latency limitations
...
1
...
Many
ranging sensors have a low price point, and, most importantly, all ranging sensors provide
easily interpreted outputs: direct measurements of distance from the robot to objects in its
vicinity
...
But the local freespace information provided by ranging sensors can also
be accumulated into representations beyond the robot’s current local reference frame
...
It is only with the slow advent of successful visual
interpretation competence that we can expect the class of active ranging sensors to gradually lose their primacy as the sensor class of choice among mobile roboticists
...
Then, we present two geometric active ranging sensors: the optical
triangulation sensor and the structured light sensor
...
1
...
1 Time-of-flight active ranging
Time-of-flight ranging makes use of the propagation speed of sound or an electromagnetic
wave
...
6)

where
d = distance traveled (usually round-trip);
c = speed of wave propagation;
t = time of flight
...
3 m/ms whereas the speed of electromagnetic signals is 0
...
The time of flight for a typical distance, say 3 m, is 10 ms for an ultrasonic
system but only 10 ns for a laser rangefinder
...
This explains
why laser range sensors have only recently become affordable and robust for use on mobile
robots
...
g
...

The ultrasonic sensor (time-of-flight, sound)
...
The distance d of the object causing the reflection can be calculated based on the propagation speed of sound c and the time
of flight t
...
7)

The speed of sound c in air is given by
c =

γRT

(4
...

In air at standard pressure and 20° C the speed of sound is approximately c = 343 m/s
...
6 shows the different signal output and input of an ultrasonic sensor
...
An integrator also begins
to linearly climb in value, measuring the time from the transmission of these sound waves
to detection of an echo
...
This threshold is often decreasing in time, because the amplitude of the
expected echo decreases over time based on dispersal as it travels longer
...
A transducer will
continue to ring for up to several milliseconds after the initial transmission, and this governs the blanking time of the sensor
...


106

Chapter 4

wave packet
transmitted sound

threshold
analog echo signal
threshold
digital echo signal
integrator

time of flight (sensor output)

integrated time
output signal

Figure 4
...


However, once the blanking interval has passed, the system will detect any abovethreshold reflected sound, triggering a digital signal and producing the distance measurement using the integrator value
...
Often the same unit is used to measure the
reflected signal, although the required blanking interval can be reduced through the use of
separate output and input devices
...
Lower frequencies correspond to a longer range, but with the disadvantage of longer post-transmission ringing and,
therefore, the need for longer blanking intervals
...
The published accuracy of commercial ultrasonic sensors varies between 98% and 99
...
In mobile robot applications,
specific implementations generally achieve a resolution of approximately 2 cm
...
This is a major
limitation since sound propagates in a cone-like manner (figure 4
...
Consequently, when using ultrasonic ranging one does not acquire
depth data points but, rather, entire regions of constant depth
...
The sensor readings must be plotted as segments of an arc (sphere for 3D) and not as
point measurements (figure 4
...
However, recent research developments show significant
improvement of the measurement quality in using sophisticated echo processing [76]
...
The published accuracy values for ultrasonics are

Perception

107


-30°

measurement cone
30°

-60°

60°

Amplitude [dB]
Figure 4
...


a)

b)

Figure 4
...
Courtesy of John Leonard, MIT
...
This does not capture the effective error modality seen on
a mobile robot moving through its environment
...
Therefore, the true error behavior of ultrasonic sensors is
compound, with a well-understood error distribution near the true value in the case of a successful retroreflection, and a more poorly understood set of range values that are grossly
larger than the true value in the case of coherent reflection
...
Again,
the impact is discrete, with one material possibly failing to produce a reflection that is sufficiently strong to be sensed by the unit
...

A final limitation of ultrasonic ranging relates to bandwidth
...
For example, measuring the distance to an object that is 3 m away will take such a sensor 20 ms, limiting its
operating speed to 50 Hz
...
4 seconds and the overall update frequency of any one sensor
is just 2
...
For a robot conducting moderate speed motion while avoiding obstacles
using ultrasonics, this update rate can have a measurable impact on the maximum speed
possible while still sensing and avoiding obstacles safely
...
The laser rangefinder is a time-offlight sensor that achieves significant improvements over the ultrasonic range sensor owing
to the use of laser light instead of sound
...
g
...
Often
referred to as optical radar or lidar (light detection and ranging), these devices produce a
range estimate based on the time needed for the light to reach the target and return
...

One way to measure the time of flight for the light beam is to use a pulsed laser and then
measure the elapsed time directly, just as in the ultrasonic solution described earlier
...
A second method is to measure the beat frequency between a frequencymodulated continuous wave (FMCW) and its received reflection
...
We describe this third approach
in detail
...
9
Schematic of laser rangefinding by phase-shift measurement
...
10
Range estimation by measuring the phase shift between transmitted and received signals
...
Near-infrared light (from a light-emitting diode [LED] or
laser) is collimated and transmitted from the transmitter in figure 4
...
For surfaces having a roughness greater than the wavelength of the incident light, diffuse reflection will occur, meaning that the light is reflected almost isotropically
...
The component of the infrared light which falls within the receiving aperture of the sensor will return
almost parallel to the transmitted beam for distant objects
...
Figure 4
...
The wavelength of the modulating signal
obeys the equation c = f ⋅ λ where c is the speed of light and f the modulating frequency
...
The total distance D' covered by the
emitted light is

110

Chapter 4

a)

b)

R
M o ta t
irr in
or g

Reflected light
Transmitted light

c)

Reflected light
LED/Laser

Detector

Figure 4
...
; (c) Industrial 180 degree laser range sensor from Sick Inc
...
9)

where D and L are the distances defined in figure 4
...
The required distance D , between
the beam splitter and the target, is therefore given by
λ

D = ----4π

(4
...
It can be seen that the
transmission of a single frequency modulated wave can theoretically result in ambiguous
range estimates since, for example, if λ = 60 m, a target at a range of 5 m would give an
indistinguishable phase measurement from a target at 65 m, since each phase angle would
be 360 degrees apart
...

It can be shown that the confidence in the range (phase estimate) is inversely proportional to the square of the received signal amplitude, directly affecting the sensor’s accuracy
...


Perception

111

Figure 4
...
The length of the lines through
the measurement points indicate the uncertainties
...
11 the schematic of a typical 360 degrees laser range sensor and two examples are shown
...
12 shows a typical range image of a 360 degrees scan taken with
a laser range sensor
...
The Sick laser scanner shown in Figure 4
...
5 degree
...
This device performs
twenty five 180 degrees scans per second but has no mirror nodding capability for the vertical dimension
...
With light, this will only occur when striking a highly polished surface
...
Unlike ultrasonic sensors, laser rangefinders cannot detect
the presence of optically transparent materials such as glass, and this can be a significant
obstacle in environments, for example, museums, where glass is commonly used
...
1
...
2 Triangulation-based active ranging
Triangulation-based ranging sensors use geometric properties manifest in their measuring
strategy to establish distance readings to objects
...
13
Principle of 1D laser triangulation
...
g
...
The reflection of the known pattern is captured by a receiver
and, together with known geometric values, the system can use simple triangulation to
establish range measurements
...
If the receiver measures the position of the reflection along two orthogonal axes, we call the sensor a structured light sensor
...

Optical triangulation (1D sensor)
...
13
...
g
...
The reflected light is collected by a lens and
projected onto a position-sensitive device (PSD) or linear camera
...
13, the distance D is given by
L
D = f -x

(4
...
Sensors based on this principle are used in range
sensing up to 1 or 2 m, but also in high-precision industrial measurements with resolutions
far below 1 µm
...
However, the operating range of such a device is normally fairly
limited by geometry
...
14

Perception

113

Figure 4
...


operates over a distance range of between 8 and 80 cm
...
Although more limited in range than sonar, the optical
triangulation sensor has high bandwidth and does not suffer from cross-sensitivities that are
more common in the sound domain
...
If one replaces the linear camera or PSD of an optical triangulation sensor with a 2D receiver such as a CCD or CMOS camera, then one can recover
distance to a large set of points instead of to only one point
...
Many systems exist which either
project light textures (figure 4
...
Yet another popular alternative is to project a laser stripe (figure 4
...
Regardless of how it is created, the projected light has a known structure, and therefore the image taken by the CCD or CMOS
receiver can be filtered to identify the pattern’s reflection
...
In passive image analysis, as we discuss later, existing features in
the environment must be used to perform correlation, while the present method projects a
known pattern upon the environment and thereby avoids the standard correlation problem
altogether
...
g
...
In contrast, stereo vision would fail in such texturefree circumstances
...
15c shows a 1D active triangulation geometry
...
15c
...
15
(a) Principle of active two dimensional triangulation
...
(c) 1D schematic of the principle
...


sured values in the system are α and u, the distance of the illuminated point from the origin
in the imaging sensor
...
g
...

From figure 4
...
12)

Perception

115

where f is the distance of the lens to the imaging plane
...
12)
is given by
∂u
b⋅f
----- = G p = -------2
∂z
z

(4
...
In a scanning ranging system, there is an additional effect on the ranging accuracy,
caused by the measurement of the projection angle α
...
12 we see that
2

∂α
b sin α
------ = G α = ---------------2
∂z
z

(4
...
The larger b
is, the better the range resolution will be
...
As the baseline length b is increased, one introduces the chance that, for close objects, the illuminated point(s) may not be in the receiver’s field of view
...
Increasing the
detector length, however, means a larger sensor head and worse electrical characteristics
(increase in random error and reduction of bandwidth)
...

At one time, laser stripe-based structured light sensors were common on several mobile
robot bases as an inexpensive alternative to laser rangefinding devices
...

4
...
7 Motion/speed sensors
Some sensors measure directly the relative motion between the robot and its environment
...
There are a
number of sensors that inherently measure some aspect of motion or change
...
When a human walks across the sensor’s field

116

Chapter 4

b)

a)

v

Transmitter

Receiver

v

Transmitter/
Receiver
Object

Figure 4
...


of view, his or her motion triggers a change in heat in the sensor’s reference frame
...

These sensors represent a well-known technology with decades of general applications
behind them
...

4
...
7
...

A transmitter emits an electromagnetic or sound wave with a frequency f t
...
16a) or reflected from an object (figure 4
...
The measured frequency f r at the receiver is a function of the relative speed v between transmitter
and receiver according to
1
f r = f t -----------------1+v⁄c

(4
...
16)

if the receiver is moving
...
16b) there is a factor of 2 introduced, since any
change x in relative separation affects the round-trip path length by 2x
...


Perception

117

2f t v cos θ
∆f = f t – f r = --------------------c

(4
...
18)

where
∆f = Doppler frequency shift;
θ = relative angle between direction of motion and beam axis
...
It has a wide spectrum
of applications:
• Sound waves: for example, industrial process control, security, fish finding, measure of
ground speed
...

A current application area is both autonomous and manned highway vehicles
...
Both systems
have equivalent range, but laser can suffer when visual signals are deteriorated by environmental conditions such as rain, fog, and so on
...
These systems are called VORAD
(vehicle on-board radar) and have a total range of approximately 150 m
...
The beam is approximately 4 degrees wide and 5 degrees in elevation
...
Existing systems can provide
information on multiple targets at approximately 2 Hz
...
1
...
It provides us with an enormous amount of information
about the environment and enables rich, intelligent interaction in dynamic environments
...
The first step in this
process is the creation of sensing devices that capture the same raw information light that
the human vision system uses
...
These sensors have specific limitations in performance when compared to the human eye, and it is important that the reader understand
these limitations
...
17
Commercially available CCD chips and CCD cameras
...
howstuffworks
...
htm)
...

4
...
8
...
The charged coupled device is the most popular basic ingredient of
robotic vision systems today
...
17) is an array of light-sensitive
picture elements, or pixels, usually with between 20,000 and several million pixels total
...
First, the capacitors of all pixels are charged fully, then the integration period
begins
...
Over time, each pixel accumulates a varying level
of charge based on the total number of photons that have struck it
...
In a CCD,
the reading process is performed at one corner of the CCD chip
...
This means that each charge must be transported across the chip, and it is
critical that the value be preserved
...

The photodiodes used in CCD chips (and CMOS chips as well) are not equally sensitive
to all frequencies of light
...

It is important to remember that photodiodes are less sensitive to the ultraviolet end of the
spectrum (e
...
, blue) and are overly sensitive to the infrared portion (e
...
, heat)
...
There are two
common approaches for creating color images
...
Normally, two pixels measure green
while one pixel each measures red and blue light intensity
...
The number of pixels in the system has been
effectively cut by a factor of four, and therefore the image resolution output by the CCD
camera will be sacrificed
...
Three separate CCD chips receive the light, with
one red, green, or blue filter over each entire chip
...
Resolution is preserved in this solution, although the three-chip color
cameras are, as one would expect, significantly more expensive and therefore more rarely
used in mobile robotics
...
This means that
the overall system detects blue light much more poorly than red and green
...
It is not uncommon to assume at least one to two bits of additional noise on the blue channel
...

The CCD camera has several camera parameters that affect its behavior
...
In others, the values are constantly changing based on built-in
feedback loops
...
The iris position and shutter speed regulate the amount of light being measured by the camera
...
Shutter speed regulates the integration period of the
chip
...
Camera gain controls the overall amplification of the analog signal, prior to A/D conversion
...
Thus gain merely amplifies the signal, and amplifies along with the
signal all of the associated noise and error
...
g
...


120

Chapter 4

In color cameras, an additional control exists for white balance
...
g
...
), the relative measurements of red, green, and blue light that
define pure white light will change dramatically
...
White balance controls enable the user to
change the relative gains for red, green, and blue in order to maintain more consistent color
definitions in varying contexts
...
As mentioned above, a number of parameters can change the brightness
and colors with which a camera creates its image
...
For more details on the fields of color constancy and
luminosity constancy, consult [40]
...
In cases of very low illumination, each pixel will receive only a
small number of photons
...
e
...
e
...
In cases of
very high illumination, a pixel fills its well with free electrons and, as the well reaches its
limit, the probability of trapping additional electrons falls and therefore the linearity
between incoming light and electrons in the well degrades
...
When a well has
reached its limit, then additional light within the remainder of the integration period may
cause further charge to leak into neighboring pixels, causing them to report incorrect values
or even reach secondary saturation
...

The camera parameters may be adjusted for an environment with a particular light level,
but the problem remains that the dynamic range of a camera is limited by the well capacity
of the individual pixels
...
The noise level for reading the well may be 11 electrons, and therefore
the dynamic range will be 40,000:11, or 3600:1, which is 35 dB
...
The complementary metal oxide semiconductor chip is a significant
departure from the CCD
...
Just as in CCD chips, all of the pixels accumulate
charge during the integration period
...
18
A commercially available, low-cost CMOS camera with lens attached
...
Using more traditional traces from general
semiconductor chips, the resulting pixel values are all carried to their destinations
...
First and foremost, there is
no need for the specialized clock drivers and circuitry required in the CCD to transfer each
pixel’s charge down all of the array columns and across all of its rows
...
Therefore, the same production lines that create microchips can create inexpensive
CMOS chips as well (see figure 4
...
The CMOS chip is so much simpler that it consumes
significantly less power; incredibly, it operates with a power consumption that is one-hundredth the power consumption of a CCD chip
...

On the other hand, the CMOS chip also faces several disadvantages
...
Many photons hit the transistors rather than the photodiode, making the CMOS
chip significantly less sensitive than an equivalent CCD chip
...
Time will doubtless bring the
high-end CMOS imagers closer to CCD imaging performance
...
As compared to the
human eye, these chips all have far poorer adaptation, cross-sensitivity, and dynamic range
...
Only over time, as the underlying
performance of imaging chips improves, will significantly more robust vision-based sensors for mobile robots be available
...
Although digital cameras have inherently digital output,
throughout the 1980s and early 1990s, most affordable vision modules provided analog
output signals, such as NTSC (National Television Standards Committee) and PAL (Phase
Alternating Line)
...
The D/A and A/D steps are far from
noisefree, and furthermore the color depth of the analog signal in such cameras was optimized for human vision, not computer vision
...
At the most basic level, an imaging chip provides parallel digital I/O (input/output) pins that communicate discrete pixel level values
...
To relieve the real-time
demands, researchers often place an image buffer chip between the imager’s digital output
and the computer’s digital inputs
...

At the highest level, a roboticist may choose instead to utilize a higher-level digital
transport protocol to communicate with an imager
...
0) standards, although some older imaging
modules also support serial (RS-232)
...
Take note, however, of the distinction between lossless
digital video and the standard digital video stream designed for human visual consumption
...
For
vision researchers, such compression must be avoided as it not only discards information
but even introduces image detail that does not actually exist, such as MPEG (Moving Picture Experts Group) discretization boundaries
...
1
...
2 Visual ranging sensors
Range sensing is extremely important in mobile robotics as it is a basic input for successful
obstacle avoidance
...
It is natural to attempt to implement ranging
functionality using vision chips as well
...
Any vision chip collapses the 3D world into a 2D image plane, thereby losing depth
information
...
But such assumptions are rarely possible in real-world

123

focal plane
image plane

Perception

(x, y, z)

(xl, yl)

f
d

e

δ

Figure 4
...
In order to get a sharp image, the image
plane must coincide with the focal plane
...


mobile robot applications
...

The general solution is to recover depth by looking at several images of the scene to gain
more information, hopefully enough to at least partially recover depth
...
They could
differ in viewpoint, yielding stereo or motion algorithms
...
This is the fundamental idea behind depth from focus and
depth from defocus techniques
...
Subsequently, we present details for the correspondence-based techniques of depth
from stereo and motion
...
The depth from focus class of techniques relies on the fact that image
properties not only change as a function of the scene but also as a function of the camera
parameters
...
19
...
19)

If the image plane is located at distance e from the lens, then for the specific object
voxel depicted, all light will be focused at a single point on the image plane and the object
voxel will be focused
...
19, then the light from the object voxel will be cast on the image plane as a blur circle
...
20)

L is the diameter of the lens or aperture and δ is the displacement of the image plan
from the focal point
...
For example, if the aperture
or lens is reduced to a point, as in a pinhole camera, then the radius of the blur circle
approaches zero
...
Of course, the disadvantage of doing so is that we are allowing less light to form the image on the image plane and
so this is practical only in bright circumstances
...
Suppose the
image plane is at a fixed distance 1
...
2 and focal length
f = 0
...
We can see from equation (4
...
If the object is at distance d = 1 , then
from equation (4
...
2
...
533
...
20) in each case we can compute R = 0
...
08 respectively
...

In contrast, suppose the object is at d = 10
...
526
...
524
...
117 and R = 0
...
This analysis demonstrates the fundamental
limitation of depth from focus techniques: they lose sensitivity as objects move farther
away (given a fixed focal length)
...

Nevertheless, camera optics can be customized for the depth range of the intended application
...
20
Two images of the same scene taken with a camera at two different focusing positions
...
The scene is an outdoor
concrete step
...
Similarly, a large lens
diameter, coupled with a very fast shutter speed, will lead to larger, more detectable blur
circles
...
g
...
20)
...
The human visual system uses an abundance of cues and
techniques, and one system demonstrated in humans is depth from focus
...
Such approaches, in which
the lens optics are actively searched in order to maximize focus, are technically called depth
from focus
...

The depth from focus method is one of the simplest visual ranging techniques
...
When the sharpness is maximized, the corresponding position of the image plane directly reports range
...
Of course, a method is required for
measuring the sharpness of an image or an object within the image
...
21)

2

(4
...
21
The Cheshm robot uses three monochrome cameras as its only ranging sensor for obstacle avoidance
in the context of humans, static obstacles such as bushes, and convex obstacles such as ledges and
steps
...
21)]
is that the calculation can be implemented in analog circuitry using just a rectifier, a lowpass filter, and a high-pass filter
...
Such systems will be sensitive to contrast along one particular axis,
although in practical terms this is rarely an issue
...
For this reason this method has not been applied to mobile robots
...
This robot uses three monochrome cameras placed
as close together as possible with different, fixed lens focus positions (figure 4
...

Several times each second, all three frame-synchronized cameras simultaneously capture three images of the same scene
...
The approximate sharpness of each region is computed
using a variation of equation (4
...
Note
that equation (4
...
This is due to
a subtle but important issue
...
This means

Perception

127

that the odd rows are captured first, then afterward the even rows are captured
...
The result is an artificial blurring due to motion and not optical defocus
...

Recall that the three images are each taken with a camera using a different focus position
...
A 5 x 3
coarse depth map of the scene is constructed quickly by simply comparing the sharpness
values of each of the three corresponding regions
...
The critical step is to adjust the focus positions of all three cameras so that flat ground in front of
the obstacle results in medium readings in one row of the depth map
...

Although sufficient for obstacle avoidance, the above depth from focus algorithm presents unsatisfyingly coarse range information
...

Depth from defocus methods take as input two or more images of the same scene, taken
with different, known camera geometry
...
We begin by deriving the relationship between the actual scene properties (irradiance and depth), camera geometry settings, and the image g that is formed at the image
plane
...
Consider a pinhole aperture
( L = 0 ) in lieu of the lens
...

We define f ( x, y ) as the irradiance (or light intensity) at p due to the light from P
...

The point spread function h ( x g, y g, x f, y f, R x, y ) is defined as the amount of irradiance
from point P in the scene (corresponding to ( x f, y f ) in the focused image f that contributes
to point ( x g, y g ) in the observed, defocused image g
...
R , in turn, depends upon the distance from point P to the lens, as can be seen
by studying equations (4
...
20)
...
23)

2

Intuitively, point P contributes to the image pixel ( x g, y g ) only when the blur circle of
point P contains the point ( x g, y g )
...
24)

This equation relates the depth of scene points via R to the observed image g
...
However, this function has another unknown,
and that is f , the focused image
...

Given two images of the same scene, taken with varying camera geometry, in theory it
will be possible to solve for g as well as R because f stays constant
...
The classic
approach is known as inverse filtering because it attempts to directly solve for R , then
extract depth information from this solution
...
Suppose that the incoming light is split and
sent to two cameras, one with a large aperture and the other with a pinhole aperture [121]
...

With this approach, there remains a single equation with a single unknown, and so the solution is straightforward
...
Note, however, that the pinhole aperture necessitates a large amount of incoming light, and that furthermore the actual image intensities
must be normalized so that the pinhole and large-diameter images have equivalent total
radiosity
...
These matrix-based methods have
recently achieved significant improvements in accuracy over all previous work
...
The equations above do not require search algorithms to find the solution, as would
the correlation problem faced by depth from stereo methods
...


Perception

129

objects contour
(x, y, z)

origin
y

lens l

lens r
x

f

z
(xr, yr)

(xl, yl)
b/2

focal plane

b/2

Figure 4
...


As with all visual methods for ranging, accuracy decreases with distance
...

Stereo vision
...
The theory of
depth from stereo has been well understood for years, while the engineering challenge of
creating a practical stereo sensor has been formidable [16, 29, 30]
...

First, we consider a simplified case in which two cameras are placed with their optical
axes parallel, at a separation (called the baseline) of b, shown in figure 4
...

In this figure, a point on the object is described as being at coordinate ( x, y, z ) with
respect to a central origin located between the two camera lenses
...

Thus, the origin for the coordinate frame referenced by points of the form ( x l ,y l ) is located
at the center of lens l
...
22, it can be seen that
x
x
x+b⁄2
x–b⁄2
---l = ------------------ and ---r = -----------------f
z
f
z

(4
...
26)

where f is the distance of both lenses to the image plane
...
25) that
xl – xr
b
------------- = -f
z

(4
...
This is an
important term in stereo vision, because it is only by measuring disparity that we can
recover depth information
...
28)

Observations from these equations are as follows:
• Distance is inversely proportional to disparity
...
In general, this is acceptable for mobile robotics, because for navigation and
obstacle avoidance closer objects are of greater importance
...
For a given disparity error, the accuracy of the depth estimate increases with increasing baseline b
...
Such objects by definition
will not have a disparity and therefore will not be ranged successfully
...
Given one member of the conjugate pair, we know
that the other member of the pair lies somewhere along a line known as an epipolar line
...
22, because the cameras are perfectly aligned with one
another, the epipolar lines are horizontal lines (i
...
, along the x direction)
...

In order to optimize the range of distances that can be recovered, it is often useful to turn
the cameras inward toward one another, for example
...
22 shows the orientation
vectors that are necessary to solve this more general problem
...
The reference
frames of the cameras need not be aligned, and can indeed be at any arbitrary orientation
relative to one another
...
Note that these are the coordinates of point P , not the position of its
counterpart in the left camera image
...
If we have a rotation matrix R and translation matrix r 0 relating the relative positions of cameras l and r, then we can define r' r in terms of r' l :
r' r = R ⋅ r' l + r 0

(4
...

Expanding equation (4
...
30)

r 03

The above equations have two uses:
1
...
Of course, if we knew r' l then we would
have complete information regarding the position of P relative to the left camera, and
so the depth recovery problem would be solved
...
22, R = I (the identify matrix)
...
We could calibrate the system and find r11, r12 … given a set of conjugate pairs
{ ( x' l, y' l, z' l ), ( x' r, y' r, z' r ) }
...
This means that calibration requires, for a given
scene, four conjugate points
...
In fact, singlecamera calibration is itself an active area of research, particularly when the goal includes
any 3D recovery aspect
...
Such single-camera calibration involves finding solutions for the values for the
exact offset of the imaging chip relative to the optical axis, both in translation and angle,
and finding the relationship between distance along the imaging chip surface and external
viewed surfaces
...
e
...

A commonly practiced technique for such single-camera calibration is based upon
acquiring multiple views of an easily analyzed planar pattern, such as a grid of black
squares on a white background
...
Because modern imaging systems are capable of spatial accuracy greatly
exceeding the pixel size, the payoff of such refined calibration can be significant
...

Assuming that the calibration step is complete, we can now formalize the range recovery
problem
...
Instead, by virtue of the two cameras we have
pixels on the image planes of each camera, ( x l, y l, z l ) and ( x r, y r, z r )
...
31)

Let us concentrate first on recovery of the values z' l and z' r
...
30) and
(4
...
32)

Perception

133

x
y
y
 r ---l + r ---l + r  z' + r = ---r z'
22 23 l
02
 21 f
f
f r

(4
...
34)

The same process can be used to identify values for x' and y' , yielding complete information about the position of point P
...
This fundamental challenge, identifying the
conjugate pairs and thereby recovering disparity, is the correspondence problem
...

The correspondence problem, or the problem of matching the same object in two different inputs, has been one of the most challenging problems in the computer vision field and
the artificial intelligence fields
...
With more reliable data in hand, stereo algorithms search for the best conjugate pairs
representing as many of the images’ pixels as possible
...
This has been the chief technology driver in stereo vision algorithms, and one particular method has become widely used in commercially available systems
...
ZLoG is a strategy for identifying features in the left and right camera images that are stable and will match well, yielding
high-quality stereo depth recovery
...
It has led to several commercial stereo vision systems and yet it is
extremely simple
...

The core of ZLoG is the Laplacian transformation of an image
...
Formally, the Laplacian L ( x, y ) of an image with
intensities I ( x, y ) is defined as
2

L ( x, y ) =

∂I
∂x

2

2

+

∂I
∂y

2

(4
...
Such a transformation, called a convolution, must be computed over the discrete
space of image pixel values, and therefore an approximation of equation (4
...
36)

We depict a discrete operator P , called a kernel, that approximates the second derivative
operation along both axes as a 3 x 3 table:
0 1 0
1 –4 1
0 1 0

(4
...
The kernel defines
the contribution of each pixel in the image to the corresponding pixel in the target as well
as its neighbors
...
37) causes pixel I ( 5, 5 ) to make the following contributions to the target image L :
L ( 5, 5 ) += -40;
L ( 4, 5 ) += 10;
L ( 6, 5 ) += 10;
L ( 5, 4 ) += 10;
L ( 5, 6 ) += 10
...
The second
derivative will have a sharp positive peak followed by a sharp negative peak, as depicted
in figure 4
...
The Laplacian is used because of this extreme sensitivity to changes in the
image
...
We would like the Laplacian to
trigger large peaks due to real changes in the scene’s intensities, but we would like to keep
signal noise from triggering false peaks
...
Such smoothing can be
effected via convolution with a 3 × 3 table that approximates Gaussian smoothing:

Perception

135

Figure 4
...


1
----16
2
----16
1
----16

2
----16
4
----16
2
----16

1
----16
2
----16
1
----16

(4
...
This should seem familiar
...
It is, nonetheless, very effective at removing
high-frequency noise, just as blurring removes fine-grained detail
...

The result of Laplacian of Gaussian (LoG) image filtering is a target array with sharp
positive and negative spikes identifying boundaries of change in the original image
...

To solve the correspondence problem, we would like to identify specific features in LoG
that are amenable to matching between the left camera and right camera filtered images
...


136

Chapter 4

Figure 4
...


Many zero crossings do lie at edges in images, but their occurrence is somewhat broader
than that
...
The accuracy can even be
further enhanced by using interpolation to establish the position of the zero crossing with
subpixel accuracy
...

Figure 4
...

Several commercial stereo vision depth recovery sensors have been available for
researchers over the past 10 years
...
25
...
The correspondence problem
is solved at the level of these subregions, a process called area correlation, and after correspondence is solved the results are interpolated to one-fourth pixel precision
...
24
Extracting depth information from a stereo image
...
(b1 and b2) Vertical edge filtered left and right image: filter = [1 2 4 -2 -10 -2 4 2 1]
...
(d) Depth image (disparity): bright = close; dark = far
...
This is valuable because such additional information can be
used over time to eliminate spurious, incorrect stereo matches that have poor match quality
...
The SVM consists of sensor hardware, including two CMOS cameras and
DSP (Digital Signal Processor) hardware
...
g
...
On a 320 x 240
pixel image pair, the SVM assigns one of thirty-two discrete levels of disparity (i
...
, depth)
to every pixel at a rate of twelve frames per second (based on the speed of a 233 MHz Pentium II)
...

It is important to note that the SVM uses CMOS chips rather than CCD chips, demonstrating that resolution sufficient for stereo vision algorithms is readily available using the
less expensive, power efficient CMOS technology
...
It is instructive to observe the published resolution values
for the SVM sensor
...
These values are based on ideal circumstances, but nevertheless exemplify the rapid loss in resolution that will accompany vision-based ranging
...
1
...
3 Motion and optical flow
A great deal of information can be recovered by recording time-varying images from a
fixed (or moving) camera
...
If a point in the
environment moves with velocity v 0 , then this induces a velocity v i in the image plane
...

• Optical flow: it can also be true that brightness patterns in the image move as the object
that causes them moves (light source)
...

In our analysis here we assume that the optical flow pattern will correspond to the
motion field, although this is not always true in practice
...
26a
where a sphere exhibits spatial variation of brightness, or shading, in the image of the
sphere since its surface is curved
...
In figure 4
...
Here we have a fixed sphere with a moving light
source
...
In this case the optical

Perception

139

a)

b)

Figure 4
...


flow is nonzero but the motion field is zero
...

Optical Flow
...
Most algorithms use local information,
attempting to find the motion of a local patch in two consecutive images
...
Below we present details for the optical flow constraint equation
method
...

Suppose first that the time interval between successive snapshots is so fast that we can
assume that the measured intensity of a portion of the same object is effectively constant
...
If
u ( x, y ) and v ( x, y ) are the x and y components of the optical flow vector at that point,
we need to search a new image for a point where the irradiance will be the same at time
t + δt , that is, at point ( x + δt, y + δt ) , where δx = u δt and δy = v δt
...
39)

for a small time interval, δt
...
If we further assume that the brightness of the image varies smoothly, then
we can expand the left hand side of equation (4
...
40)

where e contains second- and higher-order terms in δx , and so on
...
41)

from which we can abbreviate
dy
dx
u = ----- ; v = ---dt
dt

(4
...
43)

and

so that we obtain
Ex u + Ey v + Et = 0

(4
...
Altogether, equation (4
...

We need to calculate both u and v for each pixel, but the optical flow constraint equation
only provides one equation per pixel, and so this is insufficient
...

The solution to this ambiguity requires an additional constraint
...
This constraint is interesting in that we know it will be violated
to some degree, but we enforce the constraint nonetheless in order to make the optical flow
computation tractable
...
Of
course, such situations will tend to include edges, and so this may introduce a useful visual
cue
...
45)

which is the integral of the square of the magnitude of the gradient of the optical flow
...

ec =

2

∫ ∫ ( Ex u + Ey v + Et ) dx dy

(4
...
A large parameter should be used if the brightness measurements are accurate and small if they are noisy
...

The resulting problem then amounts to the calculus of variations, and the Euler equations yield
2

(4
...
48)

∇ u = λ ( E x u + E y v + E t )E x
∇ v = λ ( E x u + E y v + E t )E y
where
2

2



2
∇ = ------- + ------2
2
δx δy

(4
...

Equations (4
...
48) form a pair of elliptical second-order partial differential
equations which can be solved iteratively
...
This of course violates the smoothness constraint
...

Another possibility is to opportunistically make use of these distinctive edges
...

Optical flow promises to be an important ingredient in future vision algorithms that
combine cues across multiple algorithms
...
27
Color markers on the top of EPFL’s STeam Engine soccer robots enable a color-tracking sensor to
locate the robots and the ball in the soccer field
...

4
...
8
...
An important aspect of vision-based sensing is that
the vision chip can provide sensing modalities and cues that no other mobile robot sensor
provides
...

Color represents an environmental characteristic that is orthogonal to range, and it represents both a natural cue and an artificial cue that can provide new information to a mobile
robot
...
27)
...
First, detection of color is a straightforward function of a single image, therefore no correspondence problem need be solved in
such algorithms
...
e
...

Efficient color-tracking sensors are now available commercially
...


Perception

143

Cognachrome color-tracking system
...
The system will detect color blobs based on three
user-defined colors at a rate of 60 Hz
...

This sensor uses a technique called constant thresholding to identify each color
...
The 3D box defined by these six constraints forms a color bounding
box, and any pixel with RGB values that are all within this bounding box is identified as a
target
...

The Cognachrome sensor achieves a position resolution of one pixel for the centroid of
each object in a field that is 200 x 250 pixels in size
...
All processing is performed on sensor-specific
hardware (i
...
, a Motorola 68332 processor and a mated framegrabber)
...

CMUcam robotic vision sensor
...
The CMUcam sensor
is a recent system that mates a low-cost microprocessor with a consumer CMOS imaging
chip to yield an intelligent, self-contained vision sensor for $100, as shown in figure 4
...

This sensor is designed to provide high-level information extracted from the camera
image to an external processor that may, for example, control a mobile robot
...
Then, the vision sensor processes the data in
real time and outputs high-level information to the external consumer
...

Figure 4
...
The approximate shape of the object
is extracted as well as its bounding box and approximate center of mass
...
Because of the rapid speedup of processors
in recent times, there has been a trend toward executing basic vision processing on a main

144

Chapter 4

Figure 4
...


Figure 4
...


processor within the mobile robot
...
In this spirit, the CMVision color-tracking
software represents a state-of-the-art software solution for color tracking in dynamic environments [47]
...

The basic algorithm this sensor uses is constant thresholding, as with Cognachrome,
with the chief difference that the YUV color space is used instead of the RGB color space
when defining a six-constraint bounding box for each color
...
Y represents the image’s luminosity while U

Perception

145

and V together capture its chrominance
...

The CMVision color sensor achieves a resolution of 160 x 120 and returns, for each
object detected, a bounding box and a centroid
...

Key performance bottlenecks for both the CMVision software, the CMUcam hardware
system, and the Cognachrome hardware system continue to be the quality of imaging chips
and available computational speed
...

4
...
1
...
As mentioned there, sensors are imperfect devices with errors of both systematic and random nature
...

But when you build a mobile robot, you combine information from many sensors, even
using the same sensors repeatedly, over time, to possibly build a model of the environment
...
With a quantitative tool in hand, the standard Gaussian uncertainty model can be presented and evaluated
...

4
...
1 Statistical representation
We have already defined error as the difference between a sensor measurement and the true
value
...
Let us formulate the problem of sensing as an estimation problem
...
The goal is to characterize the estimate of the true value E [ X ] given these measurements:
E [ X ] = g ( ρ 1, ρ 2, …, ρ n )

(4
...
30
A sample probability density function, showing a single probability peak (i
...
, unimodal) with asymptotic drops in both directions
...
We use a probability density function to characterize the statistical
properties of the value of X
...
30, the density function identifies for each possible value x of X a probability density f ( x ) along the y -axis
...
51)

The probability of the value of X falling between two limits a and b is computed as
the bounded integral:
P[a < X ≤ b] =

b

∫a f ( x ) dx

(4
...
Using f ( x ) we can quantitatively define the mean, variance, and standard
deviation as follows
...
We can easily define
E[X] :

Perception

147



∫–∞ xf ( x ) dx

µ = E[ X] =

(4
...
In contrast, the mean square value is simply the weighted average of the squares of all values of x :
2

E[X ] =



2

∫–∞ x f ( x ) dx

(4
...
55)
2

Finally, the standard deviation σ is simply the square root of variance σ , and σ will
play important roles in our characterization of the error of a single sensor as well as the error
of a model generated by combining multiple sensor readings
...
2
...
1 Independence of random variables
...

For instance, a mobile robot’s laser rangefinder may be used to measure the position of a
feature on the robot’s right and, later, another feature on the robot’s left
...

Two random variables X 1 and X 2 are independent if the particular value of one has no
bearing on the particular value of the other
...
First, the expected value (or mean
value) of the product of random variables is equal to the product of their mean values:
E [ X 1 X 2 ] = E [ X 1 ]E [ X 2 ]

(4
...
57)

In mobile robotics, we often assume the independence of random variables even when
this assumption is not strictly true
...
26%
95
...
72%

-3σ -2σ



σ





Figure 4
...
We shall refer to this as the reference Gaussian
...
44% of the values are falling within ± 2 σ
...
A further simplification, described in section 4
...
1
...

4
...
1
...
The Gaussian has many characteristics that make it mathematically advantageous to other ad hoc probability density functions
...
There is no particular bias for being larger than
or smaller than µ , and this makes sense when there is no information to the contrary
...
This distribution also has tails (the
value of f ( x ) as x approaches – ∞ and ∞ ) that only approach zero asymptotically
...
In this sense, the Gaussian is conservative
...
58)

The Gaussian’s basic shape is determined by the structure of this formula, and so the
only two parameters required to fully specify a particular Gaussian are its mean, µ , and its

Perception

149

X1





Y1

Xi

Yi




System

Xn

Ym

Figure 4
...


standard deviation, σ
...
31 shows the Gaussian function with µ = 0 and σ = 1
...
How does one identify the
chance that the value of X is within one standard deviation of µ ? In practice, this requires
integration of f ( x ) , the Gaussian function to compute the area under a portion of the curve:
Area =

σ

∫– σ f ( x ) dx

(4
...
59), and
so the common technique is to use a Gaussian cumulative probability table
...
68 ;
P [ µ – 2σ < X ≤ µ + 2σ ] = 0
...
997
...

This applies to any Gaussian distribution
...

4
...
2 Error propagation: combining uncertain measurements
The probability mechanisms above may be used to describe the errors associated with a
single sensor’s attempts to measure a real-world value
...
For example, a series of uncertain measurements of single points can be fused to
extract the position of a line (e
...
, a hallway wall) in the environment (figure 4
...

Consider the system in figure 4
...
The question of interest is: what can we say about

150

Chapter 4

Y

f(x)

µy + σy
µy
µy – σy

µx – σx

µx

X

µx + σx

Figure 4
...


the probability distribution of the output signals Y i if they depend with known functions
f i upon the input signals? Figure 4
...

The general solution can be generated using the first order Taylor expansion of f i
...
60)

CY = FX CX FX
where
C X = covariance matrix representing the input uncertainties;

C Y = covariance matrix representing the propagated uncertainties for the outputs;
F x is the Jacobian matrix defined as

F X = ∇f = ∇ X ⋅ f ( X )

T

f1

T

=




=
:
∂ X1
∂ Xn
fm

This is also the transpose of the gradient of f ( X )
...

∂f m
∂f m
-------- … -------∂X 1
∂X n

(4
...
60) to solve an
example problem in section 4
...
1
...

4
...
A wide
variety of sensing technologies are available, as shown in the previous section
...
Therefore, sensor inputs must be used in a way that
enables the robot to interact with its environment successfully in spite of measurement
uncertainty
...

One strategy is to use each sensor measurement as a raw and individual value
...
Alternatively, the raw sensor values could be
used to update an intermediate model, with the robot’s actions being triggered as a function
of this model rather than the individual sensor measurements
...
We call this process feature extraction, and it is this next,
optional step in the perceptual interpretation pipeline (figure 4
...

In practical terms, mobile robots do not necessarily use feature extraction and scene
interpretation for every activity
...
For example, in order to guarantee emergency
stops in the face of immediate obstacles, the robot may make direct use of raw forwardfacing range readings to stop its drive motors
...
For map-building and precise navigation, the range sensor values
and even vision sensor measurements may pass through the complete perceptual pipeline,
being subjected to feature extraction followed by scene interpretation to minimize the
impact of individual sensor uncertainty on the robustness of the robot’s mapmaking and
navigation skills
...

Feature definition
...

They usually can be extracted from measurements and mathematically described
...
We distinguish

Environment

152

Chapter 4

sensing

signal
treatment

feature
extraction

scene
interpretation

Figure 4
...


between low-level features (geometric primitives) like lines, circles, or polygons, and highlevel features (objects) such as edges, doors, tables, or a trash can
...
Making use of raw data has the potential advantage that every bit of information is fully used, and thus there is a high conservation of information
...
The hope, when one incorporates low-level
features, is that the features are filtering out poor or useless data, but of course it is also
likely that some valid information will be lost as a result of the feature extraction process
...

Once again, the abstraction process has the risk of filtering away important information,
potentially lowering data utilization
...
For example, a corner feature inhabits a specific coordinate location in the geometric world
...

In mobile robotics, features play an especially important role in the creation of environmental models
...
When designing a
mobile robot, a critical decision revolves around choosing the appropriate features for the
robot to use
...
For geometric features to be useful, the target geometries must be
readily detected in the actual environment
...


Perception

a)

153

b)

Figure 4
...
Courtesy of Sjur Vestli
...
Obviously, the specific sensors and sensor uncertainty of the robot
impacts the appropriateness of various features
...
In contrast, a sonar-equipped
robot may not have the appropriate tools for corner feature extraction
...
Vision-based feature extraction can effect a significant computational cost, particularly in robots where the vision sensor processing is performed by one
of the robot’s main processors
...
Feature extraction is an important step toward scene interpretation, and by this token the features extracted must provide information that is consonant with the representation used for the environmental model
...
Figure 4
...

Each approach has advantages and disadvantages, but extraction of line and corner features
has much more relevance to the representation on the left
...
5
for a close look at map representations and their relative trade-offs
...

4
...
1 Feature extraction based on range data (laser, ultrasonic, vision-based ranging)
Most of today’s features extracted from ranging sensors are geometric primitives such as
line segments or circles
...
Here we describe line extraction in detail, demonstrating how the uncertainty
models presented above can be applied to the problem of combining multiple sensor measurements
...

4
...
1
...
Usually, the
system is overdetermined in that the number of sensor measurements exceeds the number
of feature parameters to be estimated
...

One can, for example, extract the feature that minimizes the discrepancy with all sensor
measurements used (e
...
least-squares estimation)
...
For greater detail than is presented below, refer to [14, pp
...

Probabilistic line extraction from uncertain range sensor data
...
36
...
Instead, we wish to select the best possible match,
given some optimization criterion
...
We know that there is uncertainty associated with each measurement, and so we can model each measurement using two random
variables X i = ( P i , Q i )
...
Based on equation (4
...
62)

Perception

155

xi = (ρi, θi)
r

di

α
Figure 4
...
The model parameters y (length of the perpendicular) and
α (its angle to the abscissa) uniquely describe a line
...
63)

E [ P i ⋅ Q j ] = E [ P i ]E [ Q j ]

∀ i, j = 1, …, n

(4
...
65)

(4
...
If there were no error, we would want to find
a line for which all measurements lie on that line:
ρ cos θ cos α + ρ sin θ sin α – r = ρ cos ( θ – α ) – r = 0

(4
...
When it is
nonzero, this is a measure of the error between the measurement point ( ρ, θ ) and the line,
specifically in terms of the minimum orthogonal distance between the point and the line
...

For example a number of line extraction techniques do not minimize this orthogonal point-

156

Chapter 4

line distance, but instead the distance parallel to the y-axis between the point and the line
...

For each specific ( ρ i, θ i ) , we can write the orthogonal distance d i between ( ρ i, θ i ) and
the line as
ρ i cos ( θ i – α ) – r = d i
...
68)

If we consider each measurement to be equally uncertain, we can sum the square of all
errors together, for all measurement points, to quantify an overall fit between the line and
all of the measurements:
S =

∑ di
i

2

=

∑ ( ρi cos ( θi – α ) – r )
i

2

(4
...
We can do so by
solving the nonlinear equation system
∂S
= 0
∂α

∂S
= 0
...
70)

The above formalism is considered an unweighted least-squares solution because no
distinction is made from among the measurements
...
For example, we know with regard to vision-based
stereo ranging that uncertainty and, therefore, variance increases as a square of the distance
2
between the robot and the object
...


(4
...
69) becomes

2
...
See [9] for a careful treatment
...


(4
...
70) in the weighted least-squares sense3
is
2
2

w i w j ρ i ρ j cos θ i sin θ j 
w i ρi sin 2θ i – ------

Σ wi
1
α = --atan  ------------------------------------------------------------------------------------------------------------------- 
1
2
 w ρ 2 cos 2θ – ------w i w j ρ i ρ j cos ( θ i + θ j )
i i
i


Σ wi







∑∑

∑∑

w i ρ i cos ( θ i – α )
r = --------------------------------------------wi



(4
...
74)

In practice, equation (4
...

Let us demonstrate equations (4
...
74) with a concrete example
...
2 have been taken with a laser range sensor installed on a
mobile robot
...

Direct application of the above solution equations yields the line defined by α = 37
...
4
...
37
...
Returning to the subject of section
4
...
3, we would like to understand how the uncertainties of specific range sensor measurements propagate to govern the uncertainty of the extracted line
...
73) and (4
...
We follow here the notation of [14] and distinguish a weighted least-squares problem if C X is
diagonal (input errors are mutually independent) and a generalized least-squares problem if C X is
nondiagonal
...
Atan2 computes tan ( x ⁄ y ) but uses the signs of both x and y to determine the quadrant in which
the resulting angles lies
...


158

Chapter 4

y

ρ
α
x

Figure 4
...
The small lines at each measurement point represent the measurement uncertainty σ that is proportional to the square of the measurement distance
...
2
Measured values
pointing angle of sensor θi
[deg]
0
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80

range ρi
[m]
0
...
4404
0
...
4222
0
...
4371
0
...
3949
0
...
4276
0
...
3956
0
...
4752
0
...
5273
0
...
60) with A and R representing the
random output variables of α and r respectively
...
75)

given the 2n × 2n input covariance matrix

CX =

CP 0
0 CQ

2

0

0

=

diag ( σ ρ )

diag ( σ θ )

i

(4
...
73) and (4
...
Then by calculating the Jacobian,

F PQ =

∂α ∂α
∂α ∂α ∂α
∂α


∂ P1 ∂ P2
∂ Pn ∂ Q1 ∂ Q2
∂ Qn
∂r ∂r
∂r ∂r ∂r
∂r


∂ P1 ∂ P2
∂ Pn ∂ Q1 ∂ Q2
∂ Qn

(4
...
63) to yield C AR :
T

C AR = F PQ C X F PQ

(4
...
For more details about this method refer to [6, 37]
4
...
1
...
Unfortunately, the feature extraction process is significantly more complex than
that
...
Rather, only some of the range measurements should play a role in line extraction and, further, there may be more than one line

160

Chapter 4

a) Image Space

b) Model Space
β1=r [m]

β0=α [rad]
Evidence accumulation in the model space
→ Clusters of normally distributed vectors

A set of nf neighboring points
of the image space

Figure 4
...


feature represented in the measurement set
...
38
...
A diverse set of techniques exist for segmentation of sensor input in general
...

For example, one segmentation technique is the merging, or bottom-up technique in
which smaller features are identified and then merged together based on decision criteria to
extract the goal features
...
38
...
First, one may generate a large number of line segments based on adjacent groups of
range measurements
...
The simplest measure of the
closeness of two line segments5 x 1 = [ α 1, r 1 ] and x 2 = [ α 2, r 2 ] in the model space is
given by Euclidean distance:
T

2

( x1 – x 2 ) ( x1 – x2 ) = ( α1 – α2 ) + ( r1 – r2 )

5
...


2

(4
...
80)

where d m is a threshold value and x is the representation of the reference line (from a
model, average of a group of lines, etc
...
80) does not take into account the fact that for each measurement and therefore for each line segment we have a measure of uncertainty
...
81)

The distance measure of equation (4
...

4
...
1
...
An angle histogram, as presented in figure 4
...
First, a 360-degree scan of the room is taken with the range scanner,
and the resulting “hits” are recorded in a map
...
39b)
...
39c can be built
...
Detection of peaks yields only two main peaks: one
for each pair of parallel walls
...

4
...
1
...
In general, a mobile robot makes use of multiple features simultaneously, comprising a feature
set that is most appropriate for its operating environment
...

In addition, other geometric kernels consistently appear throughout the indoor manmade environment
...
Step
discontinuities, defined as a step change perpendicular to the direction of hallway travel,

162

Chapter 4

a)

b)
B
A

δ

C

E
D

F

20°

c)

n

δ [°]
Figure 4
...


are characterized by their form (convex or concave) and step size
...

Thus, the standard segmentation problem is not so simple as deciding on a mapping from
sensor readings to line segments, but rather it is a process in which features of different
types are extracted based on the available sensor measurements
...
40 shows a model
of an indoor hallway environment along with both indentation features (i
...
, step discontinuities) and doorways
...
The line feature, for example, provides two degrees of information, angle and distance
...

The set of useful geometric features is essentially unbounded, and as sensor performance improves we can only expect greater success at the feature extraction level
...
40
Multiple geometric features in a single hallway, including doorways and discontinuities in the width
of the hallway
...
Because stereo vision provides a full
3D set of range measurements, one can extract plane features in addition to line features
from the resulting data set
...
Thus they are promising as
another highly informative feature for mobile robots to use for mapping and localization
...
3
...
Significant research effort has been dedicated over the past several decades,
to inventing algorithms for understanding a scene based on 2D images and the research
efforts have slowly produced fruitful results
...
To explore these disciplines,
refer to [18, 29, 159]
...
41
...
1
...
These specific vision applications have
witnessed commercial solutions primarily because the challenges are in both cases relatively well focused and the resulting, problem-specific algorithms are straightforward
...
We would like
to solve the more general problem of extracting a large number of feature types from
images
...
41
Scheme and tools in computer vision
...


This section presents some appearance-based feature extraction techniques that are relevant to mobile robotics along these lines
...
First, the method must
operate in real time
...
Second, the method must be robust to the real-world
conditions outside of a laboratory
...

Throughout the following descriptions, keep in mind that vision-based interpretation is
primarily about the challenge of reducing information
...
By contrast, a CCD camera can output 240 million bits per
second! The sonar produces a tiny amount of information from which we hope to draw
broader conclusions
...
For
example, we may intend to measure the color of a landmark
...
Therefore the problem of visual feature
extraction is largely one of removing the majority of irrelevant information in an image so
that the remaining information unambiguously describes specific features in the environment
...
Spatially localized features are those features found in subregions of one or
more images, corresponding to specific locations in the physical world
...

Before continuing it is important to note that all vision-based sensors supply images
with such a significant amount of noise that a first step usually consists of “cleaning” the
image before launching any feature extraction algorithm
...

Image preprocessing
...
Indeed, the Laplacian of Gaussian method we studied in section 4
...
8
...
Because of the susceptibility of such
high-order derivative algorithms to changes in illumination in the basic signal, it is important to smooth the signal so that changes in intensity are due to real changes in the luminosity of objects in the scene rather than random variations due to imaging noise
...
1
...
2:
ˆ
I = G⊗I

(4
...
83)

Such a low-pass filter effectively removes high-frequency noise, and this in turn causes
the first derivative and especially the second derivative of intensity to be far more stable
...


166

Chapter 4

a)

b)

Figure 4
...
(b) Edges computed from (a)
...
3
...
1 Spatially localized features
In the computer vision community many algorithms assume that the object of interest occupies only a sub-region of the image, and therefore the features being sought are localized
spatially within images of the scene
...
This makes them particularly applicable to geometric models of the robot’s
environment
...
However, mobile robots face the specific mobility challenges of obstacle avoidance
and localization
...
Finally, in view
of the need for localization we discuss the role of vision-based feature extraction in the
detection of robot navigation landmarks
...
Figure 4
...
Edges define regions in the image plane
where a significant change in the image brightness takes place
...
The hypothesis is that edge contours
in an image correspond to important scene contours
...
42b shows, this is not
entirely true
...
Typically, there are missing contours, as well as noise contours, that do not correspond to anything of significance in the scene
...
23
...
23 (top left)
shows the 1D section of an ideal edge
...
23 (top right)
...

A naive edge detector would simply differentiate, since an edge by definition is located
where there are large transitions in intensity
...
23 (bottom right), differentiation of the noisy camera signal results in subsidiary peaks that can make edge detection very challenging
...

Below, we present several popular edge detection algorithms, all of which operate on this
same basic principle, that the derivative(s) of intensity, following some form of smoothing,
comprises the basic signal from which to extract edge features
...
The current reference edge detector throughout the
vision community was invented by John Canny in 1983 [30]
...

The Canny edge extractor smooths the image I via Gaussian convolution and then looks
for maxima in the (rectified) derivative
...
84)

Thus, smoothing the image by convolving with a Gaussian G σ and then differentiating
is equivalent to convolving the image with G' σ , the first derivative of a Gaussian (figure
4
...

We wish to detect edges in any direction
...
35)
...
The result
is a basic algorithm for detecting edges at arbitrary orientations:
The algorithm for detecting edge pixels at an arbitrary orientation is as follows:
1
...


168

Chapter 4

a)

b)

2

2

1
G σ ( x ) = -------------- e
2πσ

x
– --------2


–x
G σ' ( x ) = ---------------- e
3
2πσ

x
– --------2


Figure 4
...
(b) The first derivative of a Gaussian function
...
44
(a) Two-dimensional Gaussian function
...
(c) Horizontal filter
...
Define the square of the gradient magnitude R ( x, y ) = R V ( x, y ) + R H ( x, y )
...
Mark those peaks in R ( x, y ) that are above some predefined threshold T
...
A popular
next step in this process is nonmaxima suppression
...
45
(a) Example of an edge image; (b) Nonmaxima suppression of (a)
...
If not, then the value is set to zero
...
45)
...
First, find adjacent (or
connected) sets of edges and group them into ordered lists
...

Gradient edge detectors
...
Therefore simpler, discrete kernel operators are
commonly used to approximate the behavior of the Canny edge detector
...
He used two 2 x 2 masks to calculate the
gradient across the edge in two diagonal directions
...
Roberts obtained the gradient magnitude G with
the equation

2

2

G ≅ r1 + r 2 ; r 1 =

–1 0
0 –1
; r2 =
0 1
1 0

(4
...
Let
p 1 be the value calculated from the first mask and p 2 the value calculated from the second
mask
...

2

2

G ≅ p1 + p 2 ;

170

Chapter 4

a

b

c

d

Figure 4
...


p1
θ ≅ atan  ----  ;
 p 2

p1 =

–1 –1 –1
–1 0 1
0 0 0 ; p 2 = –1 0 1
–1 0 1
1 1 1

(4
...
Let s 1 be the value calculated from the first mask and s 2 the value calculated from the second mask
...
Figure 4
...


Perception

171

2

2

G ≅ s1 + s 2 ;
s1
θ ≅ atan  ---  ;
 s 2

s1 =

– 1 –2 – 1
–1 0 1
0 0 0 ; s2 = –2 0 2
1 2 1
–1 0 1

(4
...
Many image-processing algorithms have generally been tested in
laboratory conditions or by using static image databases
...
A vision system for mobile robots has to adapt to the changing illumination
...
The same scene with
different illumination results in edge images with considerable differences
...

To do this, a histogram of the gradient magnitudes of the processed image is calculated
(figure 4
...
With this simple histogram it is easy to consider only the n pixels with the
highest gradient magnitude for further calculation steps
...
The gradient magnitude of the point where n is reached
will be used as the temporary threshold value
...
Furthermore, for each
image, the same number of relevant edge pixels is considered, independent of illumination
...
Because most detectors use nonmaxima suppression, the
number of edge pixels will be further reduced
...
In mobile robotics the straight edge is
often extracted as a specific feature
...
The Hough transform is a
simple tool for extracting edges of a particular shape[16, 18]
...

Suppose a pixel ( x p, y p ) in the image I is part of an edge
...
This equation can only be
satisfied with a constrained set of possible values for m 1 and b 1
...


172

Chapter 4

a

b

Figure 4
...
46(b)
...
Any line passing through this second pixel
must satisfy the equation: y q = m 2 x q + b 2
...

More generally, for all pixels that are part of a single straight line through I , they must
all lie on a line defined by the same values for m and b
...
The Hough transform uses this basic property, creating a mechanism so that each edge pixel can “vote” for various values of the ( m, b ) parameters
...

• Initialize the array to zero: A [ m, b ] = 0 for all values of m, b
...

• Search the cells in A to identify those with the largest value
...


Perception

173

Floor plane extraction
...
Range-based sensors provide effective means for identifying most types of
obstacles facing a mobile robot
...
However, each ranging sensor has limitations
...
Most laser rangefinders are 2D, only detecting obstacles penetrating a specific
sensed plane
...

In addition to each individual shortcoming, range-based obstacle detection systems will
have difficulty detecting small or flat objects that are on the ground
...
In
addition, different types of floor surfaces cannot easily be discriminated by ranging
...

Floor plane extraction is a vision-based approach for identifying the traversable portions
of the ground
...

As is the case with all vision-based algorithms, floor plane extraction succeeds only in
environments that satisfy several important assumptions:
• Obstacles differ in appearance from the ground
...

• There are no overhanging obstacles
...
A stronger version of this assumption, sometimes invoked, states that
the ground is uniform in appearance and different from all obstacles
...

Floor plane extraction in artificial environments
...
Shakey, the first autonomous robot developed from 1966 through 1972 at SRI, used vision-based floor plane extraction in a
manufactured environment for obstacle detection [115]
...
Furthermore, the base of each wall was
painted with a high-contrast strip of black paint and the edges of all simple polygonal obstacles were also painted black
...
The lowest edges detected in an image corresponded to the closest
obstacles, and the direction of straight-line edges extracted from the image provided clues
regarding not only the position but also the orientation of walls and polygonal obstacles
...

Adaptive floor plane extraction
...
Such floor plane extraction algorithms
tend to use edge detection and color detection jointly while making certain assumptions
regarding the floor, for example, the floor’s maximum texture or approximate color range
[78]
...
A more recent approach is that of adaptive floor plane extraction, whereby the parameters defining the expected appearance of the
floor are allowed to vary over time
...
e
...
Then, statistics computed on these “floor sample” pixels can be used to classify
the remaining image pixels
...
The most popular solution is to construct one or more histograms
based on the floor sample pixel values
...

Histograms are also useful as discrete representations of distributions
...
Histograms can also be
updated very quickly and use very little processor memory
...

• Initialize a histogram array H with n intensity values: H [ i ] = 0 for i = 1, …, n
...

The histogram array H serves as a characterization of the appearance of the floor plane
...
Classification of each pixel in I as floor plane or obstacle is performed

Perception

175

Figure 4
...
The trapezoidal polygon identifies the floor sampling
region
...
For
example, if the target pixel has a hue that never occurred in the “floor sample,” then the
corresponding hue histogram will have a count of zero
...

Figure 4
...
Note that, unlike the static floor extraction algorithm, the adaptive algorithm is able to successfully classify a human shadow due to the
adaptive histogram representation
...

Appearance-based extraction of the floor plane has been demonstrated on both indoor
and outdoor robots for real-time obstacle avoidance with a bandwidth of up to 10 Hz
...

4
...
2
...


176

Chapter 4

Figure 4
...


Whole-image features are not designed to identify specific spatial structures such as obstacles or the positions of specific landmarks. Rather, from the perspective of robot localization, the goal is to extract one or more features from the image that correlate well with the robot's position.

We present two techniques for whole-image feature extraction below. In the first, direct technique, the resulting image histogram comprises a set of whole-image features derived directly from the pixel information of an image; in the second, tiered technique, spatially localized features are first extracted and then composed into a single whole-image feature.

Direct extraction: image histograms. A logical first step in designing a vision-based sensor for this purpose is to maximize the field of view of the camera, so that a single image captures as much of the robot's surroundings as possible. The catadioptric camera system, now very popular in mobile robotics, combines a conventional camera with a convex mirror to offer an extremely wide field of view [114]. The image provides a 360-degree view of the robot's environment, as shown in figure 4.49.
The catadioptric image is a 360-degree image warped onto a 2D image surface. If the camera is mounted vertically on the robot so that the image represents the environment surrounding the robot (i.e., its horizon), then rotation of the camera and robot simply results in image rotation. Of course, mobile robot rotation will still change the image; that is, pixel positions will change, although the new image will simply be a rotation of the original image. Because histogramming is a function of the set of pixel values and not the position of each pixel, the process is pixel-position invariant.
A color camera's output image generally contains useful information along multiple bands: r, g, and b values as well as hue, saturation, and luminance values. Given a color camera image G, the first step is to create mappings from G to each of the n available bands. Each band-specific histogram H_i is calculated as before:

• As preprocessing, smooth G_i using a Gaussian smoothing operator.
• For every pixel (x, y) in G_i, increment the histogram: H_i[G_i[x, y]] += 1.

For the image shown in figure 4.49, the image histogram technique extracts six histograms (one each for r, g, b, hue, saturation, and luminance), as shown in figure 4.50.
In order to make use of such histograms as whole-image features, we need ways to compare two histograms, quantifying the likelihood that the histograms map to nearby robot positions. A number of histogram comparison metrics exist; for an overview refer to [127]. Given two histograms H and K, with h_i and k_i denoting the histogram entries, the Jeffrey divergence d(H, K) is defined as

    d(H, K) = Σ_i [ h_i log( 2h_i / (h_i + k_i) ) + k_i log( 2k_i / (h_i + k_i) ) ]

[Figure 4.50: Six 1D histograms of the image above.]


Using metrics such as the Jeffrey divergence, mobile robots have been localized by comparing their current image histograms against those of previously recorded images of locations in their environment.
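A direct implementation of the Jeffrey divergence is short. In the Python/NumPy sketch below, the small epsilon guarding the logarithms is an implementation detail, not part of the definition above:

    import numpy as np

    def jeffrey_divergence(h, k, eps=1e-12):
        """Jeffrey divergence between two normalized histograms.
        Bins with zero mass in both histograms contribute zero."""
        h = np.asarray(h, dtype=float)
        k = np.asarray(k, dtype=float)
        m = h + k
        d = (h * np.log((2 * h + eps) / (m + eps))
             + k * np.log((2 * k + eps) / (m + eps)))
        return d.sum()

    # usage: pick the stored location whose histogram is closest
    # best = min(stored, key=lambda name: jeffrey_divergence(current, stored[name]))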

Tiered extraction: image fingerprint extraction. An alternative to extracting whole-image features directly from pixel values is a tiered approach: first identify spatially localized features in the image, then translate those features into a single whole-image feature. We describe one particular implementation of this approach, in which the resulting whole-image feature is called the image fingerprint [95].

The first extraction tier searches the panoramic image for spatially localized features: vertical edges and sixteen discrete hues of color. Vertical edges are "voted upon" by each edge pixel, just as in a vertical edge Hough transform, as described in section 4.3.2.1. Suppose the Hough table's tallies for each candidate vertical line have a mean µ and a standard deviation σ; only candidates with sufficiently strong support are retained.

Vertical color bands are identified in largely the same way, identifying statistics over the occurrence of each color, then filtering out all candidate color patches except those with tallies greater than µ + σ. Figure 4.51 shows two sample panoramic images and their associated fingerprints.

[Figure 4.51: Two panoramic images and their associated fingerprint sequences [95].]
[Figure 4.52: Three actual string sequences [95].]

Just as with histogram distance metrics in the case of image histogramming, we need a quantifiable measure of the distance between two fingerprint strings. Note that we may have strings that differ not just in a single element value, but even in their overall length. Figure 4.52 depicts three actual sequences generated using the above algorithm.

The technique used in the fingerprinting approach for string differencing is known as a minimum energy algorithm: an optimization-based method that finds the lowest-cost transformation of one sequence into the other. The result is a distance metric that is relatively insensitive to the addition or subtraction of individual local features, while still able to robustly identify the correct matching string in a variety of circumstances.

Whole-image features such as histograms and fingerprints are increasingly used for mobile robot localization. But it is spatially localized features that continue to play a dominant role, because of their immediate application to the more urgent need for real-time obstacle avoidance.
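The exact minimum energy formulation is beyond our scope here, but a classic edit distance, a simplified stand-in with the same qualitative behavior, illustrates why such string matching tolerates individual insertions and deletions. In this Python sketch the costs are illustrative assumptions, not values from the fingerprint work [95]:

    def fingerprint_distance(a, b, sub_cost=1.0, indel_cost=0.7):
        """Edit-distance stand-in for minimum-energy string matching."""
        n, m = len(a), len(b)
        # dp[i][j] = minimum cost to transform a[:i] into b[:j]
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * indel_cost
        for j in range(1, m + 1):
            dp[0][j] = j * indel_cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = dp[i - 1][j - 1] + (0.0 if a[i - 1] == b[j - 1] else sub_cost)
                dp[i][j] = min(match,
                               dp[i - 1][j] + indel_cost,
                               dp[i][j - 1] + indel_cost)
        return dp[n][m]

    # the best-matching stored fingerprint has the smallest distance:
    # best = min(database, key=lambda s: fingerprint_distance(query, s))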
5 Mobile Robot Localization

5.1 Introduction

Navigation is one of the most challenging competences required of a mobile robot. Success in navigation requires success at the four building blocks of navigation: perception, the robot must interpret its sensors to extract meaningful data; localization, the robot must determine its position in the environment (figure 5.1); cognition, the robot must decide how to act to achieve its goals; and motion control, the robot must modulate its motor outputs to achieve the desired trajectory. Of these four components (figure 5.2), localization has received the greatest research attention in the past decade and, as a result, significant advances have been made on this front.

First, section 5.2 describes how sensor and effector uncertainty is responsible for the difficulty of localization. Then, section 5.3 describes two extreme approaches to dealing with the challenge of robot localization: avoiding localization altogether, and performing explicit map-based localization. The remainder of the chapter discusses the question of representation, then presents case studies of localization systems.

[Figure 5.1: Where am I?]
[Figure 5.2: General schematic for mobile robot localization.]
5.2 The Challenge of Localization: Noise and Aliasing

If one could attach an accurate GPS (global positioning system) sensor to a mobile robot, much of the localization problem would be obviated. The GPS would inform the robot of its exact position, indoors and outdoors, so that the answer to the question, "Where am I?", would always be immediately available. The existing GPS network provides accuracy to within several meters, which is unacceptable for localizing human-scale mobile robots as well as miniature mobile robots such as desk robots and the body-navigating nanorobots of the future.

But, looking beyond the limitations of GPS, localization implies more than knowing one's absolute position in the Earth's reference frame. Consider a robot that interacts with humans: this robot may need to identify its absolute position, but its relative position with respect to target humans is equally important.

Furthermore, during the cognition step a robot will select a strategy for achieving its goals. The robot may need to acquire or build an environmental model, a map, that aids it in planning a path to the goal.

Clearly, the robot's sensors and effectors play an integral role in all the above forms of localization, and it is the imperfection of these sensors and effectors that makes localization challenging. This section identifies important aspects of this sensor and effector suboptimality.
2
...
Sensor noise induces a
limitation on the consistency of sensor readings in the same environmental state and, therefore, on the number of useful bits available from each sensor reading
...

For example, a vision system used for indoor navigation in an office building may use
the color values detected by its color CCD camera
...
As a result, hue values are not constant
...

Illumination dependence is only one example of the apparent noise in a vision-based
sensor system
...

Consider the noise level (i
...
, apparent random error) of ultrasonic range-measuring sensors (e
...
, sonars) as discussed in section 4
...
2
...
When a sonar transducer emits sound
toward a relatively smooth and angled surface, much of the signal will coherently reflect
away, failing to generate a return echo
...
When this level is close to the gain threshold of
the sonar sensor, then the sonar will, at times, succeed and, at other times, fail to detect the
object
...

The poor signal-to-noise ratio of a sonar sensor is further confounded by interference
between multiple sonar emitters
...
In acoustically reflective environments, multipath interference is possible between the sonar emissions of one transducer and the echo detection
circuitry of another transducer
...
e
...
Such errors occur rarely, less
than 1% of the time, and are virtually random from the robot’s perspective
...

Clearly, the solution is to take multiple readings into account, employing temporal fusion
or multisensor fusion to increase the overall information content of the robot’s inputs
...
2
...
The problem,
known as sensor aliasing, is a phenomenon that humans rarely encounter
...
In other words, every different place looks different
...
Consider
moving through an unfamiliar building that is completely dark
...
Another useful example is that
of a human-sized maze made from tall hedges
...

In robots, the nonuniqueness of sensor readings, or sensor aliasing, is the norm and not the exception. Consider a narrow-beam rangefinder: this sensor provides range information in a single direction, without any additional data regarding material composition such as color, texture, and hardness. Formally, there is a many-to-one mapping from environmental states to the robot's perceptual inputs. A classic problem with sonar-based robots involves distinguishing between humans and inanimate objects in an indoor setting.

The problem posed to navigation by sensor aliasing is that, even with noise-free sensors, the amount of information is generally insufficient to identify the robot's position from a single percept reading.


Mobile Robot Localization

185

5.2.3 Effector noise
The challenges of localization do not lie with sensor technologies alone. In particular, a single action taken by a mobile robot may have several different possible results, even though from the robot's point of view the initial state before the action was taken is well known. Therefore the simple act of moving tends to increase the uncertainty of a mobile robot's position estimate. Using cognition, the motion can be carefully planned so as to minimize this effect, and indeed sometimes to actually result in more certainty.

First, however, it is important to understand the precise nature of the effector noise that impacts mobile robots. The true source of error generally lies in an incomplete model of the environment; all of these unmodeled sources of error result in inaccuracy between the physical motion of the robot, the intended motion of the robot, and the proprioceptive sensor estimates of motion.

In odometry, the movement of the robot, sensed with wheel encoders or heading sensors or both, is integrated to compute position. Because the measurements are integrated, the position error accumulates over time; thus the position has to be updated from time to time by other localization mechanisms.

In the following we concentrate on odometry based on the wheel sensor readings of a differential-drive robot only (see also [4, 57, 58]).

There are many sources of odometric error, from environmental factors to resolution:

• Limited resolution during integration (time increments, measurement resolution, etc.);
• Unequal wheel contact and slippage (nonplanar surfaces, etc.).

Some of the errors might be deterministic (systematic); thus they can be eliminated by proper calibration of the system. The nondeterministic errors, however, remain.

From a geometric point of view one can classify the errors into three types:

1. Range error: integrated path length (distance) of the robot's movement → sum of the wheel movements;
2. Turn error: similar to range error, but for turns → difference of the wheel motions;
3. Drift error: a difference in the errors of the two wheels leads to an error in the robot's angular orientation.
Consider a robot whose position is initially perfectly well known, moving forward in a straight line along the x-axis. Over time, as the mobile robot moves about the environment, the rotational error between its internal reference frame and its original reference frame grows quickly. It is instructive to establish an error model for odometric accuracy and see how the errors propagate over time.
5.2.4 An error model for odometric position estimation

The pose of the robot is represented by the vector

    p = [x, y, θ]ᵀ      (5.1)

For a differential-drive robot, the position can be estimated starting from a known position by integrating the movement (summing the incremental travel distances). For a discrete system with a fixed sampling interval, the updated position p' is

    p' = p + [Δs cos(θ + Δθ/2),  Δs sin(θ + Δθ/2),  Δθ]ᵀ      (5.2)

where the increments traveled in the last sampling interval are

    Δx = Δs cos(θ + Δθ/2)      (5.3)
    Δy = Δs sin(θ + Δθ/2)      (5.4)
    Δs = (Δs_r + Δs_l) / 2      (5.5)
    Δθ = (Δs_r − Δs_l) / b      (5.6)

with Δs_r and Δs_l the travel distances of the right and left wheel, respectively, and b the distance between the two wheels.

[Figure 5.3: The movement of a differential-drive robot.]

Thus we get the updated position p':

    p' = [x', y', θ']ᵀ = f(x, y, θ, Δs_r, Δs_l)
       = [x + Δs cos(θ + Δθ/2),  y + Δs sin(θ + Δθ/2),  θ + Δθ]ᵀ      (5.7)
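A sketch of this odometric update in Python follows directly from equations (5.2) through (5.7); the function and variable names are ours:

    import math

    def odometry_update(pose, ds_right, ds_left, wheelbase):
        """One odometric integration step for a differential-drive robot,
        following equations (5.1)-(5.7); `wheelbase` is b."""
        x, y, theta = pose
        ds = (ds_right + ds_left) / 2.0             # mean travel distance
        dtheta = (ds_right - ds_left) / wheelbase   # orientation change
        x += ds * math.cos(theta + dtheta / 2.0)
        y += ds * math.sin(theta + dtheta / 2.0)
        return (x, y, theta + dtheta)

    # usage: integrate encoder increments; error accumulates without bound
    pose = (0.0, 0.0, 0.0)
    for ds_r, ds_l in [(0.10, 0.10), (0.12, 0.08), (0.10, 0.10)]:
        pose = odometry_update(pose, ds_r, ds_l, wheelbase=0.5)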

As we discussed earlier, odometric position updates can give only a very rough estimate of the actual position: the error of the integrated position grows with time.

In the next step we will establish an error model for the integrated position p' to obtain the covariance matrix Σ_p' of the odometric position estimate. To do so, we assume that at the starting point the initial covariance matrix Σ_p is known.
For the motion increment (Δs_r, Δs_l) we assume the following covariance matrix Σ_Δ:

    Σ_Δ = covar(Δs_r, Δs_l) = [ k_r |Δs_r|      0
                                 0              k_l |Δs_l| ]      (5.8)

where Δs_r and Δs_l are the distances traveled by each wheel, and k_r, k_l are error constants representing the nondeterministic parameters of the motor drive and the wheel-floor interaction. In equation (5.8) we have made two assumptions: the errors of the individually driven wheels are independent, and the variance of the errors is proportional to the absolute value of the traveled distances.
These assumptions, while not perfect, are suitable and will thus be used for the further development of the error model. The values for the error constants k_r and k_l depend on the robot and the environment and should be experimentally established by performing and analyzing representative movements.

Assuming that p and Δ_rl = (Δs_r, Δs_l) are uncorrelated and that the derivation of f [equation (5.7)] is reasonably approximated by the first-order Taylor expansion (linearization), we conclude, using the error propagation law (see section 4.2.2):

    Σ_p' = ∇_p f · Σ_p · ∇_p fᵀ + ∇_Δrl f · Σ_Δ · ∇_Δrl fᵀ      (5.9)

The covariance matrix Σ_p is, of course, always given by the Σ_p' of the previous step and can thus be calculated after specifying an initial value (e.g., 0).

Using equation (5.7) we can instantiate the two Jacobians F_p = ∇_p f and F_Δrl = ∇_Δrl f:

    F_p = ∇_p f = [ 1   0   −Δs sin(θ + Δθ/2)
                    0   1    Δs cos(θ + Δθ/2)
                    0   0    1 ]      (5.10)

    F_Δrl = [ (1/2) cos(θ + Δθ/2) − (Δs/2b) sin(θ + Δθ/2)    (1/2) cos(θ + Δθ/2) + (Δs/2b) sin(θ + Δθ/2)
              (1/2) sin(θ + Δθ/2) + (Δs/2b) cos(θ + Δθ/2)    (1/2) sin(θ + Δθ/2) − (Δs/2b) cos(θ + Δθ/2)
              1/b                                            −1/b ]      (5.11)

The entries of equation (5.11) follow from the definition of the Jacobian,

    F_Δrl = ∇_Δrl f = [ ∂f/∂Δs_r    ∂f/∂Δs_l ]      (5.12)

together with

    Δs = (Δs_r + Δs_l)/2 ;   Δθ = (Δs_r − Δs_l)/b      (5.14)

and the partial derivatives

    ∂Δs/∂Δs_r = 1/2 ;  ∂Δs/∂Δs_l = 1/2 ;  ∂Δθ/∂Δs_r = 1/b ;  ∂Δθ/∂Δs_l = −1/b      (5.15)

which, substituted into equation (5.12), yield equation (5.11).

[Figure 5.4: Growth of the pose uncertainty for straight-line movement. Note that the uncertainty in y grows much faster than in the direction of movement. The ellipses drawn around the robot positions represent the uncertainties in the x, y directions (e.g., 3σ).]

Figures 5.4 and 5.5 show typical examples of how the position errors grow with movement. The results have been computed using the error model presented above. One can compensate for deterministic errors by properly calibrating the robot; the nondeterministic errors remain. A detailed discussion of odometric errors and a method for calibration and quantification of deterministic and nondeterministic errors can be found in [5].

Mobile Robot Localization

191

Figure 5
...
Note that the main
axis of the uncertainty ellipse does not remain perpendicular to the direction of movement
...
5.3 To Localize or Not to Localize: Localization-Based Navigation versus Programmed Solutions
Consider the environment shown in figure 5.6. Suppose that the mobile robot in question must deliver messages between two specific rooms in this environment: rooms A and B. Sensors are absolutely required to avoid hitting moving obstacles such as humans, and some motion control system is required so that the robot can deliberately move. Localization may seem mandatory in order to successfully navigate between the two rooms; it is true that, at the least, the robot must have a way of detecting the goal location.
...

An alternative, espoused by the behavior-based community, suggests that, since sensors and effectors are noisy and information-limited, one should avoid creating a geometric map for localization. Fundamentally, this approach avoids explicit reasoning about localization and position, and thus generally avoids explicit path planning as well.

[Figure 5.6: A sample environment.]

For example, in figure 5.6 the robot can reach room B by engaging the left-wall follower with the room B detector as the termination condition for the program. Such a behavior-based solution, depicted in figure 5.7, can be effective. It suffers from some disadvantages, however.
...
Often, the navigation code is location-specific, and the same degree of coding and debugging is required to move the robot to a new environment. This task may be time-consuming and is heavily dependent on the specific robot hardware and environmental characteristics. Even when individual behaviors are tuned to optimize performance, the fusion and rapid switching between multiple behaviors can negate that fine-tuning.
...


Mobile Robot Localization

193

[Figure 5.7: An example of a behavior-based navigation architecture. Individual behaviors (communicate data, discover new area, detect goal position, avoid obstacles, follow right/left wall) connect sensors to actuators through a coordination/fusion layer (Σ), e.g., fusion via vector summation.]

[Figure 5.8: An example of a map-based navigation architecture, with perception, localization/map-building, cognition/planning, and motion control modules connecting sensors to actuators.]


In contrast to the behavior-based approach, the map-based approach includes both localization and cognition modules (see figure 5.8). In map-based navigation, the robot explicitly attempts to localize by collecting sensor data, then updating some belief about its position with respect to a map of the environment. Among the advantages of this approach:

• The existence of the map itself represents a medium for communication between human and robot: the human can simply give the robot a new map if the robot goes to a new environment.

The map-based approach will require more up-front development effort to create a navigating mobile robot.
rather than the real world itself, is being constructed and trusted by the robot
...
e
...

In the remainder of this chapter, we focus on a discussion of map-based approaches and,
specifically, the localization component of these techniques
...

5
...
There are two specific concepts that the robot must represent, and
each has its own unique possible solutions
...
What aspects of the environment are contained in this map?
At what level of fidelity does the map represent the environment? These are the design
questions for map representation
...

Does the robot identify a single unique position as its current position, or does it describe
its position in terms of a set of possible positions? If multiple possible positions are
expressed in a single belief, how are those multiple positions ranked, if at all? These are the
design questions for belief representation
...
We begin by discussing belief representation
...

The former covers solutions in which the robot postulates its unique position, whereas the
latter enables a mobile robot to describe the degree to which it is uncertain about its position
...
9
...
4
...
Given some environmental map, the robot’s belief about position is

Mobile Robot Localization

195

[Figure 5.9: Belief representations for robot position: (a) continuous map with single-hypothesis belief, e.g., a single Gaussian centered at a single continuous value; (b) continuous map with multiple-hypothesis belief, e.g., multiple Gaussians; (c) discretized (decomposed) grid map with probability values for all possible robot positions, e.g., the Markov approach; (d) discretized topological map with probability values for all possible nodes (A through G), e.g., the Markov approach.]

196

Chapter 5

expressed as a single unique point on the map
...
10, three examples of a singlehypothesis belief are shown using three different map representations of the same actual
environment (figure 5
...
In figure 5
...
In figure 5
...

In figure 5
...
In this case, the
single hypothesis of position involves identifying a single node i in the topological graph
as the robot’s position
...
The unambiguous nature
of this representation facilitates decision-making at the robot’s cognitive level (e
...
, path
planning)
...

Just as decision-making is facilitated by a single-position hypothesis, so updating the
robot’s belief regarding position is also facilitated, since the single position must be
updated by definition to a new, single position
...
Therefore, forcing the position update process to always generate a single hypothesis of position
is challenging and, often, impossible
...
4
...

In one simple example originating in the work of Jean-Claude Latombe [21, 99], the
robot’s position is described in terms of a convex polygon positioned in a 2D map of the
environment
...
Each point
in the map is simply either contained by the polygon and, therefore, in the robot’s belief set,
or outside the polygon and thereby excluded
...
Such a polygonal representation of the
multiple-hypothesis belief can apply to a continuous, geometric map of the environment
[35] or, alternatively, to a tessellated, discrete approximation to the continuous environment
...
A strategy for representing a continuous multiple-hypothesis belief state along with a preference ordering over
possible positions is to model the belief as a mathematical distribution
...
[Figure 5.10: Three examples of single hypotheses of position using different map representations: (a) real map with walls, doors, and furniture; (b) line-based map → around 100 lines, each with two parameters; (c) occupancy-grid-based map → around 3000 grid cells of size 50 × 50 cm; (d) topological map using line features (Z/S lines) and doors → around 50 features and 18 nodes.]
11
Example of multiple-hypothesis tracking (courtesy of W
...
The belief state that is
largely distributed becomes very certain after moving to position 4
...


mean µ plus a standard deviation parameter σ , thereby defining a Gaussian distribution
...
This representation is particularly amenable
to mathematically defined tracking functions, such as the Kalman filter, that are designed
to operate efficiently on Gaussian distributions
...
In
this case, each possible robot position is individually noted along with a confidence or
probability parameter (see figure 5
...
In the case of a highly tessellated map this can
result in thousands or even tens of thousands of possible robot positions in a single-belief
state
...
If the robot only acquires partial information regarding position from its sensors and effectors, that information can conceptually be
incorporated in an updated belief
...
This advantage is the key to
a class of localization and navigation solutions in which the robot not only reasons about
reaching a particular goal but reasons about the future trajectory of its own belief state
...
An example of this approach is [141], in which the robot plans a path from point A to point B that
takes it near a series of landmarks in order to mitigate localization difficulties
...

One of the fundamental disadvantages of multiple-hypothesis approaches involves decision-making
...
11 provides an example
...
If the goal of the robot is
to travel down one particular hallway, then given this belief state, what action should the
robot choose?
The challenge occurs because some of the robot’s possible positions imply a motion trajectory that is inconsistent with some of its other possible positions
...
But this approach demands that each possible position have an associated probability
...
But this leads us to the second major
disadvantage of multiple-hypothesis approaches
...
When one reasons in a 3D space of discrete possible positions, the number of possible belief states in the single-hypothesis case is limited to the
number of possible positions in the 3D world
...
When one
moves to an arbitrary multiple-hypothesis representation, then the number of possible
N
belief states is the power set of N , which is far larger: 2
...

There are, however, specific forms of multiple-hypothesis representations that are somewhat more constrained, thereby avoiding the computational explosion while allowing a
limited type of multiple-hypothesis belief
...
Alternatively, a highly tessellated map representation combined
with a limit of ten possible positions in the belief state, results in a discrete update cycle that
is, at worst, only ten times more computationally expensive than a single-hypothesis belief
update
...

In conclusion, the most critical benefit of the multiple-hypothesis belief state is the ability to maintain a sense of position while explicitly annotating the robot’s uncertainty about
its own position
...

5.5 Map Representation
position representation
...

Three fundamental relationships must be understood when choosing a particular map representation:

1. The precision of the map must appropriately match the precision with which the robot needs to achieve its goals.
2. The precision of the map and the type of features represented must match the precision and data types returned by the robot's sensors.
3. The complexity of the map representation has direct impact on the computational complexity of reasoning about mapping, localization, and navigation.

In the following sections, we identify and discuss critical design choices in creating a
map representation
...
As we shall see, the choice of possible map
representations is broad
...
In general, the
environmental representation and model can be roughly classified as presented in chapter
4, section 4
...

5
...
1 Continuous representations
A continuous-valued map is one method for exact decomposition of the environment
...
Mobile
robot implementations to date use continuous maps only in 2D representations, as further
dimensionality can result in computational explosion
...
This means that one assumes that the representation will specify all environmental objects in the map, and that any area in the map
that is devoid of objects has no objects in the corresponding portion of the environment
...


Mobile Robot Localization

201

Figure 5
...


One example of such a representation, shown in figure 5
...
This is
similar to the method used by Latombe [21, 98] and others to represent environments for
mobile robot path-planning techniques
...
Therefore, no real effort would have been expended to
attempt to use sets of polygons to describe a real-world environment, such as a park or
office building
...
The human map maker tends to capture on the map, for
localization purposes, only objects that can be detected by the robot’s sensors and, furthermore, only a subset of the features of real-world objects
...
In addition to this level
of simplification, a mobile robot map can further reduce memory usage by capturing only
aspects of object geometry that are immediately relevant to localization
...


202

Chapter 5

Figure 5
...
(a) Real map
...


One excellent example involves line extraction
...
Such robots can
automatically extract best-fit lines from the dense range data provided by thousands of
points of laser strikes
...
The continuous nature of
the map guarantees that lines can be positioned at arbitrary positions in the plane and at
arbitrary angles
...

Figure 5
...
Note that the only environmental features captured by the map are straight
lines, such as those found at corners and along walls
...

The impact of continuous map representations on position representation is primarily
positive
...
In the case of multiple-hypothesis position representation, the continuous map enables two types of multiple position representation
...
This is
shown in figure 5
...


Mobile Robot Localization

203

Yet, the continuous representation does not disallow representation of position in the
form of a discrete set of possible positions
...
This algorithm captures, within a continuous space, a discrete sampling of possible robot positions
...
The danger of a continuous representation is
that the map may be computationally costly
...
Together with the
use of the closed-world assumption, these techniques can enable a continuous-valued map
to be no more costly, and sometimes even less costly, than a standard discrete representation
...
5
...
Basically this transformation from the real world
to the map representation is a filter that removes all nonstraight data and furthermore
extends line segment data into infinite lines that require fewer parameters
...
In this section, we explore decomposition as applied
in its more extreme forms to the question of map representation
...
Both qualitatively, in terms of overall structure, and quantitatively, in terms of geometric precision, a highly abstract map does not
compare favorably to a high-fidelity map
...
The advantage of this approach is that the map representation
can potentially be minimized
...

A standard, lossless form of opportunistic decomposition is termed exact cell decomposition
...


204

Chapter 5

start

8

7

17
9

10

5
6

1

18
14

2

4

15

3
16
11

12

13

goal

Figure 5
...


Figure 5
...
The map representation tessellates the space into areas of free space
...

The underlying assumption behind this decomposition is that the particular position of
a robot within each area of free space does not matter
...
Therefore, as with other representations we will see, the resulting graph captures the adjacency of map locales
...

Such an exact decomposition is not always appropriate
...
If this information is expensive
to collect or even unknown, then such an approach is not feasible
...
Such a transformation is demonstrated in figure 5
...
The key disadvantage of this approach stems from its inexact nature
...
15
...
Yet another approach is adaptive cell decomposition, as presented in figure 5
...


Mobile Robot Localization

205

start

goal

start

goal

Figure 5
...


The concept of fixed decomposition is extremely popular in mobile robotics; it is perhaps the single most common map representation technique currently utilized
...
In an occupancy grid, the environment is represented by a discrete grid, where each
cell is either filled (part of an obstacle) or empty (part of free space)
...
16
Example of adaptive (approximate variable-cell) decomposition of an environment [21]
...
If the interior of a rectangle
lies completely in free space or in the configuration space obstacle, it is not decomposed further
...

The white cells lie outside the obstacles, the black inside, and the gray are part of both regions
...

In the occupancy grid, each cell may have a counter, whereby the value 0 indicates that the cell has not been "hit" by any ranging measurements and, therefore, is likely free space. As ranging measurements strike a cell, its counter is incremented; the values of cells are commonly discounted when a ranging strike travels through the cell to strike a farther cell.
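A minimal counter-based occupancy grid can be sketched as follows (Python/NumPy); the increment and discount constants are illustrative assumptions, not values used by any specific system mentioned here:

    import numpy as np

    class OccupancyGrid:
        """Counter-based occupancy grid: increment cells on range 'hits',
        discount cells a ray passes through on its way to a farther cell."""
        def __init__(self, w, h, hit=1.0, passthrough=0.25):
            self.cells = np.zeros((h, w))
            self.hit, self.passthrough = hit, passthrough

        def update_ray(self, traversed, struck):
            for (x, y) in traversed:     # ray passed through: likely free
                self.cells[y, x] = max(0.0, self.cells[y, x] - self.passthrough)
            sx, sy = struck              # ray terminated: likely an obstacle
            self.cells[sy, sx] += self.hit

    grid = OccupancyGrid(100, 100)
    grid.update_ray(traversed=[(0, 0), (1, 0), (2, 0)], struck=(3, 0))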
Figure 5
...
One commercial robot that uses a standard occupancy grid for mapping and
navigation is the Cye robot [163]
...
First, the size of
the map in robot memory grows with the size of the environment and if a small cell size is
used, this size can quickly become untenable
...
In contrast, the

Mobile Robot Localization

207

Figure 5
...
Thrun [145])
...
Furthermore, any
fixed decomposition method such as this imposes a geometric grid on the world a priori,
regardless of the environmental details
...

For these reasons, an alternative, called topological decomposition, has been the subject
of some exploration in mobile robotics
...

Formally, a topological representation is a graph that specifies two things: nodes and the
connectivity between those nodes
...
When an arc connects two nodes, then the robot can
traverse from one node to the other without requiring traversal of any other intermediary
node
...
However, the
topological approach diverges in that the nodes are not of fixed size or even specifications
of free space
...

Figure 5
...
In this case, the robot is assumed to have an intersection detector, perhaps using sonar and vision to find intersections between halls and between halls and

208

Chapter 5

1

2

3
18

17

16 14

13

12

15

7

8

10

11

6

4

9

5

Figure 5
...


rooms
...

Another example of topological representation is the work of Simhon and Dudek [134],
in which the goal is to create a mobile robot that can capture the most interesting aspects of
an area for human consumption
...

In order to navigate using a topological map robustly, a robot must satisfy two constraints
...
Second, it must have a means for traveling between nodes using
robot motion
...
This ability to “tune” the representation to the robot’s particular sensors can be an important advantage of the topological
approach
...
Therein lies the compromise between the discrete cell-based map representations
and the topological representations
...
19
An artificial landmark used by Chips during autonomous docking
...

Yet, a chief motivation of the topological approach is that the environment may contain
important nongeometric features – features that have no ranging relevance but are useful
for localization
...

In contrast to these whole-image feature extractors, often spatially localized landmarks
are artificially placed in an environment to impose a particular visual-topological connectivity upon the environment
...
Examples of working systems operating with this landmark-based strategy have also
demonstrated success
...
Chips, the museum robot, is another robot that
uses man-made landmarks to obviate the localization problem
...
One such museum landmark is shown
in figure 5
...

In summary, range is clearly not the only measurable and useful environmental value for
a mobile robot
...

Choosing a map representation for a particular mobile robot requires, first, understanding
the sensors available on the mobile robot, and, second, understanding the mobile robot’s
functional requirements (e
...
, required goal precision and accuracy)
...
5
...
There are, however, fundamental real-world features that mobile robot map representations do not yet represent well
...

The real world is dynamic
...
This is particularly true when one considers the home environment with which domestic robots will someday need to contend
...
g
...
)
and transient obstacles (e
...
, humans, shipping packages, etc
...
Although vision
research is rapidly advancing, robust sensors that discriminate between moving animals
and static structures from a moving reference frame are not yet available
...

Usually, the assumption behind the above map representations is that all objects on the
map are effectively static
...
For example, occupancy grid techniques can be more robust to dynamic settings
by introducing temporal discounting, effectively treating transient obstacles as noise
...
One exception to this limitation involves topological
representations
...
Still, neither
the occupancy grid representation nor a topological approach is actively recognizing and
representing transient objects as distinct from both sensor error and permanent map features
...
A classic example involves occlusion by human crowds
...
If the robot’s sensing suite is located along the robot’s body, then the robot is

Mobile Robot Localization

211

effectively blind when a group of human visitors completely surround the robot
...
In the best case, the robot should recognize
its occlusion and make no effort to localize using these invalid sensor readings
...
A vision sensor that can discriminate the local conditions of the robot (e
...
we are
surrounded by people) can help eliminate this error mode
...
Existing localization techniques generally depend on local measures such as range,
thereby demanding environments that are somewhat densely filled with objects that the
sensors can detect and measure
...
Indeed, when populated with humans, the challenge is
exacerbated because any mapped objects are almost certain to be occluded from view by
the people
...
Both vision and state-of-the-art laser rangefinding devices offer outdoor performance
with ranges of up to a hundred meters and more
...
Such
long-range sensing may be required for robots to localize using distant features
...
Usually, topological representations make assumptions regarding spatial locality: a
node contains objects and features that are themselves within that node
...
Therefore, in an indoor environment,
each room can be a separate node, and this is reasonable because each room will have a
layout and a set of belongings that are unique to that room
...
Where should a single node
end and the next node begin? The answer is unclear because objects that are far away from
the current node, or position, can yield information for the localization process
...
The spatial locality assumption is violated and, instead, replaced by a visibility
criterion: the node or cell may need a mechanism for representing objects that are measurable and visible from that cell
...

We end this section with one final open challenge that represents one of the fundamental
academic research questions of robotics: sensor fusion
...
Sensor fusion is a research topic closely
related to map representation
...

Perhaps the only general implementation of sensor fusion to date is that of neural network classifier
...
For the mobile robot that must use a human-readable internal map representation, no equally general sensor fusion scheme has yet been born
...

5.6 Probabilistic Map-Based Localization

5.6.1 Introduction
Ideally, the robot’s belief state will change, over time, as is consistent with its motor outputs
and perceptual inputs
...
This method does not provide any indication
of the relative chances between various possible robot positions
...
In the following sections we present two classes of probabilistic localization
...
The second method, Kalman filter localization, uses
a Gaussian probability density representation of robot position and scan matching for localization
...
Interestingly, the Kalman filter
localization process results from the Markov localization axioms if the robot’s position
uncertainty is assumed to have a Gaussian form [3, pp
...

Before discussing each method in detail, we present the general robot localization problem and solution strategy
...
As it

Mobile Robot Localization

213

starts to move, say from a precisely known location, it can keep track of its motion using
odometry
...
2
...
To keep position uncertainty from growing
unbounded, the robot must localize itself in relation to its environment map
...
The information provided by the robot’s odometry, plus the information provided by such exteroceptive observations, can be combined to enable the robot
to localize as well as possible with respect to its map
...

Action update represents the application of some action model Act to the mobile robot's proprioceptive encoder measurements o_t and prior belief state s_{t−1} to yield a new belief state representing the robot's belief about its current position:

    s_t = Act(o_t, s_{t−1})      (5.16)

Note that the action update is based on odometric inputs. If, for instance, a differential-drive robot had motors without encoders connected to its wheels and employed open-loop control, then instead of encoder measurements the robot's highly uncertain estimates of wheel spin would need to be incorporated. Perception update represents the application of some perception model See to the exteroceptive sensor inputs i_t and the belief state s_t:

    s_t' = See(i_t, s_t)      (5.17)
The perception model See and sometimes the action model Act are abstract functions
of both the map and the robot’s physical configuration (e
...
, sensors and their positions,
kinematics, etc
...
By contrast, perception update generally refines the belief state
...

In the case of Markov localization, the robot’s belief state is usually represented as separate probability assignments for every possible robot pose in its map
...

Kalman filter localization represents the robot’s belief state using a single, well-defined

214

Chapter 5

Gaussian probability density function, and thus retains just a µ and σ parameterization of
the robot’s belief about position with respect to the map
...
This fundamental difference in the representation of belief state leads to the following advantages and disadvantages of the two methods,
as presented in [73]:
• Markov localization allows for localization starting from any unknown position and can
thus recover from ambiguous situations because the robot can track multiple, completely
disparate possible positions
...
5
...
The required memory and
computational power can thus limit precision and map size
...
In particular, Kalman filter localization can be used in
continuous world representations
...
g
...

In recent research projects improvements are achieved or proposed by either only updating the state space of interest within the Markov approach [49] or by tracking multiple
hypotheses with Kalman filters [35], or by combining both methods to create a hybrid
localization system [73, 147]
...

5
...
2 Markov localization
Markov localization tracks the robot’s belief state using an arbitrary probability density
function to represent the robot’s position (see also [50, 88, 116, 119])
...
In actual applications, the number of possible poses can range from several hundred to millions of positions
...
g
...
The solution is born out of probability theory, and so the next section describes the
foundations of probability theory that apply to this problem, notably the Bayes formula
...


Mobile Robot Localization

215

5
...
2
...
From probability theory we use the term p ( A ) to denote the probability that A is true
...
For example we can use
p ( r t = l ) to denote the prior probability that the robot r is at position l at time t
...
In probability theory, we use the
term p ( A B ) to denote the conditional probability of A given that we know B
...

The question is, how can a term such as p ( r t = l i t ) be simplified to its constituent parts
so that it can be computed? The answer lies in the product rule, which states
    p(A ∧ B) = p(A | B) p(B)      (5.18)

Equation (5.18) is intuitively straightforward, as the probability of both A and B being true is being related to B being true and the other being conditionally true. The symmetric argument yields

    p(A ∧ B) = p(B | A) p(A)      (5.19)

Using equations (5.18) and (5.19) together, we can derive the Bayes formula for computing p(A | B):

    p(A | B) = p(B | A) p(A) / p(B)      (5.20)
...

The See function described in equation (5.17) is realized by updating the probability associated with each position l in L, directly applying the Bayes formula to every such l:

    p(l | i) = p(i | l) p(l) / p(i)      (5.21)

The value of p(i | l) is key to equation (5.21). An obvious strategy would be to consult the robot's map, identifying the probability of particular sensor readings with each possible map position, given knowledge about the robot's sensor geometry and the mapped environment. The value of p(l) is easy to recover: it is simply the probability p(r = l) associated with the belief state before the perceptual update process. Finally, note that the denominator p(i) does not depend on l; as we apply equation (5.21) to all positions l in L, the denominator never varies. Because it is effectively constant, in practice it is usually dropped and the belief state is renormalized to sum to 1.0 at the end of the perception update.

Now consider the Act function of equation (5.16). Act maps the former belief state and an encoder measurement (i.e., robot action) to a new belief state. In order to compute the probability of position l in the new belief state, one must integrate over all the possible ways in which the robot may have reached l according to the potential positions expressed in the former belief state. The same location l can be reached from multiple source locations with the same encoder measurement o because the encoder measurement is uncertain. Mathematically:

    p(l_t | o_t) = ∫ p(l_t | l'_{t−1}, o_t) · p(l'_{t−1}) dl'_{t−1}      (5.22)

Thus, the total probability for a specific position l is built up from the individual contributions from every location l' in the former belief state given encoder measurement o
...
21) and (5
...
Formally, this means that their output is a function only of the
robot’s previous state and its most recent actions (odometry) and perception
...
After all, the
values of a robot’s sensors at time t do not really depend only on its position at time t
...
For example, the robot could have experienced a serious collision recently that
has biased the sensor’s behavior
...
Due
to its history of motion, one wheel may have worn more than the other, causing a left-turning bias over time that affects its current position
...
However the Markov
assumption greatly simplifies tracking, reasoning, and planning and so it is an approximation that continues to be extremely popular in mobile robotics
...
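Despite these caveats, the resulting update cycle is compact. The following Python sketch shows a generic Markov localization loop implementing equations (5.21) and (5.22) over a discrete set of positions; the function names and dictionary representation are our assumptions, not part of the formulation above:

    def markov_update(belief, odom_model, sensor_model, o, i):
        """One Markov localization cycle over discrete positions.
        belief:       dict position -> probability
        odom_model:   p(l | l_prev, o), the action model Act
        sensor_model: p(i | l), the perception model See"""
        # action update, eq. (5.22): integrate over all source positions
        predicted = {}
        for l in belief:
            predicted[l] = sum(odom_model(l, lp, o) * belief[lp] for lp in belief)
        # perception update, eq. (5.21): weight by p(i | l), then normalize
        posterior = {l: sensor_model(i, l) * p for l, p in predicted.items()}
        total = sum(posterior.values())
        return {l: p / total for l, p in posterior.items()} if total > 0 else predicted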
20
Dervish exploring its environment
...
6
...
5.6.2.2 Case study 1: Markov localization using a topological map
A straightforward application of Markov localization is possible when the robot’s environment representation already provides an appropriate decomposition
...

Consider a contest in which each robot is to receive a topological description of the environment
...
In addition, this supplied map would be imperfect, containing several false arcs (e
...
, a closed door)
...

Dervish, the winner of this contest, employed probabilistic Markov localization and
used a multiple-hypothesis belief state over a topological environmental representation
...

Dervish, shown in figure 5
...
The environment in this contest consisted of a recti-

218

Chapter 5

linear indoor office space filled with real office furniture as obstacles
...
Robots with such sensor configurations
are subject to both tripping over short objects below the ring and to decapitation by tall
objects (such as ledges, shelves, and tables) that are above the ring
...
In addition, the diagonal sonar pair also proved to
ably detect tables, enabling the robot to avoid wandering underneath tall tables
...
Finally, two sonars near
the robot’s base were positioned to detect low obstacles, such as paper cups, on the floor
...
Thus, it would be appropriate to design Dervish’s perceptual system to detect matching perceptual events: the detection and passage of connections between hallways and
offices
...
Interestingly, this perceptual system
would use time alone and no concept of encoder value to trigger perceptual events
...
If the sonar
strikes jump well beyond 17 cm for more than 1 second, an open door sensory event triggers
...
1
...
Interestingly, this
would result in a conservative perceptual system that frequently misses features, particularly when the hallway is crowded with obstacles that Dervish must negotiate
...

Dervish’s environmental representation was a discrete topological map, identical in
abstraction and information to the map provided by the contest organizers
...
21
depicts a geometric representation of a typical office environment overlaid with the topological map for the same office environment
...
5
...
As shown on the left in figure 5
...
The topological graph
shown on the right depicts the information captured in the example shown
...
21
A geometric office environment (left) and its topological analog (right)
...
This particular topological representation is particularly apt for Dervish given its task of navigating through hallways into a
specific room and its perceptual capability of recognizing discontinuities in hallway walls
...
As will become clear below, the probabilistic update used by Dervish was
approximate, therefore technically one should refer to the resulting values as likelihoods
rather than probabilities
...
Table 5.1
Dervish's certainty matrix.

                         Wall    Closed door    Open door    Open hallway    Foyer
Nothing detected         0.70    0.40           0.05         0.001           0.30
Closed door detected     0.30    0.60           0            0               0.05
Open door detected       0       0              0.90         0.10            0.15
Open hallway detected    0       0              0.001        0.90            0.50
21)
...
g
...

Each perceptual event consists of a percept-pair (a feature on one side of the robot or two features on both sides). Together with the certainty matrix, the Bayes formulation of equation (5.21) enables the likelihood of each possible position n to be updated using the formula:

    p(n | i) = p(i | n) p(n)      (5.23)
the realization that, because the feature extraction system only extracts four total features
and because a node contains (on a single side) one of five total features, every possible combination of node type and extracted feature can be represented in a 4 x 5 table
...
1) is just this lookup table
...
e
...

With this assumption in hand, we can populate the certainty matrix with confidence estimates for each possible pairing of perception and node type
...
In addition, this matrix assigns a likelihood that the sensory system will
fail to issue a perceptual event altogether (nothing detected)
...
1, if Dervish is next to an open hallway,
the likelihood of mistakenly recognizing it as an open door is 0
...
This means that for any
node n that is of type open hallway and for the sensor value i =open door, p (i n ) = 0
...

Together with a specific topological map, the certainty matrix enables straightforward
computation of p ( i n ) during the perception update process
...

Recall that Dervish has no encoders and that perceptual events are triggered asynchronously by the feature extraction processes
...
22)
...
This is because there is a chance that the
robot has traveled multiple topological nodes since its previous perceptual event (i
...
, falsenegative errors)
...
The likelihood of position n given perceptual event i is calculated as in equation (5
...
24)

The value of p ( n' t – i ) denotes the likelihood of Dervish being at position n' as represented by Dervish’s former belief state
...
[Figure 5.22: A realistic indoor topological environment.]
The calculation of p(n_t | n'_{t−i}, i_t) is performed by multiplying the probability of generating perceptual event i at position n by the probability of having failed to generate perceptual events at all nodes between n' and n:

    p(n_t | n'_{t−i}, i_t) = p(i_t, n_t) · p(∅, n_{t−1}) · p(∅, n_{t−2}) · … · p(∅, n_{t−i+1})      (5.25)

As an example, consider the environment of figure 5.22 and suppose that the robot has only two nonzero nodes in its belief state, {1-2, 2-3}, with likelihoods p(1-2) = 1.0 and p(2-3) = 0.2, and that it then perceives an open hallway on its left and an open door on its right. Note that the likelihoods for nodes 1-2 and 2-3 do not sum to 1.0: these values are not formal probabilities, and computational effort is minimized in Dervish by avoiding normalization altogether.

State 2-3 will progress potentially to states 3, 3-4, and 4. The likelihood of reaching state 4 is the product of the initial likelihood for state 2-3, 0.2, with (a) the likelihood of detecting nothing at node 3 and (b) the likelihood of detecting an open hallway on the left and an open door on the right at node 4. Note that we assume the likelihood of detecting nothing at node 3-4 is 1.0.

(a) occurs only if Dervish fails to detect the door on its left at node 3 (either closed or open), [0.6 · 0.4 + (1 − 0.6) · 0.05], and correctly detects nothing on its right, 0.7.

(b) occurs if Dervish correctly identifies the open hallway on its left at node 4, 0.90, and mistakes the right hallway for an open door, 0.10.

The likelihood of reaching state 4 is then 0.2 · [0.6 · 0.4 + 0.4 · 0.05] · 0.7 · 0.9 · 0.1 ≈ 0.00327.

Turning to the other node in Dervish's prior belief state, 1-2 will potentially progress to states 2, 2-3, 3, 3-4, and 4. Again, states 2-3, 3, and 3-4 can be eliminated. The likelihood of state 2 is the product of the prior likelihood for state 1-2, (1.0), the likelihood of detecting the door on the right as an open door, [0.6 · 0 + 0.4 · 0.9], and the likelihood of correctly detecting an open hallway to the left, 0.9. The likelihood for being at state 2 is then 1.0 · 0.4 · 0.9 · 0.9 = 0.32. In addition, 1-2 progresses to state 4 with a certainty factor of 4.3 × 10⁻⁶, which is added to the certainty factor above to bring the total for state 4 to 0.00328.
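The arithmetic above is mechanical enough to capture in a few lines. The Python sketch below mirrors Dervish's unnormalized likelihood propagation; the certainty values are the entries of table 5.1 used in this example, while the function and dictionary names are ours:

    # a few entries of table 5.1: p(percept | true feature)
    CERTAINTY = {
        ("nothing", "wall"): 0.70, ("nothing", "closed"): 0.40,
        ("nothing", "open"): 0.05,
        ("open_door", "open"): 0.90, ("open_door", "hallway"): 0.10,
        ("open_hallway", "hallway"): 0.90,
        # ... remaining entries of table 5.1
    }

    def transition_likelihood(prior, percept_prob, skip_probs):
        """Unnormalized likelihood of one topological transition: prior
        times the probability of detecting nothing at each intermediate
        node, times the probability of the triggering percept."""
        value = prior * percept_prob
        for p_nothing in skip_probs:
            value *= p_nothing
        return value

    # the worked example: state 2-3 progressing to state 4
    p4 = transition_likelihood(0.2, 0.9 * 0.1,
                               [(0.6 * 0.4 + 0.4 * 0.05) * 0.7])
    # p4 ≈ 0.00327, matching the computation in the text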

Empirically, Dervish's map representation and localization system proved sufficient for navigating four indoor office environments: the artificial office environment created explicitly for the 1994 National Conference on Artificial Intelligence, and the psychology, history, and computer science departments at Stanford University. It is a demonstration of the power of probabilistic localization that, in spite of the tremendous lack of action and encoder information, the robot was able to navigate several real-world office buildings successfully.
Dervish was
not just a localizer but also a navigator
...
Generally, the most likely position
is a good measure of the robot’s actual world position
...
In
the case of Dervish, it nonetheless goes with the highest-likelihood position at all times,
save at one critical juncture
...

Therefore, from the point of view of its goal, it is critical that Dervish finish navigating only
when the robot has strong confidence in being at the correct final location
...
In such a case, Dervish will actively plan a path that causes it to move further down the hallway in an attempt
to collect more sensor data and thereby increase the relative likelihood of one position in
the multiple-hypothesis belief state
...
The robot can then reason and plan in order to achieve a goal confidence level, thus
explicitly taking into account not only robot position but also the measured likelihood of

Mobile Robot Localization

223

each position
...

5.6.2.3 Case study 2: Markov localization using a grid map
usually limited to the resolution of a single node in such cases, and this may be undesirable
for certain applications
...

The robot used by this research, Rhino, is an RWI B24 robot with twenty-four sonars
and two Sick laser rangefinders
...
In order to make maximal use of these fine-grained sensory data, Rhino uses a 2D
geometric environmental representation of free and occupied space
...

Like Dervish, Rhino uses multiple-hypothesis belief representation
...
23)
...
Note
that unlike Dervish, which assumes its orientation is approximate and known, Rhino
explicitly represents fine-grained alternative orientations, and so its belief state formally
represents three degrees of freedom
...

Whereas Dervish made use of only perceptual events, ignoring encoder inputs and therefore metric distance altogether, Rhino uses the complete Markov probabilistic localization
approach summarized in section 5
...
2
...

The discrete Markov chain version of action update is performed because of the tessellated representation of position
...
26)

224

Chapter 5

[Figure 5.23: The belief state representation used by Rhino (courtesy of W. Burgard and S. Thrun).]
Note that equation (5
...
22)
...
e
...
Rhino’s kinematic configuration is a threewheel synchro-drive rather than a differential-drive robot
...
4 and 5
...

The perception model follows the Bayes formula precisely, as in equation (5
...
Given
a range perception i the probability of the robot being at each location l is updated as follows:
p (i l )p ( l )
p (l i ) = ----------------------p(i)

(5
...
This denominator acts as a normalizer to ensure that the probability measures in the belief state continue to sum to 1
...
In the case of Dervish, the
number of possible values for i and l were so small that a simple table could suffice
...
Thus, Rhino computes

Mobile Robot Localization

225

p (i l ) directly using a model of the robot’s sensor behavior, its position l , and the local
environmental metric map around l
...
Three key assumptions are used to construct this sensor model:

1. If an object in the map is detected by a range sensor, the measurement error can be described with a distribution that has a mean at the correct reading.
2. There should always be a nonzero chance of receiving any measurement, even when the object in the map is not visible to the sensor.
3. The range sensor may fail to detect the object altogether and instead return its maximal reading. Therefore, there is a local peak in the probability density distribution at the maximal reading of a range sensor.
...

Figure 5
...
The robot begins with a flat probability density function for its possible location
...
As the robot encounters first one
door and then a second door, the probability density function over possible positions
becomes first multimodal and finally unimodal and sharply defined
...

The resulting robot localization system has been part of a navigation system that has
demonstrated great success both at the University of Bonn (Germany) and at a public
museum in Bonn
...
Rhino’s ability to function well in this setting is a demonstration of
the power of the Markov localization approach
...
A great many steps are
taken in real-world implementations such as Rhino in order to effect computational gains
...

One class of techniques deserves mention because it can significantly reduce the computational overhead of techniques that employ fixed-cell decomposition representations
...
24
Improving belief state by moving
...

Irrespective of the specific technique, the basic algorithm is the same in all these cases
...

For example, consider a robot with a complete belief state of 10,000 possible locations
at time t
...
By weighting this sampling process with the probability values of the locations, one can bias the system to generate more samples at local peaks in the probability

Mobile Robot Localization

227

density function
...
This biasing is desirable, but only to a point
...
This randomization of the sampling process can be performed by adding additional samples from a
flat distribution, for example
...
This further reduces the number of samples required
on average while guaranteeing that a large number of samples will be used when necessary
[68]
...
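A sketch of this randomized resampling idea in Python follows. Mixing probability-weighted samples with samples from a flat distribution follows the description above; the particular parameter split is our assumption:

    import random

    def resample(belief, n_samples, n_random, all_positions):
        """Randomized resampling of a discrete belief state: draw positions
        in proportion to their probability, then add flat-distribution
        samples to guard against premature overconvergence."""
        positions = list(belief)
        weights = [belief[l] for l in positions]
        kept = random.choices(positions, weights=weights, k=n_samples)
        kept += random.choices(all_positions, k=n_random)
        p = 1.0 / len(kept)
        new_belief = {}
        for l in kept:
            new_belief[l] = new_belief.get(l, 0.0) + p
        return new_belief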
Of course, such sampling has a penalty: completeness
...
Of
course, recovery from a lost state is feasible just as with all Markov localization techniques
...
6
...
This approach is very general but, due to its generality, inefficient
...
One can argue that it is not the
exact replication of a probability density curve but the sensor fusion problem that is key to
robust localization
...
Optimal localization should take into account the information provided by all of
these sensors
...
This mechanism is in fact more efficient than Markov
localization because of key simplifications when representing the probability density function of the robot’s belief state and even its individual sensor readings, as described below
...
It incorporates all information, regardless of precision, to estimate the current value
of the variable of interest (i
...
, the robot’s position)
...

Figure 5.25 illustrates a typical Kalman filter application. A measuring device enables measuring some system states with errors; the Kalman filter combines these noisy measurements with knowledge of the system dynamics to estimate the state. Thus the Kalman filter fuses sensor signals and system knowledge in an optimal way.

[Figure 5.25: Typical Kalman filter application [106].]
...
Within the Kalman filter theory the system is
assumed to be linear and white with Gaussian noise
...
In other engineering disciplines, the Gaussian error
assumption has in some cases been shown to be quite accurate [106]
...
6
...
2)
...
6
...
2 presents a case study of a mobile robot that navigates indoor spaces by virtue of
Kalman filter localization
...
5.6.3.1 Introduction to Kalman filter theory
The basic Kalman filter method allows multiple measurements to be incorporated optimally into a single estimate of state
...
g
...
After presenting this static case, we can introduce dynamic prediction readily
...
Suppose that our robot has two sensors, an ultrasonic range sensor and
a laser rangefinding sensor
...
For instance, a glass wall will be transparent to the laser but, when measured headon, the sonar will provide an accurate reading
...

The Kalman filter enables such fusion extremely efficiently, as long as we are willing to
approximate the error characteristics of these sensors with unimodal, zero-mean, Gaussian
noise
...
Based on each measurement individually we can estimate the robot's position. As a simplified way of characterizing the error associated with each of these estimates, we presume a (unimodal) Gaussian probability density curve and thereby associate one variance with each measurement: σ₁² and σ₂². The density curves in figure 5.26 depict two such measurements:

    q̂₁ = q₁ with variance σ₁²      (5.28)
    q̂₂ = q₂ with variance σ₂²      (5.29)

The question is, how do we fuse (combine) these data to get the best estimate q̂ for the robot position? We are assuming that there was no robot motion between time k and time k + 1, and therefore we can directly apply the weighted least-squares technique used for line fitting in section 4.3.1.1. The error to be minimized is

    S = Σᵢ wᵢ (q̂ − qᵢ)²      (5.30)

with wᵢ being the weight of measurement i. To find the minimum error we set the derivative of S equal to zero:

    ∂S/∂q̂ = ∂/∂q̂ [ Σᵢ wᵢ (q̂ − qᵢ)² ] = 2 Σᵢ wᵢ (q̂ − qᵢ) = 0      (5.31)

    Σᵢ wᵢ q̂ − Σᵢ wᵢ qᵢ = 0      (5.32)

    q̂ = ( Σᵢ wᵢ qᵢ ) / ( Σᵢ wᵢ )      (5.33)

If we take as the weight wᵢ

    wᵢ = 1 / σᵢ²      (5.34)

then the value of q̂ in terms of two measurements can be defined as follows:

    q̂ = (q₁/σ₁² + q₂/σ₂²) / (1/σ₁² + 1/σ₂²)
      = [σ₂²/(σ₁² + σ₂²)] q₁ + [σ₁²/(σ₁² + σ₂²)] q₂      (5.35)

    1/σ² = 1/σ₁² + 1/σ₂² ,  so that  σ² = (σ₁² σ₂²) / (σ₁² + σ₂²)      (5.36)
Note that from equation (5
...
Thus the uncertainty of the position estimate has been decreased by combining the two measurements
...
26, depicting this result
...
This is a result that we expect based on information theory
...
35) can be rewritten as
2

σ1
ˆ
q = q 1 + ----------------- ( q 2 – q 1 )
2
2
σ 1 + σ2

(5
...
26
Fusing probability density of two estimates [106]
...
38)

where
2

σk
2
2
2
2
K k + 1 = ----------------- ; σ k = σ 1 ; σ z = σ 2
2
2
σk + σz

(5
...
38) tells us, that the best estimate x k + 1 of the state x k + 1 at time k + 1 is
ˆ k before the new measurement z k + 1 is taken, plus
equal to the best prediction of the value x
a correction term of an optimal weighting value times the difference between z k + 1 and the
ˆ
ˆ
best prediction x k at time k + 1
...
36)
2

2

2

σ k + 1 = σk – Kk + 1 σ k

(5
...
Its mean and variance are simply functions of the
inputs’ means and variances
...

Dynamic estimation
...
Suppose that the motion of the robot between times k and k + 1 is described
by the velocity u and the noise w which represents the uncertainty of the actual velocity:
dx
----- = u + w
dt

(5
...
42)

2

σ k′ = σ k + σ w [ t k + 1 – t k ]

(5
...

ˆ
Thus x k′ is the optimal prediction of the robot’s position just as the measurement is
taken at time k + 1
...
27)
...
38) and (5
...
42) and (5
...

ˆ
ˆ
ˆ
x k + 1 = x k′ + K k + 1 ( z k + 1 – x k′ )
ˆ
ˆ
ˆ
xk + 1 = [ xk + u ( tk + 1 – tk ) ] + Kk + 1 [ zk + 1 – xk – u ( tk + 1 – tk ) ]
2

2

(5
...
45)

Mobile Robot Localization

f(x)

233

 ( x – µ ) 2
1
f ( x ) = -------------- exp  – ------------------ 
2
 2σ 
σ 2π

σk ( t k )

u
σ k' ( t k + 1 )

ˆ
x k ( tk )

ˆ
x k' ( t k + 1 )

x(t)

Figure 5
...


The optimal estimate at time k + 1 is given by the last estimate at k and the estimate of
the robot motion including the estimated movement errors
...

5
...
3
...
Application of the
Kalman filter to localization requires posing the robot localization problem as a sensor
fusion problem
...
21) and (5
...

The key difference between the Kalman filter approach and our earlier Markov localization approach lies in the perception update process
...
In some
cases, the perception is abstract, having been produced by a feature extraction mechanism,
as in Dervish
...

By contrast, perception update using a Kalman filter is a multistep process
...
28
Schematic for Kalman filter mobile robot localization (see [23])
...
Given a set of possible features, the Kalman filter
is used to fuse the distance estimate from each feature to a matching object in the map
...
Figure 5
...

The first step is action update or position prediction, the straightforward application of
a Gaussian error motion model to the robot’s measured encoder travel
...
g
...
At the same time, based on its predicted
position in the map, the robot generates a measurement prediction which identifies the features that the robot expects to find and the positions of those features
...
Finally, the Kalman filter can fuse the
information provided by all of these matches to update the robot belief state in estimation
...
The presentation
is based on the work of Leonard and Durrant-Whyte [23, pp
...


Mobile Robot Localization

235

1
...
The robot’s position at timestep k + 1 is predicted based
on its old location (at timestep k ) and its movement due to the control input u ( k ) :
ˆ
ˆ
p ( k + 1 k ) = f ( p ( k k ), u ( k ) )

(5
...
6) and (5
...

Knowing the plant and error model, we can also compute the variance Σ p ( k + 1 k ) associated with this prediction [see equation
...
9), section 5
...
4]:
T

Σ p ( k + 1 k ) = ∇ p f ⋅ Σ p ( k k ) ⋅ ∇ p f + ∇ u f ⋅ Σ u ( k ) ⋅ ∇u f

T

(5
...
Note that the belief state is assumed to be Gaussian, and so
ˆ
we can characterize the belief state with just the two parameters p ( k + 1 k ) and
Σp( k + 1 k )
...
Observation
...
In this presentation, we assume that the observation is the result of a
feature extraction process executed on the raw sensor data
...
Formally,
each single observation can represent an extracted feature such as a line or door, or even a
single, raw sensor value
...
However, for matching we need to represent the observations and measurement predictions in the same frame { S }
...
This transformation is specified in the function h i discussed in the next paragraph
...
Measurement prediction
...
Each predicted feature has its
position transformed into the sensor frame:
ˆ
ˆ
z i ( k + 1 ) = h i ( z t, p ( k + 1 k ) )

(5
...
49)

ˆ
The predicted state estimate p ( k + 1 k ) is used to compute the measurement Jacobian
∇h i for each prediction
...

4
...
At this point we have a set of actual, single observations, which are features
in sensor space, and we also have a set of predicted features, also positioned in sensor space
...
In other
words, we will, for a subset of the observations and a subset of the predicted features, find
pairings that intuitively say “this observation is the robot’s measurement of this predicted
feature based on the map
...
For each measurement prediction for
which a corresponding observation is found we calculate the innovation v ij ( k + 1 )
...
50)

The innovation covariance Σ IN, ij ( k + 1 ) can be found by applying the error propagation
law [section 4
...
2, equation (4
...
51)

where Σ R, i ( k + 1 ) represents the covariance (noise) of the measurement z i ( k + 1 )
...
A possible definition of the validation
gate is the Mahalanobis distance:
T

–1

v ij ( k + 1 ) ⋅ Σ IN, ij ( k + 1 ) ⋅ v ij ( k + 1 ) ≤ g

2

(5
...

The validation equation is used to test observation z j ( k + 1 ) for membership in the validation gate for each predicted measurement
...
If one observation falls in multiple validation gates,

Mobile Robot Localization

237

the best matching candidate is selected or multiple hypotheses are tracked
...
Such observations
could have resulted from objects not in the map, such as new objects (e
...
, someone places
a large box in the hallway) or transient objects (e
...
, humans standing next to the robot may
form a line feature)
...

5
...
Next we compute the best estimate
ˆ
p ( k + 1 k + 1 ) of the robot’s position based on the position prediction and all the observations at time k + 1
...
Then we stack the measurement Jacobians ∇h i for each validated measurement
together to form the composite Jacobian ∇h and the measurement error (noise) vector
Σ R ( k + 1 ) = diag [ Σ R, i ( k + 1 ) ]
...
51) and by utilizing the well-known result
[3] that the Kalman gain can be written as
T

–1

K ( k + 1 ) = Σ p ( k + 1 k ) ⋅ ∇h ⋅ Σ IN ( k + 1 )

(5
...
54)

with the associated variance
T

Σ p ( k + 1 k + 1 ) = Σ p ( k + 1 k ) – K ( k + 1 ) ⋅ Σ IN ( k + 1 ) ⋅ K ( k + 1 )

(5
...
53) is simplified to
2

2

σp( k + 1 k )
σp ( k + 1 k )
K ( k + 1 ) = --------------------------- = --------------------------------------------------------2
2
2
σ IN ( k + 1 )
σp( k + 1 k ) + σR ( k + 1 )
corresponding to equation (5
...
54) simplifies to

(5
...
57)

corresponding to equation (5
...

5
...
3
...
In contrast to both Dervish and Rhino, the environmental representation of Pygmalion is continuous and abstract: the map consists of a set of infinite
lines describing the environment
...
The
value of its mean position µ is represented to a high level of precision, enabling Pygmalion
to localize with very high precision when desired
...
For simplicity we assume
that the sensor frame { S } is equal to the robot frame { R }
...

1
...
At the time increment k the robot is at position
T
ˆ
p ( k ) = x ( k ) y ( k ) θ ( k ) and its best position estimate is p ( k k )
...
29)
...
For
the differential drive that Pygmalion has we can use the model (odometry) developed in
section 5
...
4:
∆s r – ∆ s l
∆s r + ∆s
---------------------l cos  θ + ------------------ 

2b 
2
∆s r – ∆ s
ˆ
ˆ
ˆ
p ( k + 1 k ) = p ( k k ) + u ( k ) = p ( k k ) + ∆s r + ∆s- sin  θ + ------------------l
---------------------l

2b 
2
∆s r – ∆ s l
-----------------b
with the updated covariance matrix

(5
...
29
Prediction of the robot’s position (thick) based on its former position (thin) and the executed movement
...
g
...
The uncertainty of the orientation θ is not represented in the picture
...
59)

where

Σ u = cov ( ∆s r, ∆s l ) =

k r ∆s r

0

0

k l ∆s l

(5
...
Observation
...
e
...
36) respectively
...
61)

After acquiring the raw data at time k+1, lines and their uncertainties are extracted (figure 5
...
This leads to n 0 observed lines with 2n 0 line parameters (figure 5
...
3
...
1:

Σ R, j =

σ αα σ αr
σ rα σ rr

(5
...
30
Observation: From the raw data (a) acquired by the laser scanner at time k + 1, lines are extracted (b)
...


Mobile Robot Localization

241

3
...
Based on the stored map and the predicted robot position
ˆ
p ( k k ) , the measurement predictions of expected features z t, i are generated (figure 5
...

To reduce the required calculation power, there is often an additional step that first selects
the possible features, in this case lines, from the whole set of features in the map
...
Therefore
they need to be transformed to the robot frame { R } :
W
W

zt, i =

R

α t, i

R

→ zt, i =

r t, i

α t, i

(5
...
31), the transformation is given by
R

ˆ
zi ( k + 1 ) =

ˆ
zi ( k + 1 ) =

α t, i
r t, i

ˆ
= h i ( z t, i, p ( k + 1 k ) )

αt, i – θ ( k + 1 k )

W

(5
...
31
Representation of the target position in the world coordinate frame { W } and robot coordinate frame
{R}
...
65)

The measurement prediction results in predicted lines represented in the robot coordinate frame (figure 5
...
They are uncertain, because the prediction of robot position is
uncertain
...
Matching
...
33)
...
32
Measurement predictions: Based on the map and the estimated robot position the targets (visible
lines) are predicted
...


Mobile Robot Localization

243

r
model space

image space
No match!!
Wall was not observed
...
33
Matching: The observations (thick) and measurement prediction (thin) are matched and the innovation and its uncertainties are calculated
...
66)

with
ˆ
v ij ( k + 1 ) = [ z j ( k + 1 ) – h i ( z t, p ( k + 1 k ) ) ]
v ij ( k – 1 ) =

αj
rj


αt, i – θ ( k + 1 k )

W


W

W
W
W
W
ˆ
ˆ
rt, i – ( x ( k + 1 k ) cos ( αt, i ) + y ( k + 1 k ) sin ( αt, i ) )

(5
...
68)

244

Chapter 5

Figure 5
...


to enable finding the best matches while eliminating all other remaining observed and predicted unmatched features
...
Estimation
...
34)
• the pose estimates of each matched pairing of observed and predicted features;
• the robot position estimation based on odometry and observation positions
...
7

Other Examples of Localization Systems

Markov localization and Kalman filter localization have been two extremely popular strategies for research mobile robot systems navigating indoor environments
...
But there are a large number of other
localization techniques that have been used with varying degrees of success on commercial
and research mobile robot platforms
...
Refer to surveys such as [5] for such information
...

Not surprisingly, many implementations of these techniques in commercial robotics

Mobile Robot Localization

245

P3

50 m

L2
x2
P2

50 m

Z-shaped landmark
50 m

L1
P1

Figure 5
...
Komatsu Ltd
...
179-180]

employ modifications of the robot’s environment, something that the Markov localization
and Kalman filter localization communities eschew
...

5
...
1 Landmark-based navigation
Landmarks are generally defined as passive objects in the environment that provide a high
degree of localization accuracy when they are within the robot’s field of view
...

The control system for a landmark-based navigator consists of two discrete phases
...
But when the
robot is in no landmark “zone,” then only action update occurs, and the robot accumulates
position uncertainty until the next landmark enters the robot’s field of view
...

This in turn means the robot must consult its map carefully, ensuring that each motion
between landmarks is sufficiently short, given its motion model, that it will be able to localize successfully upon reaching the next landmark
...
35 shows one instantiation of landmark-based localization
...

One key advantage of the landmark-based navigation approach is that a strong formal
theory has been developed for this general system architecture [98]
...
This work also led to a real-world
demonstration of landmark-based localization
...
A Nomadics 200 mobile robot was fitted with a monochrome CCD camera aimed
vertically up at the ceiling
...

The primary disadvantage of landmark-based navigation is that in general it requires significant environmental modification
...
For example, the
Robotics Laboratory at Stanford made use of approximately thirty discrete landmarks, all
affixed individually to the ceiling
...
7
...
One way to reach
the Holy Grail of mobile robotic localization is to effectively enable such an assumption to
be valid no matter where the robot is located
...

Such a strategy for localization is surely aggressive, but the question of whether it can
be done is primarily a question of sensor technology and sensing software
...
Since vision does indeed collect far more information than previous sensors, it has
been used as the sensor of choice in research toward globally unique localization
...
49 depicts the image taken by a catadioptric camera system
...

One such approach has been attempted by several researchers and involves constructing
one or more image histograms to represent the information content of an image stably (see
e
...
, figure 4
...
3
...
2)
...
However, such a system is highly sensitive to
external illumination and provides only a level of localization resolution equal to the visual
footprint of the camera optics
...
39 of the previous chapter is another example
in which the robot’s sensor values are transformed into an identifier of location
...

One way of attempting to gather sufficient sonar information for global localization is
to allow the robot time to gather a large amount of sonar data into a local evidence grid (i
...
,
occupancy grid) first, then match the local evidence grid with a global metric map of the
environment
...
Most interesting is that the local evidence grid represents information well enough
that it can be used to correct and update the map over time, thereby leading to a localization
system that provides corrective feedback to the environmental representation directly
...

A most promising, new method for globally unique localization is called mosaic-based
localization [83]
...
This method succeeds primarily because of the recent ubiquity of very fast processors, very fast cameras, and very
large storage media
...
The robot begins by collecting images of the
entire floor in the robot’s workspace using this camera
...

Once the complete image mosaic is stored, the robot can travel any trajectory on the
floor while tracking its own position without difficulty
...
The resulting performance has been impressive: such a robot has been
shown to localize repeatedly with 1 mm precision while moving at 25 km/hr
...
The robot can move to any point and will
always be assured of localizing by collecting a sensor scan
...
There will always
be cases where local sensory information is truly ambiguous and, therefore, globally unique
localization using only current sensor information is unlikely to succeed
...
However, there are a number of environments in which
such immediate localization is challenging even for humans: consider hedge mazes and

248

Chapter 5

base station
ultrasonic
beacons

collection of robots
with ultrasonic receivers

Figure 5
...


large new office buildings with repeating halls that are identical
...
The floor of the factory floor had been freshly painted and was thus devoid of sufficient micro fractures to generate texture for correlation
...

5
...
3 Positioning beacon systems
One of the most reliable solutions to the localization problem is to design and deploy an
active beacon system specifically for the target environment
...
The GPS system can be considered as just such a system
(see section 4
...
5
...

Figure 5
...
Just as with
GPS, by designing a system whereby the robots localize passively while the beacons are
active, any number of robots can simultaneously take advantage of a single beacon system
...
In this case the robots must know the positions of the two active
ultrasonic beacons in the global coordinate frame in order to localize themselves to the
global coordinate frame
...
37
...
Given known positions for the
optical retroreflectors, a mobile robot can identify its position whenever it has three such

Mobile Robot Localization

249

θ

Figure 5
...


beacons in sight simultaneously
...

The advantage of such beacon-based systems is usually extremely high engineered reliability
...
Therefore, moving the robot to a different
factory floor will be both, time consuming and expensive
...

5
...
4 Route-based localization
Even more reliable than beacon-based systems are route-based localization strategies
...
There are many techniques for marking such a route and the subsequent intersections
...

For example, high ultraviolet-reflective, optically transparent paint can mark the route such
that only the robot, using a specialized sensor, easily detects it
...

In all such cases, the robot localization problem is effectively trivialized by forcing the
robot to always follow a prescribed path
...
Nevertheless, the cost of this extreme reliability is obvious: the robot is much more inflexible
given such localization means, and therefore any change to the robot’s behavior requires
significant engineering and time
...
8

Chapter 5

Autonomous Map Building

All of the localization strategies we have discussed require human effort to install the robot
into a space
...
Even if this not be
case, a map of the environment must be created for the robot
...
This ambition goes to the heart of autonomous mobile robotics
...

Accomplishing this goal robustly is probably years away, but an important subgoal is
the invention of techniques for autonomous creation and modification of an environmental
map
...
So, the robot must not only create a map but
it must do so while moving and localizing to explore the environment
...

The reason that SLAM is difficult is born precisely from the interaction between the
robot’s position updates as it localizes and its mapping actions
...
Similarly, the map becomes
correlated with the position estimate if an observation taken from an imprecisely known
position is used to update or add a feature to the map
...
For localization the robot needs to
know where the features are, whereas for map-building the robot needs to know where it is
on the map
...
Such crosscorrelated maps are called stochastic maps, and we begin with a discussion of the theory
behind this approach in the following section [55]
...
In
response a number of researchers have offered other solutions that have functioned well in
limited circumstances
...
8
...

5
...
1 The stochastic map technique
Figure 5
...
28 during the discussion of Kalman filter

Mobile Robot Localization

251

Map

Map Building and Maintenance
Refine Feature
Parameters
increase credibility

Encoder

position
estimate

Remove Offensive
Features
decrease credibility

Estimation (fusion)
using confirmed
map
matched predictions
and observations

predicted feature
observations

Prediction of Measurement and Position (odometry)

Add New
Features
extend map

YES

unexpected
observations

unobserved
predictions

YES

Matching

NO

Unexpected
Observation?

NO

Perception

raw sensor data or
extracted features

Observation
on-board sensors

Figure 5
...


localization [23]
...

Unexpected observations will effect the creation of new features in the map, whereas
unobserved measurement predictions will effect the removal of features from the map
...
The uncertainties of all of these quantities must be considered throughout this process
...
We represent this new map M with a set n of probabilistic feature locations z t , each

252

Chapter 5

x0
C αr =

2
σα

σ αr

r

y0

2

σ αr σ r

{S}

{S}

r

Extracted line

Updated
feature

α

x1
y1

{W}

r

{W}

Map feature

α

α

Figure 5
...


with the covariance matrix Σ t and an associated credibility factor c t between 0 and 1 quantifying the belief in the existence of the feature in the environment (see figure 5
...
69)

In contrast to the map used for Kalman filter localization previously, the map M is not
assumed to be precisely known because it will be created by an uncertain robot over time
...

Just as with Kalman filter localization, the matching step yields has three outcomes in
regard to measurement predictions and observations: matched prediction and observations,
unexpected observations, and unobserved predictions
...
However, the map is also updated now, using all three outcomes and complete propagation of all the correlated uncertainties (see [23] for more
details)
...
How should the robot’s failure to match
observed features to a particular map feature reduce that map feature’s credibility? And
also, how should the robot’s success at matching a mapped feature increase the chance that
the mapped feature is “correct?” In [23] the following function is proposed for calculating
credibility:

ct ( k ) = 1 – e

ns nu
–  ---- – -----
a b

(5
...
The update of the covariance matrix Σ t can be done similarly to the position update seen in the previous section
...
This
forces us to use a stochastic map, in which all cross-correlations must be updated in each
cycle [55, 113, 136]
...
71)

and a system state covariance matrix:
C rr C r1 C r2 … C rn
C 1r C 11 … … C 1n
Σ = C … … …C
2r
2n

(5
...

In contrast to localization based on an a priori accurate map, in the case of a stochastic
map the cross-correlations must be maintained and updated as the robot is performing automatic map-building
...
In short, this optimal approach requires every value
in the map to depend on every other value, and therein lies the reason that such a complete
solution to the automatic mapping problem is beyond the reach of even today’s computational resources
...
8
...
This field of
mobile robotics research is extremely large, and this text will not present a comprehensive
survey of the field
...


254

Chapter 5

Figure 5
...
By applying topological
correction, the grid map on the right is extracted (courtesy of S
...


5
...
2
...
The problem is simple: given an environment that has one or
more loops or cycles (e
...
, four hallways that intersect to form a rectangle), create a globally consistent map for the whole environment
...
And, given any local imperfection, accumulating
such imperfections over time can lead to arbitrarily large global errors between a map, at
the macrolevel, and the real world, as shown in figure 5
...
Such global error is usually
irrelevant to mobile robot localization and navigation
...
However, an extremely
large loop still eventually returns to the same spot, and the robot must be able to note this
fact in its map
...

In some of the earliest work attempting to solve the cyclic environment problem,
Kuipers and Byun [94] used a purely topological representation of the environment, reasoning that the topological representation only captures the most abstract, most important

Mobile Robot Localization

255

features and avoids a great deal of irrelevant detail
...
g
...

To check this hypothesis, the robot explicitly plans and moves to adjacent nodes to see if
its perceptual readings are consistent with the cycle hypothesis
...
Two important features are
found in most autonomous mapping systems that claim to solve the cycle detection problem
...
Each submap is treated as
a single sensor during the robot’s position update
...
Because odometry is relatively accurate over small distances, the relative registration
of features and raw sensor strikes in a local submap will be quite accurate
...
In a sense, this strategy at the very least defers the
problem of very large cyclic environments by increasing the map scale that can be handled
well by the robot
...
Some recent automatic mapping systems will attempt to
identify cycles by associating a topology with the set of metric submaps, explicitly identifying the loops first at the topological level
...
In
the case of [74], the topological level loop is determined by performing correspondence
tests between submaps, postulating that two submaps represent the same place in the environment when the correspondence is good
...

For example, the globally unique localization methods described in section 5
...
It is notable that the automatic mapping research
of the present has, in many ways, returned to the basic topological correctness question that
was at the heart of some of the earliest automatic mapping research in mobile robotics more
than a decade ago
...

5
...
2
...
All of these strategies tend to assume that
the environment is either unchanging or changes in ways that are virtually insignificant
...
However, in a great many

256

Chapter 5

cases this assumption is incorrect
...
Another class of dynamic environments are spaces such as factory floors and
warehouses, where the objects being stored redefine the topology of the pathways on a dayto-day basis as shipments are moved in and out
...
The subject of continuous mapping, or mapping of dynamic environments, is to some degree a direct outgrowth of successful strategies for automatic mapping of unfamiliar environments
...
Thus, a mapping system can become a map-modifying system by simply continuing
to operate
...
If map construction requires off-line global optimization, then the desire to make
small-grained, incremental adjustments to the map is more difficult to satisfy
...
One common argument for handling the detection of, for instance, humans
in the environment is that mechanisms such as c t can take care of all features that did not
deserve to be mapped in the first place
...

Each time the robot’s most recent local evidence grid is used to update a region of the global
occupancy grid, extraneous occupied cells in the global occupancy grid are freed if the local
occupancy grid detected no objects (with high confidence) at those same positions
...
When a robot’s sensor system can reliably detect
the difference between a wall and a human, using, for example, a vision system, then the
problem of mapping in dynamic environments will become significantly more straightforward
...
There is
still a great deal of research activity focusing on the general map-building and localization
problem [22, 23, 55, 63, 80, 134, 147, 156]
...
e
...
This field is certain to produce significant new results in the next several years, and as the perceptual power of robots
improves we expect the payoff to be greatest here
...
1

Planning and Navigation

Introduction

This book has focused on the elements of a mobile robot that are critical to robust mobility:
the kinematics of locomotion; sensors for determining the robot’s environmental context;
and techniques for localizing with respect to its map
...
Cognition generally represents the purposeful decision-making and
execution that a system utilizes to achieve its highest-order goals
...
Given partial knowledge about its environment and a
goal position or series of positions, navigation encompasses the ability of the robot to act
based on its knowledge and sensor values so as to reach its goal positions as efficiently and
as reliably as possible
...

Within the mobile robotics research community, a great many approaches have been
proposed for solving the navigation problem
...
The key difference between various navigation architectures is the manner in which they decompose the problem into
smaller subunits
...
3 below, we describe the most popular navigation architectures, contrasting their relative strengths and weaknesses
...
2 we discuss two key additional competences required for
mobile robot navigation
...
Path
planning is a strategic problem-solving competence, as the robot must decide what to do
over the long term to achieve its goals
...

Given real-time sensor readings, obstacle avoidance means modulating the trajectory of the
robot in order to avoid collisions
...


258

6
...
In fact, when applied to physical systems such as mobile
robots, planning and reacting have a strong complementarity, each being critical to the
other’s success
...
During execution, the robot must react to unforeseen
events (e
...
, obstacles) in such a way as to still reach the goal
...
Without
planning, the reacting effort cannot guide the overall robot behavior to reach a distant goal
– again, the robot will never reach its goal
...
Suppose that a robot M at time i has a map M i and an initial belief state
b i
...
Thus the robot must be at location p at or before timestep n
...
With this formulation, a plan q is nothing more than one or more trajectories from b i to b g
...

Of course the problem is that the latter condition may not be met
...
Furthermore, the real-world environment is dynamic
...

In order to reach its goal nonetheless, the robot must incorporate new information gained
during plan execution
...
This is precisely where reacting becomes relevant
...
At times, unanticipated new information will require changes to the robot’s strategic plans, and so ideally the planner also
incorporates new information as that new information is received
...
This theoretical extreme, at which point the concept of planning and the concept of
reacting merge, is called integrated planning and execution and is discussed in section
6
...
4
...


Planning and Navigation

259

A useful concept throughout this discussion of robot architecture involves whether particular design decisions sacrifice the system’s ability to achieve a desired goal whenever a
solution exists
...
More formally, the robot system is
complete if and only if, for all possible problems (i
...
, initial belief states, maps, and goals),
when there exists a trajectory to the goal belief state, the system will achieve the goal belief
state (see [27] for further details)
...
As you may expect, achieving completeness is an ambitious goal
...
Analytically, it is important to understand how completeness is compromised by
each particular system
...
For greater detail, refer
to [21, 30, chapter 25]
...
2
...
Interestingly, the path planning problem for a manipulator with, for instance, six degrees of freedom is far more complex than that of a differential-drive robot operating in a flat
environment
...
Furthermore, industrial
robots often operate at the fastest possible speed because of the economic impact of high
throughput on a factory line
...
In contrast, a number of
mobile robots operate at such low speeds that dynamics are rarely considered during path
planning, further simplifying the mobile robot instantiation of the problem
...
Path planning for manipulator robots and, indeed, even for most
mobile robots, is formally done in a representation called configuration space
...
g
...
Every state or configuration of
the robot can be described with k real values: q 1 , …, q k
...
This
description is convenient because it allows us to describe the complex 3D shape of the robot
with a single k -dimensional point
...
e
...
The goal of path planning is to find a path in the

260

Chapter 6

θ2

y
1

θ2

2

θ1

3
4

θ1

x
a)

b)

Figure 6
...
The motion is thereby constraint by the obstacles 1 to 4
...


physical space from the initial position of the arm to the goal position, avoiding all collisions with the obstacles
...
But in configuration space the problem is straightforward
...

Figure 6
...
The robot’s goal is to move its end effector from position start to
end
...
It is easy to see that the solution in C-space is a line from start to end
that remains always within the free space of the robot arm
...
But, as we have seen, most robots are nonholonomic, using differential-drive systems or Ackerman steered systems
...
For details regarding the construction of the appropriate free space to solve such
path-planning problems, see [21, p
...

In mobile robotics, the most common approach is to assume for path-planning purposes
that the robot is in fact holonomic, simplifying the process tremendously
...


Planning and Navigation

261

Furthermore, mobile roboticists will often plan under the further assumption that the
robot is simply a point
...
The result of all this simplification is that the configuration space looks essentially identical to a 2D (i
...
, flat) version
of the physical space, with one important difference
...
With
this new, simplified configuration space in mind, we can now introduce common techniques for mobile robot path planning
...
The robot’s environment representation can range from a continuous geometric description to a decomposition-based geometric map or even a topological map, as described in section 5
...
The first step of any path-planning system is to
transform this possibly continuous environmental model into a discrete map suitable for the
chosen path-planning algorithm
...
We can identify three general strategies for decomposition:
1
...

2
...

3
...

The following sections present common instantiations of the road map and cell decomposition path-planning techniques, noting in each case whether completeness is sacrificed
by the particular representation
...
2
...
1 Road map path planning
Road map approaches capture the connectivity of the robot’s free space in a network of 1D
curves or lines, called road maps
...
Path planning is thus reduced to connecting the initial and goal positions of the robot to the road network, then searching for a
series of roads from the initial robot position to its goal position
...
The challenge is to construct a set of roads that together enable the
robot to go anywhere in its free space, while minimizing the number of total roads
...
We describe two road map
approaches below that achieve this result with dramatically different types of roads
...
In the case of the Voronoi diagram, roads stay as far away
as possible from obstacles
...
2
Visibility graph [21]
...
All nodes which are visible from each other are connected by
straight-line segments, defining the road map
...


Visibility graph
...
The unobstructed straight lines (roads) joining those vertices
are obviously the shortest distances between them
...
2)
...
Particularly when the environmental representation
describes objects in the environment as polygons in either continuous or discrete space, the
visibility graph search can employ the obstacle polygon descriptions readily
...

First, the size of the representation and the number of edges and nodes increase with the
number of obstacle polygons
...

The second caveat is a much more serious potential flaw: the solution paths found by
visibility graph planning tend to take the robot as close as possible to obstacles on the way

Planning and Navigation

263

start

goal

Figure 6
...
The Voronoi diagram consists of the lines constructed from all points that are
equidistant from two or more obstacles
...
The direction of movement on the Voronoi diagram is also selected so that the distance to the boundaries increases fastest
...


to the goal
...
This powerful result also means that all sense of safety,
in terms of staying a reasonable distance from obstacles, is sacrificed for this optimality
...
Of course such actions sacrifice the optimal-length results of visibility graph path planning
...
Contrasting with the visibility graph approach, a Voronoi diagram is a
complete road map method that tends to maximize the distance between the robot and
obstacles in the map
...
Plot that distance in figure 6
...
The height
increases as you move away from an obstacle
...
The Voronoi diagram consists of the
edges formed by these sharp ridge points
...
Algorithms that

264

Chapter 6

find paths on the Voronoi road map are complete just like visibility graph methods, because
the existence of a path in the free space implies the existence of one on the Voronoi diagram
as well (i
...
, both methods guarantee completeness)
...

The Voronoi diagram has an important weakness in the case of limited range localization sensors
...
If such short-range sensors are used for localization, then the
chosen path will be quite poor from a localization point of view
...

There is, however, an important subtle advantage that the Voronoi diagram method has
over most other obstacle avoidance techniques: executability
...
This control system will naturally keep the robot on
Voronoi edges, so that Voronoi-based motion can mitigate encoder inaccuracy
...

6
...
1
...
The basic cell decomposition path-planning algorithm can be summarized as follows [30]:
• Divide F into simple, connected regions called “cells”
...

• Find the cells in which the initial and goal configurations lie and search for a path in the
connectivity graph to join the initial and goal cell
...

An important aspect of cell decomposition methods is the placement of the boundaries
between cells
...

If the decomposition results in an approximation of the actual map, the system is termed

Planning and Navigation

start

265

8

7

17
10

9
5
6

1

18
14

2

4

15

3
16
11

7

2

8

5

1

13

12

6

3

4

9

goal

10

17

14

11

12

13

18

15

16

Figure 6
...


approximate cell decomposition
...
5
...
Here, we briefly
summarize these two cell decomposition techniques once again, providing greater detail
about their advantages and disadvantages relative to path planning
...
Figure 6
...
The resulting cells are each either completely free or completely occupied, and therefore path planning in the network is complete,
like the road map based methods above
...

The key disadvantage of exact cell decomposition is that the number of cells and, therefore, overall path planning computational efficiency depends upon the density and complexity of objects in the environment, just as with road mapbased systems
...
In environments that are extremely sparse,
the number of cells will be small, even if the geometric size of the environment is very

266

Chapter 6

large
...

Practically speaking, due to complexities in implementation, the exact cell decomposition
technique is used relatively rarely in mobile robot applications, although it remains a solid
choice when a lossless representation is highly desirable, for instance to preserve completeness fully
...
By contrast, approximate cell decomposition is one of
the most popular techniques for mobile robot path planning
...
These grid-based representations are
themselves fixed grid-size decompositions and so they are identical to an approximate cell
decomposition of the environment
...
15 of chapter 5, is the fixed-size cell
decomposition
...

Practically speaking, this is rarely a problem owing to the very small cell size used (e
...
,
5 cm on each side)
...

For example, NF1, often called grassfire, is an efficient and simple-to-implement technique for finding routes in such fixed-size cell arrays [96]
...
This process continues until the cell corresponding to the initial robot
position is reached
...

Given that the entire array can be in memory, each cell is only visited once when looking
for the shortest discrete path from the initial position to the goal position
...
Thus complexity does not depend on the sparseness and
density of the environment, nor on the complexity of the objects’ shapes in the environment
...
For more information on breadth-first search and
other graph search techniques, refer to [30]
...
For a large environment, even when sparse, this grid must be represented in its entirety
...

The Cye robot is an example of a commercially available robot that performs all its path
planning on a 2D 2 cm fixed-cell decomposition of the environment using a sophisticated
grassfire algorithm that avoids known obstacles and prefers known routes [42]
...
16 of chapter 5 illustrates a variable-size approximate cell decomposition
method
...
The rectangle is recursively decomposed into smaller rectangles
...
At each level of resolution only the
cells whose interiors lie entirely in the free space are used to construct the connectivity
graph
...

Starting with a coarse resolution, the resolution is reduced until either the path planner identifies a solution or a limit resolution is attained (e
...
In contrast to the
exact cell decomposition method, the approximate approach can sacrifice completeness,
but it is mathematically less involving and thus easier to implement
...

6
...
1
...
This approach was
originally invented for robot manipulator path planning and is used often and under many
variants in the mobile robotics community
...
The robot moves by following the field, just as a ball would roll downhill
...
The
superposition of all forces is applied to the robot, which, in most cases, is assumed to be a
point in the configuration space (see figure 6
...
Such an artificial potential field smoothly
guides the robot toward the goal while simultaneously avoiding known obstacles
...
The resulting
field is also a control law for the robot
...

The basic idea behind all potential field approaches is that the robot is attracted toward
the goal, while being repulsed by the obstacles that are known in advance
...
In the simplest case, we assume that the robot is a point, thus the robot’s
orientation θ is neglected and the resulting potential field is only 2D ( x, y )
...

F ( q ) = – ∇U ( q )
where ∇U ( q ) denotes the gradient vector of U at position q
...
1)

268

Chapter 6

a)

b)

c)

Figure 6
...
(a) Configuration
of the obstacles, start (top left) and goal (bottom right)
...
(c) Resulting potential field generated by the goal attractor and obstacles
...
2)

The potential field acting on the robot is then computed as the sum of the attractive field
of the goal and the repulsive fields of the obstacles:
U ( q ) = U att ( q ) + U rep ( q )

(6
...
4)

Attractive potential
...

2
1
U att ( q ) = -- k att ⋅ ρ goal ( q )
2

(6
...
This attractive potential is differentiable, leading to the attractive force F att
F att ( q ) = – ∇U att ( q )

(6
...
7)

F att ( q ) = – k att ⋅ ( q – q goal )

(6
...

Repulsive potential
...
This repulsive potential should be very strong when the robot is
close to the object, but should not influence its movement when the robot is far from the
object
...
9)

if ρ ( q ) ≥ ρ0

where k rep is again a scaling factor, ρ ( q ) is the minimal distance from q to the object and
ρ 0 the distance of influence of the object
...

If the object boundary is convex and piecewise differentiable, ρ ( q ) is differentiable
everywhere in the free configuration space
...
10)


q – q obstacle
1
1
- 1
 k  ---------- – ----  ------------- --------------------------- if ρ ( q ) ≤ ρ 0
F rep ( q ) =  rep  ρ ( q ) ρ0 ρ 2 ( q )
ρ( q)

if ρ ( q ) ≥ ρ 0
 0
The resulting force F ( q ) = F att ( q ) + F rep ( q ) acting on a point robot exposed to the
attractive and repulsive forces moves the robot away from the obstacles and toward the goal
(see figure 6
...
Under ideal conditions, by setting the robot’s velocity vector proportional
to the field force vector, the robot can be smoothly guided toward the goal, similar to a ball
rolling around obstacles and down a hill
...
One is local minima that appear
dependent on the obstacle shape and size
...
This might lead to a situation for which several minimal distances ρ ( q ) exist,
resulting in oscillation between the two closest points to the object, which could obviously
sacrifice completeness
...

The extended potential field method
...
Like all potential field methods this approach makes use of attractive and repulsive forces that originate from an artificial potential field
...

The rotation potential field assumes that the repulsive force is a function of the distance
from the obstacle and the orientation of the robot relative to the obstacle
...
6
Comparison between a classical potential field and an extended potential field
...


a gain factor which reduces the repulsive force when an obstacle is parallel to the robot’s
direction of travel, since such an object does not pose an immediate threat to the robot’s
trajectory
...

The task potential field considers the present robot velocity and from that it filters out
those obstacles that should not affect the near-term potential based on robot velocity
...
The sector Z is defined as the space which the robot will
sweep during its next movement
...
An
example comparing a classical potential field and an extended potential field is depicted in
figure 6
...

A great many variations and improvements of the potential field methods have been proposed and implemented by mobile roboticists [67, 111]
...

Potential fields are extremely easy to implement, much like the grassfire algorithm
described in section 6
...
1
...
Thus it has become a common tool in mobile robot applications in spite of its theoretical limitations
...
Of course, as the complexity of a robot increases (e
...
, large degree of

272

Chapter 6

freedom nonholonomics) and, particularly, as environment dynamics becomes more significant, then the path-planning techniques described above become inadequate for grappling
with the full scope of the problem
...

But a path planner can only take into consideration the environment obstacles that are
known to the robot in advance
...
Therefore,
it is critical that the robot modify its path in real time based on actual sensor values
...

6
...
2 Obstacle avoidance
Local obstacle avoidance focuses on changing the robot’s trajectory as informed by its sensors during robot motion
...

The obstacle avoidance algorithms presented below depend to varying degrees on the existence of a global map and on the robot’s precise knowledge of its location relative to the
map
...
We first present the simplest obstacle avoidance systems that are
used successfully in mobile robotics
...
More
sophisticated algorithms are presented afterward, taking into account recent sensor history,
robot kinematics, and even dynamics
...
2
...
1 Bug algorithm
The Bug algorithm [101, 102] is perhaps the simplest obstacle avoidance algorithm one
could imagine
...

With Bug1, the robot fully circles the object first, then departs from the point with the
shortest distance toward the goal (figure 6
...
This approach is, of course, very inefficient
but guarantees that the robot will reach any reachable goal
...
In general this improved Bug algorithm
will have significantly shorter total robot travel, as shown in figure 6
...
However, one can
still construct situations in which Bug2 is arbitrarily inefficient (i
...
, nonoptimal)
...
We mention one
more, the Tangent Bug [82], which adds range sensing and a local environmental represen-

Planning and Navigation

273

goal
L2

H1

H2

L1

start

Figure 6
...


goal
L2
H2
L1
H1

start
Figure 6
...


tation termed the local tangent graph (LTG)
...
In many simple environments, Tangent Bug
approaches globally optimal paths
...
Because of the popularity and simplicity of
Bug2, we present a specific example of obstacle avoidance using a variation of this technique
...
8
...
We will call the former state GOALSEEK and the latter WALLFOLLOW
...
The following pseudocode provides the highest-level control
code for such a decomposition:
public void bug2(position goalPos){
boolean atGoal = false;
while( ! atGoal){
position robotPos = robot
...
atan2(goalPos, robotPos)-robot
...
out
...
SetState(DONE);
atGoal = true;
}
else{
forwardVel = ComputeTranslation(&sonars);
if(robot
...
SetState(WALLFOLLOW);
}
if(robot
...
SetState(GOALSEEK);
}
}
robot
...
In this
simple example we have only right wall following, a simplification for didactic purposes
that ought not find its way into a real mobile robot program
...
Consider for our purposes a robot with a ring of sonars placed radially around the robot
...
Furthermore, the robot accepts motion commands of the form
shown above, with a rotational velocity parameter and a translational velocity parameter
...

There is one condition we must define in terms of the robot’s sonar readings, ObstaclesInWay()
...
In this simplified example,
we define translation speed as being proportional to the largest range readings in the robot’s
approximate forward direction:
private int ComputeTranslation(sensorvals sonars) {
int minSonarFront;
minSonarFront = MinRange(sonars, -pi/4
...
0);
if (minSonarFront < 200) return 0;
else return (Math
...
2
...
3
...
Alternatively, many will consider
short-range readings to be repulsive forces, again engaging in vector addition to determine
an overall motion command for the robot
...
The larger the difference, the faster the robot will turn in the direction of the longer range readings
...
abs(goalAngle) < pi/10) return 0;
else return (goalAngle * 100);
} // end ComputeGoalSeekRot() //
private int ComputeRWFRot(sensorvals sonars) {
int minLeft, minRight, desiredTurn;
minRight = MinRange(sonars, -pi/2, 0);
minLeft = MinRange(sonars, 0, pi/2);
if (Math
...
inttorange(-400, desiredTurn, 400);
return desiredTurn;
} // end else
} // end ComputeRWFRot() //

Note that the rotation function for the case of right wall following combines a general
avoidance of obstacles with a bias to turn right when there is open space on the right,
thereby staying close to the obstacle’s contour
...
For example, the wall follower could do a far better job
by mapping the contour locally and using a PID control loop to achieve and maintain a specific distance from the contour during the right wall following action
...
For example, the Bug2 approach does not take
into account robot kinematics, which can be especially important with nonholonomic
robots
...
The following obstacle avoidance techniques are designed to overcome one or more of these limitations
...
2
...
2 Vector field histogram
Borenstein, together with Koren, developed the vector field histogram (VFH) [43]
...
Later, Borenstein,
together with Ulrich, extended the VFH algorithm to yield VFH+ [150] and VFH*[149]
...
9
Polar histogram [93]
...
This can lead to undesirable and yet preventable problems in cases where the robot’s instantaneous sensor readings do not provide enough information for robust obstacle avoidance
...
This
local map is a small occupancy grid, as described in section 5
...
For obstacle avoidance, VFH generates a polar histogram as
shown in figure 6
...
The x-axis represents the angle α at which the obstacle was found and
the y-axis represents the probability P that there really is an obstacle in that direction based
on the occupancy grid’s cell values
...
First all openings large enough
for the vehicle to pass through are identified
...
The passage with the lowest cost is chosen
...
11)

target_direction = alignment of the robot path with the goal;
wheel_orientation = difference between the new direction and the current wheel orientation;
previous_direction = difference between the previously selected direction and the new
direction
...
The parameters a , b , c in the cost function G tune the
behavior of the robot
...
For a complete definition of the cost function, refer to [92]
...
10
Example of blocked directions and resulting polar histograms [54]
...

(b) Polar histogram
...


In the VFH+ improvement one of the reduction stages takes into account a simplified
model of the moving robot’s possible trajectories based on its kinematic limitations (e
...
,
turning radius for an Ackerman vehicle)
...
An obstacle thus blocks all of the robot’s allowable trajectories which pass through
the obstacle (figure 6
...
This results in a masked polar histogram where obstacles are
enlarged so that all kinematically blocked trajectories are properly taken into account (figure 6
...

6
...
2
...
The original elastic band concept applied only to holonomic

Planning and Navigation

279

Figure 6
...


vehicles and so we focus only on the bubble band extension made by Khatib, Jaouni, Chatila, and Laumod [85]
...
The
bubble is generated using a simplified model of the robot in conjunction with range information available in the robot’s map
...
11)
...
12)
...

Once the path planner’s initial trajectory has been computed and the bubble band is calculated, then modification of the planned trajectory ensues
...
These internal forces try to minimize the “slack” (energy) between adjacent bubbles
...

Of course, so far this is more akin to path optimization than obstacle avoidance
...

As the robot encounters unforeseen sensor values, the bubble band model is used to deflect
the robot from its originally intended path in a way that minimizes bubble band tension
...
However, the method is most applicable only when the environment configuration is well-known ahead of time, just as with off-line path-planning techniques
...
12
A typical bubble band (Courtesy of Raja Chatila [85])
...
2
...
4 Curvature velocity techniques
The basic curvature velocity approach
...
CVM begins by adding physical constraints from the
robot and the environment to a velocity space
...

Two types of constraints are identified: those derived from the robot’s limitations in
acceleration and speed, typically – v max < v < v max , – ω max < ω < ω max ; and, second, the
constraints from obstacles blocking certain v and ω values due to their positions
...
13
...

To achieve real-time performance the obstacles are approximated by circular objects
and the contours of the objects are divided into few intervals
...

The final decision of a new velocity ( v and ω ) is made by an objective function
...
13
Tangent curvatures for an obstacle (from [135])
...
The use of a Cartesian grid
for initial obstacle representation enables straightforward sensor fusion if, for instance, a
robot is equipped with multiple types of ranging sensors
...
However a
limitation of the method is the circular simplification of obstacle shape
...
The CVM method can also suffer from local minima since no a priori
knowledge is used by the system
...
Ko and Simmons presented an improvement of the CVM
which they named the lane curvature method, (LCM) [87] based on their experiences with
the shortcomings of CVM
...
The problems stemmed from the approximation that the robot moves only along
fixed arcs, whereas in practice the robot can change direction many times before reaching
an obstacle
...
The lane with the best properties is chosen using an objective function
...

Experimental results have demonstrated better performance as compared to CVM
...


282

Chapter 6

Figure 6
...
The rectangular window shows the
possible speeds ( v, ω ) and the overlap with obstacles in configuration space
...
2
...
5 Dynamic window approaches
Another technique for taking into account robot kinematics constraints is the dynamic
window obstacle avoidance method
...
Two such approaches are represented in the literature
...

The local dynamic window approach
...
The
velocity space is all possible sets of tuples ( v , ω ) where v is the velocity and ω is the
angular velocity
...

Given the current robot speed, the algorithm first selects a dynamic window of all tuples
( v , ω ) that can be reached within the next sample period, taking into account the acceleration capabilities of the robot and the cycle time
...
The remaining velocities are called admissible velocities
...
14,
a typical dynamic window is represented
...

A new motion direction is chosen by applying an objective function to all the admissible
velocity tuples in the dynamic window
...
15
An example of the distance transform and the resulting path as it is generated by the NF1 function
...


maintenance of large distances to obstacles and alignment to the goal heading
...
12)

heading = Measure of progress toward the goal location;
velocity = Forward velocity of the robot → encouraging fast movements;
dist = Distance to the closest obstacle in the trajectory
...
The global dynamic window approach adds, as
the name suggests, global thinking to the algorithm presented above
...
2
...
2 and
figure 6
...
Recall that NF1 labels the cells in the occupancy grid with the total distance
L to the goal
...
The
width of the region is enlarged and recalculated if the goal cannot be reached within the
constraints of this chosen region
...
The occupancy grid is updated
from range measurements as the robot moves in the environment
...
If the NF1 cannot be calculated due to the fact that the robot is
surrounded by obstacles, the method degrades to the dynamic window approach
...


284

Chapter 6

o4
o3
o
o12

l2

l4 obstacle points
l3

l1
M

forklift

Figure 6
...


The global dynamic window approach promises real-time, dynamic constraints, global
thinking, and minimal free obstacle avoidance at high speed
...
This system
produced a cycle frequency of about 15 Hz when the occupancy grid was 30 × 30 m with
a 5 cm resolution
...

6
...
2
...
The approach is adopted for raw laser data measurements and sensor fusion
using a Cartesian grid to represent the obstacles in the environment
...

As with previous methods we have described, the basic assumption is that a robot moves
in trajectories built up by circular arcs, defined as curvatures i c
...
16
...

For example, the search space window V s is defined for a differential-drive robot to be
all the possible speeds of the left and right wheels, v r, v l
...
Finally, an objective function chooses
the best speed and direction by trading off goal direction, speed, and distance until collision
...
Two robot chassis were used,
one with synchro-drive kinematics and one with tricycle kinematics
...
17
Flow diagram of the ASL approach [122]
...
Thus the demonstration of reliable obstacle avoidance with the forklift is an impressive result
...
In their experiments, the authors
used lookup tables of up to 2
...

6
...
2
...
It merges
three approaches in order to have a system that moves smoothly without stopping for
replanning and is able to carefully nudge its way through when it is safe to do so
...
An overview is given in figure 6
...


286

Chapter 6

Local path planning is performed by NF1
...
This keeps path
updates as simple as possible
...

6
...
2
...
It was also used in [108] to take into
account more precise geometric, kinematic, and dynamic constraints
...
Global reasoning was added to the
approach and termed the global nearness diagram (GND) in [110], somewhat similar to the
GDWA extension to the DWA, but based on a workspace representation (instead of configuration space) and updating free space in addition to obstacle information
...
2
...
9 Gradient method
Realizing that current computer technology allows fast recalculation of wavefront propagation techniques, the gradient method [89] formulates a grid-based global path planning
that takes into account closeness to obstacles and allows generating continuous interpolations of the gradient direction at any given point in the grid
...

6
...
2
...
The ego-dynamic space is equally applicable to workspace
and configuration space methods
...
In
combination with the proposed spatial window (PF) to represent acceleration capabilities,
the approach was tested in conjunction with the ND and PF methods and gives satisfactory
results for circular holonomic robots, with plans to extend it to nonholonomic, noncircular
architectures
...
2
...
11 Other approaches
The approaches described above are some of the most popularly referenced obstacle avoidance systems
...
For example Tzafestas and Tzafestas [148] provide an
overview of fuzzy and neurofuzzy approaches to obstacle avoidance
...
The network is then applied to a model of a four-wheeled vehicle
...
In the paper of Vanualailai, Nakagiri, and Ha [153] the Liapunov functions are used to implement a control strategy for two-point masses moving in a known
environment
...
The antitargets are then used when building up the control laws for the system
...

6
...
2
...
1 gives an overview on the different approaches for obstacle avoidance
...
1
Overview of the most popular obstacle avoidance algorithms

architecture

cycle
time

remarks
efficient in many inefficient, very inefficient,
cases, robust
robust
robust

tactile
range

local tangent
graph

tested
robots

sensors

performance

tactile

path
planner

global
map

local
map

view
local
local
local

dynamics

point
point

kinematics

Bug2
[101, 102]

point

other requisites

Tangent Bug
[82]

Bug

Bug1
[101, 102]

method

shape

model fidelity

kinematics
dynamics
local
histogram grid

basic
simplistic
local
histogram grid

basic
simplistic
essentially local
histogram grid

global
map
path
planner

polygonal
required

various

required

various

architecture
remarks

20 MHz, 386 AT
local minima,
oscillating trajectories

fewer local
minima

local minima

other requisites

66 MHz, 486 PC 66 MHz, 486 PC

cycle
time
27 ms
6 ms

6 … 242 ms

tested
robots

synchro-drive
(hexagonal)

nonholonomic
(GuideCane)

nonholonomic
(GuideCane)

sensors

range

sonars

sonars

model fidelity

local
map

polygonal

view

global

shape

simplistic

circle

circle

local

exact

C-space

VFH
[43]

VFH+
[92, 150]

VFH*
[149]

Vector Field Histogram (VFH)
method

C-space

Bubble band Elastic band
[86]
[85]

Bubble band

288
Chapter 6

Table 6
...
7 ms

450 MHz, PC

turning into corridors

tested
robots

synchro-drive (circular)

synchro-drive
(circular)

synchro-drive (circular)

180° FOV SCK
laser scanner

sensors

24 sonars ring,
30° FOV laser

24 sonars ring,
30° FOV laser

path
planner

NF1
24 sonars ring, 56 infrared
ring, stereo camera

global
map

C-space grid

model fidelity

local
map

shape

Curvature velocity
method [135]

Lane curvature
method [87]

Dynamic window
approach [69]

Curvature velocity
method

Global dynamic
window [44]

Dynamic window

Planning and Navigation
289

Table 6
...
circle)

2x 180° FOV SCK
laser scanner

graph (topological),
NF1

grid

remarks

architecture

cycle
time

tested
robots

sensors

path
planner

global
map

local
map

view

dynamics

kinematics

shape

other requisites

100 ms
(core algorithm: 10 ms)

holonomic (circular)

180° FOV SCK
laser scanner

NF1

180° FOV
distance sensor

fused

local perceptual
space

grid

global

basic

exact

polygon

Schlegel
[128]
model fidelity

grid

local

exact

local

global

(holonomic)

polygon

global

(holonomic)

exact

circle (but general
formulation)

ASL approach
[122]

basic

circle (but general
formulation)

circle

Nearness diagram
[107, 108]

method

basic

Global nearness
diagram [110]

Gradient method
[89]

Other

290
Chapter 6

Table 6
...
3

291

Navigation Architectures

Given techniques for path planning, obstacle avoidance, localization, and perceptual interpretation, how do we combine all of these into one complete robot system for a real-world
application? One way to proceed would be to custom-design an application-specific, monolithic software system that implements everything for a specific purpose
...
But for any sophisticated and long-term mobile robot system, the
issue of mobility architecture should be addressed in a principled manner
...
Using a well-designed navigation architecture has
a number of concrete advantages:
6
...
1 Modularity for code reuse and sharing
Basic software engineering principles embrace software modularity, and the same general
motivations apply equally to mobile robot applications
...
For example, one may introduce a Sick laser
rangefinder to a robot that previously used only ultrasonic rangefinders
...

We would like to change part of the robot’s competence without causing a string of side
effects that force us to revisit the functioning of other robot competences
...
In a more extreme example, it would be ideal if the nonholonomic
obstacle avoidance module could remain untouched even when the robot’s kinematic structure changes from a tricycle chassis to a differential-drive chassis
...
3
...
The
basic reason is that a robot architecture includes multiple types of control functionality
(e
...
, obstacle avoidance, path planning, path execution, etc
...
For example, consider collision avoidance
...
At the other extreme, high-level planning
and task-based decision-making are required for robots to perform useful roles in their

292

Chapter 6

environment
...
A final advantage of localization is associated with learning
...
Such targeted learning is likely to be
the first strategy that yields successful integration of learning and traditional mobile robotics
...

One way to characterize a particular architecture is by its decomposition of the robot’s
software
...
For descriptions of such high-level architectures, refer to [2] and [26]
...
For this purpose, two decompositions are particularly relevant: temporal decomposition and control decomposition
...
3
...
Then, in section 6
...
4 we
present three types of navigation architectures, describing for each architecture an implemented mobile robot case study
...
3
...
Decompositions also serve as a way to classify various mobile robots
into a more quantitative taxonomy
...
Control decomposition identifies the way in which various control outputs within the mobile robot architecture combine
to yield the mobile robot’s physical actions
...

6
...
3
...
Figure 6
...
In this figure, the most real-time processes are shown at the
bottom of the stack, with the highest category being occupied by processes with no realtime demands
...
In contrast, a quasi real-time layer may
capture processes that require, for example, 0
...
A tactical layer can represent decision-making that

Planning and Navigation

293

off-line planning

strategic decisions

tactical decisions

quasi real-time

hard real-time
Figure 6
...


affects the robot’s immediate actions and is therefore subject to some temporal constraints,
while a strategic or off-line layer represents decisions that affect the robot’s behavior over
the long term, with few temporal constraints on the module’s response time
...
These are not
set in stone; there are exceptions
...
A particular module’s sensor response time can be defined as the
amount of time between acquisition of a sensor-based event and a corresponding change in
the output of the module
...
18 the sensor response time
tends to increase
...
At the highest-level modules, sensor response
can be limited by slow and deliberate decision-making processes
...
Temporal depth is a useful concept applying to the temporal window
that affects the module’s output, both backward and forward in time
...
Temporal memory describes the historical time span of sensor input that is used by
the module to determine the next output
...


294

Chapter 6

Path planning

0
...
19
Sample four-level temporal decomposition of a simple navigating mobile robot
...


Spatial locality
...
Real-time modules tend to control wheel speed and orientation, controlling spatially localized behavior
...

Context specificity
...
Lowest-level modules tend to produce outputs directly as a result of immediate sensor inputs, using little context and therefore being
relatively context insensitive
...
For strategic decision-making, given the same sensor values, dramatically different
outputs are nevertheless conceivable depending on other contextual parameters
...
19, which shows a temporal decomposition of a simplistic navigation architecture into four modules
...
An emergency stop
module uses short-range optical sensors and bumpers to cut current to the motors when it
predicts an imminent collision
...
The next module uses longerrange laser rangefinding sensor returns to identify obstacles well ahead of the robot and
make minor course deviations
...


Planning and Navigation

295

action
specification

r

perceptual
output

Figure 6
...


Note that the cycle time, or bandwidth, of the modules changes by orders of magnitude
between adjacent modules
...

6
...
3
...
Presentation of control decomposition requires the
evaluator to understand the basic principles of discrete systems representation and analysis
...

Consider the robot algorithm and the physical robot instantiation (i
...
, the robot form
and its environment) to be members of an overall system whose connectivity we wish to
examine
...
The system is closed, meaning that the input
of every module m is the output of one or more modules in M
...
The one output can be connected to any number of other
modules inputs
...
Usually by r we represent the physical object on which the robot algorithm is
intended to have impact, and from which the robot algorithm derives perceptual inputs
...
The input of r represents the complete
action specification for the physical robot
...
Of course the physical robot may have many possible degrees of
freedom and, equivalently, many discrete sensors
...
For simplicity we will refer to the input of r as O and to the robot’s sensor readings I
...

Control decomposition discriminates between different types of control pathways
through the portion of this system comprising the robot algorithm
...
20 we can consider a perfectly linear, or sequential control pathway
...
21
Example of a pure parallel decomposition
...
A pure serial
architecture has advantages relating to predictability and verifiability
...
Therefore, the overall
behavior of the system can be evaluated using well-known discrete forward simulation
methods
...
21 depicts the extreme opposite of pure serial control, a fully parallel control
architecture
...
Intuitively, the fully parallel system distributes responsibility for the system’s control
output O across multiple modules, possibly simultaneously
...
Here, the control flow
contains a combination step at which point the result of multiple modules may impact O
in arbitrary ways
...
In this case, called switched parallel,
the system has a parallel decomposition but at any particular instant in time the output O
can be attributed to one specific module
...
For instance, suppose that a robot
has an obstacle avoidance module and a path-following module
...


Planning and Navigation

297

The advantage of such switched control is particularly clear if switching is relatively
rare
...
If each module has been tested independently, there is a good chance
the switched control system will also perform well
...
First, the overall behavior of the robot can become quite poor if the switching is itself
a high-frequency event
...
Another disadvantage of switched control is that the robot has no path-following bias when it is obstacle avoiding (and vice versa)
...

In contrast, the much more complex mixed parallel model allows control at any given
time to be shared between multiple modules
...
Then the output of the
robot would never be due to a single module, but would result from the mathematical combination of both modules outputs
...
Whereas with
switched control most poor behavior arises out of inopportune switching behavior, in
mixed control the robot’s behavior can be quite poor even more readily
...
Thus great care must be taken
in mixed parallel control implementations to fashion mixture formulas and individual
module specifications that lead to effective mixed results
...
Arkin [2] proposes the motor-schema architecture in which behaviors
(i
...
, modules in the above discussion) map sensor value vectors to motor value vectors
...
In contrast, Maes [103, 104] produces a
switched parallel architecture by creating a behavior network in which a behavior is chosen
discretely by comparing and updating activation levels for each behavior
...
For a
further discussion, see [2]
...
Because such systems often include truly parallel, multithreaded
implementations, the intricacies of robot-environment interaction and sensor timing

298

Chapter 6

Real World
Environment

Path

Position
Feedback

Obstacle
Avoidance

Environment Model
Local Map

Perception

Cognition

Position
Position
Local Map
Local Map
Perception
to Action

Mixed Approach

Localization

Motion Control

Figure 6
...


required to properly represent all conceivable module-module interactions can be difficult
or impossible to simulate
...

An important advantage of parallel control is its biomimetic aspect
...
g
...

6
...
4 Case studies: tiered robot architectures
We have described temporal and control decompositions of robot architecture, with the
common theme that the roboticist is always composing multiple modules together to make
up that architecture
...
Clearly, robot behaviors play an important role at the real-time
levels of robot control, for example, path-following and obstacle avoidance
...
Higher still, a global planner
could generate paths to provide tactical tasks with global foresight
...
The relevant figure is
shown here again as figure 6
...

In such a representation, the arcs represent aspects of real-time and non real-time competence
...
In contrast, PID position feedback loops bypass all high-level processing, tying the
perception of encoder values directly to lowest-level PID control loops in motion control
...
23
A general tiered mobile robot navigation architecture based on a temporal decomposition
...

Using the tools of this chapter, we can now present this same architecture from the perspective of a temporal decomposition of functionality
...

Figure 6
...
This figure is
similar to figure 6
...
However, the boundaries separating each module from adjacent modules are specific to robot
navigation
...
Path
planning uses all available global information in non real time to identify the right sequence
of local actions for the robot
...
At its lowest level,
this includes motor velocity PID loops
...

In between the path planner and real-time control tiers sits the executive, which is
responsible for mediating the interface between planning and execution
...
The executive is also responsible for recognizing failure, saving (placing the
robot in a stable state), and even re-initiating the planner as necessary
...
24
A two-tiered architecture for off-line planning
...

It is interesting to note the similarity between this general architecture, used in many
specialized forms in mobile robotics today, and the architecture implemented by Shakey,
one of the very first mobile robots, in 1969 [115]
...
The implementation of each LLA included the use of
sensor values in a tight loop just as in today’s behaviors
...
Finally, the topmost tier
for Shakey was STRIPS (Stanford Research Institute Planning System), which provided
global look ahead and planning, delivering a series of tasks to the intermediate executive
layer for execution
...
23 is useful as a model for robot
navigation, variant implementations in the robotics community can be quite different
...
For broader discussions
of various robot architectures, see [26]
...
3
...
1 Off-line planning
Certainly the simplest possible integration of planning and execution is no integration at
all
...
24, in which there are only two software tiers
...

The strategy of leaving out a planner altogether is of course extremely limiting
...
However
such robotic systems do exist, and this method can be useful in two cases:
Static route-based applications
...
For example, in factory or warehouse
settings, a robot may travel a single looping route by following a buried guidewire
...
The Chips mobile robot is an example of a museum robot that also uses this architecture (118)
...
Furthermore, it has only twelve discrete locations at which it is allowed to stop
...

Extreme reliability demands
...
Since planning software can be the most sophisticated
portion of a mobile robot’s software system, and since in theory at least planning can take
time exponential to the complexity of the problem, imposing hard temporal constraints on
successful planning is difficult if not impossible
...
A real-world example of off-line planning for this reason can be seen in the contingency plans designed for space shuttle flights
...
The fundamental goal is to provide an absolute upper limit
on the amount of time that passes before the astronauts begin resolving the problem, sacrificing a great deal of ground time and paperwork to achieve this performance guarantee
...
3
...
2 Episodic planning
The fundamental information-theoretic disadvantage of planning off-line is that, during
run-time, the robot is sure to encounter perceptual inputs that provide information, and it
would be rational to take this additional information into account during subsequent execution
...

As shown in figure 6
...
23
...
Planning is compu-

302

Chapter 6

Path planning
Global
knowledge, map

Local
knowledge
Executive

Real-time controller
behavior 1

behavior 2

behavior 3

PID motion control

Robot Hardware

Figure 6
...


tationally intensive, and therefore planning too frequently would have serious disadvantages
...
g
...
At such points, the executive will invoke the planner to generate, for
example, a new path to the goal
...
For example, in [129] the path-following behavior returns failure if it fails to make progress for a number of seconds
...

A common technique to delay planning until more information has been acquired is
called deferred planning
...
For example, the commercially available Cye robot can be given a set of goal locations
...
Upon reaching this goal location, its map will have changed
based on the perceptual information extracted during motion
...

The robot Pygmalion implements an episodic planning architecture along with a more
sophisticated strategy when encountering unforeseen obstacles in its way [36, 122]
...
This is valuable because the

Planning and Navigation

Global
knowledge, map

303

Global Executive

Real-time controller
behavior 1

behavior 2

behavior 3

PID motion control

Robot Hardware

Figure 6
...


robot is not kinematically symmetric, and so servoing through a particular obstacle course
may be easier in one direction than the other
...
Thus, if repeated attempts to
clear the obstacle fail, then the robot’s executive will temporarily cut the topological connection between the two appropriate nodes and will launch the planner again, generating a
new set of waypoints to the goal
...
25), a geometric path planner will generate a path from the robot’s
current position to the next waypoint
...
They combine the versatility of responding to environmental changes
and new goals with the fast response of a tactical executive tier and behaviors that control
real-time robot motion
...
25, it is common in such systems to have both
a short-term local map and a more strategic global map
...

6
...
4
...
But limiting this discussion to the question of navigation architectures
leads to what may at first seem a degenerate solution
...
26 may look similar to the off-line planning architecture of figure 6
...
In this case, the planner tier
has disappeared because there is no longer a temporal decomposition between the executive

304

Chapter 6

and the planner
...

The idea of speeding up planning to the point that its execution time is no longer significant may seem wishful
...
Consider the work of
Stentz [139]
...
Using advanced caching techniques from computer science, Stentz optimized a grassfire path-planning algorithm called
D* so that global path planning would be possible within the basic control loop of the executive
...
26, is an architecture in which the local and global representations are the same, and in which the executive has all global planning functionality
required for the problem built in
...
Of course, the method is computationally challenging and will not be practical in more complex environments until processor speeds
increase even further
...

The somewhat recent success of an integrated planning and execution method, D*,
underlines the fact that the designer of a robot navigation architecture must consider not
only all aspects of the robot and its environmental task but must also consider the state of
processor and memory technology
...
All forms of technological
progress, from robot sensor inventions to microprocessor speed increases, are likely to catalyze new revolutions in mobile robot architecture as previously unimaginable tactics
become realities
...
D
...
World Scientific Series in Robotics and Intelligent Systems
...
Ltd
...

Arkin, R
...
, Behavior-Based Robotics
...

Bar-Shalom, Y
...
-R
...
Norwood, MA, Artech House, 1993
...
, Everet,t H
...
, Feng, L
...
Natick, MA, A
...
Peters, Ltd
...

Borenstein, J
...
R
...
, Where Am I? Sensors and Methods for
Mobile Robot Positioning
...
Available at http://
www-personal
...
umich
...
htm
...
M
...
New York, John Wiley
& Sons, 1970
...
(editor), Artificial Intelligence Techniques, a Comprehensive Catalogue
...

Canudas de Wit, C
...
, and Bastin G
...

New York, Spinger-Verlag, 1996
...
J
...
, Transformation and Weighting in Regression
...

Cox, I
...
, Wilfong, G
...
(editors), Autonomous Robot Vehicles
...

Craig, J
...
, Introduction to Robotics: Mechanics and Control
...
Boston,
Addison-Wesley, 1989
...
W
...
Upper Saddle River, NJ, Prentice
Hall, 1989
...
F
...
Bristol, UK, Adam Hilger,
1991
...
R
...
, Applied Regression Analysis
...
New York, John
Wiley & Sons, 1988
...
R
...
New York, Natick, MA, A
...
Peters, Ltd
...


306

[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]

Bibliography

Faugeras, O
...
Campridge MIT Press, MA, 1993
...
R
...
J
...
Palo
Alto, CA, Morgan Kaufmann, 1987
...
M
...
G
...
Boston, AddisonWesley, 1993
...
, Flynn, A
...
Natick, MA,
A
...
Peters, Ltd
...
[also available in German and French]
...
, Bonasso, R
...
, Murphy, R
...
(editors), Artificial Intelligence and
Mobile Robots; Case Studies of Successful Robot Systems
...

Latombe, J
...
, Robot Motion Planning
...

Lee, D
...
Cambridge, UK, Cambridge University Press, 1996
...
E
...
F
...
Norwood, MA, Kluwer Academic Publishers, 1992
...
, Durrant-Whyte, H
...
, Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach
...

Mason, M
...
Cambridge, MA, MIT Press,
2001
...
R
...
Cambridge, MA, MIT Press, 2000
...
, Interleaving Planning and Execution for Autonomous Robots, Norwood, MA, Kluwer Academic Publishers, 1997
...
H
...

Ritter, G
...
, Wilson, J
...
, Handbook of Computer Vision Algorithms in Image Algebra
...

Russell, S
...
, Artificial Intelligence, a Modern Approach
...

Schraft, R
...
, Schmierer, G
...
Natick, MA, A
...
Peters, Ltd, 2000
...

Sciavicco, L
...
, Modeling and Control of Robot Manipulators
...

Todd, D
...
Kogan Page Ltd,
1985
...
V
...
van Leeuwen (editor),
Handbook of Theoretical Computer Science, Cambridge, MA, MIT Press, 1990,
Volume A, chapter 5, 255–300
...
O
...
A
...
, “Feature-Based Multi-Hypothesis
Localization and Tracking for Mobile Robots Using Geometric Constraints,” in
Proceedings of the IEEE International Conference on Robotics and Automation
(ICRA’2002), Washington, DC, May 11–15, 2002
...
O
...
, Tomatis, N
...
, “Real-Time Obstacle Avoidance
for Polygonal Robots with a Reduced Dynamic Window,” in Proceedings of the
IEEE International Conference on Robotics and Automation (ICRA 2002), Washington, DC, May 11–15, 2002
...
O
...
Y
...
3210, 1997, 42–53,
...
O
...
, “Improving Robustness and Precision in Mobile Robot
Localization by Using Laser Range Finding and Monocular Vision,” in Proceedings
of the Third European Workshop on Advanced Mobile Robots (Eurobot 99), Zurich,
September 6–9, 1999
...
, “Exponential Stabilization of a Mobile Robot,” in Proceedings of 3rd
European Control Conference, Rome, September 1995
...
, Cardei V
...
, “A Comparison of Computational Color Constancy
Algorithms
...
11: 972–984, 2002
...
L
...
J
...
S
...
” International Journal of Computer Vision, 12:43–77, 1994
...
, Nourbakhsh, I
...

Borenstein, J
...
, “The Vector Field Histogram – Fast Obstacle Avoidance
for Mobile Robots
...

Brock, O
...
, “High-Speed Navigation Using the Global Dynamic Window
Approach,” in Proceeding of the IEEE International Conference on Robotics and
Automation, Detroit, May 1999
...
, “A Robust Layered Control System for a Mobile Robot,” IEEE Transactions of Robotics and Automation, RA-2:14–23, March 1986
...
B
...
Z
...

Bruce, J
...
, and Veloso, M
...

Burgard,W
...
, Fox, D
...
, Lakemeyer, G
...
, Steiner,
W
...
, “Experiences with an Interactive Museum Tour-Guide Robot,” Artificial Intelligence, 114, 1–53, 2000
...
, Derr, A
...
, Cremers, A
...
C
...

Burgard, W
...
, Henning, D
...

Burgard, W
...
, Jans, H
...
, Thrun, S
...


308

[52]
[53]
[54]
[55]

[56]
[57]
[58]
[59]
[60]
[61]
[62]
[63]
[64]
[65]
[66]
[67]

Bibliography

Campion, G
...
, D’Andréa-Novel, B
...
” IEEE Transactions on Robotics and Automation, 12, No
...

Canudas de Wit, C
...
J
...
” IEEE Transactions on Robotics and Automation,
37, 1791–1797, 1993
...
, Estier, T
...
, “Fascination of Down Scaling–Alice the Sugar
Cube Robot
...

Castellanos, J
...
, Tardos, J
...
, Schmidt, G
...

Chen, C
...
, Quinn, R
...
, “A Crash Avoidance System Based upon the Cockroach
Escape Response Circuit,” in Proceedings of the IEEE International Conference on
Robotics and Automation, Albuquerque, NM, April 1997
...
, Crowley, J
...
, “Position Estimation for a Mobile Robot Using Vision
and Odometry,” in Proceedings of the IEEE International Conference on Robotics
and Automation, Nice, France, May 1992
...
S
...
, “Accurate Odometry and Error Modelling for a Mobile
Robot,” in Proceedings of the IEEE International Conference on Robotics and Automation, Albuquerque, NM, April 1997
...
, Walker, S
...
, Burdick, J
...
” The
International Journal of Robotics Research, 19, 126–148, 2000
...
J
...
J
...
” Artificial Intelligence, 66, 311–44, 1994
...
, Guzikowski, R
...
, Pangels, H
...
, Whittaker, W
...
,
“NAVLAB: An Autonomous Navigation Testbed
...

Dugan, B
...

Elfes, A
...

Ens, J
...
, “An Investigation of Methods for Determining Depth from
Focus
...
on Pattern Analysis and Machine Intelligence, 15: 97–
108, 1993
...
S
...
D
...

Falcone, E
...
, Porter, E
...
, “The Personal Rover Project:
The Comprehensive Design of a Domestic Personal Robot,” Robotics and Autonomous Systems, Special Issue on Socially Interactive Robots, 42, 245–258, 2003
...
J
...
, Slotin, J-J
...
, “Real-Time Path Planning Using Harmonic Potentials
in Dynamic Environments,” in Proceedings of the IEEE International Conference
on Robotics and Automation, Albuquerque, NM, April 1997
...
, “KLD-Sampling: Adaptive Particle Filters and Mobile Robot Localization
...
MIT Press, 2001
...
, Burgard,W
...
, “The Dynamic Window Approach to Collision
Avoidance
...

Gander,W
...
H
...
, “Least-Squares Fitting of Circles and
Ellipses
...
34, no
...
558–578, December 1994
...
R
...
” Technical Report Logic-87-2
...

Golub, G
...
, “Calculating the Singular Values and Pseudo-Inverse of a
Matrix
...

Gutmann, J
...
, Burgard, W
...
, Konolige, K
...
Conference of Intelligent Robots and Systems (IROS’98), Victoria, B
...
, Canada, October 1998
...
S
...
, “Incremental Mapping of Large Cyclic Environments,”
in Proceedings of the IEEE International Symposium on Computational Intelligence
in Robotics and Automation (CIRA), Monterey, November 1999
...
, “Humanoid Robots in Waseda University - Hadaly-2 and
WABIAN,” in IARP First International Workshop on Humanoid and Human
Friendly Robotics, Tsukuba, Japan, October 1998
...
, Kleeman, L
...

Horn, B
...
P
...
G
...

Horswill, I
...

Jacobs, R
...
, “Planning Smooth Paths for Mobile Robots,” in Proceeding
...
2–7
...
, Kirkwood-Watts, C
...
, “Distributed Map-making and Navigation in Dynamic Environments,” in Proceedings of the 1998 IEEE/RSJ Intl
...
C
...

Jensfelt, P
...
, Wijk, O
...
, “Feature Based Condensation for
Mobile Robot Localization,” in Proceedings of the IEEE International Conference
on Robotics and Automation, San Francisco, May 24–28, 2000
...
, Rivlin, E
...
, “A New Range-Sensor Based Globally Convergent Navigation Algorithm for Mobile Robots,” in Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, April 1996
...
, “Pose Determination and Tracking in Image Mosaic Based Vehicle Position Estimation,” in Proceeding of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’00), Takamatsu, Japan, October 31–November 5,
2000
...
, Chatila, R
...
490–496
...
, Jaouni, H
...
, Laumod, J
...
, “Dynamic Path Modification for
Car-Like Nonholonomic Mobile Robots,” in Proceedings of IEEE International
Conference on Robotics and Automation, Albuquerque, NM, April 1997
...
, Quinlan, S
...

Ko, N
...
, Simmons, R
...
C
...

Koenig, S
...
, “Xavier: A Robot Navigation Architecture Based on Partially Observable Markov Decision Process Models,” in [20]
Konolige, K
...

Konolige, K
...

Koperski, K
...
, Han, J
...
in Proceedings of the ACM SIGMOD Workshop on Research Issues
on Data Mining and Knowledge Discovery, Montreal, June 1996
...
, Borenstein, J
...
382-384
...
, Borenstein, J
...

Kuipers, B
...
-T
...
” Journal of Robotics and Autonomous Systems, 8:47–63, 1991
...
, Nourbakhsh, I
...
, Siegwar,t R
...

Latombe, J-C
...
, “Robot Motion Planning: A Distributed Presentation
Approach
...

Lauria, M
...
, Siegwart, R
...
” in Video Proceedings of the 2000 IEEE International Conference on Robotics and Automation, San Francisco, May 24–28, 2000
...
, Latombe, J
...
, “Landmark-Based Robot Navigation,” in Proceedings
of the Tenth National Conference on AI
...

Lazanas, A
...
C
...
” Artificial Intelligence, 76:285–317, 1995
...
-O
...
-J
...
, You, B
...
, Oh, S
...
: “A Stabile TargetTracking Control for Unicycle Mobile Robots,” in Proceedings of the 2000 IEEE/
RSJ International Conference on Intelligent Robots and Systems, Takamatsu, Japan,
October 31–November 5, 2000
...
, Skewis, T
...
” IEEE Transactions on Systems, Man, and Cybernetics, 20:1990, pp
...

[102] Lumelsky, V
...
, “Path-Planning Strategies for a Point Mobile Automaton Moving Amidst Unknown Obstacles of Arbitrary Shape,” in [10]
...
, “The Dynamics of Action Selection,” in Proceedings of the Eleventh
International Joint Conference on Artificial Intelligence, Detroit, 1989, pp
...

[104] Maes, P
...
1990
...
, Siegwart, R
...

[106] Maybeck,P
...
, “The Kalman Filter: An Introduction to Concepts,” in [10]
...
, Montano, L
...

[108] Minguez, J
...
, “Robot Navigation in Very Complex, Dense, and Cluttered Indoor / Outdoor Environments,” in Proceeding of International Federation of
Automatic Control (IFAC2002), Barcelona, April 2002
...
, Montano, L
...
, “Reactive Collision Avoidance for Navigation with Dynamic Constraints,” in Proceedings of the 2002 IEEE/RSJ International
Conference on Intelligent Robots and Systems, 2002
...
, Montano, L
...
, Alami, R
...

[111] Montano, L
...
R
...
2
...
526–532
...
and Elfes, A
...
, “High Resolution Maps from Wide Angle Sonar,” in
Proceedings of the 1985 IEEE International Conference on Robotics and Automation, IEEE Press, At
...
116–121
...
, Chatila, R
...
207–216
...
K
...
” IEEE CVPR, pp
...

[115] Nilsson, N
...
, “Shakey the Robot
...
325
...
R
...


312

Bibliography

[117] Nourbakhsh, I
...
, Andre
...
, Tomasi, C
...
R
...

[118] Nourbakhsh, I
...
, Bobenage, J
...
, Lutz, R
...
, “An
Affective Mobile Educator with a Full-Time Job
...

[119] Nourbakhsh, I
...
, Powers, R
...
, “DERVISH, an Office-Navigation
Robot
...

[120] Pell, B
...
, Chien, S
...
, Muscettola, N
...
, Wagner, M
...
, “An Autonomous Spacecraft Agent Prototype
...
5, 1–27, 1998
...
P
...
” IEEE Transactions on Pattern
Analysis and Machine Intelligence (PAMI), 9:523–531, 1987
...
, Siegwart, R
...

[123] Pratt, J
...
, “Intuitive Control of a Planar Bipedal Walking Robot,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA
'98), Leuven, Belgium, May 16–21, 1998
[124] Raibert, M
...
, Brown, H
...
, Jr
...
, “Experiments in balance with a
3D One-Legged Hopping Machine
...

[125] Ringrose, R
...

[126] Rowe, A
...
, Nourbakhsh, I
...
Kuaii, Hawaii, December
2001
...
, Tomasi, C
...
, “The Earth Mover’s Distance as a Metric for
Image Retrieval,” STAN-CS-TN-98-86, Stanford University, 1998
...
, “Fast Local Obstacle under Kinematic and Dynamic Constraints,” in
Proceedings of the IEEE International Conference on Intelligent Robot and Systems
(IROS 98), Victoria, B
...
Canada 1998, pp
...

[129] Schultz, A
...
, “Continuous Localization Using Evidence Grids,” in Proceedings of the IEEE International Conference on Robotics and Automation
(ICRA’98), May 16–21, 1998, Leuven, Belgium, pp
...

[130] Schweitzer, G
...
, “ROBOTRAC – a Mobile Manipulator Platform for
Rough Terrain,” in Proceedings of the International Symposium on Advanced Robot
Technology (ISART), Tokyo, Japan, March, 1991
...
, Malik, J
...
” IEEE Transactions
on Pattern Analysis and Machine Intelligence (PAMI), 82:888–905, 2000
...
, et al
...
02: A Large Scale Installation of Personal
Robots
...

[133] Siegwart, R
...
, Estier, T
...
, “Innovative Design for
Wheeled Locomotion in Rough Terrain,” Journal of Robotics and Autonomous Systems, 40:151–162, 2002
...
, Dudek, G
...
C
...

[135] Simmons, R
...

[136] Smith, R
...
, Cheeseman, P
...
J
...
T
...
167–193
...
J
...
, “Exponential Control Law for a Mobile Robot:
Extension to Path Following,” IEEE Transactions on Robotics and Automation,
9:837–842, 1993
...
M
...
, Brunner, B
...
, “Autonomous VisionBased Navigation of the Nanokhod Rover,” in Proceedings of i-SAIRAS 6th International Symposium on Artificial Intelligence, Robotics and Automation in Space,
Montrea1, June 18–22, 2001
...
, “The Focussed D* Algorithm for Real-Time Replanning,” in Proceedings of IJCAI-95, August 1995
...
S
...
, Rey, L
...
485–493
...
, Facchinetti, C
...
C
...
” IEEE Transactions on Pattern Analysis and
Machine Intelligence, 16:1002–1017, 1994
...
, Burgard, W
...
, “A Probabilistic Approach to Concurrent Mapping
and Localization for Mobile Robots
...
1998
...
, et al
...

[144] Thrun, S
...
, Burgard, W
...
, “Robust Monte Carlo Localization for
Mobile Robots,” Artificial Intelligence, 128:99–141, 2001
...
, Gutmann, J
...
, Fox, D
...
, Kuipers, B
...

[146] Tomasi, C
...
, “Image Deformations Are Better Than Optical Flow
...

[147] Tomatis, N
...
, Siegwart, R
...
” Robotics and
Autonomous Systems 44, 3–14, 2003
...
S
...
G
...

[149] Ulrich, I
...
, “VFH*: Local Obstacle Avoidance with Look-Ahead Verification,” in Proceedings of the IEEE International Conference on Robotics and
Automation, San Francisco, May 24–28, 2000
...
, Borenstein, J
...

[151] Ulrich, I
...
, “Appearance-Based Obstacle Detection with Monocular
Color Vision,” in the Proceedings of the AAAI National Conference on Artificial
Intelligence
...
August 2000
...
, Nourbakhsh, I
...
1023–1029, April 2000
...
, Nakagiri, S
...
, “Collision Avoidance in a Two-Point System
via Liapunov’s Second Method
...

[154] Van Winnendael, M
...
, Bertrand, R
...
, “Nanokhod Microrover
Heading towards Mars,” in Proceedings of the Fifth International Symposium on
Artificial Intelligence, Robotics and Automation in Space (ESA SP-440), Noordwijk, Netherlands, pp
...

[155] Weiss, G
...
, Puttkamer, E
...
595–601, September 12-16, 1994
...
H
...
O
...
J
...

[157] Yamauchi, B
...
, Adams, W
...

[158] Zhang, Z
...
” Microsoft
Research Technical Report 98-71
...
microsoft
...

Referenced Webpages
[159] Fisher, R
...
(editor), “CVonline: On-line Compendium of Computer Vision,”
Available at www
...
ed
...
uk/CVonline/
...
intel
...

[161] Source code release site: www
...
cmu
...

[162] Newton Labs website: www
...
com
...
personalrobots
...

Interesting Internet Links to Mobile Robots
General homepages with mainly mobile robots
http://ranier
...
nasa
...
html
http://www-robotics
...
umass
...
html
http://www
...
mit
...
ri
...
edu/project_lists/index
...
roboticsclub
...
html
http://www
...
com/
http://asl
...
ch (at EPFL)
http://robotics
...
ch (at EPFL)
http://www
...
cmu
...
cs
...
edu/~mercator/ (at CMU)
Homepages with mainly wheeled mobile robots
http://www
...
fr/RIA/RIA
...
cs
...
edu/~illah/lab
...
cs
...
edu/projects/amrl/amrl
...
cc
...
edu/ai/robot-lab/
http://www
...
umich
...
html
Homepages with walking and climbing robots
http://www
...
ac
...
fzi
...
automation
...
fi/
http://www
...
mit
...
epfl
...
cs
...
edu/~personalrover/
Homepages for robots on the Web
http://telerobot
...
uwa
...
au/secn/links
...
IEOR
...
EDU/~goldberg/art/telerobotics-links
...
cs
...
de/~rhino/
http://www
...
cmu
...
mech
...
edu
...
ieor
...
edu/~goldberg/GC/www
...
rwu
...
graham
...
cs
...
edu/~illah/SAGE/index
...
dislocation
...
informatik
...
de/rr/
http://www
...
org/Eventscope/main
Title: Robotics
Description: Robotics