Title: Introduction to Mathematical Optimal Control Theory
Description: We use optimal control theory as an extension of the calculus of variations for dynamic systems with one independent variable, usually time, in which control (input) variables are determined to maximize (or minimize) some measure of the performance (output) of a system while satisfying specified constraints. We give an explicit mathematical statement of “the optimal control problem” and then study one of the most interesting approaches to solving it: the Pontryagin Maximum Principle. Using optimal control theory, problems not amenable to classical methods, such as the problem of designing a spacecraft attitude control system that minimizes fuel expenditure, have been made feasible by the development of digital computers.



Introduction to mathematical optimal
control theory
A Project
submitted to The Abdus Salam International Center
for Theoretical Physics (ICTP)
Trieste-Italy
in partial fulfillment of the requirement for

Postgraduate Diploma in Mathematics

By
Ngouanfo Fopa Edith Laure
Supervisor:
Professor Giovanni Bellettini

1

International Center for Theoretical Physics
http://www ... it
Trieste, 34151, Italy

Contents

Acknowledgments  1
Introduction  2
Chapter 1  ...
Chapter 2. The optimal control problem  5
2.1. The mathematical model  5
2.2. Performance criterion  6
2.3. Physical constraints  7
2.4. Optimality criteria  8
2.5. An example of control of production and consumption  9
2.6. A problem with no minimum  9
2.7. Existence of optimal controls  10
Chapter 3. Methods of solving optimal control problems
3.1. Variational approach
3.2. Euler-Lagrange equations
3.3. ...
3.4. Maximum principles
...
May God help to never stop
your charity services
...

Secondly, I wish to thank Professor Giovanni Bellettini, who encouraged me to work in
this area of mathematics
...
I am
indebted to the head of the mathematics department, Professor Ramadas Ramakrishnan, to
Professor Stefano Luzzatto, and to Mabilo, Sandra, Patrizia and Adelaide for their special
care toward my humble personality
...
Finally, I wish
to thank my mother Meli Odile for her unconditional love
...
In
order to implement this influence, engineers for instance build devices that incorporate
various mathematical techniques
...
The techniques are closely related to those of classical calculus of variations [8]
and to other areas of optimization theory
...
Thus, one can define optimal control theory as an extension of the calculus of variations for dynamic systems with one independent variable, usually time, in
which control (input) variables are determined to maximize (or minimize) some measure
of the performance (output) of a system while satisfying specified constraints
...
We shall give an explicit mathematical
statement of “the optimal control problem”
...
The announcement of the Pontryagin Maximum Principle in the late 1950s can properly be regarded as the birth
of the mathematical theory of optimal control
...

Let E be a Banach space
...
For each f ∈ E∗, we associate a map φf : E −→ R defined by
    φf(x) := ⟨f, x⟩ = f(x)
for all x ∈ E
...
1
...

Theorem 1
...
Let K be a closed and convex subset of E
...

Proof
...
7]
...

Definition 1
...
The weak star topology is the coarsest topology on E ∗ for which all
maps φx are continuous
...

Remark 1
...

(i) The weak and the weak star topologies are Hausdorff
...
3 and Proposition 3
...

(ii) If E is finite dimensional, then the strong and the weak topologies coincide
...

Proposition 1
...
Let {fn } ⊂ E ∗ be a sequence
...

3

1
...
See [3, Proposition 3
...

Let E be a topological space. ...
Definition 1.6. ... We say that f is lower semicontinuous on E if it is lower semicontinuous at each point x ∈ E. ...
Definition 1.7. ...
Definition 1.8. ... We say that f is sequentially lower semicontinuous on E if f is sequentially lower semicontinuous at each point x ∈ E. ...
Definition 1.9. A function f : X −→ Y is said to be coercive if for every compact set K1 ⊆ Y there exists a compact set K2 ⊆ X such that f(X \ K2) ⊆ Y \ K1.
Proposition 1. ... A function f : Rn −→ Rm is coercive iff
    lim_{‖x‖→+∞} ‖f(x)‖ = +∞.
Concerning lower semicontinuity and coerciveness, we refer to [5]
...
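The proposition admits a quick numerical sanity check. The sketch below is our own illustration (not from the text): it estimates min ‖f(x)‖ over random points on spheres of growing radius R; for a coercive map this minimum must keep growing, while a bounded map such as sin fails the test.

```python
import numpy as np

def looks_coercive(f, dim=1, radii=(1e1, 1e2, 1e3, 1e4), n_dirs=64, seed=0):
    """Heuristic check: min ||f(x)|| over ||x|| = R should grow with R."""
    rng = np.random.default_rng(seed)
    mins = []
    for R in radii:
        dirs = rng.normal(size=(n_dirs, dim))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        mins.append(min(np.linalg.norm(np.atleast_1d(f(R * d))) for d in dirs))
    # coercive maps give a strictly growing sequence of minima
    return all(a < b for a, b in zip(mins, mins[1:]))

print(looks_coercive(lambda x: x @ x))   # f(x) = ||x||^2 is coercive -> True
print(looks_coercive(np.sin))            # bounded, hence not coercive -> False
```

This is only a finite sample of directions and radii, so it is a plausibility check, not a proof.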

2.1. The mathematical model
The objective is to obtain the simplest mathematical description that adequately predicts
the response of the physical system, and usually this is not straightforward. ... The system is described by the ordinary differential equation
    ẋ(t) = f(t, x(t), α(t)),  x0 = x(t0).    (2.1)

Let the system be described by (2.1). ...

We shall call every measurable function
- α : I −→ Rm , a control;
- x : I −→ Rn , a space trajectory,
for some given n, m ∈ N. ... In the case where f does not depend
explicitly on t, the system is said to be autonomous. ... (2.1) must have a unique
solution

- f to be continuous in the variables t, x, α and continuously differentiable with
respect to its second argument
... (2.1) is required to hold outside the discontinuity
points of the control
...
... (2.1) and further, its solution, which is called the response of the system corresponding to
the control α for the initial condition x(t0) = x0, is piecewise continuously differentiable on
its maximal interval of existence
...
Definition 2.1 (Piecewise Continuous Functions). ... , N. ...
The discontinuity points of one such control are by definition those
points where at least one of its components jumps
...
Definition 2.2 (Admissible control). ...
    (2.2)
is said to be an admissible control³. ...

Remark 2
...
It follows from the definition of admissible control that every admissible
control α ∈ A[t0 , s] is bounded
...
2.2. Performance criterion
... It may be defined in the so-called Lagrange form,
    P(α) := ∫_{t0}^{s} r(t, x(t), α(t)) dt.    (2.3)
We shall assume that the Lagrangian density r(t, x, α) is defined and continuous, together
with its partial derivatives ∇x r(t, x, α), on R × Rn × Rm
...

²See [10, pages 1-12] for a summary of local existence and uniqueness theorems for the solutions of
nonlinear ODEs, as well as theorems on their continuous dependence and differentiability with respect to
parameters.
³... (2.2) is required to hold at any continuity point. ...
... or in the Mayer form,
    P(α) := φ(t0, x(t0), s, x(s)),    (2.4)
where φ : R × Rn × R × Rn −→ R is a real-valued function. ...

More generally, we may consider the cost functional in the Bolza form, which corresponds to the sum of an integral term and a terminal term,
    P(α) = φ(t0, x(t0), s, x(s)) + ∫_{t0}^{s} r(t, x(t), α(t)) dt.
...
... , xn), and an additional differential equation
    ẋr(t) = r(t, x(t), α(t));  xr(t0) = 0.
A problem in the Lagrange form (2.3) is thereby transformed into one of the Mayer form (2.4). ...
- ... (2.4) can be rewritten in the Lagrange form (2.3). ...
- Finally, the foregoing transformations can be used to rewrite Bolza problems ...
2.3. Physical constraints
A great variety of constraints may be imposed in an optimal control problem
...
They can be of equality or inequality type
...
...
Definition 2.5 (Feasible Control, Feasible Pair). ...

The pair (α, x) is then called a feasible pair. ...


2.4. Optimality criteria
Having defined a performance criterion, the set of physical constraints to be satisfied, and
the set of admissible controls, one can then state the optimal control problem as follows:
“find an admissible control α ∈ A[t0, s] which satisfies the physical constraints in such a
manner that the cost functional P(α) has a minimum value.” ...


This assignment is global in nature and does not require consideration of a norm. ... Having chosen the class
of controls to be piecewise continuous functions (Definition 2.1), a possible norm is
    ‖α‖∞ = sup_{t∈(t0,s)} ‖α(t)‖.
Under the additional assumption that the controls are continuously differentiable between
two successive discontinuities [θk, θk+1], k = 0, . . . , N − 1, another possible norm is
    ‖α‖_{1,∞} = sup_{t∈(t0,s)} ‖α(t)‖ + sup_{t∈(t0,s)} ‖α̇(t)‖.
...
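For concreteness, both norms can be evaluated on a sample piecewise control. The sketch below is our own illustration (the control α, its discontinuity at t = 1, and the grid are arbitrary choices); the derivative is taken separately on each smooth piece, as the text requires.

```python
import numpy as np

# hypothetical scalar control on [0, 2]: alpha(t) = t on [0, 1), alpha(t) = 2 - 2t on [1, 2]
t = np.linspace(0.0, 2.0, 200001)
alpha = np.where(t < 1.0, t, 2.0 - 2.0 * t)

# sup norm: largest |alpha(t)| over the whole interval (attained at t = 2, value 2)
sup_norm = np.max(np.abs(alpha))

# derivative between the discontinuities: 1 on [0, 1), -2 on (1, 2]
deriv = np.where(t < 1.0, 1.0, -2.0)

# the (1, infinity)-norm adds the sup of |d alpha/dt| over the smooth pieces
norm_1_inf = sup_norm + np.max(np.abs(deriv))
print(sup_norm, norm_1_inf)
```

For this control the sup norm is 2 and the (1, ∞)-norm is 2 + 2 = 4, showing how the second norm also penalizes steep pieces.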
2.5. An example of control of production and consumption
... Let us begin to construct a
mathematical model by setting
    x(t) = amount of output produced at time t ≥ 0.
We suppose that we consume some fraction of our output at each time, and likewise can
reinvest the remaining fraction. ...

This will be our control, and it is subject to the obvious constraint that
    0 ≤ α(t) ≤ 1 for each time t ≥ 0.
Given such a control, the corresponding dynamics are provided by the ODE
    ẋ(t) = kα(t)x(t),  x(0) = x0,
the constant k > 0 modelling the growth rate of our reinvestment. ...

    P(α) = ∫₀ (1 − α(t)) x(t) dt.
The meaning is that we want to maximize our total consumption of the output, our consumption at a given time t being (1 − α(t))x(t). ...
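To get a feel for the consume-versus-reinvest trade-off, here is a small simulation of our own (the horizon T = 2, the rate k = 1, the initial stock x0 = 1, and the switching time t = 1 are arbitrary illustrative choices, not values from the text). It integrates ẋ = kαx with the explicit Euler method and accumulates the consumed amount ∫(1 − α)x dt.

```python
def total_consumption(alpha, k=1.0, x0=1.0, T=2.0, n=20000):
    """Integrate dx/dt = k*alpha(t)*x and the consumption (1-alpha(t))*x by explicit Euler."""
    dt = T / n
    x, consumed = x0, 0.0
    for i in range(n):
        a = alpha(i * dt)
        consumed += (1.0 - a) * x * dt   # consume the fraction (1 - a) of output
        x += k * a * x * dt              # reinvest the fraction a of output
    return consumed

# constant policies versus an invest-first, consume-later switch
print(total_consumption(lambda t: 0.0))                       # never reinvest: payoff x0*T = 2
print(total_consumption(lambda t: 1.0))                       # never consume: payoff 0
print(total_consumption(lambda t: 0.0 if t > 1.0 else 1.0))   # switch policy, roughly e
```

The switching policy beats both constant policies here, suggesting (consistently with the bang-bang results later in the notes) that the optimum is of switching type.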


2.6. A problem with no minimum
Consider the problem to minimize the functional
    P(α) = ∫₀¹ (x(t)² + α(t)²) dt
for α ∈ C([0, 1]), subject to the terminal constraint
    x(1) = 1,
where the response x(t) is given by the linear initial value problem
    ẋ(t) = α(t);  x(0) = 0.
...
7
...
We show that this variational problem does
not have a solution
...

Now, consider the sequence of functions {xk }k∈N in C 1 ([0, 1]) defined by xk (t) := tk
...
Overall, we have thus shown that
inf P (x) = 1
...

2.7. Existence of optimal controls
We present a result that states and proves conditions of solvability for an extremum
problem for a functional P. ...

Suppose we need to find a function minimizing the functional P on a given set of admissible controls A
...
This
means that there exists a minimizing sequence, that is, a sequence of elements αk ∈ A
such that P (αk ) −→ inf A
...



Assume that A is a bounded subset of the normed vector space (E, ‖·‖). In this case, the
sequence is uniformly bounded, that is, ‖αk‖ ≤ c for all k. ... If we
denote the subsequence by {αk}k∈N again, the weak convergence αk ⇀ α means that the scalar
products ⟨αk, λ⟩ converge to ⟨α, λ⟩ for every λ ∈ E. ... But
it is not clear if this limit belongs to the set of admissible controls. ...
It is known from the theory of Hilbert spaces that every convex closed subset of a Hilbert space is weakly closed, that is, it contains the limits of all
weakly converging sequences of its elements. ...
However, it is not known whether the functional P
achieves its lower bound at α
...
Every convex continuous functional is weakly lower semicontinuous, that is,
    P(α) ≤ lim inf_{k→∞} P(αk)    (2.6)
whenever αk ⇀ α. ... As follows from (2.6), ...

Since the subject of our consideration is not an arbitrary weakly converging sequence,
but the one that minimizes the functional P, {P(αk)}k∈N not only has converging subsequences, but converges itself to the lower bound of the functional P on A. ... (2.6) can be written in the form
    P(α) ≤ lim inf P(αk) = inf P(A),
which means that the value of the functional P at the element α does not exceed its lower
bound on the set A. ...

Since none of the elements of a set of numbers can be less than its lower bound, the
foregoing relation turns out to be an equality. ... Thus, the admissible control α is a solution to
the problem in question. ...



Theorem 2
...
The problem of minimizing a convex lower semicontinuous functional
bounded from below on a convex closed bounded subset of a Hilbert space is solvable
...

Theorem 2
...
Let E be a topological space
...
sequentially coercive and sequentially lower semicontinuous)
...


Remark 2
...

(i) Even if an optimal control exists, it may not be unique
...


CHAPTER 3

Methods of solving optimal control problems

3.1. Variational approach
Sufficient conditions for an optimal control problem to have a solution, such as those
discussed above, are not at all useful in helping us find solutions. ...
For many optimal control problems, such conditions allow one to single out
a small subset of controls, sometimes even a single control. ... However, it should be
reemphasized that necessary conditions may delineate a nonempty set of candidates even
though an optimal control does not exist for the problem. ... More general optimal control problems,
with control and state path constraints, shall be considered later on in the statement of
the maximum principle
...
3.2. Euler-Lagrange equations
Consider the problem to minimize a cost functional of the form
    P(α) := ∫_{t0}^{s} r(t, x(t), α(t)) dt    (3.1)
subject to
    ẋ(t) = f(t, x(t), α(t)),  x(t0) = x0.    (3.2)
A control function α, on [t0, s], together
with the initial value problem (3.2), ...

Thus, we may speak of finding a control, since the corresponding response is implied
...
Theorem 3.1 (First-Order Necessary Conditions). Consider the problem to minimize the cost functional
    P(α) := ∫_{t0}^{s} r(t, x(t), α(t)) dt    (3.3)
subject to
    ẋ(t) = f(t, x(t), α(t)),  x(t0) = x0.    (3.4)
Suppose that α∗ ∈ C([t0, s], Rm) is a local minimizer for the problem with
respect to the ‖·‖∞ norm, and let x∗ ∈ C¹([t0, s], Rn) denote the corresponding response
...
... (3.4), the system
    ṗ(t) = −∇x r(t, x(t), α(t)) − ∇x f(t, x(t), α(t)) · p(t),  p(s) = 0,    (3.5)
    ...    (3.6)
for t0 ≤ t ≤ s. ... (3.5) is
often referred to as the adjoint equation (or the costate equation)
...
Consider a one-parameter family of comparison controls
v(t; η) := α(t) + ηq(t),
where q(t) ∈ C([t0 , s], Rm ) is some fixed function, and η ∈ R is a parameter with |η|
sufficiently small
...
4 exists, is unique, and is differentiable with respect to
η, for all η ∈ Bη0 (0) and for all t ∈ [t0 , s]1
...
Since the control v(t; η) is admissible and its associated response
is y(t; η), we have, remembering that f(t, y(t; η), v(t; η)) − ẏ(t; η) = 0,
    P(v(t; η)) = ∫_{t0}^{s} ( r(t, y(t; η), v(t; η)) + p(t) · [f(t, y(t; η), v(t; η)) − ẏ(t; η)] ) dt
               = ∫_{t0}^{s} ( r(t, y(t; η), v(t; η)) + p(t) · f(t, y(t; η), v(t; η)) + ṗ(t) · y(t; η) ) dt
                 − p(s) · y(s; η) + p(t0) · y(t0; η),
1See

[4], Appendix A
...
2
...

Based on the differentiability properties of r and y, and by the theorem of differentiation
under the integral sign [4, Theorem 2.59, p. ...], ... Hence, α∗ being a local minimizer, we have
    (d/dη) P(v(·; η))|η=0 = 0.
...


... ∇η y(s; 0), where we have used the condition ∇η y(t0; η) = 0, which is a consequence of the validity of the initial
condition y(t0; η) = x0
...
Because the effect of
a variation of the control on the course of the response is hard to determine (i.e., ∇η y(t; 0)),
we choose p∗(t), t0 ≤ t ≤ s, so as to obey the differential equation
    ṗ(t) = −∇x f · p(t) − ∇x r,    (3.7)
...

Note that (3
...
50, p
...
e
...
That is, the condition
    0 = ∫_{t0}^{s} [∇α r + ∇α f · p∗(t)] · q(t) dt
must hold for any q ∈ C([t0, s], Rm)
... , n, which in turn implies the necessary condition that
    0 = ∇αi r + ∇αi f · p∗(t),    i = 1, . . .
...

Remark 3
...

- In the special case where f (t, x(t), α(t)) = α(t) with n = m,
(3
...
Then from (3
...

˙
- It is convenient to introduce the Hamiltonian function
    H : R × Rn × Rm × Rn −→ R
associated with the optimal control problem (3.1)-(3.2), by adjoining the right-hand
side of the differential equations to the cost integrand as
    H(t, x, α, p) := r(t, x, α) + p · f(t, x, α).    (3.8)
Thus the Euler-Lagrange equations can be written as
    ẋ = ∇p H,  x(t0) = x0,    (3.9)
    ṗ = −∇x H,    (3.10)
    0 = ∇α H.    (3.11)
Note that a necessary condition for the triple (x∗, α∗, p∗) to give a
local minimum of P is that α∗(t) be a stationary point of the Hamiltonian function
with x∗(t) and p∗(t), at each t ∈ [t0, s]. ...
- If neither r nor f depends explicitly on t, we get (d/dt)H ≡ 0, hence H is constant
along an optimal trajectory. ... (3.9), (3.10) and (3.11) are necessary conditions both
for a minimization and for a maximization problem
...
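To see these three conditions in action, consider a hypothetical linear-quadratic instance of our own choosing (not an example from the text): minimize ∫₀¹ (x² + α²) dt subject to ẋ = α, x(0) = 0, x(1) = 1, where the fixed endpoint x(1) = 1 replaces the free-endpoint condition p(s) = 0. Here H = x² + α² + pα, so the Euler-Lagrange system reads ẋ = α, ṗ = −2x, 0 = 2α + p, and its solution is x(t) = sinh(t)/sinh(1). The sketch below verifies the three conditions numerically by central finite differences.

```python
import math

s1 = math.sinh(1.0)
x = lambda t: math.sinh(t) / s1           # candidate optimal response
a = lambda t: math.cosh(t) / s1           # alpha* = dx*/dt
p = lambda t: -2.0 * math.cosh(t) / s1    # costate, from 0 = 2*alpha + p

h = 1e-6
for t in (0.1, 0.5, 0.9):
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    pdot = (p(t + h) - p(t - h)) / (2 * h)
    assert abs(xdot - a(t)) < 1e-6        # state equation  xdot = dH/dp = alpha
    assert abs(pdot + 2 * x(t)) < 1e-6    # adjoint equation pdot = -dH/dx = -2x
    assert abs(2 * a(t) + p(t)) < 1e-12   # stationarity     0 = dH/dalpha
print("Euler-Lagrange conditions hold along the candidate")
```

All three conditions hold along the candidate triple, which is exactly what Theorem 3.1 requires of a local minimizer.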
2
...
3
...


(3
...

2

∗ ∗
Candidate solutions (α∗, x∗, p∗) are those satisfying the Euler-Lagrange equations, i.e.,
    ẋ∗(t) = ∇p H = 2[1 − α∗(t)];  x∗(0) = 1,
    ṗ∗(t) = −∇x H;  p∗(1) = 0,
    0 = ∇α H.
... Finally, substituting the optimal control candidate
back into (3. ...), ... Integrating the latter equation, and drawing
the results together, we obtain
    α∗(t) = 2(t − 1),
    x∗(t) = −2t² + 6t + 1,
    p∗(t) = t − 1.
...

Finally, we illustrate the optimality of the control α∗ by considering the modified controls
v(t; η) := α∗ (t) + ηq(t), and their associated responses y(t; η)
...

˙
Note that the minimum of P (v(t; η)) is always attained at η = 0
...
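The closed-form candidate can also be checked directly. The following sketch of our own verifies numerically that x∗ satisfies the state equation ẋ = 2[1 − α∗] with x∗(0) = 1, and that the boundary condition p∗(1) = 0 holds.

```python
# candidate solution reported above
alpha = lambda t: 2.0 * (t - 1.0)
x     = lambda t: -2.0 * t * t + 6.0 * t + 1.0
p     = lambda t: t - 1.0

# boundary conditions
assert x(0.0) == 1.0 and p(1.0) == 0.0

# state equation xdot = 2*(1 - alpha), checked by central finite differences
h = 1e-6
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    assert abs(xdot - 2.0 * (1.0 - alpha(t))) < 1e-6
print("candidate satisfies the state equation and the boundary conditions")
```

Indeed ẋ∗(t) = 6 − 4t equals 2[1 − 2(t − 1)] for every t, so the reported triple is consistent.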
3.3. ...
That is, what we want to determine is the
global minimum value of P, not merely local minima. ... Conditions under which the necessary conditions are also sufficient for optimality (i.e.,
provide a global optimal control) are essentially given by convexity
...
Theorem 3.4 (Mangasarian Sufficient Conditions). Consider the problem to minimize the cost functional
    P(α) := ∫_{t0}^{s} r(t, x(t), α(t)) dt    (3.13)
subject to
    ẋ(t) = f(t, x(t), α(t)),  x(t0) = x0.    (3.14)
Suppose that the triple
α∗ ∈ C([t0, s], Rm), x∗ ∈ C¹([t0, s], Rn), p∗ ∈ C¹([t0, s], Rn) satisfies the Euler-Lagrange
equations. ...
    (3.15)
for t0 ≤ t ≤ s
...


Proof. ... Since the triple (α∗, x∗, p∗) satisfies the Euler-Lagrange
equations, we obtain
    P(α) − P(α∗) ≥ ∫_{t0}^{s} ( −[∇x f · p∗(t) + ṗ∗(t)] · [x(t) − x∗(t)] − [∇α f · p∗(t)] · [α(t) − α∗(t)] ) dt.
...
4
...


Note that the integrand is nonnegative due to (3.15). That is,
    P(α) ≥ P(α∗)
for each feasible control
...
Remark 3.5. ...
- The Mangasarian sufficient conditions have limited applicability: in most practical problems, either the terminal cost, the integral cost, or the differential equations fail to be convex or concave
...
Example 3.6. ... Moreover, the candidate solution (α∗, x∗, p∗) obtained satisfies
the Euler-Lagrange equations, for each t ∈ [0, 1]. ... (3.12)). ...
3.4. Maximum principles
In this section, we shall present more general necessary conditions of
optimality for those optimal control problems having path constraints. ...

3.4.1. ...
Throughout
this subsection, we shall consider the problem to minimize the cost functional
... Before we can state the PMP, some notation and
analysis are needed
...


    c(t) = ∫_{t0}^{t} r(x(τ), α(τ)) dτ.
If α is feasible, x(s) = xs for some s ≥ t0, and the associated cost is P(α, s) = c(s)
...


Theorem 3.7. Consider the optimal control problem to minimize the functional
    P(α, s) = ∫_{t0}^{s} r(x(t), α(t)) dt    (3.16)
subject to
    ẋ(t) = f(x(t), α(t)),  x(t0) = x0,  x(s) = xs,    (3.17)
    α(t) ∈ A.    (3.18)
Let r and f be continuous in (x, α)
and have continuous first partial derivatives with respect to x, for all (x, α) ∈ Rn × Rm.
... Then, there exists an (n + 1)-dimensional piecewise
continuously differentiable vector function p̄∗ = (p0∗, p1∗, . . . , pn∗) ≠ (0, . . . , 0) such that
(i) ṗ̄∗(t) = −∇z H(x∗(t), α∗(t), p̄∗(t))    (3.19)
for every t0 ≤ t ≤ s∗, and the function H(x∗(t), v, p̄∗(t)) attains its minimum on A at v = α∗(t):
    H(x∗(t), v, p̄∗(t)) ≥ H(x∗(t), α∗(t), p̄∗(t)),  ∀v ∈ A;    (3.20)
(ii) the following relations
    p0∗(t) = const ≥ 0,    (3.21)
    ...    (3.22)
are satisfied at every t ∈ [t0, s∗]. ...

Remark 3.8. ...
- The Pontryagin maximum principle as stated in Theorem 3.7 ... If instead, one wishes to maximize the cost
functional (3.16), ... the sign in (3.21) should be reversed,
    p0∗(t) = const ≤ 0.
... (3.20) should not be made a maximum condition for
a maximization problem!
- It may appear on first thought that the requirement in (3. ...). It turns out, however, that the requirement (3. ...). This gives a slight link to the first-order necessary conditions. ...
3.4.2. Extensions of the Pontryagin maximum principle. ... The first extension is for the case where the terminal
condition x(s) = xs is replaced by the target set condition x(s) ∈ Xs ⊆ Rn. ... Regarding target
set terminal conditions, we have the following theorem:
Theorem 3.9. Consider the optimal control problem to
minimize the functional
    P(α, s) = ∫_{t0}^{s} r(x(t), α(t)) dt    (3.24)
subject to
    ẋ(t) = f(x(t), α(t)),  x(t0) = x0,  x(s) ∈ Xs,    (3.25)
    α(t) ∈ A.    (3.26)
Let r and f be continuous in (x, α) and have continuous first partial derivatives with respect to x, for all (x, α) ∈ Rn × Rm.
... Then there exists a piecewise continuously differentiable vector function
p̄∗ = (p0∗, p1∗, . . . , pn∗) solving (3. ...) as in Theorem 3.7. ... Moreover, (p1∗(s∗), p2∗(s∗), . . . , pn∗(s∗)) is orthogonal to the tangent
plane, Tx∗(s∗)Xs, to Xs at x∗(s∗):
    p∗(s∗) · d = 0,  ∀d ∈ Tx∗(s∗)Xs.
Notice that when the set Xs degenerates into a point, the transversality condition at s∗
can be replaced by the condition that the optimal response x pass through this point, as
in Theorem 3.7.

In order to state the PMP for non-autonomous problems, ... as in Theorem 3.7, but for the case in which r and
f depend explicitly on time (the control region A is assumed independent of time). ...


It is obvious that xn+1(t) = t, t0 ≤ t ≤ s. ...
Next, we apply the autonomous version of the PMP with transversal conditions (Theorem 3.9). ...
Using the same notations as in Section 3.4.1 for the extended response, z := (c, x), and the
extended system, g := (r, f), the equations giving the (n + 2) adjoint variables (p̄, pn+1) :=
(p0, p1, . . . , pn, pn+1) read
    ṗi(t) = −p̄(t) · ∇xi g(xn+1, x, α),    i = 0, . . . , n.
...

Overall, a version of the PMP for non-autonomous systems is as follows:
Theorem 3.10. Consider the optimal control problem to minimize the functional
    P(α, s) = ∫_{t0}^{s} r(t, x(t), α(t)) dt    (3.27)
subject to
    ...    (3.28)
with fixed initial time t0 and free terminal time s.
... Suppose that (α∗, s∗) ∈ C([t0, T], Rm) × [t0, T) is a minimizer for the problem,
and let z∗ denote the optimal extended response. ... Then there exists a piecewise continuously differentiable vector function p̄∗ = (p0∗, p1∗, . . . , pn∗) ≠ (0, 0, . . . , 0) such that
    ...    (3.29)


with H(t, x, α, p̄) = p̄ · g(t, x, α), and:
(i) the function H(t, x∗(t), v, p̄∗(t)) attains its minimum on A at v = α∗(t):
    H(t, x∗(t), v, p̄∗(t)) ≥ H(t, x∗(t), α∗(t), p̄∗(t)),  ∀v ∈ A;    (3.30)
(ii) the relations
    ...    (3.31)
    ...    (3.32)
are satisfied at every t ∈ [t0, s∗]. ...
3.4.3. Application: Linear Time-Optimal Problems. ...
    ...    (3.34)
subject to
    ẋ(t) = F x(t) + Gα(t),  x(t0) = x0,  x(s) = xs,  α(t) ∈ A,    (3.35)
where F and G are, respectively, n × n and
n × m matrices independent of t. ... On the other hand, when A is bounded, e.g. A := [u, w], it is reasonable to expect
that the control will lie on the boundary of A, and that it will jump from one boundary
of A to another during the time of operation of the system
...

Example 3.11. Consider the problem
    minimize P(α, s) = ∫₀ˢ ‖ẋ(t)‖ dt    (3.36)
subject to
    ẋ(t) = α(t),    (3.37)
    α(t) ∈ A = S1, the unit sphere in R2.
... Thus P(α, s) = ∫₀ˢ dt = s, which is the
time it takes to reach X1. ... We
will prove that it is of course a straight line
...

and
    ṗ∗ = −∇x H(x, α, p) = 0,
thus
    p∗(t) ≡ constant = p0 ≠ 0.
... Thus α∗ is a constant a0 in time. ... From (3.37) we have ẋ∗ = a0, and consequently x∗ is a straight line
...
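The straight-line conclusion is easy to test for a concrete target (our own hypothetical numbers, not from the text): with ẋ = α and ‖α(t)‖ = 1, the constant unit control a0 pointing at the target reaches it exactly at time T = ‖xs − x0‖, the straight-line transit time.

```python
import numpy as np

x0 = np.array([0.0, 0.0])
xs = np.array([3.0, 4.0])            # hypothetical target point; ||xs - x0|| = 5

# constant unit control pointing at the target: the straight-line candidate
a0 = (xs - x0) / np.linalg.norm(xs - x0)

T = np.linalg.norm(xs - x0)          # transit time along the straight line
x = x0 + T * a0                      # response x(t) = x0 + t*a0 evaluated at t = T
print(x)                             # reaches the target: [3. 4.]
```

Since the speed is always 1, no admissible control can cover the distance ‖xs − x0‖ in less time, matching the example's claim.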

In this subsection, we shall only consider the so-called linear time-optimal problem, and
limit our discussion to the case of a scalar control
...
    ...    (3.38)
subject to
    ẋ(t) = F x(t) + Gα(t),  x(t0) = x0,  x(s) = 0.    (3.39)
...

The Hamiltonian function for this problem reads
    ...
It shall be assumed throughout that p0(t) = 1. ... By Theorem 3.7, a necessary condition for α∗ to be an optimal control is
    α∗(t) = w if p∗(t) · G < 0,
    α∗(t) = u if p∗(t) · G > 0.
...    (3.41)
with boundary conditions x∗(t0) = x0 and x∗(s∗) = 0; moreover, s∗ is obtained from the
transversal condition (3. ...). ... If p∗(t) · G = 0
cannot be sustained over a finite interval of time, then the optimal control is of bang-bang
type; in other words, α∗(t) is at u when the switching function is positive, and at w when
the switching function is negative
...
Example 3.12 (Bang-Bang example). ... Consider the problem (3.38)-(3.40) with
    ẋ1(t) = x2(t),  ẋ2(t) = α(t),  −1 ≤ α(t) ≤ 1.
For this system, since
    F = [ 0 1 ; 0 0 ],  G = [ 0 ; 1 ],
an optimal control α∗ must satisfy
    α∗(t) = 1 if p2∗(t) < 0,
    α∗(t) = −1 if p2∗(t) > 0.
... From (3.41),
    ṗ1∗(t) = 0,
    ṗ2∗(t) = −p1∗(t),
which are readily solved as
    p1∗(t) = A1,
    p2∗(t) = −A1 t + A2,

where A1 and A2 are constants of integration. ...

- For the time interval on which α∗(t) = 1, we have
    x2∗(t) = t + K2,
    x1∗(t) = t²/2 + K2 t + K1 = ½ (t + K2)² + (K1 − K2²/2)
(where K1 and K2 are constants of integration), from which we get
    x1∗(t) = ½ [x2∗(t)]² + K,    (3.42)
where K := K1 − K2²/2. ... Thus, the portion of the optimal response for which
α∗(t) = 1 is an arc of the parabola (3.42). ...
- Analogously, for the time interval on which α∗(t) = −1, we have
    x2∗(t) = −t + K2,
    x1∗(t) = −t²/2 + K2 t + K1 = −½ (−t + K2)² + (K1 + K2²/2),
from which we obtain
    x1∗(t) = −½ [x2∗(t)]² + K.    (3.43)
Thus, the portion of the optimal response for which α∗(t) = −1 is an arc of the
parabola (3.43). ...
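The two parabola families can be confirmed numerically. The sketch below is our own illustration (the initial states and horizon are arbitrary): it integrates the double integrator ẋ1 = x2, ẋ2 = α under a constant α = ±1 and checks that x1 ∓ x2²/2 stays constant along each arc.

```python
def simulate(x1, x2, alpha, T=1.0, n=100000):
    """Explicit Euler for the double integrator x1' = x2, x2' = alpha (alpha constant ±1)."""
    dt = T / n
    for _ in range(n):
        x1 += x2 * dt
        x2 += alpha * dt
    return x1, x2

# along an alpha = +1 arc, x1 - x2^2/2 stays (approximately) constant
a1, b1 = simulate(0.0, -1.0, +1.0)
print(a1 - b1**2 / 2)   # stays near its initial value -0.5

# along an alpha = -1 arc, x1 + x2^2/2 stays (approximately) constant
a2, b2 = simulate(0.0, 1.0, -1.0)
print(a2 + b2**2 / 2)   # stays near its initial value 0.5
```

These conserved quantities are exactly the constants K of (3.42) and (3.43), which is why bang-bang responses are pieced together from arcs of the two parabola families.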

Bibliography

[1] A. ... Agrachev: Introduction to optimal control theory. ...
[2] L. ... Klötzler, W. ... Schmidt: Variational Calculus, Optimal Control, and Applications: International Conference in Honour of L. ... Klötzler. ... 124. ...
[3] H. Brezis: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Universitext, Springer-Verlag, 2011. ...
[4] ... C. ...: Laboratoire d'automatique, Ecole Polytechnique Federale de Lausanne. ...
[5] G. Dal Maso: An introduction to Γ-convergence. ...
[6] L. C. Evans: Partial differential equations. ...
[7] L. C. Evans: An introduction to mathematical optimal control theory. ...
[8] ... Giaquinta, S. ...: ... I, II. ...
[9] Henry Hermes and Joseph P. ...
[10] L. ...
[11] D. ... Kirk: Optimal Control Theory: An Introduction. ...
... R. ...: Mathematics in Science and Engineering, Volume 156. ... Press, 2011. ...
... Macki and A. ...: Introduction to Optimal Control Theory. Undergraduate Texts in Mathematics. ...
[15] S. ... Serovaiskii: Counterexamples in optimal control theory.
[16] E. ... Sontag: Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition, Springer, New York, 1998. ...
[17] ... C. ... G. ... (eds.): Mathematical Methods for Robust and Nonlinear Control. Springer, Lecture Notes in Control and Information Sciences, (367).