## Description

### Table of contents:

Cover

Calculus of Variations and Optimal Control Theory

Copyright

Contents

Preface

1. Introduction

1.1 Optimal control problem

1.2 Some background on finite-dimensional optimization

1.2.1 Unconstrained optimization

1.2.2 Constrained optimization

1.3 Preview of infinite-dimensional optimization

1.3.1 Function spaces, norms, and local minima

1.3.2 First variation and first-order necessary condition

1.3.3 Second variation and second-order conditions

1.3.4 Global minima and convex problems

1.4 Notes and references for Chapter 1

2. Calculus of Variations

2.1 Examples of variational problems

2.1.1 Dido’s isoperimetric problem

2.1.2 Light reflection and refraction

2.1.3 Catenary

2.1.4 Brachistochrone

2.2 Basic calculus of variations problem

2.2.1 Weak and strong extrema

2.3 First-order necessary conditions for weak extrema

2.3.1 Euler-Lagrange equation

2.3.2 Historical remarks

2.3.3 Technical remarks

2.3.4 Two special cases

2.3.5 Variable-endpoint problems

2.4 Hamiltonian formalism and mechanics

2.4.1 Hamilton’s canonical equations

2.4.2 Legendre transformation

2.4.3 Principle of least action and conservation laws

2.5 Variational problems with constraints

2.5.1 Integral constraints

2.5.2 Non-integral constraints

2.6 Second-order conditions

2.6.1 Legendre’s necessary condition for a weak minimum

2.6.2 Sufficient condition for a weak minimum

2.7 Notes and references for Chapter 2

3. From Calculus of Variations to Optimal Control

3.1 Necessary conditions for strong extrema

3.1.1 Weierstrass-Erdmann corner conditions

3.1.2 Weierstrass excess function

3.2 Calculus of variations versus optimal control

3.3 Optimal control problem formulation and assumptions

3.3.1 Control system

3.3.2 Cost functional

3.3.3 Target set

3.4 Variational approach to the fixed-time, free-endpoint problem

3.4.1 Preliminaries

3.4.2 First variation

3.4.3 Second variation

3.4.4 Some comments

3.4.5 Critique of the variational approach and preview of the maximum principle

3.5 Notes and references for Chapter 3

4. The Maximum Principle

4.1 Statement of the maximum principle

4.1.1 Basic fixed-endpoint control problem

4.1.2 Basic variable-endpoint control problem

4.2 Proof of the maximum principle

4.2.1 From Lagrange to Mayer form

4.2.2 Temporal control perturbation

4.2.3 Spatial control perturbation

4.2.4 Variational equation

4.2.5 Terminal cone

4.2.6 Key topological lemma

4.2.7 Separating hyperplane

4.2.8 Adjoint equation

4.2.9 Properties of the Hamiltonian

4.2.10 Transversality condition

4.3 Discussion of the maximum principle

4.3.1 Changes of variables

4.4 Time-optimal control problems

4.4.1 Example: double integrator

4.4.2 Bang-bang principle for linear systems

4.4.3 Nonlinear systems, singular controls, and Lie brackets

4.4.4 Fuller’s problem

4.5 Existence of optimal controls

4.6 Notes and references for Chapter 4

5. The Hamilton-Jacobi-Bellman Equation

5.1 Dynamic programming and the HJB equation

5.1.1 Motivation: the discrete problem

5.1.2 Principle of optimality

5.1.3 HJB equation

5.1.4 Sufficient condition for optimality

5.1.5 Historical remarks

5.2 HJB equation versus the maximum principle

5.2.1 Example: nondifferentiable value function

5.3 Viscosity solutions of the HJB equation

5.3.1 One-sided differentials

5.3.2 Viscosity solutions of PDEs

5.3.3 HJB equation and the value function

5.4 Notes and references for Chapter 5

6. The Linear Quadratic Regulator

6.1 Finite-horizon LQR problem

6.1.1 Candidate optimal feedback law

6.1.2 Riccati differential equation

6.1.3 Value function and optimality

6.1.4 Global existence of solution for the RDE

6.2 Infinite-horizon LQR problem

6.2.1 Existence and properties of the limit

6.2.2 Infinite-horizon problem and its solution

6.2.3 Closed-loop stability

6.2.4 Complete result and discussion

6.3 Notes and references for Chapter 6

7. Advanced Topics

7.1 Maximum principle on manifolds

7.1.1 Differentiable manifolds

7.1.2 Re-interpreting the maximum principle

7.1.3 Symplectic geometry and Hamiltonian flows

7.2 HJB equation, canonical equations, and characteristics

7.2.1 Method of characteristics

7.2.2 Canonical equations as characteristics of the HJB equation

7.3 Riccati equations and inequalities in robust control

7.3.1 L2 gain

7.3.2 H∞ control problem

7.3.3 Riccati inequalities and LMIs

7.4 Maximum principle for hybrid control systems

7.4.1 Hybrid optimal control problem

7.4.2 Hybrid maximum principle

7.4.3 Example: light reflection

7.5 Notes and references for Chapter 7

Bibliography

Index
