When Is a Linear Control System Optimal?

R. E. Kalman

Research Institute for Advanced Studies (RIAS), Baltimore, Md.

J. Basic Eng. 86(1), 51–60 (Mar 01, 1964) (10 pages), doi:10.1115/1.3653115
History: Received March 13, 1963; Online November 03, 2011


The purpose of this paper is to formulate, study, and (in certain cases) resolve the Inverse Problem of Optimal Control Theory, which is the following: Given a control law, find all performance indices for which this control law is optimal. Under the assumptions of (a) linear constant plant, (b) linear constant control law, (c) measurable state variables, (d) quadratic loss functions with constant coefficients, (e) single control variable, we give a complete analysis of this problem and obtain various explicit conditions for the optimality of a given control law. An interesting feature of the analysis is the central role of frequency-domain concepts, which have been ignored in optimal control theory until very recently. The discussion is presented in rigorous mathematical form. The central conclusion is the following (Theorem 6): A stable control law is optimal if and only if the absolute value of the corresponding return difference is at least equal to one at all frequencies. This provides a beautifully simple connecting link between modern control theory and the classical point of view which regards feedback as a means of reducing component variations.
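The return-difference condition of Theorem 6 can be illustrated numerically in the simplest setting. The sketch below (an illustration assuming a scalar plant; the symbols a, b, q, r, p, k are not notation from the paper) takes the plant ẋ = ax + bu with quadratic loss ∫(qx² + ru²)dt, solves the scalar algebraic Riccati equation for the optimal gain k, and checks that the return difference F(jω) = 1 + kb/(jω − a) has magnitude at least one across a range of frequencies:

```python
import math

# Illustrative scalar plant and quadratic weights (not from the paper)
a, b = 1.0, 1.0        # unstable plant:  x' = a x + b u
q, r = 4.0, 1.0        # loss:  integral of (q x^2 + r u^2) dt

# Scalar algebraic Riccati equation:  2 a p - (b^2 / r) p^2 + q = 0
p = (a + math.sqrt(a * a + b * b * q / r)) * r / (b * b)
k = b * p / r          # optimal feedback gain for the control law u = -k x

# Return difference F(jw) = 1 + k b / (jw - a); Theorem 6 says |F| >= 1
# at all frequencies if and only if the stable control law is optimal.
mags = [abs(1 + k * b / (complex(0.0, w) - a))
        for w in (0.0, 0.1, 1.0, 10.0, 100.0)]

print(all(m >= 1.0 for m in mags))  # prints True
```

In this scalar case the inequality can be checked by hand as well: |F(jω)|² = (ω² + (kb − a)²)/(ω² + a²), and since kb − a = √(a² + b²q/r) ≥ |a|, the magnitude never drops below one, consistent with the theorem.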

Copyright © 1964 by ASME