J. Basic Eng. 1964;86(1):1-8. doi:10.1115/1.3653102.

In many hydraulic control and other systems the effect of fluid-carrying lines is an important factor in system dynamics. Following electrical transmission-line technique, a hydraulic line between two cross sections is characterized by a four-terminal network with pressure and flow as the interacting variables. Use of this four-terminal network in a variety of system problems leads to transfer functions relating pairs of variables in the system, and these transfer functions are transcendental. They cause serious mathematical difficulties when employed for the computation of system transients. The standard mathematical technique of using power-series expansions fails in that it predicts instability in most applications where this instability does not actually occur. In this paper these difficulties are overcome by writing the functions as quotients of infinite products of linear factors. It is shown that only a few of these factors need be kept to compute transients accurately. The transfer functions are thus replaced by rational approximations. In contrast to the classical lumped-constant approach to distributed systems, however, the accuracy of the approximation can be seen from the factors directly, facilitating system analysis and synthesis. The technique applies to electrical transmission lines as well as hydraulic pipes. The method also yields a technique for automatically smoothing stepwise transient responses obtained in water-hammer studies. Good agreement has been obtained between theory and experiment on the four-terminal hydraulic network approach. The paper covers the results of the experiments made in the United States to verify the theory.
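The product-of-linear-factors idea can be illustrated on a small, self-contained example (not one from the paper): the transcendental function cosh(x), which appears in line transfer functions, equals an infinite product of simple quadratic factors, and only a modest number of factors is needed for good accuracy.

```python
import math

# Illustrative check of the product-of-linear-factors idea:
#   cosh(x) = prod_{k=1..inf} (1 + 4*x**2 / ((2k-1)**2 * pi**2))
# A truncated product already approximates cosh(x) closely, and the
# truncation error can be read off the omitted factors directly.
def cosh_truncated_product(x, n_factors):
    p = 1.0
    for k in range(1, n_factors + 1):
        p *= 1.0 + 4.0 * x * x / ((2 * k - 1) ** 2 * math.pi ** 2)
    return p
```

At x = 1, fifty factors already match math.cosh(1.0) to about 0.2 percent; adding factors improves the approximation in a visible, controlled way, which is the property the paper exploits.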

Commentary by Dr. Valentin Fuster
J. Basic Eng. 1964;86(1):11-21. doi:10.1115/1.3653095.

The optimal controls for various types of performance criteria are investigated for second-order systems by means of Pontryagin's Maximum Principle. Optimal control solutions for several examples are shown. The results presented show widely different modes of control depending upon the performance criteria, and also indicate a possibility of closed-loop control. The methods used in the various solutions may be extended to other performance criteria and systems.

J. Basic Eng. 1964;86(1):23-31. doi:10.1115/1.3653105.

This paper presents a study of fluid-temperature transients in an experimental dual-heat-exchanger system, based on a combination of analog simulation and analytical techniques.

J. Basic Eng. 1964;86(1):32-36. doi:10.1115/1.3653106.

A graphical method provides, without templates and without factoring, a means for rapidly estimating log-amplitude versus log-frequency response curves from transfer functions in their unfactored polynomial form. The technique can also be used to design or synthesize transfer functions which will produce a specified frequency-response characteristic. Included is a graphical stability criterion for third- and fourth-order systems which provides a quick check on system stability during the course of design or of analysis of the system response.
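The idea of working directly from the unfactored polynomial can be mimicked numerically: the log magnitude of G(jω) is evaluated straight from the coefficient lists, with no factoring. A minimal sketch, with an illustrative transfer function that is not from the paper:

```python
import numpy as np

# Evaluate 20*log10|G(jw)| directly from unfactored numerator and
# denominator coefficient lists (highest power first); no roots needed.
def log_magnitude_db(num, den, w):
    s = 1j * w
    return 20.0 * np.log10(abs(np.polyval(num, s) / np.polyval(den, s)))

# Illustrative second-order example: G(s) = 10 / (s^2 + 2s + 10)
mag_dc = log_magnitude_db([10.0], [1.0, 2.0, 10.0], 1e-6)   # ~0 dB at low frequency
mag_hf = log_magnitude_db([10.0], [1.0, 2.0, 10.0], 100.0)  # high-frequency roll-off
```

Sampling a grid of frequencies this way reproduces the log-amplitude curve that the graphical method estimates by hand.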

J. Basic Eng. 1964;86(1):37-42. doi:10.1115/1.3653109.

A technique for analytical representation of the root-locus is developed in this paper for both negative and positive feedback. With the help of the derived general equation for the root-locus, such items as intersection of the asymptotes, break-away points, and intersection of loci with imaginary axis are investigated and defined. Simple examples of possible applications are given. Finally, a number of selected root-loci and their equations are shown.
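The break-away points mentioned above can be computed analytically: for a locus of 1 + K·N(s)/D(s) = 0, the candidates satisfy N(s)D′(s) − N′(s)D(s) = 0. A small numerical sketch, using the illustrative open loop 1/(s(s+1)) rather than an example from the paper:

```python
import numpy as np

# Break-away / break-in candidates of the root locus of
# 1 + K*N(s)/D(s) = 0 satisfy N(s)*D'(s) - N'(s)*D(s) = 0.
def breakaway_candidates(num, den):
    n, d = np.poly1d(num), np.poly1d(den)
    return (n * d.deriv() - n.deriv() * d).roots

# Open loop 1 / (s*(s+1)): the classical break-away point is s = -0.5
cands = breakaway_candidates([1.0], [1.0, 1.0, 0.0])
```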

J. Basic Eng. 1964;86(1):44-49. doi:10.1115/1.3653112.

A quick method is described by which it may be determined for which ranges of a gain, K, all the roots of the characteristic equation have negative real parts. These ranges of stability are determined by the relative position of two curves. The curves also indicate where the stability is marginal. The method is especially well suited for multiple-loop systems as it is based on the coefficients of the powers of s. The criterion does not require any tables, charts, or special equipment. Few calculations are needed, and they are all of the simplest nature, involving only real numbers. Instead of a gain K, the parameter could be a damping ratio, a time constant, and so forth.
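The stability ranges such a coefficient-based criterion produces can be checked numerically by sweeping K and examining the roots directly. A minimal sketch with an illustrative third-order polynomial (not the paper's graphical construction), for which Routh's criterion predicts stability exactly when 0 < K < 6:

```python
import numpy as np

# All roots of s^3 + 3*s^2 + 2*s + K must have negative real parts.
def is_stable(K):
    return bool(np.all(np.roots([1.0, 3.0, 2.0, K]).real < 0.0))

# Routh's criterion gives stability for 0 < K < 6 in this example.
results = {K: is_stable(K) for K in (1.0, 5.0, 7.0, -1.0)}
```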

J. Basic Eng. 1964;86(1):51-60. doi:10.1115/1.3653115.

The purpose of this paper is to formulate, study, and (in certain cases) resolve the Inverse Problem of Optimal Control Theory, which is the following: Given a control law, find all performance indices for which this control law is optimal. Under the assumptions of (a) linear constant plant, (b) linear constant control law, (c) measurable state variables, (d) quadratic loss functions with constant coefficients, (e) single control variable, we give a complete analysis of this problem and obtain various explicit conditions for the optimality of a given control law. An interesting feature of the analysis is the central role of frequency-domain concepts, which have been ignored in optimal control theory until very recently. The discussion is presented in rigorous mathematical form. The central conclusion is the following (Theorem 6): A stable control law is optimal if and only if the absolute value of the corresponding return difference is at least equal to one at all frequencies. This provides a beautifully simple connecting link between modern control theory and the classical point of view which regards feedback as a means of reducing component variations.
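The return-difference condition of Theorem 6 can be checked explicitly in the scalar case. A minimal sketch, assuming the scalar plant ẋ = ax + u with cost ∫(q·x² + u²) dt, whose optimal gain is k = a + sqrt(a² + q) (the scalar Riccati solution); the return difference F(jω) = 1 + k/(jω − a) should then have magnitude at least one at all frequencies. The numerical values are illustrative, not from the paper:

```python
import math

# Scalar plant xdot = a*x + u, cost integral(q*x**2 + u**2) dt.
# Optimal feedback u = -k*x with k = a + sqrt(a*a + q).
a, q = 1.0, 4.0
k = a + math.sqrt(a * a + q)

# Return difference F(jw) = 1 + k/(jw - a); optimality requires
# |F(jw)| >= 1 for all frequencies w.
def return_difference_mag(w):
    return abs(1.0 + k / (1j * w - a))

mags = [return_difference_mag(w) for w in (0.0, 0.5, 1.0, 5.0, 50.0)]
```

Here |F(jω)|² = ((k − a)² + ω²)/(a² + ω²), and (k − a)² = a² + q ≥ a², so the condition holds identically, which is the pattern the theorem asserts in general.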

J. Basic Eng. 1964;86(1):61-66. doi:10.1115/1.3653116.
J. Basic Eng. 1964;86(1):67-79. doi:10.1115/1.3653117.

This paper presents a general discussion of the optimum control of distributed-parameter dynamical systems. The main areas of discussion are: (a) The mathematical description of distributed parameter systems, (b) the controllability and observability of these systems, (c) the formulation of optimum control problems and the derivation of a maximum principle for a particular class of systems, and (d) the problems associated with approximating distributed systems by discretization. In order to illustrate the applicability of certain general results and manifest some of the properties which are intrinsic to distributed systems, specific results are obtained for a simple, one-dimensional, linear-diffusion process.
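The interplay of items (b) and (d) above can be sketched on a discretized version of the one-dimensional diffusion example: finite-differencing u_t = u_xx yields a finite-dimensional linear system whose controllability can be tested with the ordinary Kalman rank condition. The grid size and the boundary placement of the control are illustrative assumptions, not the paper's example:

```python
import numpy as np

# Finite-difference discretization of u_t = u_xx on (0,1) with the
# control entering through the right boundary; N interior points,
# spacing h = 1/(N+1).
N = 5
h = 1.0 / (N + 1)
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
B = np.zeros((N, 1))
B[-1, 0] = 1.0 / h**2          # boundary value acts on the last node

# Kalman controllability matrix [B, AB, ..., A^(N-1)B]; full rank
# means the discretized model is controllable from the boundary.
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(N)])
rank = int(np.linalg.matrix_rank(ctrb))
```

Whether such finite-dimensional controllability faithfully reflects the distributed system is exactly the approximation question raised in item (d).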

J. Basic Eng. 1964;86(1):80-86. doi:10.1115/1.3653118.

A fundamental equation which yields the limit-cycle behavior of PWM feedback systems is derived in this paper. The application of this equation to obtain the response of autonomous as well as forced PWM systems is indicated. The application of this fundamental equation to other types of nonlinear sampled-data feedback systems is also demonstrated. The maximum mode of the limit cycles that can exist in relay-mode oscillations of PWM systems, as well as the limitation on the maximum period, is obtained in this paper. Based on the foregoing derivations, sufficient conditions for eliminating all saturated oscillations are derived. An experimental study performed on the digital computer confirms the theoretical results. Stability curves for certain PWM systems are calculated which will aid considerably in their design. The basic advantage of PWM controllers over relay sampled-data systems with regard to sensitivity and stability is pointed out, and a few examples illustrate the application of the fundamental equations derived.

J. Basic Eng. 1964;86(1):87-90. doi:10.1115/1.3653119.

A mathematically rigorous concept of relative stability based on the v-functions of the direct method of Lyapunov is introduced. Two systems of the type representable by ẋ = f(x) are considered, where, under the proper restrictions on f(x), a Lyapunov function v(x) is uniquely determined by a positive definite error criterion r(x) and the equation v̇(x) = −r(x). The proposed definition of relative stability eventually leads to conditions on the linear approximation systems which are sufficient to assure the relative stability of the nonlinear systems. This in turn leads to conditions on the eigenvalues of the linear approximation system which are necessary but not sufficient for relative stability; additional conditions on the choice of the error criteria are needed. The present definition permits the gap between concepts of stability in classical control theory and those due to the direct method of Lyapunov to be at least partially bridged.
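For the linear approximation ẋ = Ax with a quadratic error criterion r(x) = xᵀQx, the relation v̇(x) = −r(x) determines v(x) = xᵀPx through the Lyapunov equation AᵀP + PA = −Q, and positive definite P certifies stability. A minimal sketch with an illustrative stable A (solved via the Kronecker-product vectorization, not any method from the paper):

```python
import numpy as np

# Solve A^T P + P A = -Q for P; positive definite P certifies
# stability of xdot = A x.
def lyapunov_solve(A, Q):
    n = A.shape[0]
    I = np.eye(n)
    # vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape((n, n), order="F")
    return P

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2: stable
Q = np.eye(2)                               # error criterion r(x) = x.T @ x
P = lyapunov_solve(A, Q)
eigs = np.linalg.eigvalsh((P + P.T) / 2.0)  # should be strictly positive
```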

J. Basic Eng. 1964;86(1):91-96. doi:10.1115/1.3653120.

The development and design considerations of a counterbalance system utilizing a servomechanism are given. The purpose of the counterbalance system is to allow dynamic testing of equipment to be used in a zero-g field. The problem of cancellation of gravitational torques and obtaining valid test data is discussed. Various techniques of counterbalancing are investigated. The method having the most flexibility uses a servo-mechanism and an energy-storage device. For this system a linear torsion spring is attached to the device being counterbalanced and is used for the energy-storage element. A strain gage gives a voltage indicating the counterbalance torque being applied by this spring. The reference voltage is proportional to the counterbalance torque required and comes from a resolver. The difference between these two signals controls the position of the other end of the torsion spring. With this mechanization it is shown that the requirements on the performance of the servomechanism are not severe. It is concluded that the technique using an energy-storage device and a servomechanism offers advantages which other techniques do not. The possibility of using other mechanizations of this counterbalancing technique is discussed.

J. Basic Eng. 1964;86(1):97-106. doi:10.1115/1.3653121.

A versatile and practical method of searching a parameter space is presented. Theoretical and experimental results illustrate the usefulness of the method for such problems as the experimental optimization of the performance of a system with a very general multipeak performance function when the only available information is noise-distributed samples of the function. At present, its usefulness is restricted to optimization with respect to one system parameter. The observations are taken sequentially; but, as opposed to the gradient method, the observation may be located anywhere on the parameter interval. A sequence of estimates of the location of the curve maximum is generated. The location of the next observation may be interpreted as the location of the most likely competitor (with the current best estimate) for the location of the curve maximum. A Brownian motion stochastic process is selected as a model for the unknown function, and the observations are interpreted with respect to the model. The model gives the results a simple intuitive interpretation and allows the use of simple but efficient sampling procedures. The resulting process possesses some powerful convergence properties in the presence of noise; it is nonparametric and, despite its generality, is efficient in the use of observations. The approach seems quite promising as a solution to many of the problems of experimental system optimization.

J. Basic Eng. 1964;86(1):107-115. doi:10.1115/1.3653092.

The following optimal regulator problem is considered: Find the scalar control function u = u(t) which minimizes the performance index

  (1/2) ∫₀ᵀ 〈x(t), Qx(t)〉 dt,
subject to the conditions
  ẋ = Ax + u(t)f,   |u(t)| ≦ 1,
  x(0) = x0   (x0 is unrestricted),
  x(T) = 0   (T is free).
Q and A are constant n × n matrices; f is a constant n-vector. It is shown that the optimal control includes both a bang-bang mode and a linear mode, the latter arising from the “singular” solutions of the Pontryagin canonical equations. Conditions are given under which nth-order systems are equivalent, for control purposes, to systems of first or second order. One example of a second-order system is worked in detail and some results of an analog-computer study are presented.

J. Basic Eng. 1964;86(1):116-120. doi:10.1115/1.3653094.

This paper presents a method for finding necessary conditions such that a subharmonic oscillation may exist in certain types of nonlinear feedback systems. The method is applicable to feedback systems with one, instantaneous, nonmemory-type, nonlinear element. Equations are derived giving the fundamental output of a nonlinear element when forced by two sine waves of integer ratio frequency. Normal describing-function assumptions are made with regard to attenuation of higher-order harmonics. An example of a system incorporating a perfect relay is presented. The results of the analysis have been verified experimentally on the analog computer.
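As a point of reference for the describing-function machinery used here, the single-input case can be checked numerically: for an ideal relay with output ±M driven by A·sin θ, the fundamental component gives the classical result N(A) = 4M/(πA). This sketch verifies only that standard single-input result, not the paper's two-sine-wave extension:

```python
import math

# Describing function of an ideal relay (output +/-M) computed from
# the fundamental Fourier coefficient of its response to A*sin(theta).
def relay_df(A, M, n=100000):
    b1 = 0.0
    for i in range(n):
        theta = 2.0 * math.pi * (i + 0.5) / n      # midpoint rule
        out = M if math.sin(theta) >= 0.0 else -M  # relay output
        b1 += out * math.sin(theta)
    b1 *= 2.0 / n          # equals (1/pi) * integral over one period
    return b1 / A          # classical result: 4*M / (pi*A)
```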

J. Basic Eng. 1964;86(1):121-131. doi:10.1115/1.3653096.

Performance of a servomechanism that provides accurate, continuous control of mechanical power is fundamentally governed by the actuator used to meter power to the load. These actuators, when used in exacting control applications, tend to be complex, expensive, and critical. This paper describes a mechanical servoactuator concept based on the toric-transmission principle that has the potential of providing precise power control in a relatively simple and rugged package. Actuator kinematic principles are explained and equations that describe these relationships are presented to illustrate both steady-state and dynamic performance characteristics. Closed-loop control techniques are discussed with specific emphasis on the load-acceleration control feature inherent with this actuator concept. Experimental performance data obtained from testing a 1-hp prototype model operated as a velocity servo are presented. Static velocity control accuracy of ±0.1 percent is readily obtained; transient-response times vary from 20 to 40 milliseconds depending on signal amplitude and load conditions.

J. Basic Eng. 1964;86(1):132-138. doi:10.1115/1.3653097.

A dual-input describing function (DIDF) is derived for sine waves and Gaussian noise. The derivation follows the correlation method used in [1]. In this paper only single-valued nonlinearities are discussed, but extension to multivalued nonlinear elements appears possible. The DIDF is used to investigate the stability and closed-loop response of nonlinear systems excited by random noise. Previous investigations have provided only for the random component at the input to the nonlinear element. It is shown that previous work is invalid insofar as it neglects the possibility of oscillations in the nonautonomous system if the autonomous system is stable and vice versa. Two examples are presented which show (i) the necessity of the DIDF approach for systems which are stable without input, and (ii) the possibility of successfully obtaining stable response to certain classes of inputs with systems which appear unstable without inputs. The present investigation is an extension of the authors’ previous work on the stability and closed-loop response of nonlinear systems excited by sinusoidal inputs [2].

J. Basic Eng. 1964;86(1):139-144. doi:10.1115/1.3653098.

Analysis of piecewise linear systems may require the solution of high-order linear differential equations whose parameters are constants within a given region but change into different constants for adjacent regions. The multiple regions of such a system may be identified with discrete intervals, and it is a simple matter to obtain the system response by the method of integral equations. These solutions are given in the form of convergent infinite series, the terms of which may be easily evaluated by a digital computer. The time interval of each region is found by substituting successive values of these truncated series until the required boundary conditions are satisfied. The method is applied to a third-order, type-two system whose sustained oscillation, when subjected to dry friction, is to be eliminated by dead-zone compensation. The system has four regions, with different parameters in each region of the differential equations, which are converted into Volterra integral equations of the second kind. The variables are iterated within the digital computer until a convergent solution is found for the condition of sustained oscillation. Procedures are given to determine critical values of dead zone for various ramp rates at which the system is stable.
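The successive-approximation treatment of a Volterra integral equation of the second kind can be illustrated on a toy scalar case with a known solution: y(t) = 1 + ∫₀ᵗ y(s) ds, whose exact solution is y(t) = eᵗ. This is a minimal sketch of the iteration idea, not the paper's four-region system:

```python
# Picard (successive-approximation) solution of the Volterra equation
# of the second kind  y(t) = 1 + integral_0^t y(s) ds,  exact y = e^t.
def volterra_picard(T=1.0, n=1000, iters=30):
    h = T / n
    y = [1.0] * (n + 1)                  # initial guess y0(t) = 1
    for _ in range(iters):
        integ = [0.0] * (n + 1)
        for i in range(1, n + 1):        # trapezoidal cumulative integral
            integ[i] = integ[i - 1] + 0.5 * h * (y[i - 1] + y[i])
        y = [1.0 + integ[i] for i in range(n + 1)]
    return y[-1]                         # approximation of y(T) = e^T
```

Each iteration adds one more term of the convergent infinite series (here the exponential series), which mirrors how the truncated series in the paper are evaluated on the digital computer.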

J. Basic Eng. 1964;86(1):145-150. doi:10.1115/1.3653099.

A solution of the linear, sampled, minimum-time problem is developed which permits the determination of the control policy in real time by an on-line digital control computer. The solution is illustrated by means of two examples.
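The flavor of such a real-time computation can be sketched for a sampled double integrator: a controllable second-order sampled system can be driven to the origin in two sampling periods (deadbeat control) by solving one small linear system, which is cheap enough for an on-line computer. The matrices below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

# Sampled double integrator x[k+1] = A x[k] + B u[k] (unit sample period).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
x0 = np.array([[3.0], [-1.0]])

# Require x2 = A^2 x0 + A B u0 + B u1 = 0, i.e. [AB | B][u0, u1]^T = -A^2 x0.
M = np.hstack([A @ B, B])
u = np.linalg.solve(M, -(A @ A @ x0))

# Simulate the two-step policy.
x = x0.copy()
for k in range(2):
    x = A @ x + B * u[k, 0]
```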

Topics: Computers
J. Basic Eng. 1964;86(1):151-159. doi:10.1115/1.3653100.

The paper concerns an approach to adaptive optimal control of nonlinear dynamic systems which has been introduced by Kulikowski. In this approach, the required identification is carried out at each stage of constructing a sequence of inputs {xn(t)}, t ∈ (0, T), converging to a relative extremum of a given performance functional. The major contributions of this paper relate to the identification problem and its incorporation into the optimal control formulation.

J. Basic Eng. 1964;86(1):160-168. doi:10.1115/1.3653101.

The optimal control as a function of the instantaneous state, i.e., the optimal “feedback” or “closed-loop” control, is derived for the controlled second-order linear process with constant coefficients

ẍ + 2bẋ + c²x = u
for so-called minimum-fuel or minimum-effort operation (i.e., such that the time integral of the magnitude of the control u is minimized), subject to an amplitude limitation |u| ≤ L on the control. The objective is to force the phase state from an arbitrary instantaneous value (x, ẋ) to the origin within an arbitrarily prescribed time-to-run T. The solution is obtained for the nonoscillatory cases (b² ≥ c² ≥ 0) when L is finite, and for arbitrary real b and c when L is infinite, i.e., when the control is not amplitude-limited. The form of the optimal control is shown to be “bang-off-bang” with the most general initial conditions; i.e., during successive time intervals, u is constant at one limit, identically zero, and constant at the limit of opposite polarity. Explicit expressions for the switching surfaces in state space (T, x, ẋ) at which u changes value and, hence, of the optimal feedback control u(T, x, ẋ), are given, both with and without amplitude limitation. Without such limitation (L = ∞), the optimal control is impulsive, and the areas of the impulses in terms of the current state are obtained by a limiting procedure.
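A hand-worked instance of the bang-off-bang form can be simulated for the undamped case b = c = 0 (the double integrator ẍ = u with |u| ≤ 1): starting from x = 2, ẋ = 0, the schedule u = −1 on [0,1), u = 0 on [1,2), u = +1 on [2,3) reaches the origin at T = 3. The switch times are chosen by hand for this illustration, not computed from the paper's switching surfaces:

```python
# Bang-off-bang control of xddot = u, |u| <= 1, from (x, xdot) = (2, 0).
# Phase 1: full negative thrust; phase 2: coast; phase 3: full braking.
def simulate(dt=1e-4):
    x, v, t = 2.0, 0.0, 0.0
    while t < 3.0 - 0.5 * dt:
        u = -1.0 if t < 1.0 else (0.0 if t < 2.0 else 1.0)
        v += u * dt          # semi-implicit Euler integration
        x += v * dt
        t += dt
    return x, v              # should end near the origin (0, 0)

x_final, v_final = simulate()
```

Closed-form check: phase 1 ends at (1.5, −1), the coast ends at (0.5, −1), and the braking phase returns both state variables exactly to zero.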

J. Basic Eng. 1964;86(1):49-50. doi:10.1115/1.3653113.
Topics: Stability