The purpose of this paper is to formulate, study, and (in certain cases) resolve the Inverse Problem of Optimal Control Theory, which is the following: Given a control law, find all performance indices for which this control law is optimal. Under the assumptions of (a) linear constant plant, (b) linear constant control law, (c) measurable state variables, (d) quadratic loss functions with constant coefficients, (e) single control variable, we give a complete analysis of this problem and obtain various explicit conditions for the optimality of a given control law. An interesting feature of the analysis is the central role of frequency-domain concepts, which have been ignored in optimal control theory until very recently. The discussion is presented in rigorous mathematical form. The central conclusion is the following (Theorem 6): A stable control law is optimal if and only if the absolute value of the corresponding return difference is at least equal to one at all frequencies. This provides a beautifully simple connecting link between modern control theory and the classical point of view which regards feedback as a means of reducing component variations.
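The frequency-domain condition of Theorem 6 can be checked numerically. The sketch below (a hypothetical illustration, not from the paper) takes a double-integrator plant with quadratic cost Q = I, R = 1, computes the optimal linear gain from the algebraic Riccati equation, and verifies that the magnitude of the return difference 1 + k'(jωI − F)⁻¹g is at least one over a grid of frequencies:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical example plant: double integrator, with cost weights Q = I, R = 1.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the
# optimal state-feedback gain K = R^{-1} B' P (here K = [1, sqrt(3)]).
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Return difference T(jw) = 1 + K (jwI - A)^{-1} B over a frequency grid.
omegas = np.logspace(-2, 2, 400)
mags = []
for w in omegas:
    T = 1.0 + (K @ np.linalg.solve(1j * w * np.eye(2) - A, B)).item()
    mags.append(abs(T))

# Kalman's condition (Theorem 6): |T(jw)| >= 1 at every frequency.
print(min(mags) >= 1.0 - 1e-9)  # → True
```

For this plant one can verify by hand that |T(jω)|² = 1 + 1/ω² + 1/ω⁴, so the inequality holds with margin at every frequency, consistent with the optimality of the Riccati-derived gain.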