





Control Theory is a branch of system theory concerned with changing the behavior of complex systems through external actions. This mathematically oriented scientific discipline offers principles applicable to fields ranging from engineering and physics to economics and the social sciences. These notes cover the history and applications of control theory, including feedback control, adaptive control, and optimization.
One of Bowlby's key insights was that many of Freud's best ideas about close relationships and the importance of early experience were logically independent of the drive reduction motivation theory that Freud used to explain them. In order to preserve these insights, Bowlby looked for a scientifically defensible alternative to Freud's drive reduction motivation theory.

Freud viewed infants as clingy and dependent, interested in drive reduction rather than in the environment. Ethological observations present a very different view. The notion that human infants are competent, inquisitive, and actively engaged in mastering their environments was also familiar to Bowlby from Piaget's detailed observations of his own three children presented in The Origin of Intelligence in Infants.

Another of Bowlby's key insights was that the newly emerging field of control systems theory offered a way of explaining infants' exploration, monitoring of access to attachment figures, and awareness of the environment. This was a scientifically defensible alternative to Freud's drive reduction theory of motivation. It placed the emphasis on adaptation to the real world rather than to drive states, and it emphasized actual experience rather than intra-psychic events as influences on development and individual differences.

Note that the first step toward this alternative motivation model was reformulating the infant-mother (and implicitly adult-adult) bonds in terms of the secure base phenomenon. Without the secure base concept, we have no control systems alternative to Freud's drive theory. Thus, it is logically necessary, at every turn, to keep the secure base formulation at the center of attachment theory. See Waters & Cummings, Child Development, June 2000, for elaboration on this point.

The following material was compiled from articles on control theory and optimization available on-line at Britannica.com. E.W.

As long as human culture has existed, control has always meant some kind of power over man's environment. Cuneiform fragments suggest that the control of irrigation systems in Mesopotamia was a well-developed art at least by the 20th century BC. There were some ingenious control devices in the Greco-Roman culture, the details of which have been preserved. Methods for the automatic operation of windmills go back at least to the Middle Ages. Large-scale implementation of the idea of control, however, was impossible without a high level of technological sophistication, and it is probably no accident that the principles of modern control started evolving only in the 19th century, concurrently with the Industrial Revolution. A serious scientific study of this field began only after World War II and is now a major aspect of what has come to be called the second industrial revolution.

Although control is sometimes equated with the notion of feedback control (which involves the transmission and return of information)--an isolated engineering invention, not a scientific discipline--modern usage tends to favour a rather wide meaning for the term; for instance, control and regulation of machines, muscular coordination and metabolism in biological organisms, and prosthetic devices, as well as broad aspects of coordinated activity in the social sphere such as optimization of business operations, control of economic activity by government policies, and even control of political decisions by democratic processes. Scientifically speaking, modern control should be viewed as that branch of system theory concerned with changing the behaviour of a given complex system by external actions. (For aspects of system theory related to information, see below.) If physics is the science of understanding the physical environment, then control should be viewed as the science of modifying that environment, in the physical, biological, or even social sense.
Much more than even physics, control is a mathematically-oriented science. Control principles are always expressed in mathematical form and are potentially applicable to any concrete situation. At the same time, it must be emphasized that success in the use of the abstract principles of control depends in roughly equal measure on the status of basic scientific knowledge in the specific field of application, be it engineering, physics, astronomy, biology, medicine, econometrics, or any of the social sciences. This fact should be kept in mind to avoid confusion between the basic ideas of control (for instance, controllability) and certain spectacular applications of the moment in a narrow area (for instance, manned lunar travel).
Examples of modern control systems
To clarify the critical distinction between control principles and their embodiment in a real machine or system, the following common examples of control may be helpful. There are several broad classes of control systems, of which some are mentioned below.
Machines that cannot function without (feedback) control
Many of the basic devices of contemporary technology must be manufactured in such a way that they cannot be used for the intended task without modification by means of control external to the device. In other words, control is introduced after the device has been built; the same effect cannot be brought about (in practice and sometimes even in theory) by an intrinsic modification of the characteristics of the device. The best known examples are the vacuum-tube or transistor amplifiers for high-fidelity sound systems. Vacuum tubes or transistors, when used alone, introduce intolerable distortion, but when they are placed inside a feedback control system any desired degree of fidelity can be achieved. A famous classical case is that of powered flight. Early pioneers failed, not because of their ignorance of the laws of aerodynamics, but because they did not realize the need for control and were unaware of the basic principles of stabilizing an inherently unstable device by means of control. Jet aircraft cannot be operated without automatic control to aid the pilot, and control is equally critical for helicopters. The accuracy of inertial navigation equipment (the modern space compass) cannot be improved indefinitely because of basic mechanical limitations, but these limitations can be reduced by several orders of magnitude by computer-directed statistical filtering, which is a variant of feedback control.
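The amplifier case can be made quantitative with a standard textbook result, supplied here for illustration (the source states it only in words). For a forward gain A and a feedback fraction β:

```latex
% Closed-loop gain of a negative-feedback amplifier with forward gain A
% and feedback fraction \beta:
\[
  G = \frac{A}{1 + A\beta}, \qquad
  \frac{dG}{G} = \frac{1}{1 + A\beta}\,\frac{dA}{A}.
\]
% For A\beta \gg 1, G \approx 1/\beta: the closed-loop gain is set almost
% entirely by the linear, passive feedback network, so variations and
% nonlinearities in A are suppressed by the factor 1 + A\beta.
```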
Robots
On the most advanced level, the task of control science is the creation of robots. This is a collective term for devices exhibiting animal-like purposeful behaviour under the general command of (but without direct help from) man. Industrial manufacturing robots are already fairly common, but real breakthroughs in this field cannot be anticipated until there are fundamental scientific advances with regard to problems related to pattern recognition and the mathematical structuring of brain processes.
A control system is a means by which a variable quantity or set of variable quantities is made to conform to a prescribed norm. It either holds the values of the controlled quantities constant or causes them to vary in a prescribed way. A control system may be operated by electricity, by mechanical means, by fluid pressure (liquid or gas), or by a combination of means. When a computer is involved in the control circuit, it is usually more
addition of damping somewhere in the system. Damping slows down system response and avoids excessive overshoots or over-corrections. Damping can be in the form of electrical resistance in an electronic circuit, the application of a brake in a mechanical circuit, or forcing oil through a small orifice as in shock-absorber damping.
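The trade-off that damping buys can be seen in the standard second-order model, a textbook formula added here for concreteness:

```latex
% Standard second-order system with natural frequency \omega_n and
% damping ratio \zeta:
\[
  \ddot{x} + 2\zeta\omega_n\,\dot{x} + \omega_n^2\,x = \omega_n^2\,u .
\]
% For an underdamped step response (0 < \zeta < 1) the peak overshoot is
\[
  M_p = \exp\!\left(\frac{-\pi\zeta}{\sqrt{1 - \zeta^2}}\right),
\]
% so increasing the damping \zeta (electrical resistance, a brake, an
% oil-filled orifice) reduces overshoot at the cost of slower response.
```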
Another method of ascertaining the stability of a control system is to determine its frequency response--i.e., its response to a continuously varying input signal at various frequencies. The output of the control system is then compared to the input with respect to amplitude and to phase--i.e., the degree to which the input and output signals are out of step. Frequency response can be determined either experimentally--especially in electrical systems--or mathematically, if the constants of the system are known. Mathematical calculations are particularly useful for systems that can be described by ordinary linear differential equations. Graphic shortcuts also help greatly in the study of system responses.
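As a concrete illustration of the mathematical route, the sketch below evaluates the frequency response of a hypothetical first-order lag G(s) = 1/(τs + 1); the transfer function and time constant are assumptions made for the example, not taken from the source.

```python
# Frequency response of a hypothetical first-order lag G(s) = 1/(tau*s + 1):
# evaluate G(j*omega) over a range of frequencies and report the amplitude
# ratio and phase shift between output and input.
import numpy as np

tau = 0.5  # assumed time constant in seconds (illustrative)

omegas = np.logspace(-1, 2, 7)       # rad/s, sampled logarithmically
G = 1.0 / (1j * omegas * tau + 1.0)  # G evaluated on the imaginary axis

for w, g in zip(omegas, G):
    amplitude = abs(g)                   # output/input amplitude ratio
    phase_deg = np.degrees(np.angle(g))  # negative: output lags the input
    print(f"omega = {w:8.2f} rad/s  |G| = {amplitude:.3f}  phase = {phase_deg:7.2f} deg")
```

At low frequencies the amplitude ratio is near 1 with little phase lag; well above 1/τ the amplitude falls off and the phase approaches -90 degrees, which is exactly the kind of amplitude/phase comparison described above.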
Several other techniques enter into the design of advanced control systems. Adaptive control is the capability of the system to modify its own operation to achieve the best possible mode of operation. A general definition of adaptive control implies that an adaptive system must be capable of performing the following functions: providing continuous information about the present state of the system or identifying the process; comparing present system performance to the desired or optimum performance; and making a decision to modify the system so as to approach that optimum.

In direct-digital control a single digital computer replaces a group of single-loop analogue controllers. Its greater computational ability makes the substitution possible and also permits the application of more complex advanced-control techniques.
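Direct-digital control can be made concrete with a small sketch: one program time-shares several independent PI loops that would otherwise each require a separate analogue controller. This is a minimal illustration, not an industrial implementation; the plant models, gains, and setpoints are invented for the example.

```python
# Direct-digital control sketch: one computer services several independent
# PI loops in turn. Plant models, gains, and setpoints are illustrative.

dt = 0.1  # sampling interval (s)

# Three first-order plants x' = -a*x + b*u, each with its own PI loop.
loops = [
    {"a": 1.0, "b": 2.0, "kp": 1.5, "ki": 0.8, "setpoint": 1.0, "x": 0.0, "i": 0.0},
    {"a": 0.5, "b": 1.0, "kp": 2.0, "ki": 0.5, "setpoint": 2.0, "x": 0.0, "i": 0.0},
    {"a": 2.0, "b": 3.0, "kp": 1.0, "ki": 1.0, "setpoint": 0.5, "x": 0.0, "i": 0.0},
]

for step in range(200):
    for loop in loops:                   # the single computer visits every loop
        error = loop["setpoint"] - loop["x"]
        loop["i"] += error * dt          # accumulated (integrated) error
        u = loop["kp"] * error + loop["ki"] * loop["i"]  # PI control law
        # Euler step of the plant under the computed control action.
        loop["x"] += (-loop["a"] * loop["x"] + loop["b"] * u) * dt

print([round(loop["x"], 3) for loop in loops])  # each state near its setpoint
```

Because the control laws are just program statements, retuning a gain or adding a loop is an edit to the program, not a hardware change, which is the advantage elaborated below.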
Hierarchy control attempts to apply computers to all the plant-control situations simultaneously. As such, it requires the most advanced computers and most sophisticated automatic-control devices to integrate the plant operation at every level from top-management decision to the movement of a valve.
The advantage offered by the digital computer over the conventional control system described earlier, costs being equal, is that the computer can be programmed readily to carry out a wide variety of separate tasks. In addition, it is fairly easy to change the program so as to carry out a new or revised set of tasks should the nature of the process change or the previously proposed system prove to be inadequate for the proposed task. With digital computers, this can usually be done with no change to the physical equipment of the control system. For the conventional control case, some of the physical hardware apparatus of the control system must be replaced in order to achieve new functions or new implementations of them.
Control systems have become a major component of the automation of production lines in modern factories. Automation began in the late 1940s with the development of the transfer machine, a mechanical device for moving and positioning large objects on a production line (e.g., partly finished automobile engine blocks). These early machines had no feedback control as described above. Instead, manual intervention was required for any final adjustment of position or other corrective action necessary. Because of their large size and cost, long production runs were necessary to justify the use of transfer machines.
The need to reduce the high labour content of manufactured goods, the requirement to handle much smaller production runs, the desire to gain increased accuracy of manufacture, combined with the need for sophisticated tests of the product during manufacture, have resulted in the recent development of computerized production monitors, testing devices, and feedback-controlled production robots. The programmability of the digital computer to handle a wide range of tasks along with the capability of rapid change to a new program has made it invaluable for these purposes. Similarly, the need to compensate for the effect of tool wear and other variations in automatic machining operations has required the institution of a feedback control of tool positioning and cutting rate in place of the formerly used direct mechanical motion. Again, the result is a more accurately finished final product with less chance for tool or manufacturing machine damage.
The scientific formulation of a control problem must be based on two kinds of information: (A) the behaviour of the system (e.g., industrial plant) must be described in a mathematically precise way; (B) the purpose of control (criterion) and the environment (disturbances) must be specified, again in a mathematically precise way.
Information of type A means that the effect of any potential control action applied to the system is precisely known under all possible environmental circumstances. The choice of one or a few appropriate control actions, among the many possibilities that may be available, is then based on information of type B; and this choice, as stated before, is called optimization.
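In symbols, the two kinds of information might be written as follows; the notation is supplied here for concreteness, since the source states the formulation in words only.

```latex
% Type A: a precise model of how any control u affects the plant state x
% under all possible disturbances w,
\[
  \dot{x}(t) = f\bigl(x(t),\, u(t),\, w(t),\, t\bigr).
\]
% Type B: a criterion to be minimized over the admissible controls,
\[
  \min_{u(\cdot)} \; J = \int_{0}^{T} L\bigl(x(t),\, u(t),\, t\bigr)\, dt .
\]
% Optimization is the choice of the control u(.) achieving this minimum
% while the type-A model holds.
```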
The task of control theory is to study the mathematical quantification of these two basic problems and then to deduce applied-mathematical methods whereby a concrete answer to optimization can be obtained. Control theory does not deal with physical reality but only with its mathematical description (mathematical models). The knowledge embodied in control theory is always expressed with respect to certain classes of models, for instance, linear systems with constant coefficients, which will be treated in detail below. Thus control theory is applicable to any concrete situation (e.g., physics, biology, economics) whenever that situation can be described, with high precision, by a model that belongs to a class for which the theory has already been developed. The limitations of the theory are not logical but depend only on the agreement between available models and the actual behaviour of the system to be controlled. Similar comments can be made about the mathematical representation of the criteria and disturbances.
Once the appropriate control action has been deduced by mathematical methods from the information mentioned above, the implementation of control becomes a technological task, which is best treated under the various specialized fields of engineering. The detailed manner in which a chemical plant is controlled may be quite different from that of an automobile factory, but the essential principles will be the same. Hence further discussion of the solution of the control problem will be limited here to the mathematical level.
To obtain a solution in this sense, it is convenient (but not absolutely necessary) to describe the system to be controlled, which is called the plant, in terms of its internal dynamical state. By this is meant a list of numbers (called the state vector) that expresses in quantitative form the effect of all external influences on the plant before the present moment, so that the future evolution of the plant can be exactly determined from knowledge of the present state and the future inputs. This situation implies, in an intuitively obvious way, that the control action at a given time can be specified as some function of the state at that time. Such a function of the state, which determines the control action that is to be taken at any instant, is called a control law. This is a more general concept than the earlier idea of feedback; in fact, a control law can incorporate both the feedback and feedforward methods of control.
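A minimal sketch of a control law in this sense is linear state feedback, u = -Kx, where the action at each instant is a fixed function of the current state. The plant (a double integrator) and the gain values below are illustrative assumptions:

```python
# A control law as a function of the state: u = -K x, simulated on a
# hypothetical double integrator x'' = u (all numbers are illustrative).
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # double-integrator dynamics x' = A x + B u
B = np.array([[0.0],
              [1.0]])
K = np.array([[1.0, 1.4]])      # gains chosen so that A - B K is stable

x = np.array([[1.0],            # initial position
              [0.0]])           # initial velocity
dt = 0.01

for _ in range(1000):           # 10 seconds of simulated time
    u = -K @ x                  # the control law: action depends only on the state
    x = x + (A @ x + B @ u) * dt

print("state after 10 s:", x.ravel())   # driven near the origin
```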
In developing models to represent the control problem, it is unrealistic to assume that every component of the state vector can be measured exactly and instantaneously. Consequently in most cases the control problem has to be broadened to include the further problem of state determination, which may be viewed as the central task in statistical prediction and filtering theory. Thus, in principle, any control problem can be solved in two steps: (1) Building an optimal filter (so-called Kalman filter) to determine the best estimate of the present state vector; (2) determining an optimal control law and mechanizing it by substituting into it the estimate of the state vector obtained in step 1.
In practice, the two steps are implemented by a single unit of hardware, called the controller, which may be viewed as a special-purpose computer. The theoretical formulation given here can be shown to include all other previous methods as a special case; the only difference is in the engineering details of the controller.
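The two-step scheme can be sketched in a few lines for a scalar system: a Kalman filter maintains the best estimate of the state from noisy measurements (step 1), and a feedback law acts on that estimate rather than on the raw data (step 2). All model parameters and noise levels below are invented for the illustration.

```python
# Two-step controller sketch (all parameters illustrative):
# step 1, a scalar Kalman filter estimates the state from noisy measurements;
# step 2, a feedback control law is applied to the estimate, not the raw data.
import random

a, b = 0.95, 0.1        # plant: x[t+1] = a*x + b*u + process noise
q, r = 0.01, 0.25       # process and measurement noise variances
k_gain = 4.0            # control law: u = -k_gain * (state estimate)

x = 5.0                 # true state (never seen directly by the controller)
x_hat, p = 0.0, 1.0     # filter's estimate and its error variance

random.seed(0)
for t in range(100):
    u = -k_gain * x_hat                       # step 2: control law on the estimate
    x = a * x + b * u + random.gauss(0.0, q ** 0.5)   # plant evolves with noise
    y = x + random.gauss(0.0, r ** 0.5)       # noisy measurement of the state

    # Step 1: Kalman filter predict/update.
    x_pred = a * x_hat + b * u                # predicted state
    p_pred = a * a * p + q                    # predicted error variance
    k = p_pred / (p_pred + r)                 # Kalman gain
    x_hat = x_pred + k * (y - x_pred)         # corrected estimate
    p = (1.0 - k) * p_pred                    # corrected variance

print(f"true state {x:.3f}, estimate {x_hat:.3f}")  # both driven near zero
```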
The mathematical solution of a control problem may not always exist. The determination of rigorous existence conditions, beginning in the late 1950s, has had an important effect on the evolution of modern control, equally from the theoretical and the applied point of view. Most important is controllability; it expresses the fact that some kind of control is possible. If this condition is satisfied, methods of optimization can pick out the right kind of control using information of type B.
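For linear systems with constant coefficients, controllability reduces to a concrete test, the Kalman rank condition: the system x' = Ax + Bu is controllable exactly when the matrix [B, AB, ..., A^(n-1)B] has rank n. A sketch with illustrative matrices:

```python
# Kalman rank test for controllability of x' = A x + B u: the system is
# controllable iff [B, AB, ..., A^(n-1)B] has rank n.
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # illustrative example: a double integrator
B = np.array([[0.0],
              [1.0]])        # the input enters through the velocity

n = A.shape[0]
blocks = [B]
for _ in range(n - 1):
    blocks.append(A @ blocks[-1])   # next block: A times the previous one
C = np.hstack(blocks)               # controllability matrix [B, AB, ...]

print("controllable:", np.linalg.matrix_rank(C) == n)   # True here
```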
Machines that function better with control

light-responsive device directly to the machine in question. Related examples are the remote control of position (servomechanisms) and the speed control of motors (governors). It is emphasized that in such cases a machine could function by itself, but a more useful system is obtained by letting the measuring device communicate with the machine in either a feedforward or feedback fashion.
Control of large systems
More advanced and more critical applications of control concern large and complex systems, the very existence of which depends on coordinated operation using numerous individual control devices (usually directed by a computer). The launch of a spaceship; the 24-hour operation of a power plant, oil refinery, or chemical factory; and the control of air traffic near a large airport are well-known manifestations of this technological trend. An essential aspect of these systems is the fact that human participation in the control task, although theoretically possible, would be wholly impractical; it is the feasibility of applying automatic control that has given birth to these systems.
Biocontrol
The advancement of technology (artificial biology) and the deeper understanding of the processes of biology (natural technology) have given reason to hope that the two can be combined: man-made devices may be substituted for some natural functions. Examples are the artificial heart or kidney, nerve-controlled prosthetics, and control of brain functions by external electrical stimuli. Although definitely no longer in the science-fiction stage, progress in solving such problems has been slow, not only because of the need for highly advanced technology but also because of the lack of fundamental knowledge about the details of control principles employed in the biological world.
Control Theory is a field of applied mathematics that is relevant to the control of certain physical processes and systems. Although control theory has deep connections with classical areas of mathematics, such as the calculus of variations and the theory of differential equations, it did not become a field in its own right until the late 1950s and early 1960s. After World War II, problems arising in engineering and economics were recognized as variants of problems in differential equations and in the calculus of variations, though they were not covered by existing theories. At first, special modifications of classical techniques and theories were devised to solve individual problems. It was then recognized that these seemingly diverse problems all had the same mathematical structure, and control theory emerged.
The systems, or processes, to which control theory is applied have the following structure. The state of the system at each instant of time t can be described by n quantities, which are labeled x1(t), x2(t),... , xn(t). For example, the system may be a mixture of n chemical substances undergoing a reaction. The quantities x1(t),... , xn(t) would represent the concentrations of the n substances at time t.
At each instant of time t, the rates of change of the quantities x1(t),... , xn(t) depend upon the quantities x1(t),... , xn(t) themselves and upon the values of k so-called control variables, u1(t),... , uk(t), according to a known law. The values of the control variables are chosen to achieve some objective. The nature of the physical system usually imposes limitations on the allowable values of the control variables. In the chemical-reaction example, the kinetic equations furnish the law governing the rate of change of the concentrations, and the control variables could be pressure and temperature, which must lie between fixed maximum and minimum values at each time t.
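Written out, the known law and the control constraints take the following standard form (notation supplied here for concreteness):

```latex
% The known law relating the rates of change of the state to the state
% itself and to the k control variables:
\[
  \dot{x}_i(t) = f_i\bigl(x_1(t),\ldots,x_n(t),\, u_1(t),\ldots,u_k(t)\bigr),
  \qquad i = 1,\ldots,n,
\]
% with the physically imposed bounds on the controls, e.g. pressure and
% temperature in the chemical-reaction example:
\[
  u_j^{\min} \le u_j(t) \le u_j^{\max}, \qquad j = 1,\ldots,k .
\]
```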
Systems such as those just described are called control systems. The principal problems associated with control systems are those of controllability, observability, stabilizability, and optimal control.
The problem of controllability is the following. Given that the system is initially in state a1, a2,... , an, can the controls u1(t),... , uk(t) be chosen so that the system will reach a preassigned state b1,... , bn in finite time? The observability problem is to obtain information about the state of the system at some time t when one cannot measure the state itself but only a function of the state. The stabilizability problem is to choose control variables u1(t),... , uk(t) at each instant of time so that the state x1(t),... , xn(t) of the system approaches a preassigned state as the time of operation of the system grows.
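As with controllability, the observability question for linear constant-coefficient systems reduces to a rank test on a dual matrix. The sketch below assumes, purely for illustration, that only the first state component can be measured:

```python
# Observability rank test for x' = A x, y = C x: the full state can be
# reconstructed from y iff [C; CA; ...; CA^(n-1)] has rank n.
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])     # illustrative: only position is measured

n = A.shape[0]
blocks = [C]
for _ in range(n - 1):
    blocks.append(blocks[-1] @ A)   # next block: previous one times A
O = np.vstack(blocks)               # observability matrix

print("observable:", np.linalg.matrix_rank(O) == n)   # True: velocity can be inferred
```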
Probably the most prominent problem of control theory is that of optimal control. Here, the problem is to choose the control variables so that the system attains a desired state and does so in a way that is optimal in the following sense. A numerical measure of performance is assigned to the operation of the system, and the control variables u1(t),... , uk(t) are to be chosen so that the desired state is attained and the value of the performance measure is made as small as possible. To illustrate what is meant, consider the chemical-reaction example as representing an industrial process that must produce specified concentrations c1 and c2 of the first two substances. Assume that this occurs at some time T, at which time the reaction is stopped. At time T, the other substances, which are by-products of the reaction, have concentrations x3(T), x4(T),... , xn(T). Some of these substances can be sold to produce revenue, while others must be disposed of at some cost. Thus the concentrations x3(T),... , xn(T) of the remaining substances contribute a "cost" to the system, which is the cost of disposal minus the revenue. This cost can be taken to be the measure of performance. The control problem in this special case is to choose the temperature and pressure at each instant of time so that the final concentrations c1 and c2 of the first two substances are attained at minimal cost.
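Compactly, and with cost weights w_i introduced here for illustration (the source states the problem in words), this optimal control problem reads:

```latex
% Choose pressure and temperature u(t) = (u_1(t), u_2(t)) to minimize the
% terminal cost of the by-products (disposal cost minus revenue),
\[
  \min_{u(\cdot)} \; J = \sum_{i=3}^{n} w_i\, x_i(T),
\]
% subject to the kinetic equations and the required final concentrations:
\[
  \dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad
  x_1(T) = c_1, \quad x_2(T) = c_2,
\]
% where w_i > 0 for substances that must be disposed of and w_i < 0 for
% those that can be sold.
```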
The control problem discussed here is often called deterministic, in contrast to stochastic control problems, in which the state of the system is influenced by random disturbances. The system, however, is to be controlled with objectives similar to those of deterministic systems.