“Optimal control gained a large boost when Bryson and Denham (1962) showed that the path of a supersonic aircraft should actually dive at one point in order to reach a given altitude in minimum time. This nonintuitive result was later demonstrated to skeptical fighter pilots in flight tests.”
Franklin et al., “Feedback Control of Dynamic Systems”, p. 14.
The most awkward thing about describing control theory in general, whether I phrase it as an overview or as categorizing my books, is that I don’t understand all of control theory. I speak as one who owns a whole lot of different kinds of books on controls, rather than as one who has understood all those books. That said, let’s see what I can say.
Looking at the categories got me to thinking that I didn’t have much of anything on pure digital control, and very little on robust control. So I bought a few more books. (My friends long ago observed that it’s only news from me when I say I did not buy a book.)
Fact is, the categories exist, whether they’re explicit or semi-vague thoughts in my head. I have at least one book which I value precisely because it crosses the categories I have (Ellis). So what are they?
Discrete or continuous.
EE or ChE (electrical engineering or chemical engineering).
Modern or classical.
Let’s take an easy one: discrete or continuous. Maybe it’s only easy when we speak of the modeling rather than the practice. Our early physical models of equipment grow out of our early physics classes: linear and rotational motion, masses and forces, flows, and so on.
Differential equations. The Laplace transform.
But if we work with sampled data, which exists for discrete times, then we’re talking difference equations. The z-transform.
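To make the distinction concrete, here is a minimal sketch (my own toy numbers, assuming the standard first-order lag dx/dt = (u − x)/τ): sampling with a zero-order hold turns the differential equation into a difference equation that agrees with the continuous solution exactly at the sample instants.

```python
import math

def simulate_first_order(tau, T, steps, u=1.0, x0=0.0):
    """Exact zero-order-hold discretization of dx/dt = (u - x)/tau.

    The continuous first-order lag becomes the difference equation
        x[k+1] = a*x[k] + (1 - a)*u[k],   a = exp(-T/tau),
    which matches the continuous step response at every sample instant.
    """
    a = math.exp(-T / tau)
    x = x0
    history = []
    for _ in range(steps):
        history.append(x)
        x = a * x + (1.0 - a) * u   # one difference-equation step
    return history

# The sampled step response climbs toward the input value, just as the
# continuous solution 1 - exp(-t/tau) does.
resp = simulate_first_order(tau=1.0, T=0.1, steps=200)
```

The same plant, two descriptions: a differential equation in t, a difference equation in k.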
As it happens, almost everything I’ve looked at has been a continuous model. Sampled data or not, chemical and power plants, for example, still have such things as valves which physically control flow rates. Computer control can tell you what flow rate you want, but eventually a piece of equipment has to move and a fluid flow has to change. It may be a discrete control system, but it’s affecting a continuous model.
The magic of discrete systems, to my mind, is the “corresponding” continuous system. Suppose you have a continuous system, but you sample its output. Can you reconstruct the Laplace transform from the discrete data? I’ve seen it done, and it looked like magic, no bones about it. As usual, I know what they did, but not why it worked.
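A toy version of that reconstruction, for the easiest possible case, noise-free samples of a single decaying mode (my own made-up numbers): the sampled sequence has a z-plane pole at z = e^{sT}, so taking a logarithm maps it back to the s-plane pole.

```python
import math

# Samples of a single decaying mode x(t) = exp(s*t), taken at period T.
T = 0.5
s_true = -2.0
samples = [math.exp(s_true * k * T) for k in range(10)]

# In the z-domain the mode is a pole at z = exp(s*T), so the ratio of
# consecutive samples recovers z, and s = ln(z)/T maps the discrete pole
# back to the continuous one.
# (Only unambiguous when |Im(s)| < pi/T; faster modes alias.)
z_pole = samples[1] / samples[0]
s_recovered = math.log(z_pole) / T
```

Real identification from noisy, multi-mode data is much harder than this; the point is only that the discrete pole determines the continuous one.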
A slightly more complicated distinction between books is what I think of as “EE” or “ChE”, electrical engineering or chemical engineering. The following discussion also focuses on classical rather than modern.
As an aside, as a rough guide, if you want to find theoretical applied mathematics, go look at electrical engineering. If you want to find applied applied mathematics, go look at chemical engineering.
It’s because of that general observation that I chose the names for this categorization. What I’m calling ChE is “process control”; what I’m calling EE is everything else.
(While I’m at it, if you want applied theoretical mathematics, go look at general relativity, or quantum mechanics, i.e. theoretical physics.)
There’s more to it than that. As usual, the lines are not clearly drawn, but the mainstay of ChE control is PID, proportional–integral–derivative. More general books may use PID, or they may use “lead-lag compensation”, but process control will use PID. A book might go so far as to hint that the two sort of correspond.
I haven’t decided if that correspondence can be made exact.
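For concreteness, here is the correspondence as I understand it (my notation, not any particular book’s): an ideal PID controller has two zeros and a pole at the origin, and a lead-lag compensator approximates it when the lag pole is pushed toward the origin and the lead pole pushed out toward infinity.

```latex
% Ideal PID: two zeros, one pole at the origin
C_{\mathrm{PID}}(s) = K_p + \frac{K_i}{s} + K_d s
                    = \frac{K_d s^2 + K_p s + K_i}{s}

% Lead-lag: two zeros, two finite poles
C_{\mathrm{LL}}(s) = K \,\frac{(s + z_1)(s + z_2)}{(s + p_1)(s + p_2)}
```

As p₁ → 0 and p₂ → ∞ (with K rescaled), the lead-lag form degenerates into the PID form; it can never get there exactly with finite poles, since the ideal derivative term K_d s is improper. At least, that’s my reading of why the correspondence is hinted at rather than stated.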
Nevertheless, I think the key distinction between ChE and EE control is different. What I’m about to say is a little too simple, but not entirely wrong. The starting point of process control is: get a pile of PID controllers from the vendor, and figure out what settings you need for each one, for each piece of equipment it’s to control. To quote Stephanopoulos: “Select the type of the controller [P, PI, PD, PID] and the values of its adjusted parameters in such a way as to minimize [some measure of] the system’s response.” For precise level control you may need PI, but generally P suffices; for temperature control of a heated tank, P suffices; and so on. And if PID doesn’t suffice, you use an “advanced control technique”: cascade, feedforward, or whatever.
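As a sketch of the “P suffices, PI when it doesn’t” rule, here is a toy loop (my own hand-picked plant and gains, not anything from Stephanopoulos): proportional control alone settles with a steady-state offset, and adding integral action removes it.

```python
import math

def run_loop(kp, ki, setpoint=1.0, T=0.1, tau=1.0, steps=500):
    """Discrete P/PI loop around a first-order plant dx/dt = (u - x)/tau."""
    a = math.exp(-T / tau)   # exact ZOH discretization of the plant
    x, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * T
        u = kp * error + ki * integral   # PI law (ki = 0 gives plain P)
        x = a * x + (1.0 - a) * u
    return x

# Proportional control settles at kp/(1 + kp) = 0.8: a permanent offset.
p_only = run_loop(kp=4.0, ki=0.0)
# Integral action keeps pushing until the error is zero.
with_pi = run_loop(kp=4.0, ki=1.0)
```

Whether the residual offset of plain P matters is exactly the level-versus-temperature judgment call in the quote.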
By contrast, EE control theory would draw a block diagram, and use the classical analysis methods (Bode plot, Nyquist plot, Nichols chart, and/or root locus) to decide on the kind of controller, whether to go with PID or lead-lag. EE more commonly chooses the controller type at the same time as it chooses the parameters. Again, that’s more a rule of thumb than an exact characterization.
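The Bode-plot habit is easy to caricature in code. A toy sketch (the loop transfer function and the numbers are my own invention): evaluate L(jω) over frequency, find the gain crossover, and read off the phase margin, the two numbers an EE design conversation usually starts from.

```python
import cmath
import math

def freq_response(w):
    """Open-loop L(s) = 10 / (s*(s+1)) evaluated at s = j*w (a toy loop)."""
    s = 1j * w
    return 10.0 / (s * (s + 1.0))

# Find the gain crossover (|L| = 1) by bisection on a log-frequency grid;
# |L| is strictly decreasing for this loop, so bisection is safe.
lo, hi = 0.1, 100.0
for _ in range(60):
    mid = math.sqrt(lo * hi)
    if abs(freq_response(mid)) > 1.0:
        lo = mid
    else:
        hi = mid
w_c = math.sqrt(lo * hi)

# Phase margin: how far the phase at crossover sits above -180 degrees.
phase_margin_deg = 180.0 + math.degrees(cmath.phase(freq_response(w_c)))
```

For this made-up loop the crossover lands near ω ≈ 3.1 rad/s with a thin margin of about 18°, the sort of number that would send a designer reaching for a lead compensator.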
In a discussion of process control, you’d look like a fool if you didn’t know that X is controlled by a Y-type controller; for any other control design, you’d probably look like a fool if you didn’t start with a Bode plot.
Now is probably the appropriate place to remark that the classical analysis methods do apply to both continuous and discrete systems.
Classical versus modern? A less pejorative pair would be “transforms” versus “state space”, as Franklin et al. say, but even they fall back on the words “classical” and “modern”. State space, too, applies to both continuous and discrete systems.
I have done very little in state space, except for time series analysis. To my mind it is essential for multivariable control; unfortunately, process control is still struggling with true multivariable control. “We have ways”, they say, but I find them too ad hoc to be interesting. I might come back to them after I have state space under my belt.
The linear model we are really referring to when we say “state space” looks deceptively simple for such a marvel. First of all, I love the model itself: there are two equations, one describing how a “state” of the system evolves, the other describing how our measurements are determined by that state. We have distinguished the model from the measurements. But this is pretty much EE control, anything but process control.
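Those two equations, in the discrete case x[k+1] = A x[k] + B u[k] and y[k] = C x[k], take only a few lines to turn into a simulation (a toy two-state system with hand-picked numbers):

```python
# A minimal discrete state-space simulation. The state equation
#   x[k+1] = A x[k] + B u[k]
# evolves the internal state; the output equation
#   y[k] = C x[k]
# says what we actually measure. Toy 2-state system, numbers invented.

A = [[0.9, 0.1],
     [0.0, 0.8]]
B = [0.0, 1.0]
C = [1.0, 0.0]

def step(x, u):
    """One application of the state equation x[k+1] = A x[k] + B u[k]."""
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]

def measure(x):
    """The output equation y[k] = C x[k]."""
    return C[0] * x[0] + C[1] * x[1]

x = [0.0, 0.0]
ys = []
for k in range(100):
    ys.append(measure(x))   # we see only y, never the full state
    x = step(x, 1.0)        # constant unit input
```

The input drives the second state, which leaks into the first, which is all we measure: the model and the measurements really are separate objects.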
Modern ChE process control, by contrast, has found other ways to incorporate the model into the analysis. After all, we chose some particular control scheme because we had some model of the process; why not include that model in the control scheme? It’s just another block in a block diagram. Process control has picked up that idea, but tempered it, because the parameters of parts of chemical or power plants are not that well known, and the models themselves – never mind their parameters – are severe approximations. “Internal model control” seems to be the generic name for such schemes. Robust process control seems to focus on the reality that for poorly defined systems, such as chemical plants, optimal control is rather precarious; what we really care about is robustness, working well when the parameters of the model aren’t what we assumed them to be.
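To illustrate the internal-model idea (a toy sketch with my own numbers, not any particular IMC design): the controller runs a copy of the approximate plant model alongside the real plant and feeds back only the discrepancy between the measured output and the model’s prediction. Even with the plant gain 20% off from what the model assumes, the loop settles with no offset.

```python
import math

T, tau = 0.1, 1.0
a = math.exp(-T / tau)   # shared first-order dynamics, ZOH-discretized

def plant(x, u, gain=1.2):
    """The 'real' process: its gain is not what we assumed."""
    return a * x + (1.0 - a) * gain * u

def model(xm, u):
    """Our internal model of the process, which assumes unit gain."""
    return a * xm + (1.0 - a) * u

r = 1.0                      # setpoint
x = xm = v = 0.0
for _ in range(1000):
    y, ym = x, xm
    e = r - (y - ym)         # feedback carries only the model mismatch
    v = 0.5 * v + 0.5 * e    # Q: a first-order filter with unit dc gain
    u = v
    x = plant(x, u)          # the block diagram contains the model itself
    xm = model(xm, u)
```

If the model were perfect, y − ŷ would be zero and the loop would effectively run open; the feedback path only has to correct for what the model gets wrong, which is the appeal for poorly known plants.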
Modern process control has incorporated a system model, without adopting the state space framework. Again, too simple a statement; I believe that modern refineries use something closer to robust control in a state space framework.