Control Theory – Example 1 P-only control

Introduction

Everyone under the sun seems to have their own way of organizing control theory. I may very well end up – I hope I end up – with a few different conceptual approaches for myself. But to begin with, I’m going to just start with examples, so that I’ve got a lot of material to try organizing. If you’re new to control theory, my beginning may seem rather chaotic. Sorry.

It is also my intention to stay with classical control theory for quite a while… I’ll move to state space after I’m comfortable with SISO – single input, single output – and the pre-state-space methodologies.

This first example does one positive thing: it illustrates the effect of proportional control.

It also does a few things that are not quite positive: that is, it raises a few questions. I’m more than a little glad it’s not perfectly straight-forward. Let me not keep secrets:

  • Carstens’ first real design problem, later in the book, suggests that this solution is not an acceptable real-world design.
  • Tuning rules such as Ziegler-Nichols, which require that the system be brought to the edge of instability, cannot be applied.
  • Minimizing the obvious measures of error does not lead to his solution – although his looks pretty standard to me.

My point is not that his solution is incorrect, but that it’s sort of incomplete. It’s a student exercise. Let me rephrase that: I’d be delighted to know whether the solution to an exercise really is a real-world solution – whichever way the answer goes, I just want to know which.

Let me start by grabbing a drawing from an earlier post.

The key result from that post was that the closed loop transfer function between the reference r and the output y – with plant G, controller K, and feedback element H – is given by

G K / (1 + G H K).

There is a special case, called unity feedback, when H = 1, and we will assume H = 1, so the transfer function for the output will be:

G K / (1 + G K).

But we need another result, too. We want – OK, I want! – the closed loop transfer function between u and r: that is, I want to know what signal u comes out of the controller K. For many textbook exercises, we are not asked for that… but I think it’s worthwhile to get used to looking at it from the beginning. This can mean that I may make an arbitrary decision to split K and G, when the author provided only the product KG – as Carstens did in this example.

Anyway, let’s find the relationship between u and r, the same way we found the relationship between y and r. This time we start with u = K e and end with y = G u, to get an expression involving only u and r. We start with u = K e… substitute e = r – H y… and finally substitute y = G u, which gives u = K (r – G H u), that is, (1 + G H K) u = K r.

That says that the transfer function for the control effort u is

K / (1 + G H K).

For H = 1, of course, we get

K / (1 + G K).
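
Here’s a minimal sketch – my own check, not Carstens’ and not the notebook behind the original post – of how Mathematica can confirm both closed-loop expressions from the loop equations u = K e, e = r – H y, y = G u:

    loop = {u == K e, e == r - H y, y == G u};
    Solve[loop, {y, u, e}]     (* solve the loop for y, u, e in terms of r *)
    % /. H -> 1                (* the unity-feedback special case *)

Solve should return y -> G K r/(1 + G H K) and u -> K r/(1 + G H K) – the two transfer functions above – and, as a bonus, e -> r/(1 + G H K).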

Numerator = 100: let this be G

I’m going to try to use rules to set G and K, so that I can write symbolic equations when I need them. (I may wish I had used rules for everything, but I didn’t.)

This example comes from Carstens (6.4 on p. 177, 6.6 on p. 183, and then pp. 187-195; he’s on the Bibliographies page.) In other words, it’s a major example, his first illustration of closing the loop. I originally worked it using slightly different numbers… but I want to apply a lesson he taught later, and I want to apply it to his own work.

He started with a rather large constant in the numerator, namely 100. He didn’t distinguish G and K… in order to follow him, I will take it that G (“the plant”) has the 100 in it, so K is 1. That is, these parameters are…
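
In Mathematica, a sketch of the rules I have in mind might look like the following. The plant below is reconstructed from the numbers quoted in this post – the 100 in the numerator, a free integrator, and a corner frequency of 4 rad/sec – so treat it as my reading of the example rather than a quote of the book:

    grule = G -> 100/(s (1 + 0.25 s));   (* the plant, carrying the 100 *)
    krule = K -> 1;                      (* proportional gain *)
    gk = G K /. {grule, krule}           (* open-loop transfer function G K *)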

The corner frequency (the breakpoint) is 4 rad/sec. Here’s the open-loop Bode plot. Hey, since I forget it periodically, note the Directive command for the StabilityMarginStyle: Thick instead of Thickness[ ].
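
For reference, a sketch of the sort of call that draws it – the option names are as I recall them, and the styling inside Directive is only a guess at what the original notebook used:

    BodePlot[TransferFunctionModel[gk, s],
      StabilityMargins -> True,
      StabilityMarginStyle -> Directive[Red, Thick]]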

The phase margin is 11° – so this should be very jittery but stable. (Ah, I have read that the phase margin must be positive for stability. I’m honestly not sure why the same phase difference on either side of -180° isn’t equally good.)
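
If you prefer a number to reading it off the plot, here is one way to get the phase margin directly (a sketch, using the gk defined above): find the gain-crossover frequency, then measure the phase relative to -180°.

    wc = w /. FindRoot[Abs[gk /. s -> I w] == 1, {w, 10, 30}];
    180 + Arg[gk /. s -> I wc]/Degree     (* roughly 11 degrees *)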

Let’s get the closed loop response of this. Note that TransferFunctionCancel is required to eliminate the common poles and zeroes explicitly. Oh, I’m about to be a little sloppy with my notation. What I am about to call y should be called y/r instead. Calling the transfer function y… and in a little while using u for u/r… nicely identifies which one is the output response and which is the control effort.

So here’s GK / (1+GK), and its poles:
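
In code, one way that might look is the sketch below; TransferFunctionCancel and TransferFunctionPoles are the relevant built-ins, though the original notebook may have organized this differently:

    y = TransferFunctionCancel[
        TransferFunctionModel[gk/(1 + gk), s]]   (* closed loop y/r *)
    TransferFunctionPoles[y]                     (* expect -2 +/- 19.9 I *)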

Let me talk about that pole-zero cancellation. The key is that it is exact. When we form 1 + GK, we get…

and when we put things over a common denominator, we are multiplying by exactly 1 + 0.25 s – more generally, by whatever is in the denominator. We may not know it very precisely at all, but whatever it is, that’s what we’re multiplying by. That is, our model may not represent the actual physical plant precisely – it almost certainly is not exact! – but what we’re doing uses only the model.

When we invert 1 + GK to get 1 / (1 + GK), we get a zero, 1 + 0.25 s, which is exactly equal to the pole we started with. This cancellation should be fine.

It’s an entirely different issue if we add a zero to cancel an existing pole. In that case, if we do not know the pole exactly, then our zero is only approximately equal to it. I will close this example (in a second post) with an illustration from the Mathematica help system.

Here’s the closed-loop Bode plot:

I have computed the gain and phase margins for the closed loop… but I don’t know what they mean. Sorry. I certainly hope I figure it out.

Nevertheless, the amplitude ratio plot has a large peak (just under 15 dB) at a frequency of about 20. Well, looking back at the closed loop poles…

we infer that the peak is at about 19.9 radians/sec. (As we said before, the Bode plot shows the effect of an external driving frequency – and for lightly damped poles that effect is largest when the driving frequency is close to the damped natural frequency, which is the imaginary part of the pair of poles, here 19.8997.)
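
Just to pin down that arithmetic (using the closed-loop denominator s^2 + 4 s + 400 that the reconstruction above yields):

    poles = s /. Solve[s^2 + 4 s + 400 == 0, s]
    N[Abs[Im[First[poles]]]]     (* 6 Sqrt[11], about 19.8997 *)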

The closed loop transfer function of the controller is given by K / (1 + K G). Again, as I said, what I’m calling u is really u/r… but I just need a suggestive name.

The time-domain unit step responses of the system and the controller are given by inverse Laplace transforms, of y/s and u/s:
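
Written out with the reconstructed transfer functions – y/r = 400/(s^2 + 4 s + 400) and u/r = (s^2 + 4 s)/(s^2 + 4 s + 400) – a sketch of that computation is:

    ystep = InverseLaplaceTransform[400/(s^2 + 4 s + 400)/s, s, t];
    ustep = InverseLaplaceTransform[(s^2 + 4 s)/(s^2 + 4 s + 400)/s, s, t];
    Plot[{1, ystep, ustep} // Evaluate, {t, 0, 1},
      PlotStyle -> {LightGray, Black, Orange}]   (* colors are my choices *)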

And here is a plot of the input (unit step, whatever that pale color is), the output (black), and the control effort (gold):

Let me get around to saying that proportional control sets a control signal proportional to the error: u = K e = K (r – y). And that’s exactly what we see: at the very beginning our error is 1, so our control effort is 1. From then on, since K = 1 and r = 1, we have u = 1 – y, so the control effort is literally a reflection of the output in the horizontal line y = 1/2.

Let’s reduce the proportionality constant from K = 1.

K = .01

G is the same, but K is reduced two orders of magnitude.

The corner frequency (the breakpoint) hasn’t moved: it’s still 4 rad/sec. Here’s the open-loop Bode plot.

All we did was shift the amplitude ratio plot downward – the phase curve itself is unchanged – but that moves the gain crossover to a lower frequency, where the phase lag is smaller, so the phase margin changes: it is now 76°, and this should be sluggish.

Let’s get the closed loop response for the output:

A repeated root in the denominator. Guess what? This is critical damping. (I know, I haven’t written up the general second-order system. But critical damping is when the discriminant vanishes.)
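
A quick check on that (a sketch, writing the plant with exact coefficients so the arithmetic stays exact):

    gkSmall = 1/(s (1 + s/4));                   (* G K with K = 1/100 *)
    clSmall = Together[gkSmall/(1 + gkSmall)]    (* should be 4/(4 + 4 s + s^2) *)
    Discriminant[Denominator[clSmall], s]        (* 0, so a repeated root at -2 *)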

Here’s the closed loop Bode plot.

This time there is no phase margin to interpret – the closed-loop amplitude ratio never crosses 0 dB: it stays very close to 1 for frequencies less than about 0.2.

Here’s the closed-loop transfer function for the control effort:

As before, the time-domain unit step responses of the system and the controller are given by inverse Laplace transforms:

And here is the plot of the input (unit step, the pale color), the output (black), and (100 times) the control effort (gold):

Note that we had a problem with the scale: I multiplied the control effort by 100 in order to see it on the same vertical scale as the output. Here’s the un-scaled control effort:

Somewhere between a numerator of 1 and 100 we should be able to find a nice solution. That is, we have found

where the top plot (K = 1) oscillates quite a bit, but isn’t far off after 1 second… while the bottom plot (K = .01) is only about halfway there after 1 second. (Yes, the time scales are different.)

Before I show you what Carstens did, let me show you what we can do in Mathematica. Unfortunately, since this blog is hosted on WordPress.com, I cannot upload a CDF – a computable document format file – so I will have to show you screenshots of an animation. I’m going to show how to use the Manipulate command for the output response.

Manipulate K

We set only G…

We still write the closed loop transfer function, but with parameter K:

Note that I must include K as a parameter: o1 must be an explicit function of K. I could include t as a parameter, but I apparently do not need to. Since I earlier used o1 without an argument, I need to clear it.
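
A sketch of what that might look like – the closed-loop expression with symbolic K is my reconstruction from the plant above, and it works out to 400 K/(s^2 + 4 s + 400 K):

    Clear[o1];                       (* o1 was used without an argument earlier *)
    o1[K_] = InverseLaplaceTransform[
        400 K/((s^2 + 4 s + 400 K) s), s, t];   (* unit step response, symbolic in K *)

Note the single = rather than :=, so the transform is computed once, symbolically in K, and only then evaluated at particular values.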

With that denominator we cannot simply set K = .01 – the symbolic step response breaks down at the repeated root – but in fact we already found a valid answer for that case: critical damping. Having confirmed that, let’s forget about it. (The transfer function whose two poles were at -2 was 4/(s + 2)^2.)

So here’s the Manipulate command. As I said, K needs to be an argument to a function (here, o1)… the //Evaluate speeds things up tremendously, so consider it a necessity… and don’t try starting K at 1, not with this general equation. Here’s the initial output:
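
Since I can’t embed the live output here, the sketch below shows the kind of command I mean – the slider limits match the screenshots below, while the time range and PlotRange are just my choices:

    Manipulate[
      Plot[o1[K] // Evaluate, {t, 0, 2}, PlotRange -> {0, 2}],
      {K, 0.011, 0.2}]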

The first thing I do is click on the “+” at the far right of the slider

and I get

Now we have more information and options.

  • We see that K = 0.011 …
  • the dark triangle will play an animation, as K runs from .011 to .2 (or pause it after we start playing)…
  • the – and + will step thru values of K…
  • the arrow on the far right is a 3-way switch, for forward, backward, or both directions (which gives a smooth animation)…
  • the up and down symbols will let us alter the speed of the animation….

For things like this, I tend to step thru it rather than run the animation, but both are useful. In particular, since I cannot show you the animation, I will give you screenshots of stepping thru.

Here we have K = .02, .03, .04, and .05:

Here we have K = .07, .09, .11, and .13:

Here we have K = .15, .17, .19, and .2 (I should have gone to .21, I reckon):

And here’s an overview, K = .02, .07, .13, and .19:

By limiting K to 0.2, I’m still pretty far from our initial highly oscillatory case (K = 1), which had oscillated three times in the first second. Values of K above about .07 seem to have overshoot approaching 50% (1.5), and that might be high. (Look, the values really depend on what we’re controlling.)

How do we decide which one to pick? Let’s begin the next post by seeing what Carstens did.

Let me close by reminding you that I have two open questions from this post:

  • Why does the sign of the phase margin matter?
  • What do the gain and phase margins of a closed-loop Bode Plot mean?

Of course, we have an unanswered question – what value of K to pick – but I’ll deal with that in the next controls post.

One Response to “Control Theory – Example 1 P-only control”

  1. rip Says:

    I can see the answer to the first question by looking at a Nyquist plot. If the phase margin is negative, then we will encircle the point (-1,0), which says “unstable”. I don’t really understand Nyquist plots, but I can see the asymmetry that means the sign of the phase margin matters.

