## review: K = 1

Let us resume Example 1. We started with the following transfer function, for which I arbitrarily assumed K = 1 and that the given factor of 100 was part of the plant G. There are two poles at -4, and 0.

… which had the following open loop Bode plot

… with a phase margin of 11°. We expect that the closed-loop system will be “jittery”.
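The post's calculations are all in Mathematica (outputs elided here), but the 11° can be cross-checked in plain Python. This is a sketch under an assumption: I'm taking the plant to be G(s) = 100/(s(0.25 s + 1)) = 400/(s (s + 4)), a Bode form with the factor of 100 and poles at 0 and -4, which reproduces the margins quoted in this post.

```python
import math

def phase_margin(K):
    # Open loop L(s) = K*G(s) with G(s) = 400/(s (s + 4))  (assumed form).
    # Gain crossover: |L(jw)| = 400K / (w sqrt(w^2 + 16)) = 1
    #   -> w^4 + 16 w^2 - (400 K)^2 = 0, a quadratic in w^2.
    w2 = (-16 + math.sqrt(256 + 4 * (400 * K) ** 2)) / 2
    w = math.sqrt(w2)
    phase = -90 - math.degrees(math.atan(w / 4))  # -90 from the pole at the origin
    return 180 + phase

print(round(phase_margin(1), 1))     # about 11.4 degrees
print(round(phase_margin(0.01), 1))  # about 76.3 degrees
```

The second call anticipates the next section: dropping K by two orders of magnitude raises the phase margin to 76°.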

Oh, let me remind us that I use rules for G and K so that I can still write such things as the generic equations for y and for u… not that I displayed them in the output this time.

The closed loop transfer function for output was…

… and the output and controller effort are:

## review: K = .01

Then we reduced K by 2 orders of magnitude.

… which had the following open loop Bode plot

… with a phase margin of 76°. This, by contrast, should be “sluggish”.

The closed loop transfer function for output was…

… and the output and (100 times) the controller effort were:

What Carstens did first was to argue that somewhere between the critically damped response when K = .01 and the oscillatory response when K = 1, we should be able to find a better-looking output response. Ultimately, he looked at the open loop phase lag plot for K = 1 and said that lowering the amplitude ratio plot by 30 dB would give us a phase margin of 50°.
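The "critically damped versus oscillatory" contrast is easy to verify from the closed-loop poles. With unity feedback and the plant form I've been assuming (G = 400/(s(s+4))), the closed-loop characteristic polynomial is s² + 4s + 400K; a quick Python sketch gives:

```python
import cmath

def closed_loop_poles(K):
    # roots of s^2 + 4 s + 400 K  (closed loop, assumed G = 400/(s(s+4)))
    d = cmath.sqrt(16 - 1600 * K)
    return (-4 + d) / 2, (-4 - d) / 2

print(closed_loop_poles(1))     # -2 +/- 19.9j: lightly damped, hence "jittery"
print(closed_loop_poles(0.01))  # a double pole at -2: critically damped
```

At K = .01 the discriminant is exactly zero, which is why that response looked critically damped.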

So let’s lower K = 1 by 30 dB. What does that give us for a new K?

## Carstens: K = .032

The following calculation shows that a 30 dB decrease is multiplication by about 0.032.
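The post does this in Mathematica; in plain Python terms, the conversion from a dB change to an amplitude factor is:

```python
import math

factor = 10 ** (-30 / 20)                 # -30 dB as an amplitude ratio
print(round(factor, 4))                   # 0.0316
print(round(20 * math.log10(0.032), 1))   # -29.9: so .032 is a drop of just under 30 dB
```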

He rounded that to .032. Fine, let’s do that, too.

The corner frequency (the breakpoint) is still 4 rad/sec. Here’s the open-loop Bode plot.

The phase margin is 56° – so this should be good. But we’ve already seen that it is, in the previous post: he’s chosen something between our K = .03 and .04. Let’s look at it. Here’s the closed loop transfer function (and its poles) for the output:
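(The Mathematica output is elided above, but those poles can be cross-checked in Python, again assuming the characteristic polynomial s² + 4s + 400K:)

```python
import cmath, math

K = 0.032
wn = math.sqrt(400 * K)       # natural frequency of s^2 + 4 s + 400 K
zeta = 2 / wn                 # damping ratio, from 2*zeta*wn = 4
d = cmath.sqrt(16 - 1600 * K)
poles = ((-4 + d) / 2, (-4 - d) / 2)
print(poles)                  # -2 +/- 2.97j
print(round(zeta, 3))         # about 0.559 -- comfortably underdamped
```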

Here’s the closed loop Bode Plot:

The closed loop transfer function of the controller is, as before, given by K / (1 + K G)…

The time-domain unit step responses of the system and the controller are given by inverse Laplace transforms of y/s and u/s:

And here is a plot of the input (silver), the output (black), and (1/.032 times) the control effort (gold):

## Finding phase margin 50°

We could try to get closer to the 50° phase margin. I’d love to use Manipulate… and I can. Set a rule only for G, so that K is undefined:

Here’s a Manipulate command and the initial output. I’ll remind you that K must be an explicit argument of tf. I notice now that I did not use //Evaluate… on the other hand, you’ll see that I only stepped thru the animation rather than run it continuously.

After I click on the little + sign at the right of the slider, I get some tools… I have restricted K to a fairly narrow range, .03 to .05.

Here it is at .035. I’m looking at the top of the red line, in degrees… I want it to be at -130°… it’s still too high.

At .04 it’s closer, but still too high:

At .043, it’s probably as close to -130° as I’m going to be able to see:

So, visually, that red line running from -130 to -180 says we have a 50° phase margin somewhere around 0.043.

To be really sure, I executed the following commands for different K, until I got a computed phase margin of 50° at .0438.

Let me also record 30° (.138) and 40° (.074) for later use.
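Those three gains can be recovered by a simple bisection on the phase margin – a Python sketch, once more assuming G = 400/(s(s+4)):

```python
import math

def phase_margin(K):
    # gain-crossover frequency of K*G, with G = 400/(s (s + 4)) (assumed form)
    w = math.sqrt((-16 + math.sqrt(256 + 4 * (400 * K) ** 2)) / 2)
    return 90 - math.degrees(math.atan(w / 4))

def K_for_margin(target):
    lo, hi = 1e-4, 1.0          # phase margin falls monotonically as K rises
    for _ in range(60):
        mid = (lo + hi) / 2
        if phase_margin(mid) > target:
            lo = mid
        else:
            hi = mid
    return lo

for pm in (50, 40, 30):
    print(pm, round(K_for_margin(pm), 4))   # 0.0438, 0.0742, 0.1386
```

These round to the .0438, .074, and .138 recorded above.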

## K = .0438

As usual, rules… and a transfer function:

The open loop Bode plot and the computed margins confirm that I used the right number for K:

I’ve got to tell you… on the one hand, I’m disappointed that the actual answer (.0438) is so different from his (.032); on the other hand, we didn’t see a whole lot of difference over this range of K in the previous post. Let’s finish off this solution. Here’s the output transfer function…

Here’s the closed loop Bode Plot:

The closed loop transfer function of the controller is, as before, given by K / (1 + K G)…

And here is a plot of the input (silver), the output (black), and (1/.0438 times) the control effort (gold):

We can see them (K = .032 and .0438) side by side…

… or we can overlay them:

Oh, just how much did I drop the curve?

I want to show all 4 solutions over the same time scale.

## How would we choose among these?

I strongly suspect that most of the following criteria are used in three ways. One, as nicely defined criteria which students can be asked to meet. Two, as nicely defined criteria which can be used in the final description of a control system. Three, as looser guidelines or check points during the design process. I also believe the real criteria must depend on the specific system being controlled.

These criteria can be categorized in more than a couple of ways. One, some are time-domain and some are frequency-domain. Two, some refer to speed of response, and some to stability. Three, some deal with the transient response, and some with the long-term response. Four, some criteria actually get down to integrating weighted functions of the errors. Let’s take a look at the list, with some brief annotations about them. We’ll look at them as we go along, I think, rather than all at once. Oh, you should get the impression from my remarks that some of the definitions are not carved in stone.

Frequency Domain:

• gain margin: relative stability, 1/magnitude when phase = -180°, at phase crossover freq.
• phase margin: relative stability, 180 + phase at unity gain, which is gain crossover freq.
• delay time: speed of response, seems to be the derivative of phase lag wrt frequency.
• bandwidth: speed of response
• cutoff rate: e.g. 6 dB/octave: slope after cutoff frequency.
• resonant peak: relative stability, damping
• resonant frequency (of the resonant peak)

The only one of those we’ve actually used is the phase margin. I expect to have a very nice illustration of the gain margin when we talk about tuning rules for P–PI–PID control.

Time Domain:

• overshoot: relative stability
• delay time: speed of response, often 50% of final
• rise time: speed of response, time from 10% to 90%… or from 0 to 100% (usually of final value)
• settling time: long-term, to within 2, 3, or 5% of final value
• dominant time constant is 1/a in e^(-at)

• decay ratio
• period of oscillation
• natural period

I muttered about rise time versus overshoot, but we haven’t really quantified them. We will be able to quantify many of these for a second-order system – which is what we’ve been looking at – but let’s save a longer look at a general second-order system for down the road.

Long-term errors:

• position,
• velocity,
• and acceleration errors;

that is, responses to unit step, ramp, and parabolic inputs.

You might have noted – I let it pass without comment – that our proportional-only control (the parameter K) had no long-term error. That’s because of the 1/s in G. I will show you shortly that this example has long-term offset in response to ramp and parabolic inputs.

• Is the system stable?

This system is absolutely stable for any value of K. In particular, there is no value of K which would make this system oscillate at constant amplitude in response to a unit-step input. (Quickly: All three coefficients in the quadratic are positive. This means that -b/2a in the quadratic formula is always negative – so if the discriminant is negative, we always have an exponential decay. On the other hand, if the discriminant is positive, its square root is always less than b – so both roots are still negative, and we still have an exponential decay.)
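That quadratic-formula argument can be spot-checked numerically (a sketch; the characteristic polynomial s² + 4s + 400K again comes from the assumed G = 400/(s(s+4))):

```python
import cmath

def poles(K):
    # roots of the closed-loop characteristic polynomial s^2 + 4 s + 400 K
    d = cmath.sqrt(16 - 1600 * K)
    return (-4 + d) / 2, (-4 - d) / 2

for K in (0.001, 0.01, 0.032, 0.138, 1.0, 100.0):
    p1, p2 = poles(K)
    assert p1.real < 0 and p2.real < 0   # every pole decays, for any K > 0
print("stable for every K tested")
```

This is consistent with the open-loop phase, which only approaches -180° asymptotically – the gain margin here is effectively infinite.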

Compute the errors:

There is a host of performance criteria based on the integrals of some function of the error. I found a nice large list in D’Azzo & Houpis, one edition being “Linear Control System Analysis and Design” (2nd ed), McGraw Hill, 1981, 0-07-016183-6.

First, the errors themselves, perhaps weighted by time:

$\int_0^{\infty } e(t) \, dt$ and/or $\int_0^{\infty } t e(t) \, dt$.

I’m not sure we’d ever really want to use the signed errors… I’ve looked at the error up to the rise versus the error during the first overshoot; it didn’t seem useful. So go with the absolute error, perhaps weighted by time or time squared… and these have common abbreviations (for Integrated Absolute Error, Integrated Time Absolute Error, Integrated Squared Time Absolute Error):

IAE: $\int_0^{\infty } |e(t)| \, dt$

ITAE: $\int_0^{\infty } t |e(t)| \, dt$

ISTAE: $\int_0^{\infty } t^2 |e(t)| \, dt$.

Or we could use the analytically more tractable squared error, again possibly weighted:

ISE: $\int_0^{\infty } e(t)^2 \, dt$

ITSE: $\int_0^{\infty } t e(t)^2 \, dt$

ISTSE: $\int_0^{\infty } t^2 e(t)^2 \, dt$
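As an illustration (this is not in the post's Mathematica work), here is a numeric evaluation of IAE and ISE for the unit-step response at K = .0438, integrating the assumed closed-loop ODE y'' + 4y' + 400K·y = 400K with a fixed-step RK4:

```python
K = 0.0438
wn2 = 400 * K                    # closed loop: y'' + 4 y' + wn2*y = wn2 (unit step)

def deriv(y, v):
    return v, wn2 * (1 - y) - 4 * v

dt, y, v = 0.0005, 0.0, 0.0
iae = ise = 0.0
for _ in range(20000):           # out to t = 10; transients decay like e^(-2t)
    e = 1 - y                    # error against the unit-step input
    iae += abs(e) * dt
    ise += e * e * dt
    k1 = deriv(y, v)
    k2 = deriv(y + dt/2*k1[0], v + dt/2*k1[1])
    k3 = deriv(y + dt/2*k2[0], v + dt/2*k2[1])
    k4 = deriv(y + dt*k3[0],   v + dt*k3[1])
    y += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

print(round(iae, 3), round(ise, 3))   # IAE near 0.42, ISE near 0.24
```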

I do believe that when we get to optimal control, we will find that we are not minimizing only the error… I expect to find that we include the control effort, too. And I’ll confess that trying to minimize the various error criteria was not particularly fruitful for this example.

I want to look at two more values of K, those for 40° and 30° phase margins. (I don’t care to go more sluggish; I want to see what more jittery looks like.)

## K = .074

This is a phase margin of 40°. I forgot to display the open-loop Bode plot, but I did compute the phase margin. Here we have the parameters, the transfer function, and the phase margin in degrees:

Here’s the closed-loop output transfer function.

Here’s the closed loop Bode Plot:

The closed loop transfer function of the controller is, as before, given by K / (1 + K G)…

And here is a plot of the input (silver), the output (black), and (1/.074 times) the control effort (gold):

## K = .138

This is a phase margin of 30°. Again, I forgot to show you the open-loop Bode plot, but this output computes the phase margin:

The closed loop transfer function…

Here’s the closed loop Bode Plot:

I want to note, and save for later, that the peak is about 5 dB at a frequency of 7 rad/sec.

The closed loop transfer function of the controller is, as before, given by K / (1 + K G)…

And here is a plot of the input (silver), the output (black), and (1/.138 times) the control effort (gold):

Now let’s look at the four nice values of phase margin, 30, 40, 50, 56°. We know that the smaller the phase margin, the more jittery the output. This isn’t a bad set of responses to choose from – unless our system requires at least critical damping, i.e. no overshoot whatsoever.
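For the record, the classic second-order overshoot formula quantifies that jitteriness across these four gains (a sketch; ζ comes from the assumed characteristic polynomial s² + 4s + 400K, and all four gains are underdamped):

```python
import math

def overshoot(K):
    # percent overshoot of s^2 + 4 s + 400 K to a unit step (underdamped case)
    zeta = 2 / math.sqrt(400 * K)
    return 100 * math.exp(-math.pi * zeta / math.sqrt(1 - zeta * zeta))

for K in (0.032, 0.0438, 0.074, 0.138):
    print(K, round(overshoot(K), 1))
# roughly 12%, 18%, 29%, 42% -- overshoot grows as the phase margin shrinks
```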

## resonant frequency

Now I want to hit it at the resonant frequency, which appeared to be about 7.

There’s no reason to look up the Laplace transform of a sine wave – Mathematica can give it to me. Then I compute the inverse Laplace transforms of y times the Laplace transform of my chosen sine input. Oh, I did not change parameters; this is the system for which I observed the magnitude and location of the peak (K = .138).

Here’s my usual graph of input, output, and scaled control response.

That isn’t all that helpful. We do appear to have three sinusoids which settle down to constant amplitudes – but I emphasize that this is the response to a sinusoidal input, not to a unit-step.

Let’s look at the early time:

Let’s look just at the input and output. We see a constant phase shift, and constant amplitudes, with the output eventually close to 2 times larger than the input. I think my estimate of 5 dB was a little off.
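That factor of two is easy to confirm: evaluate the closed-loop transfer function at s = j·7 for K = .138 (a Python sketch with the assumed plant):

```python
import math

K, w = 0.138, 7.0
# closed loop T(s) = 400K / (s^2 + 4 s + 400 K), evaluated at s = jw
T = 400 * K / ((1j * w) ** 2 + 4 * (1j * w) + 400 * K)
gain_db = 20 * math.log10(abs(T))
print(round(abs(T), 3))    # about 1.925 -- nearly twice the input amplitude
print(round(gain_db, 2))   # about 5.69 dB, a shade above the 5 dB read off the plot
```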

## ramp response

Let me also look at the ramp and parabolic responses. Let me return to .0438 (50° phase margin). I reset the parameters… and recompute the output transfer function y:

The closed loop transfer function of the controller is, as before, given by K / (1 + K G)…

For the time-domain ramp responses of the system and the controller, I need the Laplace transform of a ramp function, namely t itself… and then I get the output and control effort:

And here is a plot of the input (silver), the output (black), and (1/.0438 times) the control effort (gold):

We see that the output is always less than the input signal, and appears to tend to a constant offset. The control effort, on the other hand, stabilizes.
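That constant offset follows from the final value theorem. With the ramp input R(s) = 1/s², the error transform is E(s) = (1 − T(s))/s², and – assuming T(s) = 400K/(s² + 4s + 400K) as above – the limit is simple enough to write down:

```python
# e_ss = lim_{s->0} s*E(s) = lim_{s->0} (s + 4)/(s^2 + 4 s + 400 K) = 1/(100 K)
K = 0.0438
e_ss = 1 / (100 * K)
print(round(e_ss, 3))   # about 0.228 -- the steady offset seen in the ramp plot
```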

## parabolic response

We’re not changing the output or control transfer functions, merely the input. Here we see one of the key benefits of transfer functions. I used t^2/2 rather than t^2 because it gave me 1/s^3 without a factor of 2. This may or may not be what I was supposed to do, but I don’t really care. The 2 is irrelevant in principle, although perhaps not by custom.

And here is a plot of the input (silver), the output (black), and (1/.0438 times) the control effort (gold):

We could take a longer look. The discrepancy between input and output appears to be growing. As does the control effort.
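A numeric check (the same hypothetical RK4 sketch as before, now with the parabolic forcing) shows the error growing linearly, at about 1/(100K) per unit time – the type-1 loop eventually can’t keep up with an accelerating input:

```python
K = 0.0438
wn2 = 400 * K

def deriv(t, y, v):
    r = t * t / 2                         # parabolic input, matching the post
    return v, wn2 * (r - y) - 4 * v

dt, y, v = 0.001, 0.0, 0.0
samples = {}
for i in range(20000):
    t = i * dt
    k1 = deriv(t, y, v)
    k2 = deriv(t + dt/2, y + dt/2*k1[0], v + dt/2*k1[1])
    k3 = deriv(t + dt/2, y + dt/2*k2[0], v + dt/2*k2[1])
    k4 = deriv(t + dt,   y + dt*k3[0],   v + dt*k3[1])
    y += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    if i + 1 in (10000, 20000):
        tt = (i + 1) * dt
        samples[tt] = tt * tt / 2 - y     # error = input - output

print(samples)   # error at t = 20 exceeds error at t = 10 by about 10/(100 K)
```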

## Summary

We’ve looked at a specific second-order system… we’ve varied only the controller K, as a constant in the numerator. I remember being surprised years ago when I first saw just how wide a range of behavior we could get out of P-only control. Although this case was always stable, that doesn’t always happen, and it is possible for extreme values of K (in either direction, I think) to cause instability.

We have seen several unit-step responses, as well as one each of sinusoidal, ramp, and parabolic inputs.

I’ve introduced, just barely, a whole slew of performance measures.

What I forgot to do was explain why I think Carstens would have modified the solutions with nice phase margin. And rather than tack that onto this post, I’ll do it in the next controls post. It may be a rather short post – wouldn’t that be a change!? – but I also don’t feel like splitting this one apart now that it’s ready to go. I could put out two posts of approximately the same size, instead of one large and one small… but let me leave this one alone. Besides, the more I think about the next controls post, the bigger it gets.

Take this one in small bites. Perhaps you were going to anyway.

Have a nice Thanksgiving, those of you who celebrate it.