**edited: 11/14/2012 to replace “sum” with “product”; search on (edit)**

**edited: 1/12/2013 to replace “gain margin” with “phase margin”; search on (edit)**

## Introduction

Let me emphasize that we require Mathematica® version 8 for the following. Prior versions required an add-on, Control System Professional. The commands are a little different now.

We’ve seen one example of using Laplace transforms to solve an ordinary differential equation. Frankly, that’s not enough, so if they’re new to you, you’ll need to study up on them.

Now I want to move on and introduce transfer functions. I hope you recall – or that it at least sounds familiar – when I say that Laplace transforms incorporated the initial conditions into the solution. Instead of getting a general solution with unknown constants, we get a specific solution that incorporates all the initial conditions.

Transfer functions are a special case of Laplace transforms – where we set all of the initial conditions to zero. On the other hand, transfer functions are a little more general than Laplace transforms: we use an arbitrary forcing function and arbitrary coefficients in the equation.

What?

Let me show you.

I start with an ordinary differential equation, and I get its Laplace transform. The differential equation is first order with constant coefficients, scaled so that y has a coefficient of 1. The RHS (right hand side) is an arbitrary function u(t) with a coefficient called kp.

I only need two multiplicative parameters, called τ and kp by convention. If I had had three coefficients, I would have divided through by the coefficient of y. (Or you might say, that’s where kp came from on the RHS.)

We see two Laplace transforms, of y and of the forcing function u. Let me simplify the notation, writing the transforms as Y and U respectively.

Now, let me solve for Y(s):
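The screenshot of the algebra isn't reproduced here, but written out in the conventional first-order form τ y′ + y = kp u (the two parameters named above), it goes like this:

```latex
\tau\,\dot y(t) + y(t) = k_p\,u(t)
\quad\xrightarrow{\;\mathcal{L},\ y(0)=0\;}\quad
(\tau s + 1)\,Y(s) = k_p\,U(s)
\quad\Longrightarrow\quad
Y(s) = \underbrace{\frac{k_p}{\tau s + 1}}_{G(s)}\,U(s).
```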

That is called the **transfer function** for the physical process described by the differential equation.

In other words, the relationship between Y and U is multiplicative and fixed, in that it depends on the coefficients of the differential equation. We have broken the differential equation into two parts – one which embodies y and its derivatives, and one which embodies the forcing function u.

Once I choose the forcing function u, I get its Laplace transform U, then I multiply by the transfer function to get the Laplace transform Y, and then the inverse Laplace transform gets me y(t).

(We will see down the road that I have specialized, too. The RHS need not just be a function u(t) – like the LHS, it could include derivatives.)

Of course, if I were only interested in one forcing function, we wouldn’t have bothered with new machinery. One advantage of the transfer function approach is that it lets us easily compute the effect of different forcing functions.

Another advantage of the transfer function approach is that it lets us conveniently characterize each of the components of a complicated system.

Let me illustrate the use of our simple transfer function (one of the simplest of all!).

In general, we call this example a **first order system**. I can wrap the bare transfer function G into a “transfer function model”:

(Now you know why I wrote it the way I did; this is the customary form.) For much but not all of what I’m doing in this post, we could work with G itself rather than with the transfer function model tf constructed from it. In fact, we will want both forms.

Let’s set parameters – kp = 2 and τ = 3 – and set both forms (although I only printed one):

Where are the poles?

And asking for the poles of tf1 is just a little easier than finding the roots of the denominator of G1… but we could:
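For readers following along without Mathematica, here's the same pole as a plain Python cross-check (mine, not the post's `TransferFunctionPoles` call):

```python
# Cross-check outside Mathematica. For G1(s) = kp/(tau*s + 1) with kp = 2,
# tau = 3, the single pole is the root of the denominator: 3*s + 1 = 0.
kp, tau = 2.0, 3.0
pole = -1.0 / tau
# sanity check: the denominator really vanishes at the pole
assert abs(tau * pole + 1.0) < 1e-12
print(pole)  # -0.3333...
```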

## using Laplace transforms for the output

Next I want to push our given transfer function with an impulse, a unit step function, and a ramp… and see what the output responses of the system are. Now, the impulse really behaves like a nonzero initial condition, while the step function and ramp are not unusual forcing functions for our first exposure to differential equations. Let me do this all using Laplace transforms.

So, in order to compute Y(s), I must multiply the transfer function by U(s) – so I need U(s) for impulse, step, and ramp. Mathematica calls the first the DiracDelta, the second the UnitStep, but for the third, I just used u(t) = t:

We see that the transforms are 1, 1/s, and 1/s^2. Now I want three inverse Laplace transforms, for the three products – Y itself, Y/s, and Y/s^2 – and just for completeness, I’m going to ask for the limit of each inverse transform as t goes to infinity.
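Those limits can be cross-checked with the final value theorem, lim as t → ∞ of y(t) = lim as s → 0⁺ of s·Y(s), valid when the time limit exists. A numeric Python sketch (mine, rather than the post's Mathematica):

```python
# Final value theorem cross-check: lim_{t->inf} y(t) = lim_{s->0+} s*Y(s),
# with Y(s) = G1(s) U(s) and G1(s) = 2/(3 s + 1). (For the ramp the limit
# does not exist; s*Y(s) simply blows up as s -> 0, matching the divergent
# ramp response.)
def G1(s):
    return 2.0 / (3.0 * s + 1.0)

s = 1e-9  # approach s -> 0 from the right
impulse = s * G1(s) * 1.0             # U(s) = 1      -> limit 0
step    = s * G1(s) * (1.0 / s)       # U(s) = 1/s    -> limit 2
ramp    = s * G1(s) * (1.0 / s**2)    # U(s) = 1/s^2  -> grows without bound
print(impulse, step, ramp)
```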

In the following, please note that I have included the input forcing term, in gray – what we would call the **control effort** in this context. Texts strongly recommend that we get in the habit of always seeing the control effort – in particular, we want to know when it’s outside reasonable bounds. For a change, I used Mathematica’s Plot command rather than Dave Park’s Presentations package. (I had an ulterior motive; maybe we’ll see it down the road.)

Impulse Response:

Unit Step Response:

Ramp Response:

The graphs are consistent with the limits we computed, which were 0, 2, infinity. Just to confirm things, I want to solve these just the way we would solve ODEs. Here’s the differential equation, first in general… and then we set our numerical parameters:

The ramp response is simple enough: I set the RHS = t, and I set the initial conditions IC to zero, and I solve:

The step response is similar, just use 1 instead of t for the RHS:

But for the impulse response, it turns out that I need to set the initial condition to 2/3 – the ratio of the two parameters – and the RHS to zero:

I’m still thinking about why it’s 2/3 instead of 1.
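(The comments at the end of this post resolve the question; here is their check as a numeric Python sketch, mine rather than the original Mathematica.)

```python
import math

# Why 2/3: the impulse response is the inverse transform of G1(s) = 2/(3 s + 1),
# namely y(t) = (2/3) e^(-t/3) = (kp/tau) e^(-t/tau) -- so it looks exactly like
# the zero-input response of 3 y' + y = 0 started from y(0) = kp/tau = 2/3.
kp, tau = 2.0, 3.0

def y(t):
    return (kp / tau) * math.exp(-t / tau)

# check the initial condition, and that y satisfies 3 y' + y = 0
h = 1e-6
t0 = 1.7
dydt = (y(t0 + h) - y(t0 - h)) / (2 * h)  # central-difference derivative
print(y(0.0))              # 0.666...
print(tau * dydt + y(t0))  # ~ 0: homogeneous equation satisfied
```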

## the Bode Plot

We’ll be looking at Bode plots a lot, and I hope we’ll learn more about them… but I want to show you one now, without any preparation to speak of. Recall our numerical transfer function, in both forms:

The command I want is just BodePlot. Not surprisingly, if that’s all I ask for, the results aren’t very pretty.

We do get the same result if we use the transfer function model tf1 instead of the bare expression G1… and I will use the transfer function model henceforth.

The green background is my own stylesheet’s instruction… and the blue lines for graphs are Mathematica’s default. Let me set the range on the horizontal axis, and the background color, and get larger images:
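The numbers behind the two curves can also be sketched in plain Python (my own cross-check, not the post's code): magnitude in decibels and phase in degrees of G1(jω), sampled over a log range of ω.

```python
import cmath
import math

# What the two Bode curves plot, computed directly for G1(s) = 2/(3 s + 1).
def G1(s):
    return 2.0 / (3.0 * s + 1.0)

for omega in (0.01, 0.1, 1.0, 10.0):
    g = G1(1j * omega)
    mag_db = 20.0 * math.log10(abs(g))        # amplitude ratio in dB
    phase_deg = math.degrees(cmath.phase(g))  # phase shift in degrees
    print(f"omega = {omega:6.2f}   {mag_db:7.2f} dB   {phase_deg:7.2f} deg")
```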

I’ll have more to say about these graphs very shortly, when we can see more detail.

Now let me ask for some things called gain and phase margins, in both the graphs (StabilityMargins -> True) and numerically (GainPhaseMargins[tf1])… (That’s another command that requires a transfer function model.) I’ve also converted the phase margin from radians to degrees.

We don’t know what that means yet, but there is no gain margin drawn on the first graph (it’s infinite – there’s no frequency at which to measure it!), and the phase margin, drawn on the second graph, is 120°, at a frequency of ω = 0.57735. (And -60 = -180 + 120, or perhaps I should write 120 = -60 - (-180).)
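Those margin numbers can be reproduced by hand; a Python sketch of the arithmetic (mine, for readers without Mathematica):

```python
import math

# The phase margin by hand: |G1(j w)| = 2/sqrt(1 + 9 w^2) equals 1 at the gain
# crossover w = sqrt(3)/3 ~ 0.57735; the phase there is -atan(3 w) = -60 deg,
# so the phase margin is 180 - 60 = 120 degrees.
w_gc = math.sqrt(3.0) / 3.0                        # gain crossover frequency
amplitude = 2.0 / math.sqrt(1.0 + 9.0 * w_gc**2)   # amplitude ratio there: 1
phase_deg = -math.degrees(math.atan(3.0 * w_gc))   # -60 degrees
phase_margin = 180.0 + phase_deg                   # 120 degrees
print(w_gc, amplitude, phase_deg, phase_margin)
```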

Return to our Bode plot and add gridlines:

In this case, we can still see the phase margin in the second plot, but in general we might want it thicker and a more outstanding color. The following, then is what I usually call for… and I have followed the BodePlot command with an immediate request for the numerical values of the margins:

As I said, I may talk about Bode plots in more detail in a subsequent post… but let me at least tell you what the heck we’re looking at, now that we have larger and more detailed graphs.

If we have what’s called a linear time-invariant system (i.e. a linear differential, difference, delay-differential, or delay-difference equation with constant coefficients), and if the forcing function on the RHS is a sinusoid of one pure frequency and amplitude 1, then the forced output will be a sinusoid of

- the same frequency
- but with amplitude a function of frequency
- and a phase shift which is also a function of frequency.
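For the first-order example in this post, those two functions can be written down explicitly (consistent with the margin numbers computed above):

```latex
G(j\omega) = \frac{k_p}{1 + j\tau\omega},
\qquad
A(\omega) = \frac{k_p}{\sqrt{1 + \tau^2\omega^2}},
\qquad
\varphi(\omega) = -\arctan(\tau\omega).
```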

I would like to have said that the steady-state response would be a sinusoid of the same frequency… but that’s only true if there is a steady state response! In general the output will include real exponentials – the **natural response**, i.e. the solution of the homogeneous equation. If they decay to zero, fine, we have a steady-state; but if they do not decay to zero, then the solution is governed by the growing exponentials. This diary post illustrates the difference. Also read my comment immediately following that post.

That’s important enough to say again. The full solution of a non-homogeneous linear differential equation is the sum of two things: a general solution, with arbitrary constants, to the homogeneous equation, i.e. the RHS = 0; and a particular solution to the given RHS. The general solution need not decay to zero… but the Bode plot describes only the particular solution as a function of frequency when the given RHS is a pure sine wave.

Moving on…. What I’ve said is that the forced output has an amplitude A(ω) instead of input amplitude 1, so A can be called the amplitude ratio; and the phase difference is φ(ω). The two Bode plots describe the amplitude ratio A(ω) and the phase shift φ(ω).

But not directly. For one thing, since we want a wide range of frequencies, the horizontal axis is log ω for both graphs – with ω usually in radians per second. The second graph is then φ versus log ω.

The first graph also appears to have a linear vertical axis – but that’s deceptive. Look at the values: they range from about +7 down to about -45. Ratios? No.

Logarithms of ratios.

The first graph is A(ω) measured in decibels (dB): a value of 0 on the vertical axis corresponds to A = 1. To get decibels from a ratio A, compute 20 times the log base-10 of A:

Conversely, 3 dB – a frequent criterion for something – is a ratio of about 1.4:
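The conversions in both directions, as a quick Python sketch (my cross-check of the numbers above):

```python
import math

# dB <-> amplitude ratio, as just described: dB = 20 log10(A) and A = 10^(dB/20).
def to_db(ratio):
    return 20.0 * math.log10(ratio)

def to_ratio(db):
    return 10.0 ** (db / 20.0)

print(to_db(1.0))     # 0.0 -- a ratio of 1 sits at 0 dB
print(to_db(2.0))     # ~6.02 -- our kp = 2 shows up as about 6 dB at low frequency
print(to_ratio(3.0))  # ~1.41 -- the "3 dB" criterion is a ratio of about 1.4
```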

Now I really should say more about the gain and phase margins. The narrow issue is: does it ever happen that the output is 180° out of phase with the input, and of the same magnitude? The answer is usually, no, not simultaneously. So we widen the issue, in two ways. One, we ask: if the output is ever 180° out of phase with the input, what is its amplitude at that frequency? That leads to the **gain margin**. Two, we ask: if the output is ever the same amplitude as the input, how far from 180° is it out of phase at that frequency? That leads to the **phase margin**.

Let me back up a little. Why do we care if the output is 180° out of phase? Because we’re going to subtract it from the input to get our error signal. If we added them under these conditions, they would cancel – but we’re not adding them, we’re subtracting, so they will reinforce. Expect the error to get worse. Sometimes people say the subtraction itself amounts to a 180° phase shift, and so another 180° phase shift will change the subtraction to an addition.

Finally, we generalize the issue. If 180° out of phase at the same amplitude is terrible, then close to 180° out of phase should be bad. This leads eventually to a rule of thumb suggesting that a (**edit**: ~~gain~~) phase margin of 40-60° is good… anything smaller is “jittery” control and anything larger is “sluggish”.

At this point, I am mostly using the gain margin to get the exact value of the **gain crossover frequency**, where the amplitude ratio is equal to 1. I’ll show you this soon. In the meantime, I’m simply not going to worry about the details of gain and phase margins.

As for this specific Bode plot… the phase is never ±180°, so there is no frequency at which to compute a gain margin. (If the phase is ever 180° – well, no, never… then our margin – how close to 1 are we? – is said to be infinite.) On the other hand, when the amplitude ratio is 1, the phase is -60°, and we translate that to a phase margin of 120°. Not surprisingly, given that there was no finite answer for the gain margin, we see only one line added to the plots, specifically to the phase plot. So the frequency ω = 0.57735, where the amplitude ratio is 1, is the gain crossover frequency for this example.

One last thing. Bode plots are easy to sketch if the transfer function is a product of linear and quadratic terms, raised to positive or negative powers. Things have been arranged so that we can add the amplitude ratios and the phase shifts of each factor – that’s why the amplitude ratio is plotted on a log scale. The product of any two terms |G1| |G2| has been replaced by the sum

20 log10 |G1| + 20 log10 |G2|,

which is the logarithm of the **(edit)** ~~sum~~ product, which was to be computed.

That’s fine for products of terms. But to get from an “open loop” transfer function G to the corresponding closed loop, we must compute

G/(1 + G)

and that’s not easy… by hand. Hence Nichols charts, which go from an open loop to the closed loop Bode plot… and gain and phase margins of an open loop plot, which tell us about the stability of the closed loop transfer function.
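By machine, though, the closed-loop computation is just algebra. A Python spot check for our numbers (my sketch, assuming unity negative feedback, which the post doesn't draw explicitly):

```python
# Closing the loop: with unity negative feedback, Gcl = G/(1 + G). For
# G1(s) = 2/(3 s + 1) this simplifies to 2/(3 s + 3); check numerically at a
# few real and complex frequencies.
def G1(s):
    return 2.0 / (3.0 * s + 1.0)

def Gcl(s):
    return G1(s) / (1.0 + G1(s))

for s in (0.5, 2.0, 1j, 0.3 + 0.7j):
    assert abs(Gcl(s) - 2.0 / (3.0 * s + 3.0)) < 1e-12
print("Gcl(s) == 2/(3 s + 3) at the sampled points")
```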

(Our first order system could be either an open loop or a closed loop transfer function. By asking for gain and phase margins, I am considering it as an open loop transfer function.)

But we’re not doing things by hand anymore, so getting a closed loop Bode plot is almost trivial, and we really ought to call for them without a second thought.

Summary:

- the horizontal axis of a Bode plot is log ω, where ω is usually in radians per second.
- one vertical axis is amplitude ratio in decibels.
- the other vertical axis is phase shift in degrees.
- the two plots describe the forced response to a sinusoidal input…
- they do not describe the natural response.
- the gain margin is computed at the frequency where (if) the phase shift is 180°.
- the phase margin is computed at the frequency where (if) the amplitude ratio is one.
- Gain and phase margins of an open loop tell us about the stability and responsiveness of the closed loop.

Now let me show you that we could specify different properties for the two graphs:

- PlotStyle->{{Red,Thickness[.01]},{Blue,Thickness[.005]}}
- Background->{Legacy@Linen,Legacy@BlanchedAlmond}
- ImageSize->{400,200}

I have yet to generalize separate directives for graphs to multiple transfer functions, which is more than a little annoying. I don’t know whether it is my lack of understanding, or whether there is a bug in the BodePlot command. (On the other hand, it works as I would expect in Control System Professional, so I have reason to think it’s a bug.) Still, it seemed worthwhile to emphasize that it can be done for a single transfer function. Note that it was done here by giving a list of specs: two assignments for the PlotStyle, two for Background, and two for ImageSize.

## frequency response

Now consider the amplitude ratio plot. For an input frequency below ω = 0.57735, the output amplitude should be greater than the input amplitude, and less for frequencies above it.
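The amplitude ratio at the two frequencies used below, as a quick Python sketch (my arithmetic, not the post's code):

```python
import math

# A(w) = 2/sqrt(1 + 9 w^2) at one frequency below and one above the
# crossover w = 0.57735
def A(w):
    return 2.0 / math.sqrt(1.0 + 9.0 * w * w)

print(A(0.3))  # ~1.49, greater than 1: the output is amplified
print(A(1.0))  # ~0.63, less than 1: the output is attenuated
```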

Let’s see that.

Let’s set the frequency to 0.3, and use a pure sine input… and let me show you another way to get the output. The command OutputResponse came from Control System Professional, and its input must be either a transfer function model, or a state space model. Furthermore, it cannot handle a DiracDelta function for a continuous system, so it cannot be used “out of the box” for the impulse response. No problem: we saw that the Inverse Laplace Transform works just fine.

As will be my custom, as I said, the control effort – the forcing function – is shown in gray. It seems clear that there is a constant phase difference between input and output… and we can see a tiny decay at the beginning, from the first red peak to the second. Oh, the output settles down to a constant amplitude greater than 1 – as expected for the lower frequency.

I could also have done that slightly differently. Instead of {t, 0, 100}, which got me an interpolating function for the answer, I could have just specified t, which would have gotten me a symbolic answer insofar as possible.

It does require some attention to get it to a reasonable form. See this post for graphic failure. This symbolic form is more informative. For example, what is that exponential term? Ah, that’s the natural response – the solution of the homogeneous equation… which decays in this case, so we will have a simple sinusoidal forced response, after a very little while.

Finally, I could have done this same way as I got the impulse, step, and ramp responses. I use that for the very next illustration.

Now let’s set the frequency to 1. This time, let me use the old method – just to remind you. I need the Laplace transform U(s) of the input…

and then I get the inverse Laplace transform of the product Y(s) = G1(s) U(s) – here’s a place where I need the bare transfer function G1 rather than the wrapped-up transfer function model tf1:

The amplitude settles to a constant value less than 1… the phase difference appears to be constant… the exponential decay at the beginning is more obvious.

So much for an introduction. Here’s my personal summary of this numerical transfer function: Bode plot on the left… impulse, step, and ramp responses on the right… followed by two selected frequency responses.

## Summary

Here’s the command. The outermost braces { } around the g’s have the effect of reducing the space between the Bode plot and the three outputs. I have no idea why. But there was a lot more space around the plots as a whole, so I have the command and the output as separate screen shots.

And here are the two selected frequency responses:

November 14, 2012 at 10:50 am

I want to note an open question from this post:

How could I have known that the correct initial condition for the ODE was 2/3 instead of 1?

January 24, 2014 at 6:53 am

It’s because he used kp=2 and tau=3 … change these parameters and the initial conditions you need will change respectively

January 24, 2014 at 7:02 am

in addition:

take o1, the inverse LT of G1. Set t->0 (initial condition) and you’ll get 2/3 … now guess what happens if you add an impulse with u(0)=1 😉