## Introduction

Down the road, I expect to be using Laplace transforms to set up and solve electric circuits, and for transfer functions in control theory. An obvious starting point is to remind you just what a Laplace transform is.

So I should show you at least one example of solving a differential equation using Laplace transforms.

But if I do that, I really should remind you of the alternative solution, the one you almost certainly learned first.

On top of that, I really should show you what Mathematica® can do.

As if all that weren’t enough – though it really won’t take very long – I have seen a nice approach to Heaviside’s operational calculus, and I want to show that to you, too. By the time I explain it, though, it will justify a post of its own.

So, I propose to take a typical equation for these methods – linear, with constant coefficients – and I am going to

- let Mathematica solve it symbolically
- check the symbolic answer
- let Mathematica solve it numerically
- solve it using Laplace transforms
- solve the homogeneous equation and then find a particular solution to the inhomogeneous equation

and in a subsequent post I will

- solve it using Heaviside’s operational calculus

If this is all new to you, or very old material you haven’t touched in a long time, you will want to find a sophomore engineering mathematics text. It should show you methods 4 and 5, and it may give you some idea of how to solve the equation numerically yourself. That’s one potential benefit of this post: to motivate you to study differential equations and Laplace transforms.

I now have three references for Heaviside’s operator methods… the most recent, and the clearest, is Murray Spiegel’s “Applied Differential Equations”, 3rd edition, 1981, Prentice Hall.

The most rigorous, alas, is an ancient faded set of typed lecture notes of K. O. Friedrichs of New York University in the spring of 1944. (Not only do friends give me their old textbooks, one friend gave me materials from his father and grandfather!)

The third reference no longer seems very useful, but I won’t know for sure until I’ve spent more time on Heaviside. It is the Barnes and Noble College Outline Series volume “Differential Equations” by Kaj Nielsen. I do not know if this is my original copy from my freshman year in college – if not, it’s a used copy I bought in order to replace the original. This is where I first saw Heaviside’s operator approach.

The real purpose of including Heaviside in this post was that I am just beginning to play with his methods… I had intended to start by showing an example. But, as I said, I really need to talk about the method itself, too, so that will happen later.

With that, let’s get rolling….

## DSolve: symbolic solution

Here we have an inhomogeneous (1 on the RHS) second order (y'') linear differential equation with constant coefficients… and two initial conditions, one on y and one on the derivative y'. It is only personal preference that I start with the differential equation and the initial conditions as separate expressions.

Now I ask Mathematica to solve it, using DSolve. In the first command, the {DE, IC} // Flatten merely creates one list out of two, and the // Flatten at the end removes the extra braces (one level of list nesting) from the solution.

The second command gave me an expression rather than a rule.

Now draw a picture. (As usual, I use Dave Park’s graphics software.)

That was pretty easy… and outright magic if you don’t understand the underlying mathematics. I hope you do, or set out to learn it.
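The Mathematica input itself isn’t reproduced above, but the result can be sketched in Python under an explicit assumption: take the equation to be y'' - y = 1 (consistent with the characteristic roots p = ±1 and the particular solution y = -1 that appear later in this post), with hypothetical initial conditions y(0) = 0 and y'(0) = 0. The symbolic answer is then y(x) = cosh(x) - 1:

```python
import math

# Hypothetical concrete problem (the post's exact equation and initial
# conditions are not reproduced here): y'' - y = 1, y(0) = 0, y'(0) = 0.
# The general solution is C1*e^x + C2*e^(-x) - 1, and these initial
# conditions give C1 = C2 = 1/2, so y(x) = cosh(x) - 1.

def y(x):
    """Closed-form solution of y'' - y = 1 with y(0) = y'(0) = 0."""
    return math.cosh(x) - 1.0

def yp(x):
    """Its first derivative, y'(x) = sinh(x)."""
    return math.sinh(x)

# Both initial conditions are satisfied:
print(y(0.0), yp(0.0))  # 0.0 0.0
```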

## Check that

I want to emphasize that you need not take Mathematica’s answer as gospel. What was the equation?

Is

really a solution? Well, compute the left hand side… and we do indeed get the RHS, 1.

You might note the format for getting a second derivative wrt x.
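The same check works numerically, too. A minimal sketch in Python, again assuming the hypothetical problem y'' - y = 1 with y(0) = y'(0) = 0: approximate y'' with a centered finite difference and confirm that the left hand side y'' - y returns the RHS, 1:

```python
import math

def y(x):
    # Assumed solution of the hypothetical problem y'' - y = 1, y(0) = y'(0) = 0
    return math.cosh(x) - 1.0

def second_derivative(f, x, h=1e-4):
    # Centered finite-difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# The residual y'' - y should be 1 at every point
for x in (0.0, 0.5, 1.0, 2.0):
    residual = second_derivative(y, x) - y(x)
    assert abs(residual - 1.0) < 1e-5, residual
print("y'' - y = 1 at all sample points")
```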

## NDSolve: numerical solution

Not every differential equation can be solved symbolically. What then? Well, there’s no harm in getting just a numerical solution in addition to our symbolic one, although now a graph is essential rather than merely auxiliary information. This time I call NDSolve, and I use y instead of y[x], and {x, 0, 4} in place of x.

The answer is not only a rule, but also something called an InterpolatingFunction.

Plot it. Note that I specify y[x] and use the solution s1:

Now let me overlay the two pictures of symbolic and numerical solutions:

Looks good to me… though that line seems a little thicker. Well, let’s look at the differences:

Now that looks good.
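NDSolve’s machinery isn’t shown here, but the idea can be sketched with a classical Runge-Kutta integrator in Python, again assuming the hypothetical problem y'' - y = 1 with y(0) = y'(0) = 0, and then differencing against the closed form y = cosh(x) - 1 just as the overlay and difference plots do:

```python
import math

def rk4(f, y0, yp0, x0, x1, n=400):
    """Integrate y'' = f(x, y) as a first-order system with classical RK4.
    Returns a list of (x, y) pairs; a minimal stand-in for NDSolve's
    InterpolatingFunction."""
    h = (x1 - x0) / n
    x, y, v = x0, y0, yp0          # v = y'
    out = [(x, y)]
    for _ in range(n):
        def deriv(x, y, v):
            return v, f(x, y)      # (y)' = v, (v)' = f(x, y)
        k1y, k1v = deriv(x, y, v)
        k2y, k2v = deriv(x + h/2, y + h/2 * k1y, v + h/2 * k1v)
        k3y, k3v = deriv(x + h/2, y + h/2 * k2y, v + h/2 * k2v)
        k4y, k4v = deriv(x + h, y + h * k3y, v + h * k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
        out.append((x, y))
    return out

# Hypothetical problem: y'' - y = 1, i.e. y'' = y + 1, with y(0) = y'(0) = 0
pts = rk4(lambda x, y: y + 1.0, 0.0, 0.0, 0.0, 4.0)

# Difference against the closed form y = cosh(x) - 1
max_diff = max(abs(yv - (math.cosh(xv) - 1.0)) for xv, yv in pts)
print(max_diff < 1e-5)  # True: the numerical and symbolic solutions agree
```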

## Laplace Transforms

As a reminder, the Laplace transform F(s) of the function f(t) is defined as the integral of e^(-st) f(t) over t from 0 to infinity:

(There is nothing significant about t and s – the arguments of f(t) and F(s) – so long as they are the two arguments in e^(-st). Similarly, the choice of f and F for the function and its transform is a personal – and common – choice.)
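That definition can be spot-checked numerically. A minimal sketch in Python, using a test function whose transform I know in closed form (f(t) = cosh(t) - 1 has F(s) = 1/(s(s² - 1)) for s > 1):

```python
import math

def laplace_numeric(f, s, T=40.0, n=4000):
    """Approximate F(s) = integral from 0 to T of e^(-s t) f(t) dt using
    Simpson's rule; T must be large enough that the integrand has decayed."""
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * math.exp(-s * t) * f(t)
    return total * h / 3.0

# Test function: f(t) = cosh(t) - 1 has the known transform
# F(s) = 1/(s*(s^2 - 1)) for s > 1
f = lambda t: math.cosh(t) - 1.0
for s in (1.5, 2.0, 3.0):
    exact = 1.0 / (s * (s**2 - 1.0))
    assert abs(laplace_numeric(f, s) - exact) < 1e-6
print("numerical transform matches the closed form")
```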

If Mathematica is doing the work, it’s just a command we issue. Note that I will use x instead of t in what follows. The first command gets the transform of the entire differential equation. The second command converts all occurrences of LaplaceTransform[y[x], x, s] to Y[s], i.e. to my notational preference. It also makes it a little easier to read.

We see that the transformed equation includes various powers of s, and both of the initial conditions. Although we set all initial conditions to zero when working with transfer functions, for the solution of differential equations it is incredibly convenient that they are incorporated into our transformed equation.

The next two commands first convert my list of initial conditions to a list of rules, and then apply those rules to the transformed equation.

What do we have?

An algebraic equation in the function Y[s] – which is the Laplace transform of our unknown function y[x]. We have transformed a differential equation into an algebraic equation.

That’s a big deal.

Solve for Y[s]… and then, once we have it, ask for its inverse Laplace transform! The first line gets the solution as a rule… the second line gets Y[s] as an expression… and the third line gets the inverse transform:
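Under the hypothetical zero initial conditions used in my sketches (the post’s actual values aren’t reproduced), the algebra would be: s²Y - Y = 1/s, so Y(s) = 1/(s(s² - 1)). Here is a Python sketch of the partial-fraction decomposition and its term-by-term inverse transform:

```python
import math

# Hypothetical zero initial conditions: transforming y'' - y = 1 gives
# s^2 Y(s) - Y(s) = 1/s, so Y(s) = 1/(s*(s^2 - 1)).
def Y(s):
    return 1.0 / (s * (s**2 - 1.0))

# Partial fractions: 1/(s(s-1)(s+1)) = -1/s + (1/2)/(s-1) + (1/2)/(s+1)
def Y_partial_fractions(s):
    return -1.0/s + 0.5/(s - 1.0) + 0.5/(s + 1.0)

# Inverting term by term (1/s -> 1, 1/(s-a) -> e^(a x)) gives
# y(x) = -1 + e^x/2 + e^(-x)/2 = cosh(x) - 1
def y(x):
    return -1.0 + 0.5 * math.exp(x) + 0.5 * math.exp(-x)

for s in (1.5, 2.0, 5.0):
    assert abs(Y(s) - Y_partial_fractions(s)) < 1e-12
for x in (0.0, 1.0, 2.0):
    assert abs(y(x) - (math.cosh(x) - 1.0)) < 1e-12
print("inverse transform matches the earlier symbolic answer")
```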

What was the previous answer? And this one, again? And are they the same?

Yes, they’re the same.

Compared to just using DSolve, the Laplace transform takes more work… but it comes into its own for transfer functions, in control theory.

## Manually – sort of

What did our textbooks do to solve that equation? First they solved the homogeneous equation – that is, with the right hand side set to zero. Instead of the original equation… we work with…

Now, I’m going to go ahead and let Mathematica do a DSolve. The textbooks would have written the characteristic equation p² - 1 = 0, and its solutions would have been p = ±1, leading to e^x and e^(-x) as solutions. And that’s what DSolve gets.

Because the differential equation is linear and homogeneous, the sum of two solutions is a solution, and any multiple of a solution is a solution. So we’ve gotten a suitably general – two initial conditions, two arbitrary constants – solution to our homogeneous equation.
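Assuming, as in my earlier sketches, that the homogeneous equation is y'' - y = 0 (characteristic equation p² - 1 = 0), here is a quick Python check that e^x, e^(-x), and any linear combination of them satisfy it:

```python
import math

def residual(f, fpp, x):
    """Left hand side y'' - y for a candidate solution f with second derivative fpp."""
    return fpp(x) - f(x)

# The characteristic equation p^2 - 1 = 0 has roots p = +1 and p = -1,
# so e^x and e^(-x) each solve y'' - y = 0 (each is its own second derivative):
for x in (0.0, 1.0, 2.5):
    assert residual(math.exp, math.exp, x) == 0.0
    assert residual(lambda t: math.exp(-t), lambda t: math.exp(-t), x) == 0.0

# By linearity, so does any combination C1*e^x + C2*e^(-x), which is also
# its own second derivative:
C1, C2 = 3.0, -7.0
f = lambda t: C1 * math.exp(t) + C2 * math.exp(-t)
print(residual(f, f, 1.0))  # 0.0
```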

But we started with an inhomogeneous equation, so now we look for any solution at all of the original equation:

The RHS is constant, so we guess y = constant might be a solution. This is called the method of undetermined coefficients, and it can be used whenever the right hand side is a power of x (or t, or whatever the independent variable is), or a sine or cosine, or an exponential – or a sum of such terms. We guess that the solution is a multiple of the right hand side.

A few remarks, which you would find in your text. 1, of course, is a power of x, namely x^0, so by guessing a constant we are guessing a multiple of 1. If we had a sine or cosine on the RHS, we would guess a combination of sine and cosine… what happens is that the output can be a phase-shifted version of the input, and by including both sine and cosine in our guess, we allow for a phase shift.

So I want to assume that y = c, and plug into the differential equation. The first line is the equation, and the second line shows exactly how Mathematica stored it. Since Mathematica has no idea that c is a constant, I needed to tell it that the second derivative of c was zero – and to do that, I had to see exactly how Mathematica had written the second derivative.

The third line set c to 1, and the second derivative of c to zero… the fourth line multiplied the third line by -1 (!), and the final line gave me the result as a rule instead of an equation.

The upshot of those calculations is that there is a constant solution of the inhomogeneous equation, specifically y = -1.
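That substitution is easy to reproduce. A minimal sketch, assuming the equation is y'' - y = 1: a constant guess y = c has y'' = 0, so the equation collapses to -c = 1, i.e. c = -1:

```python
from fractions import Fraction

def constant_particular(b, rhs):
    """Particular solution y = c of y'' + b*y = rhs with constant rhs.
    A constant guess has y'' = 0, so b*c = rhs and c = rhs/b (b != 0)."""
    return Fraction(rhs, b)

# Assumed equation y'' - y = 1: here b = -1 and rhs = 1, so c = -1
c = constant_particular(b=-1, rhs=1)
print(c)  # -1
```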

Now I write the complete solution (lines 1 and 2)… and then I set x = 0 to get an equation involving the unknown constants C[1] and C[2], and y[0]. Since y[0] is one of my known initial conditions, I substitute for it and get an equation involving only the two unknown constants:

Next I differentiate the general solution… set x = 0… and get an equation involving y'[0], which I know from the initial conditions. After I plug in that value, I get a second equation involving the two unknown constants:

I now have two equations in two unknowns, and I solve:

Finally I use the values of C[1] and C[2] to write the answer:

Let’s be clear: is that the same as the second answer?

Yes.
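With the hypothetical initial conditions y(0) = 0 and y'(0) = 0 that I have been assuming in these sketches (the post’s actual values aren’t reproduced), the two equations are C1 + C2 = 1 and C1 - C2 = 0, and the last few steps look like:

```python
import math

def solve2x2(a11, a12, b1, a21, a22, b2):
    """Cramer's rule for the 2x2 system a11*u + a12*v = b1, a21*u + a22*v = b2."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# General solution of the assumed problem: y = C1*e^x + C2*e^(-x) - 1.
# y(0)  = C1 + C2 - 1 = 0  ->  C1 + C2 = 1
# y'(0) = C1 - C2     = 0  ->  C1 - C2 = 0
C1, C2 = solve2x2(1.0, 1.0, 1.0, 1.0, -1.0, 0.0)
print(C1, C2)  # 0.5 0.5

# The complete solution, which matches cosh(x) - 1:
y = lambda x: C1 * math.exp(x) + C2 * math.exp(-x) - 1.0
assert abs(y(1.0) - (math.cosh(1.0) - 1.0)) < 1e-12
```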

So if you ever need something more than the output of a DSolve command, you should try Laplace transforms in preference to solving the homogeneous equation and then finding a particular solution (for linear constant coefficient ordinary differential equations).

Soon, I’ll try to explain Heaviside’s operational calculus, and use it to solve this same problem.
