Ordinary Differential Equations and the Laplace Transform

Introduction

Down the road, I expect to be using Laplace transforms to set up and solve electric circuits, and for transfer functions in control theory. An obvious starting point is to remind you just what a Laplace transform is.

So I should show you at least one example of solving a differential equation using Laplace transforms.

But if I do that, I really should remind you of the alternative solution, the one you almost certainly learned first.

On top of that, I really should show you what Mathematica® can do.

As if all that weren’t enough – though it really won’t take very long – I have seen a nice approach to Heaviside’s operational calculus, and I want to show that to you, too. But by the time I explain it, it will justify a post of its own.

So, I propose to take a typical equation for these methods – linear, with constant coefficients – and I am going to

  1. let Mathematica solve it symbolically
  2. check the symbolic answer
  3. let Mathematica solve it numerically
  4. solve it using Laplace transforms
  5. solve the homogeneous equation and then find a particular solution to the inhomogeneous equation

and in a subsequent post I will

  • solve it using Heaviside’s operational calculus

If this is all new to you, or very old material you haven’t touched in a long time, you will want to find a sophomore engineering mathematics text. It should show you methods 4 and 5, and it may give you some idea of how to solve the equation numerically yourself. That’s one potential benefit of this post: to motivate you to study differential equations and Laplace transforms.

I now have 3 references for Heaviside’s operator methods… the most recent, and the clearest, is Murray Spiegel’s “Applied Differential Equations”, 3rd edition, 1981, Prentice-Hall.

The most rigorous, alas, is an ancient faded set of typed lecture notes of K. O. Friedrichs of New York University in the spring of 1944. (Not only do friends give me their old textbooks, one friend gave me materials from his father and grandfather!)

The 3rd reference no longer seems very useful, but I won’t know for sure until I’ve spent more time on Heaviside. It is the Barnes and Noble College Outline Series volume “Differential Equations” by Kaj Nielsen. I do not know if this is my original copy from my freshman year in college – if not, it’s a used copy I bought to replace the original. This is where I first saw Heaviside’s operator approach.

The real reason for including Heaviside here is that I am just beginning to play with his methods… but, as I said, I really need to talk about them properly, so they will wait for a later post.

With that, let’s get rolling….

DSolve: symbolic solution

Here we have an inhomogeneous (1 on the RHS) second order (y'') linear differential equation with constant coefficients… and two initial conditions, one on y and one on the derivative y'. It is only personal preference that I define the differential equation and the initial conditions separately.
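
For definiteness, here is that setup in Mathematica. The equation is y'' - y = 1 (its characteristic equation, p^2 - 1 = 0, shows up below); the initial-condition values here are just a representative pair to carry through the sketches:

    DE = y''[x] - y[x] == 1;
    IC = {y[0] == 0, y'[0] == 1};  (* representative values: one condition on y, one on y' *)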

Now I ask Mathematica to solve it, using DSolve. In the first command, the {DE, IC} // Flatten merely creates one list out of the two, and the // Flatten at the end removes an extra level of braces from the solution.
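
Something like this:

    s0 = DSolve[{DE, IC} // Flatten, y[x], x] // Flatten
    ans = y[x] /. s0

With the representative initial conditions above, ans comes out as -1 + E^x.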

The second command gave me an expression rather than a rule.

Now draw a picture. (As usual, I use Dave Park’s graphics software.)
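
The built-in Plot is a perfectly good stand-in here:

    Plot[ans, {x, 0, 4}]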

That was pretty easy… and outright magic if you don’t understand the underlying mathematics. I hope you do, or set out to learn it.

Check that

I want to emphasize that you need not take Mathematica’s answer as gospel. Is the expression it returned really a solution of the equation? Well, compute the left hand side… and we do indeed get the RHS, 1.
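
Sketched with the setup above:

    D[ans, {x, 2}] - ans // Simplify
    (* 1 *)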

You might note the format for getting a second derivative wrt x.

NDSolve: numerical solution

Not every differential equation can be solved symbolically. What then? Well, there’s no harm in getting a numerical solution in addition to our symbolic one, although now a graph is essential rather than merely auxiliary information. This time I call NDSolve, and I use y instead of y[x], and {x, 0, 4} in place of x.
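
That is:

    s1 = NDSolve[{DE, IC} // Flatten, y, {x, 0, 4}]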

The answer is not only a rule, but also something called an InterpolatingFunction.

Plot it. Note that I specify y[x] and use the solution s1:
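
Like so:

    Plot[y[x] /. s1, {x, 0, 4}]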

Now let me overlay the two pictures of symbolic and numerical solutions:
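
One way is to plot both at once, the symbolic ans and the numerical solution:

    Plot[Evaluate[{ans, y[x] /. First[s1]}], {x, 0, 4}]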

Looks good to me… though that line seems a little thicker. Well, let’s look at the differences:
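
That is, plot the symbolic solution minus the numerical one:

    Plot[ans - (y[x] /. First[s1]), {x, 0, 4}, PlotRange -> All]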

Now that looks good.

Laplace Transforms

As a reminder, the Laplace transform F(s) of the function f(t) is defined as

F(s) = \int_0^\infty f(t) e^{-st} dt

(There is nothing significant about t and s – the arguments of f(t) and F(s) – so long as they are the two arguments in e^{-st}. Similarly, the use of f and F for the function and its transform is a personal – and common – convention.)

If Mathematica is doing the work, it’s just a command we issue. Note that I will use x instead of t in what follows. The first command gets the transform of the entire differential equation. The second command converts all occurrences of LaplaceTransform[y[x], x, s] to Y[s], i.e. to my notational preference. It also makes the result a little easier to read.
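
Sketched:

    LT = LaplaceTransform[DE, x, s]
    LT2 = LT /. LaplaceTransform[y[x], x, s] -> Y[s]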

We see that the transformed equation includes various powers of s, and both of the initial conditions. Although we set all initial conditions to zero when working with transfer functions, for solving differential equations it is incredibly convenient that they are incorporated into the transformed equation.

The next two commands first convert my list of initial conditions to a list of rules, and then apply those rules to the transformed equation.
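
For instance:

    rules = IC /. Equal -> Rule   (* {y[0] -> 0, y'[0] -> 1} with the representative values *)
    LT3 = LT2 /. rules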

What do we have?

An algebraic equation in the function Y[s] – which is the Laplace transform of our unknown function y[x]. We have transformed a differential equation into an algebraic equation.

That’s a big deal.

Solve for Y[s]… and then, once we have it, ask for its inverse Laplace transform! The first line gets the solution as a rule… the second line gets Y[s] as an expression… and the third line gets the inverse transform:
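
In outline:

    solY = Solve[LT3, Y[s]] // Flatten
    Ys = Y[s] /. solY
    y2 = InverseLaplaceTransform[Ys, s, x]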

What was the previous answer? And this one, again? And are they the same?
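
Side by side, with the representative setup:

    ans
    y2
    ans - y2 // Simplify
    (* -1 + E^x, -1 + E^x, and 0 *)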

Yes, they’re the same.

Compared to just using DSolve, the Laplace transform takes more work… but it comes into its own for transfer functions, in control theory.

Manually – sort of

What did our textbooks do to solve that equation? First they solved the homogeneous equation – that is, with the right hand side set to zero. Instead of the original equation y'' - y = 1… we work with y'' - y = 0.

Now, I’m going to go ahead and let Mathematica do a DSolve. The textbooks would have written “the characteristic equation” p^2 - 1 = 0, and its solutions would have been p = ±1, leading to e^x and e^{-x} as solutions. And that’s what DSolve gets.
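
Namely:

    DSolve[y''[x] - y[x] == 0, y[x], x]
    (* {{y[x] -> E^x C[1] + E^-x C[2]}} *)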

Because the differential equation is linear and homogeneous, the sum of two solutions is a solution, and any multiple of a solution is a solution. So we’ve gotten a suitably general – two initial conditions, two arbitrary constants – solution to our homogeneous equation.

But we started with an inhomogeneous equation, so now we look for any solution at all of the original equation:

The RHS is constant, so we guess y = constant might be a solution. This is called the method of undetermined coefficients, and it can be used whenever the right hand side is a power of x (or t, whatever the independent variable is), or a sine or cosine, or an exponential – or a sum of such terms. We guess that the solution is a multiple of the right hand side.

A few remarks, which you would find in your text. The RHS 1, of course, is a power of x, namely x^0, so by guessing a constant we are guessing a multiple of 1. If we had a sine or cosine on the RHS, we would guess a combination of sine and cosine… the output can be a phase-shifted version of the input, and by including both sine and cosine in our guess, we allow for that phase shift.

So I want to assume that y = c, and plug into the differential equation. The first line is the equation, and the second line shows exactly how Mathematica stored it. Since Mathematica has no idea that c is a constant, I needed to tell it that the second derivative of c was zero – and to do that, I had to see exactly how Mathematica had written the second derivative.

The third line set y to c, and the second derivative of c to zero… the fourth line multiplied the third line by -1 (!)… and the final line gave me the result, c → -1, as a rule instead of an equation.
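
The same computation, compressed into a two-line sketch:

    eqc = DE /. {y[x] -> c, Derivative[2][y][x] -> 0}   (* -c == 1 *)
    Solve[eqc, c]                                       (* {{c -> -1}} *)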

The upshot of those calculations is that there is a constant solution of the inhomogeneous equation, specifically y = -1.

Now I write the complete solution (lines 1 and 2)… and then I set x = 0 to get an equation involving the unknown constants C[1] and C[2] – and y[0], which is one of my known initial conditions, so I substitute for that, and get an equation involving the two unknown constants:
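
Continuing the sketch, with the representative value y[0] == 0:

    yh = C[1] Exp[x] + C[2] Exp[-x];
    gen = yh - 1;
    eq1 = (gen /. x -> 0) == 0   (* -1 + C[1] + C[2] == 0 *)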

Next I differentiate the general solution… set x = 0… and get an equation involving y'[0], which I know from the initial conditions. After I plug in that value, I get a second equation involving the two unknown constants:
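
With the representative value y'[0] == 1:

    eq2 = (D[gen, x] /. x -> 0) == 1   (* C[1] - C[2] == 1 *)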

I now have two equations in two unknowns, and I solve:
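
That is:

    cs = Solve[{eq1, eq2}, {C[1], C[2]}] // Flatten
    (* {C[1] -> 1, C[2] -> 0} *)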

Finally I use the values of C[1] and C[2] to write the answer:
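
Putting the pieces together:

    y3 = gen /. cs
    (* -1 + E^x *)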

Let’s be clear: is that the same as the second answer?
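
Check:

    y3 - y2 // Simplify
    (* 0 *)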

Yes.

So if you ever need to show more than the bare output of a DSolve command, try Laplace transforms in preference to solving the homogeneous equation and then finding a particular solution (for linear constant-coefficient ordinary differential equations).

Soon, I’ll try to explain Heaviside’s operational calculus, and use it to solve this same problem.
