LAGRANGIAN AND NONHOLONOMIC CONSTRAINTS

Author Horia Orasanu

ABSTRACT

Our main motivation for using this approach is that, while performing the lateral undulation gait, which consists of fixed periodic body motions, all solutions of the snake robot dynamics exhibit inherent oscillatory behaviour. Moreover, we show how this behaviour can be analytically and constructively controlled based on virtual holonomic constraints.

1 INTRODUCTION

The idea of virtual holonomic constraints is a particularly useful concept for the control of oscillations (see, e.g., [20-24]). In this section we show how this approach can be used to solve the path following control problem for snake robots. In particular, we show how, by designing the joint reference trajectories in (35) using virtual holonomic constraints, and by combining these with virtual holonomic constraints motivated by line-of-sight (LOS) guidance for the head angle in (36), we are able to solve the path following control problem, i.e. to achieve (40). We use the word 'constructive' in the sense that, through the feedback action, we shape the dynamics of the system so that it possesses the desired structural properties, i.e. positive invariance and exponential stability of an appropriately defined constraint manifold. To this end, we define a constraint manifold for the system and design the control input of (29) to exponentially stabilize it. The geometry of this manifold is defined by specified geometric relations among the generalized coordinates of the system, called virtual holonomic constraints. In particular, we call them virtual constraints because they do not arise from a physical connection between two variables but rather from the actions of a feedback controller [20].

2 FUNDAMENTAL METHOD

This section presents the most important concepts used in this work as well as the basis for the presented path planning approaches.

2.1 Fast Marching and Path Planning

The principle behind the fast marching method (FMM) is the expansion of a wave: in two dimensions, intuitively, the method simulates the spreading of a thick liquid poured onto a board, obtaining the time at which the front reaches every point of the grid. Similar formulations have been used in other fields such as fluid mechanics, molecular dynamics in relation to electrostatics, and thermal analysis. Nevertheless, it is crucial to highlight the most distinctive feature of the method: the way the arrival time of the expanding wave is calculated for every cell of the grid. As a consequence of its particular mathematical formulation, the resulting potential map has a single global minimum and no local minima whatsoever. There are preceding graph search algorithms based on similar ideas, such as Dijkstra and A*, see [20] and [21] respectively; these search methods have been widely used and shown to be efficient. However, they have been proven to be inconsistent in the continuous space [22].

In a homogeneous environment, the FMM generates, at equal levels of the wave, front interface points in circular form centred at the source location. In such a case, all points on the interface are reached at a given time simultaneously, and the minimal paths between two points in the space are always composed of straight lines.
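As a rough sketch of the idea, the arrival-time field can be approximated by a Dijkstra-style expansion over the grid. This is only an assumption-laden stand-in for the real method: true fast marching solves the eikonal equation with an upwind finite-difference update, while the sketch below simply charges 1/speed per grid edge. It still illustrates the key property discussed above, namely that the field grows monotonically away from the source, so the source is its only minimum.

```python
import heapq

def arrival_times(n, source, speed=1.0):
    """Dijkstra approximation of an FMM arrival-time field on an n x n grid.

    Each 4-neighbour move costs 1/speed. Real FMM instead solves
    |grad T| = 1/speed with an upwind finite-difference scheme, but the
    monotone, single-minimum structure of T is the same."""
    INF = float("inf")
    T = [[INF] * n for _ in range(n)]
    T[source[0]][source[1]] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > T[i][j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and t + 1.0 / speed < T[ni][nj]:
                T[ni][nj] = t + 1.0 / speed
                heapq.heappush(heap, (T[ni][nj], (ni, nj)))
    return T

# Wave released at the centre of a 5 x 5 grid; with unit edge costs the
# arrival time equals the Manhattan distance to the source.
field = arrival_times(5, (2, 2))
```

Following the field downhill from any cell leads back to the source, which is how a path is extracted from the potential map.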

The method's foundation is the same as that of Fermat's principle in optics, which states that a ray of light passing through a prismatic glass always takes the fastest path between any two points; in other words, it follows the path that is optimal in time.


The solution of partial differential equations describing a wide range of physical problems in a domain can be reduced to the solution of corresponding boundary integral equations (BIE) on its boundary. Solving the boundary integral equations yields the unknown boundary values of the functions and/or of the derivatives of these functions that occur in the original differential equation. There exists a class of problems where finding the boundary values of the unknown functions is, from the application point of view, quite sufficient. As an example, we can mention the calculation of the stress intensity factor for a cracked body, which can be determined in terms of the computed displacements of the crack faces. In this case, the calculation of the stress-strain field at internal points of the domain is unnecessary. However, if we are also interested in values of the functions inside the domain, we can calculate them from the known boundary data using appropriate integral relations. These integral relations are, e.g., Green's formulas in the case of Laplace's or Poisson's equation. We will see that, in the case of elasticity problems, the appropriate integral relations are the so-called Somigliana formulas, which are equivalent to Green's formulas.

The BIE can rarely be solved analytically. One of the most widely used techniques for their solution is the boundary element method (BEM). In order to solve for the unknown surface data, the surface must be subdivided into segments (i.e. elements, similarly to the standard FEM) and, as a result, the boundary integral equations are approximated by a system of algebraic equations. The boundary elements have one dimension less than the body being analyzed: the boundary of a two-dimensional problem is covered by one-dimensional elements, while the surface of a three-dimensional solid is paved with two-dimensional elements. Consequently, boundary element analysis can be very efficient, particularly when the boundary quantities are of primary interest. BEM solutions have been found to be quite accurate, especially when the domain is infinite or semi-infinite, as often occurs in stress concentration or crack problems.

The method is particularly appropriate for linear problems. Extensions into the nonlinear range are possible, but at the expense of some of the special advantages of the method. The usual advantages of BEM over FEM are a lower number of unknowns, a higher accuracy in approximating the derivatives of the unknown functions, and an easy analysis of infinite domains.

There are two basic approaches to the formulation of BIE. The first, the so-called direct formulation, leads to integral equations whose unknown functions are the same functions that appear in the original differential equations. The second, the so-called indirect formulation, leads to integral equations whose unknowns are so-called single layer and double layer potential densities, from which the sought functions appearing in the differential equations must then be computed. While the theoretical basis of the direct formulation is the so-called fundamental solution of a differential equation (or Green's function, respectively), used together with Green's formulas (or Somigliana formulas, respectively), the basis of the indirect formulation is the so-called theory of potential. Some basic principles of potential theory, necessary for the indirect formulation of BIE corresponding to Laplace's and/or Poisson's equation, are given in Appendix 2.

The direct formulation will first be demonstrated on a one-dimensional problem, where the concept of Green's function is easier to elucidate. It will also become clear how a sought function is expressed in terms of its boundary values and the boundary values of its derivatives. Green's function G(x,ξ) of the operator A is defined as the solution of Eq. (1) in which the function f is the Dirac delta function, f(x) = δ(x−ξ), satisfying the given boundary conditions. It means that G(x,ξ) fulfils the equation

A G(x,ξ) = δ(x−ξ) (3)

for the same boundary conditions that the function u(x) is required to satisfy. Therefrom it follows the relation

G(x,ξ) = A⁻¹ δ(x−ξ). (4)

The concept of Green's function has a substantial theoretical significance, since it makes it possible to solve a differential equation with suitable boundary conditions by means of quadrature. This can be seen as follows: by using the Dirac delta function, equation (2) can be written as

u(x) = A⁻¹ f(x) = ∫ A⁻¹ δ(x−ξ) f(ξ) dξ, (5)

wherefrom, using the definition of Green’s function (4), we get

u(x) = ∫ G(x,ξ) f(ξ) dξ. (6)

Green's function is often called the influence function, a name motivated by its physical meaning. Consider, e.g., a beam supported at its end points a, b and subjected to a unit concentrated load at the point x = ξ. Then Green's function G(x,ξ) of this problem describes the beam deflection w(x) caused by the unit concentrated load. If, instead of the unit load, the beam is subjected to a load f(ξ) at the point x = ξ, the deflection will be given by G(x,ξ)f(ξ). If the beam is subjected to a distributed load f(x), then its deflection is

w(x) = ∫ₐᵇ G(x,ξ) f(ξ) dξ. (7)

This is the physical meaning of Eq. (6) (in the general case, the meaning is similar).
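The quadrature in Eq. (6) can be checked numerically on a simple model problem. The operator below is an assumption for illustration only, not taken from the text: A u = −u'' on (0,1) with u(0) = u(1) = 0, for which Green's function is known in closed form as G(x,ξ) = x(1−ξ) for x ≤ ξ and ξ(1−x) for x ≥ ξ.

```python
import numpy as np

def G(x, xi):
    """Green's function of -u'' on (0,1) with homogeneous Dirichlet conditions."""
    return np.where(x <= xi, x * (1.0 - xi), xi * (1.0 - x))

def u_by_quadrature(f, x, m=2001):
    """Evaluate u(x) = integral of G(x,xi) f(xi) over (0,1), trapezoidal rule."""
    xi = np.linspace(0.0, 1.0, m)
    w = np.full(m, 1.0 / (m - 1))  # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    return float(np.sum(w * G(x, xi) * f(xi)))

# For f = 1 the exact solution of -u'' = 1 is u(x) = x(1-x)/2, so u(1/2) = 1/8.
u_mid = u_by_quadrature(lambda xi: np.ones_like(xi), 0.5)
```

Since G(1/2, ξ) is piecewise linear with its kink on a grid point, the trapezoidal rule reproduces u(1/2) = 0.125 essentially exactly, confirming that a differential equation is solved here purely by quadrature.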

However, in the boundary integral equation method, we are not concerned with setting up Green's function for a particular boundary value problem, which is frequently an unsolvable task. We make do with the so-called fundamental solution of the differential equation instead. The basic features of the boundary element method will now be explained with a beam deflection example. We begin with the integral (global) form of the equation of equilibrium and the boundary conditions. Mathematically, we require the equation of equilibrium and the boundary conditions to be fulfilled in the weak sense, see Lecture 4. In the case of a 3D problem, the integral form of the equations of equilibrium and the boundary conditions can be written in the form

. (8)

Eq. (8) is sometimes referred to as the extended Galerkin method, in which the basis functions need not satisfy the boundary conditions. In other words, we require the differential equations of equilibrium and the boundary conditions to be fulfilled in the mean with a certain weight. As the weight functions, we have selected the variations of the sought functions. This procedure can be considered a special case of the weighted residual method described in Lectures 5-6, special in the sense that the weight functions may be chosen quite arbitrarily; specific selections then lead to the FEM, the finite difference method, the collocation method, and the BEM.
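The weighted-residual idea can be made concrete with a deliberately tiny example. The model problem below is an assumption introduced for illustration, not the beam problem of the text: −u'' = 1 on (0,1) with u(0) = u(1) = 0, approximated by a single Galerkin trial function φ(x) = sin(πx), which already satisfies the boundary conditions.

```python
import math

# One-term Galerkin sketch for -u'' = 1, u(0) = u(1) = 0 (assumed model problem).
# With u ~ c sin(pi x), requiring the residual to be orthogonal to sin(pi x):
#   c * integral(pi^2 sin^2(pi x)) = integral(sin(pi x))
#   c * pi^2 / 2 = 2 / pi   =>   c = 4 / pi^3
c = (2.0 / math.pi) / (math.pi ** 2 / 2.0)

u_galerkin = c * math.sin(math.pi * 0.5)  # approximation at x = 1/2
u_exact = 0.5 * 0.5 * (1.0 - 0.5)         # exact solution x(1-x)/2 at x = 1/2
```

Even one basis function gives u(1/2) ≈ 0.129 against the exact 0.125; choosing Dirac deltas instead of sin(πx) as weight functions would turn the same recipe into the collocation method.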

For a beam, the equation (8) takes the form:

By virtue of integration by parts, the fundamental solution enables us to express the solution at an interior point of a domain in terms of its boundary values; if we let the interior point move to the boundary, we obtain the integral equation from which the unknown boundary values can be computed. Obviously, in the case of a one-dimensional domain, the boundary degenerates into two points, and instead of boundary integral equations we have simple algebraic conditions expressing the equality of function values.

, (9)

where T and M stand for the shear force and the bending moment respectively, w and its derivative denote the deflection and the slope respectively, and barred quantities denote the values prescribed at the ends of the beam.

Application of integration by parts to the first term of Eq. (9) gives

. (10)

Continuation of this process applied to the integral on the right-hand side of (10) leads to

and eventually to

. (11)


The least common multiple of two integers is the least positive integer that is divisible by both. It is connected by a simple formula with the greatest common divisor of the two integers, a familiar topic from modern algebra and number theory. The purpose of this paper is to present a proof of the connection between the least common multiple and the greatest common divisor. Along the way, we will see several other properties of the least common multiple, as well as a number of examples.

Throughout the discussion we consider only positive integers, the set of which is denoted N. We also assume the notation and properties of the greatest common divisor presented in Hungerford [1]. In particular, if a and b are positive integers, we denote their greatest common divisor by (a,b). To begin the discussion of the least common multiple, we present the following definition.

Definition 1. If a and b are positive integers, the least common multiple of a and b, denoted [a,b], is the least positive element of the set {z ∈ N : a | z and b | z}.
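Definition 1 translates directly into a brute-force search, which can be checked against the closed formula via the greatest common divisor that this paper works toward. The sketch below uses that formula only as a preview of the result; the function names are ours, not from the text.

```python
from math import gcd

def lcm_from_definition(a, b):
    """Least positive element of {z in N : a | z and b | z}, by direct search."""
    z = max(a, b)
    while z % a != 0 or z % b != 0:
        z += 1
    return z

def lcm(a, b):
    """The closed formula the paper builds toward: [a,b] = ab / (a,b)."""
    return a * b // gcd(a, b)
```

Both definitions agree on every pair of small positive integers, e.g. [4,6] = 12.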

It should be remarked that for any positive integers a and b, their product ab is divisible by both a and b, so the set in Definition 1 is nonempty and the least common multiple always exists. For example, let a = 4 and b = 6. The positive multiples of 4 are 4, 8, 12, 16, 20, 24, ..., while those of 6 are 6, 12, 18, 24, ...

We see that 12 is the least entry that is common to both sets. Indeed, 4 | 12 and 6 | 12, so 12 is certainly a common multiple. On the other hand, since the only positive multiple of 6 less than 12 is 6 itself, and since 4 is not a divisor of 6, no other common multiple can be less than 12. This shows that 12 is the least common multiple.

In this example, we notice that 12 is not only less than or equal to every other common multiple of 4 and 6; it is also a divisor of all those multiples. We state this as a theorem.
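Before proving it, the claim can be sanity-checked numerically for the example above: every common multiple of 4 and 6 in a range should be divisible by [4,6] = 12. This is a spot check of the theorem's statement, not a proof.

```python
from math import gcd

a, b = 4, 6
l = a * b // gcd(a, b)  # [4,6] = 12

# All common multiples of a and b up to a bound.
common = [z for z in range(1, 10 * l + 1) if z % a == 0 and z % b == 0]

# The theorem asserts that each of them is a multiple of [a,b].
divisible = all(z % l == 0 for z in common)
```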


The Laplace operator is a scalar operator defined as the dot product (inner product) of two gradient vector operators:

∇² = ∇ · ∇

In n-dimensional space, we have:

∇² = ∂²/∂x₁² + ∂²/∂x₂² + ... + ∂²/∂xₙ²

When applied to a 2-D function f(x,y), this operator produces a scalar function:

∇²f = ∂²f/∂x² + ∂²f/∂y²

In the discrete case, the second order differentiation becomes a second order difference. In the 1-D case, if the first order difference is defined as

Δf[n] = f[n+1] − f[n]

then the second order difference is

Δ²f[n] = f[n+1] − 2 f[n] + f[n−1]

Note that Δ²f[n] is so defined that it is symmetric about the center element f[n]. The Laplace operation can be carried out by 1-D convolution with the kernel [1, −2, 1].
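As a quick sanity check of the 1-D kernel, convolving the quadratic sequence f[n] = n² with [1, −2, 1] should return the constant 2 at every interior point, since (n+1)² − 2n² + (n−1)² = 2:

```python
import numpy as np

# Second order difference of f[n] = n^2 via convolution with [1, -2, 1].
f = np.arange(8, dtype=float) ** 2
kernel = np.array([1.0, -2.0, 1.0])
laplace_1d = np.convolve(f, kernel, mode="valid")  # interior points only
# Each entry equals f[n+1] - 2 f[n] + f[n-1] = 2 for a quadratic sequence.
```

The "valid" mode drops the two boundary samples where the stencil would run off the array, mirroring the fact that the second difference is undefined at the ends.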

In the 2-D case, the Laplace operator is the sum of the two second order differences in both dimensions:

∇²f[m,n] = f[m+1,n] + f[m−1,n] + f[m,n+1] + f[m,n−1] − 4 f[m,n]

This operation can be carried out by convolution with the 2-D kernel:

0  1  0
1 −4  1
0  1  0

Other Laplace kernels can be used, e.g. the 8-neighbour variant:

1  1  1
1 −8  1
1  1  1
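The 4-neighbour kernel can likewise be verified on a function with a known Laplacian. For f(x,y) = x² + y², the continuous result ∇²f = 4 is reproduced exactly at every interior grid point; the sketch below applies the stencil with explicit array shifts rather than a convolution routine.

```python
import numpy as np

# Discrete Laplacian via the kernel [[0,1,0],[1,-4,1],[0,1,0]],
# written out as neighbour shifts on the interior of the grid.
x, y = np.meshgrid(np.arange(8, dtype=float), np.arange(8, dtype=float))
f = x ** 2 + y ** 2  # analytically, the Laplacian of f is 4 everywhere

lap = (f[:-2, 1:-1] + f[2:, 1:-1]      # up + down neighbours
       + f[1:-1, :-2] + f[1:-1, 2:]    # left + right neighbours
       - 4.0 * f[1:-1, 1:-1])          # centre term
```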

We see that these Laplace kernels are actually the same as the high-pass filtering kernels discussed before.

The gradient operation is an effective detector for sharp edges, where the pixel gray levels change very rapidly over space. But when the gray levels change slowly from dark to bright (red in the figure below), the gradient operation produces a very wide edge (green in the figure). In this case it is helpful to consider the Laplace operation instead. The second order derivative of the wide edge (blue in the figure) has a zero crossing in the middle of the edge, so the location of the edge can be obtained by detecting the zero crossings of the second order derivative.
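The zero-crossing idea can be sketched in 1-D with a synthetic wide edge; the sigmoid ramp below is our own stand-in for the slowly varying dark-to-bright transition in the figure.

```python
import numpy as np

# Locate a wide, slowly varying edge via the zero crossing of the second
# order difference (the 1-D analogue of the Laplacian).
x = np.linspace(-5.0, 5.0, 101)
edge = 1.0 / (1.0 + np.exp(-x))  # gray levels ramping slowly from 0 to 1

second = np.convolve(edge, [1.0, -2.0, 1.0], mode="valid")
# The second difference is positive on the dark side of the ramp and
# negative on the bright side; its sign change marks the middle of the edge.
idx = int(np.where(np.diff(np.sign(second)) != 0)[0][0])
edge_location = x[idx + 1]  # second[k] is centred at x[k+1]
```

The detected location lands at the steepest point of the ramp, i.e. at x ≈ 0, exactly where a gradient detector would instead report a wide, smeared response.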
