
General solution of a differential equation: examples. Types of differential equations and solution methods

6.1. BASIC CONCEPTS AND DEFINITIONS

When solving various problems in mathematics and physics, biology and medicine, quite often it is not possible to immediately establish a functional relationship in the form of a formula connecting the variables that describe the process under study. Usually you have to use equations that contain, in addition to the independent variable and the unknown function, also its derivatives.

Definition. An equation connecting an independent variable, an unknown function and its derivatives of various orders is called a differential equation.

The unknown function is usually denoted y(x) or simply y, and its derivatives y′, y″, etc.

Other notations are also possible: for example, if the unknown function is x(t), then x′(t), x″(t) are its derivatives and t is the independent variable.

Definition. If the unknown function depends on one variable, the differential equation is called ordinary. The general form of an ordinary differential equation is

F(x, y, y′, …, y⁽ⁿ⁾) = 0

or

y⁽ⁿ⁾ = f(x, y, y′, …, y⁽ⁿ⁻¹⁾).

The functions F and f may not contain some of their arguments, but for the equation to be differential the presence of a derivative is essential.

Definition. The order of a differential equation is the order of the highest derivative appearing in it.

For example, x²y′ − y = 0 and y′ + sin x = 0 are first-order equations, while y″ + 2y′ + 5y = x is a second-order equation.

When solving differential equations, the integration operation is used, which is associated with the appearance of an arbitrary constant. If the integration action is applied n times, then, obviously, the solution will contain n arbitrary constants.

6.2. DIFFERENTIAL EQUATIONS OF THE FIRST ORDER

The general form of a first-order differential equation is

F(x, y, y′) = 0. (6.3)

The equation may not contain x and y explicitly, but it necessarily contains y′.

If the equation can be written as

y′ = f(x, y), (6.4)

then we obtain a first-order differential equation solved with respect to the derivative.

Definition. The general solution of the first-order differential equation (6.3) (or (6.4)) is the family of solutions y = φ(x, C), where C is an arbitrary constant.

The graph of a solution of a differential equation is called an integral curve.

Giving the arbitrary constant C different values, we obtain particular solutions. In the xOy plane the general solution corresponds to a family of integral curves, one for each particular solution.

If we fix a point A(x₀, y₀) through which the integral curve must pass, then, as a rule, from the family of functions y = φ(x, C) we can single out one of them, a particular solution.

Definition. A particular solution of a differential equation is a solution that contains no arbitrary constants.

If y = φ(x, C) is the general solution, then from the condition

y(x₀) = y₀

we can find the constant C. This condition is called the initial condition.

The problem of finding a particular solution of the differential equation (6.3) or (6.4) satisfying the initial condition y(x₀) = y₀ is called the Cauchy problem. Does this problem always have a solution? The answer is contained in the following theorem.

Cauchy's theorem (existence and uniqueness of a solution). Suppose that in the differential equation y′ = f(x, y) the function f(x, y) and its partial derivative ∂f/∂y are defined and continuous in some region D containing the point (x₀, y₀). Then in the region D there exists a unique solution of the equation satisfying the initial condition y(x₀) = y₀.

Cauchy's theorem states that under these conditions there is a unique integral curve y = φ(x) passing through the point (x₀, y₀). Points at which the conditions of the theorem fail are called singular; at such points f(x, y) or ∂f/∂y is discontinuous.

Through a singular point either several integral curves pass or none at all.

Definition. If the solution of (6.3) or (6.4) is found in the form Φ(x, y, C) = 0, not solved with respect to y, then it is called the general integral of the differential equation.

Cauchy's theorem only guarantees that a solution exists. Since there is no single method for finding solutions, we will consider only some types of first-order differential equations that are integrable in quadratures.

Definition. A differential equation is called integrable in quadratures if finding its solution reduces to integrating known functions.

6.2.1. First order differential equations with separable variables

Definition. A first-order differential equation is called an equation with separable variables if it can be represented in the form

y′ = f(x) g(y). (6.5)

The right-hand side of equation (6.5) is the product of two functions, each depending on only one variable.

For example, an equation such as y′ = x²y is an equation with separable variables, while an equation such as y′ = x + y cannot be represented in the form (6.5).

Considering that y′ = dy/dx, we rewrite (6.5) in the form

dy/dx = f(x) g(y).

From this equation we obtain a differential equation with separated variables, in which each differential is accompanied by a function depending only on the corresponding variable:

dy/g(y) = f(x) dx.

Integrating term by term, we have

∫ dy/g(y) = ∫ f(x) dx + C, (6.6)

where C = C₂ − C₁ is an arbitrary constant. Expression (6.6) is the general integral of equation (6.5).

By dividing both sides of equation (6.5) by g(y) we may lose those solutions for which g(y) = 0. Indeed, if g(y₀) = 0 for some y₀, then the constant function y = y₀ is obviously a solution of equation (6.5).

Example 1. Find the solution of the equation (x + 1) y′ = y that satisfies the condition y = 6 at x = 2, i.e. y(2) = 6.

Solution. Replace y′ by dy/dx, so that (x + 1) dy/dx = y. Multiply both sides by dx, since in the subsequent integration dx cannot be left in the denominator:

(x + 1) dy = y dx,

and then divide both sides by y(x + 1) to obtain an equation with separated variables, which can be integrated:

dy/y = dx/(x + 1).

Integrating, we get ln |y| = ln |x + 1| + ln C; potentiating, we obtain y = C(x + 1), the general solution.

Using the initial data, we determine the arbitrary constant by substituting them into the general solution: 6 = C(2 + 1), whence C = 2.

Finally we get y = 2(x + 1), the required particular solution. Let us look at a few more examples of solving equations with separable variables.
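A worked example like this one can be double-checked symbolically, for instance with Python's sympy library. A minimal sketch (the equation and the condition y(2) = 6 are the ones from Example 1):

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# The separable equation from Example 1: (x + 1) * y'(x) = y(x)
ode = sp.Eq((x + 1) * y(x).diff(x), y(x))

# General solution: y = C1*(x + 1)
print(sp.dsolve(ode))

# Particular solution fixed by the initial condition y(2) = 6: y = 2*(x + 1)
print(sp.dsolve(ode, ics={y(2): 6}))
```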

Example 2. Find the solution to the equation

Solution. Since y′ = dy/dx, we get

Integrating both sides of the equation, we have

where

Example 3. Find the solution of the equation. Solution. We divide both sides of the equation by the factors that depend on the variable other than the one under the differential sign, and then integrate. We obtain


and finally

Example 4. Find the solution to the equation

Solution. Knowing that y′ = dy/dx, we separate the variables. Then

Integrating, we get


Remark. In Examples 1 and 2 the required function y is expressed explicitly (a general solution); in Examples 3 and 4 it is expressed implicitly (a general integral). Below, the form of the answer will not be specified in advance.

Example 5. Find the solution to the equation Solution.


Example 6. Find the solution of the equation satisfying the condition y(e) = 1.

Solution. Let's write the equation in the form

Multiplying both sides of the equation by dx and by the appropriate factor, we get

Integrating both sides of the equation (the integral on the right side is taken by parts), we obtain

But according to the condition y= 1 at x= e. Then

Let us substitute the found value of C into the general solution:

The resulting expression is the particular solution of the differential equation.

6.2.2. Homogeneous differential equations of the first order

Definition. A first-order differential equation is called homogeneous if it can be represented in the form

y′ = φ(y/x). (6.7)

Let us present an algorithm for solving a homogeneous equation.

1. Instead of y we introduce the new function u = y/x. Then y = ux and therefore y′ = u′x + u.

2. In terms of the function u, equation (6.7) takes the form

u′x + u = φ(u), that is, x du/dx = φ(u) − u, (6.8)

so the substitution reduces a homogeneous equation to an equation with separable variables.

3. Solving equation (6.8), we first find u and then y = ux.
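As an illustration of this algorithm, the substitution u = y/x can be carried out in Python with sympy; dsolve also handles such equations directly. The equation y′ = 1 + y/x below is chosen here only for illustration and is not one of the worked examples that follow.

```python
import sympy as sp

x = sp.symbols("x", positive=True)
y, u = sp.Function("y"), sp.Function("u")

# A sample homogeneous equation: y' = 1 + y/x
ode = sp.Eq(y(x).diff(x), 1 + y(x) / x)

# Direct solution with dsolve: y = x*(C1 + log(x))
print(sp.dsolve(ode))

# Step 2 of the algorithm: substitute y = u*x and simplify.
substituted = ode.subs(y(x), u(x) * x).doit()
print(sp.simplify(substituted.lhs - substituted.rhs))  # x*Derivative(u(x), x) - 1, i.e. x*u' = 1
```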

Example 1. Solve the equation Solution. Let's write the equation in the form

We make the substitution:
Then

We will replace

Multiply by dx, then divide by x and by the factor depending on u. Then

Having integrated both sides of the equation over the corresponding variables, we have


or, returning to the old variables, we finally get

Example 2. Solve the equation. Solution. Let y = ux. Then


Divide both sides of the equation by x². Expanding the brackets and rearranging the terms, we get:


Moving on to the old variables, we arrive at the final result:

Example 3.Find the solution to the equation given that

Solution. Performing the standard substitution u = y/x, we get

or


or

This means that the particular solution has the form

Example 4. Find the solution of the equation

Solution.


Example 5.Find the solution to the equation Solution.

Independent work

Find solutions to differential equations with separable variables (1-9).

Find a solution to homogeneous differential equations (9-18).

6.2.3. Some applications of first order differential equations

Radioactive decay problem

The rate of decay of Ra (radium) at each moment of time is proportional to its available mass. Find the law of radioactive decay of Ra if it is known that at the initial moment there was x₀ of Ra and the half-life of Ra is 1590 years.

Solution. Let the mass of Ra at time t be x = x(t), with x(0) = x₀. Then the rate of decay of Ra equals dx/dt.

According to the conditions of the problem,

dx/dt = −kx,

where k > 0 is the proportionality coefficient.

Separating the variables in the last equation and integrating, we get

ln x = −kt + ln C,

whence x = C e^(−kt).

To determine C we use the initial condition: x = x₀ when t = 0. Then C = x₀ and, therefore, x = x₀ e^(−kt).

The proportionality factor k is determined from the additional condition: x = x₀/2 when t = 1590. We have

x₀/2 = x₀ e^(−1590k), whence k = (ln 2)/1590,

and the required formula is

x(t) = x₀ · 2^(−t/1590).
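A quick numerical check of the decay law (the 1590-year half-life is taken from the problem; the time values below are arbitrary illustration points):

```python
import math

x0 = 1.0                      # initial mass of Ra (any units)
half_life = 1590.0            # years, from the problem statement
k = math.log(2) / half_life   # decay constant

def mass(t):
    """Mass of Ra remaining after t years: x(t) = x0 * exp(-k*t) = x0 * 2**(-t/1590)."""
    return x0 * math.exp(-k * t)

for t in (0, 1590, 3180):
    print(t, round(mass(t), 4))   # prints 1.0, 0.5, 0.25, as expected
```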

Bacterial reproduction rate problem

The rate of reproduction of bacteria is proportional to their number. At the beginning there were 100 bacteria. Within 3 hours their number doubled. Find the dependence of the number of bacteria on time. How many times will the number of bacteria increase within 9 hours?

Solution. Let x = x(t) be the number of bacteria at time t. Then, according to the condition,

dx/dt = kx,

where k is the proportionality coefficient.

From here x = C e^(kt). From the condition it is known that x = 100 at t = 0, hence C = 100. From the additional condition x = 200 at t = 3 we get e^(3k) = 2, i.e. k = (ln 2)/3. The required function is

x(t) = 100 · 2^(t/3).

So, when t = 9, x = 100 · 2³ = 800, i.e. within 9 hours the number of bacteria increases 8-fold.
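The same growth law, checked numerically with the values from the problem above:

```python
# Bacteria count x(t) = 100 * 2**(t/3): the population doubles every 3 hours.
def bacteria(t_hours):
    return 100 * 2 ** (t_hours / 3)

print(bacteria(0))   # 100.0
print(bacteria(3))   # 200.0
print(bacteria(9))   # 800.0 -> an 8-fold increase in 9 hours
```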

The problem of increasing the amount of enzyme

In a brewer's yeast culture the rate of growth of the active enzyme is proportional to its present amount x. The initial amount of the enzyme, a, doubled within an hour. Find the dependence x(t).

Solution. By the condition, the differential equation of the process has the form

dx/dt = kx,

from which x = C e^(kt). But x = a at t = 0; hence C = a and x = a e^(kt).

It is also known that x = 2a at t = 1, so e^k = 2 and k = ln 2.

Hence x(t) = a · 2^t.

6.3. SECOND ORDER DIFFERENTIAL EQUATIONS

6.3.1. Basic Concepts

Definition. A second-order differential equation is a relation connecting the independent variable, the unknown function, and its first and second derivatives.

In special cases x, y, or y′ may be absent from the equation; however, a second-order equation must necessarily contain y″. In the general case a second-order differential equation is written as

F(x, y, y′, y″) = 0, (6.9)

or, when possible, in the form solved with respect to the second derivative:

y″ = f(x, y, y′). (6.10)

As in the case of a first-order equation, a second-order equation may have general and particular solutions. The general solution has the form y = φ(x, C₁, C₂).

Finding a particular solution that satisfies the initial conditions

y(x₀) = y₀, y′(x₀) = y′₀ (x₀, y₀, y′₀ are given numbers)

is called the Cauchy problem. Geometrically this means that we need to find the integral curve y = y(x) passing through the given point (x₀, y₀) and having at this point a tangent that makes the specified angle with the positive direction of the Ox axis (Fig. 6.1). The Cauchy problem has a unique solution if the right-hand side of equation (6.10) is continuous and has continuous partial derivatives with respect to y and y′ in some neighborhood of the starting point.

To find the constants C₁ and C₂ entering the particular solution, the system

y(x₀) = y₀, y′(x₀) = y′₀

must be solved.

Fig. 6.1. Integral curve
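A second-order Cauchy problem can also be checked symbolically. A minimal sympy sketch (the equation y″ + y = 0 with y(0) = 0, y′(0) = 1 is chosen here only for illustration):

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# Illustrative Cauchy problem: y'' + y = 0, y(0) = 0, y'(0) = 1
ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)

# General solution: y = C1*sin(x) + C2*cos(x)
print(sp.dsolve(ode))

# Particular solution fixed by the two initial conditions: y = sin(x)
sol = sp.dsolve(ode, ics={y(0): 0, y(x).diff(x).subs(x, 0): 1})
print(sol)  # Eq(y(x), sin(x))
```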

Solving differential equations. With our online service you can solve differential equations of any type and complexity: inhomogeneous, homogeneous, nonlinear, linear, first order, second order, with separable or non-separable variables, and so on. You receive the solution in analytical form with a detailed description. Why solve differential equations online? This type of equation is very common in mathematics and physics, where many problems cannot be solved without computing a differential equation; differential equations are also common in economics, medicine, biology, chemistry and other sciences. Solving such an equation online greatly simplifies your work, helps you understand the material better and lets you check yourself.

Advantages of solving differential equations online. A modern mathematical service website allows you to solve differential equations online of any complexity. As is well known, there are many types of differential equations, and each has its own solution methods. On our service you can find solutions of differential equations of any order and type online. To get a solution, fill in the initial data and click the "Solution" button. Errors in the operation of the service are excluded, so you can be sure you received the correct answer.

Solve differential equations with our service. By default, the function y in such an equation is a function of the variable x, but you can also specify your own variable designation: for example, if you enter y(t), the service automatically treats y as a function of the variable t. The order of the differential equation is determined by the maximum order of the derivative of the function present in the equation, and solving such an equation means finding the unknown function. Simply enter the left and right sides of your equation in the required fields and click the "Solution" button; when entering, denote the derivative of a function with an apostrophe. Within seconds you receive a ready, detailed solution of the differential equation. The service is free.

Differential equations with separable variables. If a differential equation has on its left side an expression depending on y and on its right side an expression depending on x, it is called an equation with separable variables. The left side may contain the derivative of y; in that case the solution has the form of a function of y expressed through the integral of the right side of the equation. If the left side is the differential of a function of y, then both sides of the equation are integrated. When the variables in a differential equation are not separated, they must first be separated to obtain a separated differential equation.

Linear differential equation. A differential equation in which the function and all its derivatives appear in the first degree is called linear. General form of the equation: y′ + a1(x)y = f(x), where f(x) and a1(x) are continuous functions of x. Solving differential equations of this type reduces to integrating two differential equations with separated variables.
Order of a differential equation. A differential equation can be of the first, second, or nth order; the order is determined by the order of the highest derivative the equation contains. In our service you can solve differential equations of the first, second, third and higher orders online. A solution of the equation is any function y = f(x) which, when substituted into the equation, turns it into an identity; the process of finding a solution to a differential equation is called integration.

Cauchy problem. If, in addition to the differential equation itself, an initial condition y(x0) = y0 is given, this is called a Cauchy problem. The values y0 and x0 are substituted into the general solution of the equation, the value of the arbitrary constant C is determined, and the particular solution for this value of C is the solution of the Cauchy problem. Closely related problems with boundary conditions are very common in physics and mechanics. You can also set up a Cauchy problem yourself, that is, from all possible solutions of the equation select the particular solution that meets the given initial conditions.


This article is a starting point in studying the theory of differential equations. Here are the basic definitions and concepts that will constantly appear in the text. For better assimilation and understanding, the definitions are provided with examples.

Differential equation (DE) is an equation that includes an unknown function under the derivative or differential sign.

If the unknown function is a function of one variable, then the differential equation is called ordinary (abbreviated ODE, ordinary differential equation). If the unknown function is a function of several variables, then the differential equation is called a partial differential equation.

The maximum order of the derivative of the unknown function entering a differential equation is called the order of the differential equation.


Here are examples of ODEs of the first, second and fifth orders, respectively

As examples of second order partial differential equations, we give

In what follows we consider only ordinary differential equations of the nth order, of the form F(x, y, y′, …, y⁽ⁿ⁾) = 0 or y⁽ⁿ⁾ = f(x, y, y′, …, y⁽ⁿ⁻¹⁾), where the unknown function may be specified implicitly by Φ(x, y) = 0 (when possible, we will write it in the explicit form y = f(x)).

The process of finding solutions of a differential equation is called integrating the differential equation.

A solution of a differential equation is an implicitly specified function Φ(x, y) = 0 (in some cases the function y can be expressed explicitly through the argument x) that turns the differential equation into an identity.

NOTE.

The solution to a differential equation is always sought on a predetermined interval X.

Why are we talking about this separately? Yes, because in many problems the interval X is not mentioned. That is, usually the condition of the problems is formulated as follows: “find a solution to the ordinary differential equation " In this case, it is implied that the solution should be sought for all x for which both the desired function y and the original equation make sense.

A solution of a differential equation is also often called an integral of the differential equation.

Either of the functions above can be called a solution of the differential equation.

One of the solutions to the differential equation is the function. Indeed, substituting this function into the original equation, we obtain the identity . It is easy to see that another solution to this ODE is, for example, . Thus, differential equations can have many solutions.


The general solution of a differential equation is the set of solutions containing all, without exception, solutions of this differential equation.

The general solution of a differential equation is also called the general integral of the differential equation.

Let's go back to the example. The general solution of the differential equation has the form or , where C is an arbitrary constant. Above we indicated two solutions to this ODE, which are obtained from the general integral of the differential equation by substituting C = 0 and C = 1, respectively.

If a solution of a differential equation satisfies additional conditions specified in advance, it is called a particular solution of the differential equation.

A particular solution of the differential equation satisfying the condition y(1) = 1 was given above; indeed, it turns the equation into an identity and meets the condition.

The main problems of the theory of differential equations are Cauchy problems, boundary value problems and problems of finding a general solution to a differential equation on any given interval X.

The Cauchy problem is the problem of finding a particular solution of a differential equation that satisfies given initial conditions y(x₀) = y₀, …, where x₀, y₀, … are given numbers.

A boundary value problem is the problem of finding a particular solution of a second-order differential equation that satisfies additional conditions at the boundary points x₀ and x₁:
f(x₀) = f₀, f(x₁) = f₁, where f₀ and f₁ are given numbers.

A boundary value problem is often simply called a boundary problem.

An ordinary differential equation of the nth order is called linear if it has the form y⁽ⁿ⁾ + a₁(x)y⁽ⁿ⁻¹⁾ + … + aₙ₋₁(x)y′ + aₙ(x)y = f(x), where the coefficients aᵢ(x) are continuous functions of the argument x on the integration interval.

Let us consider a second-order linear homogeneous equation, i.e. the equation

y″ + p(x)y′ + q(x)y = 0,

and establish some properties of its solutions.

Property 1
If y₁ is a solution of the linear homogeneous equation, then Cy₁, where C is an arbitrary constant, is also a solution of the same equation.
Proof.
Substituting Cy₁ into the left-hand side of the equation under consideration, we get

(Cy₁)″ + p(x)(Cy₁)′ + q(x)(Cy₁) = C(y₁″ + p(x)y₁′ + q(x)y₁) = C · 0 = 0,

because y₁ is a solution of the original equation. Hence Cy₁ is also a solution,

and the validity of this property has been proven.

Property 2
The sum of two solutions of a linear homogeneous equation is a solution of the same equation.
Proof.
Let y₁ and y₂ be solutions of the equation under consideration, so that
y₁″ + p(x)y₁′ + q(x)y₁ = 0 and y₂″ + p(x)y₂′ + q(x)y₂ = 0.
Substituting y₁ + y₂ into the equation under consideration, we have
(y₁ + y₂)″ + p(x)(y₁ + y₂)′ + q(x)(y₁ + y₂) = (y₁″ + p(x)y₁′ + q(x)y₁) + (y₂″ + p(x)y₂′ + q(x)y₂) = 0,
i.e. y₁ + y₂ is a solution of the original equation.
From the proven properties it follows that, knowing two particular solutions y₁ and y₂ of a linear homogeneous second-order equation, we can form the solution y = C₁y₁ + C₂y₂, depending on two arbitrary constants, i.e. on as many constants as the general solution of a second-order equation must contain. But will this solution be general, i.e. can arbitrarily given initial conditions be satisfied by a choice of the arbitrary constants?
When answering this question, we will use the concept of linear independence of functions, which can be defined as follows.

Two functions y₁ and y₂ are called linearly independent on a certain interval if their ratio on this interval is not constant, i.e. if

y₁/y₂ ≢ const.

Otherwise the functions are called linearly dependent.
In other words, two functions are linearly dependent on a certain interval if y₁/y₂ = const on the entire interval.

Examples

1. The functions y₁ = eˣ and y₂ = e⁻ˣ are linearly independent for all values of x, because
y₁/y₂ = e²ˣ ≠ const.
2. The functions y₁ = eˣ and y₂ = 5eˣ are linearly dependent, because
y₁/y₂ = 1/5 = const.

Theorem 1.

If the functions y₁ and y₂ are linearly dependent on a certain interval, then the determinant

W(y₁, y₂) = | y₁ y₂ ; y₁′ y₂′ | = y₁y₂′ − y₂y₁′,

called the Wronskian of these functions, is identically equal to zero on this interval.

Proof.

If y₂ = λy₁, where λ = const, then y₂′ = λy₁′ and

W(y₁, y₂) = y₁y₂′ − y₂y₁′ = y₁ · λy₁′ − λy₁ · y₁′ = 0.

The theorem has been proven.
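The Wronskian from Theorem 1 is easy to compute symbolically. A short Python (sympy) sketch for the two pairs of functions from the examples above:

```python
import sympy as sp

x = sp.symbols("x")

def wronskian(f, g):
    """2x2 Wronskian W(f, g) = f*g' - g*f'."""
    return sp.simplify(f * g.diff(x) - g * f.diff(x))

# Linearly independent pair: W is not identically zero
print(wronskian(sp.exp(x), sp.exp(-x)))     # -2

# Linearly dependent pair: W vanishes identically
print(wronskian(sp.exp(x), 5 * sp.exp(x)))  # 0
```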

Remark.
The Wronskian appearing in the theorem just considered is usually denoted by the letter W or by the symbol W(y₁, y₂).
If the functions y₁ and y₂ are solutions of a second-order linear homogeneous equation, then the following converse and, moreover, stronger theorem holds for them.

Theorem 2.

If the Wronskian composed of two solutions y₁ and y₂ of a second-order linear homogeneous equation vanishes at least at one point, then these solutions are linearly dependent.

Proof.

Let the Wronskian vanish at the point x₀, i.e. W(x₀) = 0.
Consider the linear homogeneous system

α₁y₁(x₀) + α₂y₂(x₀) = 0,
α₁y₁′(x₀) + α₂y₂′(x₀) = 0

with respect to the unknowns α₁ and α₂.
The determinant of this system coincides with the value of the Wronskian at
x = x₀, i.e. with W(x₀), and therefore equals zero. Consequently the system has a non-zero solution α₁, α₂ (α₁ and α₂ are not both equal to zero). Using these values α₁ and α₂, consider the function y = α₁y₁ + α₂y₂. This function is a solution of the same equation as the functions y₁ and y₂. In addition, this function satisfies zero initial conditions: y(x₀) = 0 and y′(x₀) = 0, by the construction of α₁ and α₂.
On the other hand, the solution of the equation satisfying zero initial conditions is obviously the function y ≡ 0.
By the uniqueness of the solution we have α₁y₁ + α₂y₂ ≡ 0. Whence it follows that
y₂ = −(α₁/α₂)y₁ (say, for α₂ ≠ 0),
i.e. the functions y₁ and y₂ are linearly dependent. The theorem has been proven.

Corollaries.

1. If the Wronskian appearing in the theorems equals zero for some value x = x₀, then it equals zero for every value of x in the interval under consideration.

2. If the solutions are linearly independent, then the Wronskian does not vanish at any point of the interval under consideration.

3. If the Wronskian is nonzero at least at one point, then the solutions are linearly independent.

Theorem 3.

If y₁ and y₂ are two linearly independent solutions of a homogeneous second-order equation, then the function y = C₁y₁ + C₂y₂, where C₁ and C₂ are arbitrary constants, is the general solution of this equation.

Proof.

As shown above, the function y = C₁y₁ + C₂y₂ is a solution of the equation under consideration for any values of C₁ and C₂. Let us now prove that, whatever the initial conditions

y(x₀) = y₀ and y′(x₀) = y₀′,

the values of the arbitrary constants C₁ and C₂ can be chosen so that the corresponding particular solution satisfies the given initial conditions.
Substituting the initial conditions into the equalities y = C₁y₁ + C₂y₂ and y′ = C₁y₁′ + C₂y₂′, we obtain the system of equations

C₁y₁(x₀) + C₂y₂(x₀) = y₀,
C₁y₁′(x₀) + C₂y₂′(x₀) = y₀′.

From this system C₁ and C₂ can be determined, since the determinant of this system

is the Wronskian at x = x₀ and is therefore non-zero (owing to the linear independence of the solutions y₁ and y₂).

The particular solution with the values of C₁ and C₂ so obtained satisfies the given initial conditions. Thus the theorem is proven.

Examples

Example 1.

The general solution of the equation y″ + y = 0 is y = C₁ cos x + C₂ sin x. Indeed,

W(cos x, sin x) = cos x · cos x − (−sin x) · sin x = cos²x + sin²x = 1 ≠ 0.

Therefore the functions sin x and cos x are linearly independent. This can also be verified by considering the ratio of these functions, tan x, which is not identically constant.

Example 2.

The solution y = C₁eˣ + C₂e⁻ˣ of the equation y″ − y = 0 is general, because W(eˣ, e⁻ˣ) = eˣ·(−e⁻ˣ) − e⁻ˣ·eˣ = −2 ≠ 0.

Example 3.

The equation whose coefficients are continuous on any interval not containing the point x = 0 admits two particular solutions (this is easy to check by substitution). Therefore its general solution has the form of a linear combination of these solutions with arbitrary constants C₁ and C₂.

Comment

We have established that the general solution of a second-order linear homogeneous equation can be obtained once any two linearly independent particular solutions of this equation are known. However, there are no general methods for finding such particular solutions in closed form for equations with variable coefficients. For equations with constant coefficients such a method exists and will be discussed later.

A differential equation is an equation that involves a function and one or more of its derivatives. In most practical problems, functions represent physical quantities, derivatives correspond to the rates of change of these quantities, and an equation determines the relationship between them.


This article discusses methods for solving certain types of ordinary differential equations whose solutions can be written in terms of elementary functions, that is, polynomial, exponential, logarithmic and trigonometric functions and their inverses. Many of these equations occur in real life, although most other differential equations cannot be solved by these methods; for them the answer is written in terms of special functions or power series, or is found by numerical methods.


To understand this article, you must be proficient in differential and integral calculus, as well as have some understanding of partial derivatives. It is also recommended to know the basics of linear algebra as applied to differential equations, especially second-order differential equations, although knowledge of differential and integral calculus is sufficient to solve them.

Preliminary information

  • Differential equations have an extensive classification. This article talks about ordinary differential equations, that is, about equations that include a function of one variable and its derivatives. Ordinary differential equations are much easier to understand and solve than partial differential equations, which include functions of several variables. This article does not discuss partial differential equations, since the methods for solving these equations are usually determined by their particular form.
    • Below are some examples of ordinary differential equations.
      • dy/dx = ky
      • d²x/dt² + kx = 0
    • Below are some examples of partial differential equations.
      • ∂²f/∂x² + ∂²f/∂y² = 0
      • ∂u/∂t − α ∂²u/∂x² = 0
  • The order of a differential equation is determined by the order of the highest derivative included in the equation. The first of the ordinary differential equations above is of first order, while the second is a second-order equation. The degree of a differential equation is the power to which the highest-order derivative in the equation is raised.
    • For example, the equation below is third order and second degree.
      • (d³y/dx³)² + dy/dx = 0
  • A differential equation is a linear differential equation if the function and all its derivatives appear in the first degree; otherwise it is a nonlinear differential equation. Linear differential equations are remarkable in that linear combinations of their solutions are again solutions of the same equation.
    • Below are some examples of linear differential equations.
    • Below are some examples of nonlinear differential equations. The first equation is nonlinear due to the sine term.
      • d²θ/dt² + (g/l) sin θ = 0
      • d²x/dt² + (dx/dt)² + tx² = 0
  • The general solution of an ordinary differential equation is not unique; it includes arbitrary constants of integration. In most cases the number of arbitrary constants equals the order of the equation. In practice the values of these constants are determined from given initial conditions, that is, from the values of the function and its derivatives at x = 0. The number of initial conditions needed to find a particular solution of a differential equation is, in most cases, also equal to the order of the equation.
    • For example, this article will look at solving the equation below. This is a second-order linear differential equation; its general solution contains two arbitrary constants. To find these constants it is necessary to know the initial conditions x(0) and x′(0). The initial conditions are usually specified at the point x = 0, although this is not required. The article will also discuss how to find particular solutions for given initial conditions.
      • d²x/dt² + k²x = 0
      • x(t) = c₁ cos kt + c₂ sin kt

Steps

Part 1

First order equations


  1. Linear equations of the first order. This section discusses methods for solving first-order linear differential equations, both in general and in the special cases when some terms are equal to zero. Assume that y = y(x), and that p(x) and q(x) are functions of x.

    dy/dx + p(x)y = q(x)

    p(x) = 0. By one of the main theorems of mathematical analysis, the integral of the derivative of a function is the function itself. Thus it is enough simply to integrate the equation to find its solution; keep in mind that computing the indefinite integral introduces an arbitrary constant.

    • y(x) = ∫ q(x) dx

    q(x) = 0. We use the method of separation of variables, which moves different variables to different sides of the equation: for example, all terms with y to one side and all terms with x to the other. The symbols dx and dy appearing in the derivative can also be moved, but remember that this is just notation that is convenient when differentiating a composite function. A discussion of these objects, called differentials, is beyond the scope of this article.

    • First, move the variables to opposite sides of the equals sign.
      • (1/y) dy = −p(x) dx
    • Integrate both sides of the equation. After integration arbitrary constants appear on both sides; they can be collected on the right-hand side of the equation.
      • ln y = ∫ −p(x) dx
      • y(x) = e^(−∫ p(x) dx)
    • Example 1.1. In the last step we used the rule e^(a+b) = e^a e^b and replaced e^C by C, since it is also an arbitrary constant of integration.
      • dy/dx − 2y sin x = 0
      • (1/(2y)) dy = sin x dx
        (1/2) ln y = −cos x + C
        ln y = −2cos x + C
        y(x) = C e^(−2cos x)

    p(x) ≠ 0, q(x) ≠ 0. To find the general solution we introduce an integrating factor, a function of x chosen so that the left-hand side becomes the derivative of a product, which allows the equation to be solved.

    • Multiply both sides by μ(x):
      • μ dy/dx + μp y = μq
    • For the left-hand side to reduce to the derivative of a product, the following must hold:
      • d(μy)/dx = (dμ/dx) y + μ dy/dx = μ dy/dx + μp y
    • The last equality means that dμ/dx = μp. This gives an integrating factor sufficient to solve any first-order linear equation, and we can write the formula for μ at once, although it is useful practice to carry out the intermediate calculations.
      • μ(x) = e^(∫ p(x) dx)
    • Example 1.2. This example shows how to find a particular solution of a differential equation with given initial conditions.
      • t dy/dt + 2y = t², y(2) = 3
      • dy/dt + (2/t)y = t
      • μ(t) = e^(∫ p(t) dt) = e^(2 ln t) = t²
      • d(t²y)/dt = t³
        t²y = (1/4)t⁴ + C
        y(t) = (1/4)t² + C/t²
      • 3 = y(2) = 1 + C/4, C = 8
      • y(t) = (1/4)t² + 8/t²
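    The same integrating-factor computation can be reproduced step by step in sympy; a sketch for Example 1.2, with dsolve used only as a cross-check:

```python
import sympy as sp

t, C = sp.symbols("t C", positive=True)
y = sp.Function("y")

# Example 1.2 in standard form: y' + (2/t) y = t
p, q = 2 / t, t

mu = sp.simplify(sp.exp(sp.integrate(p, t)))   # integrating factor: t**2
general = (sp.integrate(mu * q, t) + C) / mu   # y = (t**4/4 + C) / t**2
print(mu, sp.expand(general))                  # t**2   C/t**2 + t**2/4

# Cross-check with dsolve, including the initial condition y(2) = 3
ode = sp.Eq(t * y(t).diff(t) + 2 * y(t), t**2)
print(sp.dsolve(ode, ics={y(2): 3}))           # Eq(y(t), t**2/4 + 8/t**2)
```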


    Solving linear equations of the first order (recorded by Intuit - National Open University).
  2. Nonlinear first order equations. This section discusses methods for solving some first-order nonlinear differential equations. Although there is no general method for solving such equations, some of them can be solved using the methods below.

    dy/dx = f(x, y)
    dy/dx = h(x)g(y). If the function f(x, y) = h(x)g(y) can be factored into functions of one variable each, such an equation is called a differential equation with separable variables. In this case the method above can be applied:

    • ∫ dy/g(y) = ∫ h(x) dx
    • Example 1.3.
      • dy/dx = x³ / (y(1 + x⁴))
      • ∫ y dy = ∫ x³/(1 + x⁴) dx
        (1/2)y² = (1/4) ln(1 + x⁴) + C
        y² = (1/2) ln(1 + x⁴) + C

    dy/dx = g(x, y)/h(x, y). Suppose that g(x, y) and h(x, y) are functions of x and y. A homogeneous differential equation is then an equation in which g and h are homogeneous functions of the same degree, that is, they satisfy the condition g(αx, αy) = α^k g(x, y), where k is called the degree of homogeneity. Any homogeneous differential equation can be converted into an equation with separable variables by a suitable substitution of variables (v = y/x or v = x/y).

    • Example 1.4. The above description of homogeneity may seem unclear. Let's look at this concept with an example.
      • dy/dx = (y³ − x³)/(y²x)
      • To begin with, note that this equation is nonlinear in y. We also see that the variables cannot be separated as it stands. At the same time, the equation is homogeneous, since both the numerator and the denominator are homogeneous of degree 3. Therefore we can make the change of variables v = y/x.
      • dy/dx = y/x − x²/y² = v − 1/v²
      • y = vx, dy/dx = (dv/dx)x + v
      • (dv/dx)x = −1/v². As a result we have an equation for v with separable variables.
      • v(x) = (−3 ln x + C)^(1/3)
      • y(x) = x(−3 ln x + C)^(1/3)

    dy/dx = p(x)y + q(x)yⁿ. This is the Bernoulli differential equation, a special type of first-order nonlinear equation whose solution can be written in terms of elementary functions.

    • Multiply both sides of the equation by (1 − n)y^(−n):
      • (1 − n)y^(−n) dy/dx = p(x)(1 − n)y^(1−n) + (1 − n)q(x)
    • Using the chain rule on the left-hand side, we transform the equation into a linear equation in y^(1−n), which can be solved by the methods above.
      • d(y^(1−n))/dx = p(x)(1 − n)y^(1−n) + (1 − n)q(x)
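    A sketch of this in sympy (the particular equation y′ = y + xy², i.e. p(x) = 1, q(x) = x, n = 2, is chosen here only for illustration):

```python
import sympy as sp

x = sp.symbols("x")
y, w = sp.Function("y"), sp.Function("w")

# Illustrative Bernoulli equation: y' = y + x*y**2  (p = 1, q = x, n = 2)
ode = sp.Eq(y(x).diff(x), y(x) + x * y(x) ** 2)

# dsolve recognises it as a Bernoulli equation directly
print(sp.dsolve(ode))     # of the form y = 1/(C1*exp(-x) + 1 - x), up to how the constant is written

# The substitution w = y**(1 - n) = 1/y turns it into the linear equation w' + w = -x
linear = sp.Eq(w(x).diff(x) + w(x), -x)
print(sp.dsolve(linear))  # Eq(w(x), C1*exp(-x) - x + 1)
```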

    M(x, y) + N(x, y) dy/dx = 0. This is an equation in total differentials (an exact equation). It is necessary to find a so-called potential function φ(x, y) satisfying the condition dφ/dx = 0.

    • To work with this condition we need the total derivative, which takes the dependence on other variables into account: to compute the total derivative of φ with respect to x we assume that y may also depend on x.
      • dφ/dx = ∂φ/∂x + (∂φ/∂y)(dy/dx)
    • Comparing the terms gives M(x, y) = ∂φ/∂x and N(x, y) = ∂φ/∂y. This is a typical result for equations in several variables: the mixed derivatives of smooth functions are equal to each other, a fact sometimes called Clairaut's theorem. The differential equation is a total differential equation if the following condition is satisfied:
      • ∂M/∂y = ∂N/∂x
    • The method for solving total differential equations is similar to finding a potential function when several of its partial derivatives are known, which we discuss briefly. First integrate M with respect to x. Since M is a function of both x and y, integration gives a partially recovered function φ, denoted φ̃; the result also contains a "constant" of integration that depends on y.
      • φ(x, y) = ∫ M(x, y) dx = φ̃(x, y) + c(y)
    • After this, to obtain c(y) we can take the partial derivative of the resulting function with respect to y, equate the result to N(x, y), and integrate. One can also first integrate N and then take the partial derivative with respect to x, which gives an arbitrary function c(x). Both methods work, and usually the simpler function is chosen for integration.
      • N(x, y) = ∂φ/∂y = ∂φ̃/∂y + dc/dy
    • Example 1.5. You can take partial derivatives and see that the equation below is a total differential equation.
      • 3x² + y² + 2xy·dy/dx = 0
      • φ = ∫ (3x² + y²) dx = x³ + xy² + c(y)
        ∂φ/∂y = N(x, y) = 2xy + dc/dy
      • dc/dy = 0, c(y) = C
      • x³ + xy² = C
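    A short sympy sketch of the exactness test and the potential function for Example 1.5:

```python
import sympy as sp

x, y = sp.symbols("x y")

M = 3 * x**2 + y**2
N = 2 * x * y

# Exactness test: dM/dy must equal dN/dx
print(sp.diff(M, y) == sp.diff(N, x))              # True (both are 2*y)

# Potential function: integrate M in x, then fix the y-dependent part using N
phi_tilde = sp.integrate(M, x)                     # x**3 + x*y**2
c_prime = sp.simplify(N - sp.diff(phi_tilde, y))   # 0, so c(y) is a constant
print(phi_tilde, c_prime)                          # the general integral is x**3 + x*y**2 = C
```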
    • If the differential equation is not a total differential equation, in some cases an integrating factor can be found that converts it into one. However, such equations are rarely met in practice, and although an integrating factor exists, finding it is not always easy, so these equations are not considered in this article.

Part 2

Second order equations
  1. Homogeneous linear differential equations with constant coefficients. These equations are widely used in practice, so their solution is of primary importance. Here "homogeneous" does not refer to homogeneous functions but to the fact that the right-hand side of the equation is 0. The next section shows how to solve the corresponding inhomogeneous differential equations. Below, a and b are constants.

    d²y/dx² + a dy/dx + by = 0

    Characteristic equation. This differential equation is remarkable in that it can be solved very easily if you pay attention to the properties its solutions must have. From the equation it is clear that y and its derivatives are proportional to one another. From the examples discussed in the section on first-order equations we know that only the exponential function has this property. Therefore we can put forward an ansatz (an educated guess) about what the solution of this equation will be.

    • The solution has the form of an exponential function e^(rx), where r is a constant whose value is to be found. Substituting this function into the equation gives the following expression:
      • e^(rx)(r² + ar + b) = 0
    • This equation says that the product of an exponential function and a polynomial must equal zero. The exponential cannot vanish for any value of its argument, so the polynomial must be zero. We have thus reduced the problem of solving the differential equation to the much simpler problem of solving an algebraic equation, called the characteristic equation of the given differential equation.
      • r² + ar + b = 0
      • r± = (−a ± √(a² − 4b))/2
    • We obtain two roots. Since the differential equation is linear, its general solution is a linear combination of the particular solutions. Since this is a second-order equation, we know that this is really the general solution and there are no others; a more rigorous justification lies in the existence and uniqueness theorems, which can be found in textbooks.
    • A useful way to check whether two solutions are linearly independent is to compute the Wronskian. The Wronskian W is the determinant of a matrix whose columns contain the functions and their successive derivatives. A theorem of linear algebra states that the functions entering the Wronskian are linearly dependent if the Wronskian is identically zero. Here we can check whether two solutions are linearly independent by making sure the Wronskian is not zero. The Wronskian is also important when solving inhomogeneous equations with constant coefficients by the method of variation of parameters.
      • W = | y₁ y₂ ; y₁′ y₂′ | = y₁y₂′ − y₂y₁′
    • In terms of linear algebra, the set of all solutions of a given differential equation forms a vector space whose dimension equals the order of the equation. In this space one can choose a basis of linearly independent solutions. This is possible because the function y(x) is acted on by a linear operator: the derivative is a linear operator, since it maps the space of differentiable functions into the space of all functions. An equation is called homogeneous when, for some linear operator L, we seek solutions of the equation L[y] = 0.

    Let us now move on to consider several specific examples. We will consider the case of multiple roots of the characteristic equation a little later, in the section on reducing the order.

    If the roots r± are distinct real numbers, the differential equation has the following solution.

    • y(x) = c₁e^(r₊x) + c₂e^(r₋x)

    Two complex roots. It follows from the fundamental theorem of algebra that the roots of polynomial equations with real coefficients are either real or form complex-conjugate pairs. Therefore, if the complex number r = α + iβ is a root of the characteristic equation, then r* = α − iβ is also a root. Thus we could write the solution in the form c₁e^((α+iβ)x) + c₂e^((α−iβ)x); however, this form is complex-valued and is undesirable for practical problems.

    • Instead, one can use Euler's formula e^(ix) = cos x + i sin x, which allows the solution to be written in terms of trigonometric functions:
      • e^(αx)(c₁cos βx + ic₁sin βx + c₂cos βx − ic₂sin βx)
    • Now, instead of the constant c₁ + c₂ we write c₁, and the expression i(c₁ − c₂) is replaced by c₂. This gives the following solution:
      • y(x) = e^(αx)(c₁cos βx + c₂sin βx)
    • There is another way to write the solution in terms of amplitude and phase, which is better suited for physics problems.
    • Example 2.1. Let us find a solution to the differential equation given below with the given initial conditions. To do this, you need to take the resulting solution, as well as its derivative, and substitute them into the initial conditions, which will allow us to determine arbitrary constants.
      • d²x/dt² + 3 dx/dt + 10x = 0, x(0) = 1, x′(0) = −1
      • r² + 3r + 10 = 0, r± = (−3 ± √(9 − 40))/2 = −3/2 ± (√31/2)i
      • x(t) = e^(−3t/2)(c₁cos(√31·t/2) + c₂sin(√31·t/2))
      • x(0) = 1 = c₁
      • x′(t) = −(3/2)e^(−3t/2)(c₁cos(√31·t/2) + c₂sin(√31·t/2)) + e^(−3t/2)(−(√31/2)c₁sin(√31·t/2) + (√31/2)c₂cos(√31·t/2))
      • x′(0) = −1 = −(3/2)c₁ + (√31/2)c₂, so c₂ = 1/√31
      • x(t) = e^(−3t/2)(cos(√31·t/2) + (1/√31)sin(√31·t/2))
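    Example 2.1 can be cross-checked with sympy's dsolve; the printed result may be arranged slightly differently, but it is the same damped oscillation:

```python
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")

ode = sp.Eq(x(t).diff(t, 2) + 3 * x(t).diff(t) + 10 * x(t), 0)
ics = {x(0): 1, x(t).diff(t).subs(t, 0): -1}

sol = sp.dsolve(ode, ics=ics).rhs
print(sp.simplify(sol))
# exp(-3*t/2)*(cos(sqrt(31)*t/2) + sqrt(31)*sin(sqrt(31)*t/2)/31)   (note sqrt(31)/31 = 1/sqrt(31))
```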


    Solving nth order differential equations with constant coefficients (recorded by Intuit - National Open University).
  2. Reduction of order. Reduction of order is a method for solving differential equations when one linearly independent solution is known. It works by lowering the order of the equation by one, which allows the equation to be solved by the methods of the previous section. Let a solution y₁ be known. The main idea of reduction of order is to look for a solution in the form y = v(x)y₁(x), substitute it into the differential equation and determine v(x). Let us see how reduction of order can be used to solve a differential equation with constant coefficients whose characteristic equation has multiple roots.


    Multiple roots of a homogeneous differential equation with constant coefficients. Recall that a second-order equation must have two linearly independent solutions. If the characteristic equation has a multiple root, the set of solutions obtained so far does not span the solution space, since these solutions are linearly dependent. In this case reduction of order is needed to find a second, linearly independent solution.

    • Let the characteristic equation have the multiple root r. Assume that the second solution can be written in the form y(x) = e^(rx)v(x) and substitute it into the differential equation. Most terms, except the term with the second derivative of v, then cancel.
      • v″(x)e^(rx) = 0
    • Example 2.2. Let the following equation be given; its characteristic equation has the multiple root r = −4. On substitution most terms cancel.
      • d²y/dx² + 8 dy/dx + 16y = 0
      • y = v(x)e^(−4x)
        y′ = v′(x)e^(−4x) − 4v(x)e^(−4x)
        y″ = v″(x)e^(−4x) − 8v′(x)e^(−4x) + 16v(x)e^(−4x)
      • v″e^(−4x) − 8v′e^(−4x) + 16ve^(−4x) + 8v′e^(−4x) − 32ve^(−4x) + 16ve^(−4x) = 0, i.e. v″e^(−4x) = 0
    • As with our ansatz for the constant-coefficient equation, only the second-derivative term remains, and it must equal zero. Integrating twice gives the desired expression for v:
      • v(x) = c₁ + c₂x
    • Then the general solution of a differential equation with constant coefficients whose characteristic equation has multiple roots can be written in the following form. For convenience, remember that to obtain linear independence it is enough to multiply the second term by x. This set of solutions is linearly independent, and thus we have found all solutions of the equation.
      • y(x) = (c₁ + c₂x)e^(rx)
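    A quick sympy check of Example 2.2 confirms the (c₁ + c₂x)e^(−4x) form:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# Example 2.2: repeated characteristic root r = -4
ode = sp.Eq(y(x).diff(x, 2) + 8 * y(x).diff(x) + 16 * y(x), 0)
print(sp.dsolve(ode))   # Eq(y(x), (C1 + C2*x)*exp(-4*x))
```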

    d²y/dx² + p(x) dy/dx + q(x)y = 0. Reduction of order is applicable if a solution y₁(x) is known, which may be found or given in the problem statement.

    • We look for a solution in the form y(x) = v(x)y₁(x) and substitute it into the equation:
      • v″y₁ + 2v′y₁′ + p(x)v′y₁ + v(y₁″ + p(x)y₁′ + q(x)y₁) = 0
    • Since y₁ is a solution of the differential equation, all the terms with v cancel, and a first-order linear equation remains. To see this more clearly, make the change of variables w(x) = v′(x):
      • y₁w′ + (2y₁′ + p(x)y₁)w = 0
      • w(x) = exp(−∫ (2y₁′(x)/y₁(x) + p(x)) dx)
      • v(x) = ∫ w(x) dx
    • If the integrals can be calculated, we obtain the general solution as a combination of elementary functions. Otherwise, the solution can be left in integral form.
  3. Cauchy-Euler equation. The Cauchy-Euler equation is an example of a second-order differential equation with variable coefficients that admits exact solutions. It is used in practice, for example, when solving the Laplace equation in spherical coordinates.

    $x^2\frac{\mathrm{d}^2 y}{\mathrm{d}x^2} + ax\frac{\mathrm{d}y}{\mathrm{d}x} + by = 0$

    Characteristic equation. In this differential equation each term contains a power of $x$ whose exponent equals the order of the corresponding derivative.

    • Thus, one can look for a solution of the form $y(x) = x^n$, where $n$ is to be determined, just as we looked for an exponential solution of a linear differential equation with constant coefficients. After differentiating and substituting we obtain
      • $x^n\bigl(n^2 + (a-1)n + b\bigr) = 0$
    • To use the characteristic equation we must assume $x \neq 0$. The point $x = 0$ is called a regular singular point of the differential equation; such points are important when solving differential equations with power series. The characteristic equation has two roots, which may be real and distinct, multiple, or complex conjugate.
      • $n_{\pm} = \frac{1 - a \pm \sqrt{(a-1)^2 - 4b}}{2}$

    Two distinct real roots. If the roots $n_{\pm}$ are real and distinct, the solution of the differential equation has the following form:

    • $y(x) = c_1 x^{n_+} + c_2 x^{n_-}$

    Two complex roots. If the characteristic equation has roots $n_{\pm} = \alpha \pm \beta i$, the solution is a complex-valued function.

    • To transform the solution into a real function, we make the change of variables $x = e^t$, that is, $t = \ln x$, and use Euler's formula; similar steps were carried out earlier when redefining the arbitrary constants.
      • $y(t) = e^{\alpha t}\bigl(c_1 e^{\beta i t} + c_2 e^{-\beta i t}\bigr)$
    • Then the general solution can be written as
      • $y(x) = x^{\alpha}\bigl(c_1\cos(\beta\ln x) + c_2\sin(\beta\ln x)\bigr)$

    Multiple roots. To obtain a second linearly independent solution, it is necessary to reduce the order again.

    • It takes a fair amount of calculation, but the principle is the same: we substitute $y = v(x)y_1$ into the equation, where $y_1$ is the known first solution. After cancellations the following equation is obtained:
      • $v'' + \frac{1}{x}v' = 0$
    • This is a first-order linear equation in $v'(x)$; its solution is $v(x) = c_1 + c_2\ln x$. Thus the general solution can be written in the form below. This is easy to remember: the second linearly independent solution simply requires an extra term with $\ln x$.
      • $y(x) = x^n\bigl(c_1 + c_2\ln x\bigr)$
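
    A Cauchy-Euler equation can also be handed to a computer algebra system directly. The sketch below (Python with sympy; the equation $x^2 y'' - 3xy' + 4y = 0$ is an assumed example with the double root $n = 2$) checks the characteristic roots and the $\ln x$ factor in the general solution.

      import sympy as sp

      x = sp.symbols('x', positive=True)
      y = sp.Function('y')
      n = sp.symbols('n')

      # assumed example: x^2 y'' - 3x y' + 4y = 0, i.e. a = -3, b = 4
      a, b = -3, 4
      print(sp.solve(n**2 + (a - 1)*n + b, n))     # [2] -- a multiple root

      ode = sp.Eq(x**2*y(x).diff(x, 2) + a*x*y(x).diff(x) + b*y(x), 0)
      print(sp.dsolve(ode, y(x)))                  # expected: y(x) = (C1 + C2*log(x))*x**2
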
  4. Inhomogeneous linear differential equations with constant coefficients. Inhomogeneous equations have the form $L[y(x)] = f(x)$, where $f(x)$ is the so-called free term (forcing term). According to the theory of differential equations, the general solution of such an equation is the superposition of a particular solution $y_p(x)$ and a complementary solution $y_c(x)$. Here the particular solution does not mean a solution singled out by the initial conditions, but rather a solution determined by the presence of the inhomogeneity (the free term). The complementary solution is a solution of the corresponding homogeneous equation, in which $f(x) = 0$. The sum of the two is a general solution because $L[y_p + y_c] = L[y_p] + L[y_c] = f(x)$, and since $L[y_c] = 0$, such a superposition is indeed a general solution.

    $\frac{\mathrm{d}^2 y}{\mathrm{d}x^2} + a\frac{\mathrm{d}y}{\mathrm{d}x} + by = f(x)$

    Method of undetermined coefficients. The method of undetermined coefficients is used when the free term is a combination of exponential, trigonometric, hyperbolic, or power functions; only such functions are guaranteed to have a finite number of linearly independent derivatives. In this section we find a particular solution of the equation.

    • Compare the terms in $f(x)$ with the terms in $y_c$, ignoring constant factors. There are three possible cases.
      • No term in $f(x)$ coincides with a term in $y_c$. In this case the particular solution $y_p$ is a linear combination of the terms of $f(x)$ and their linearly independent derivatives.
      • $f(x)$ contains a term that is the product of $x^n$ and a term $h(x)$ from $y_c$, where $n$ is zero or a positive integer and that term of $y_c$ comes from a simple (non-repeated) root of the characteristic equation. In this case $y_p$ consists of a combination of $x^{n+1}h(x)$, its linearly independent derivatives, and the other terms of $f(x)$ with their linearly independent derivatives.
      • $f(x)$ contains a term $h(x)$ that is the product of $x^n$ and a term from $y_c$, where $n$ is zero or a positive integer and that term of $y_c$ comes from a multiple root of the characteristic equation. In this case $y_p$ is a linear combination of $x^{n+s}h(x)$ (where $s$ is the multiplicity of the root) and its linearly independent derivatives, together with the other terms of $f(x)$ and their linearly independent derivatives.
    • Write $y_p$ as a linear combination of the terms listed above, with unknown coefficients; it is these coefficients that give the method its name. Terms that already appear in $y_c$ can be discarded, because they are absorbed by the arbitrary constants in $y_c$. Then substitute $y_p$ into the equation and equate like terms.
    • Determine the coefficients. At this stage one obtains a system of algebraic equations that can usually be solved without difficulty; its solution gives $y_p$ and thereby solves the equation.
    • Example 2.3. Consider an inhomogeneous differential equation whose free term has a finite number of linearly independent derivatives; a particular solution can then be found by the method of undetermined coefficients.
      • $\frac{\mathrm{d}^2 y}{\mathrm{d}t^2} + 6y = 2e^{3t} - \cos 5t$
      • $y_c(t) = c_1\cos(\sqrt{6}\,t) + c_2\sin(\sqrt{6}\,t)$
      • $y_p(t) = Ae^{3t} + B\cos 5t + C\sin 5t$
      • $9Ae^{3t} - 25B\cos 5t - 25C\sin 5t + 6Ae^{3t} + 6B\cos 5t + 6C\sin 5t = 2e^{3t} - \cos 5t$
      • $\begin{cases} 9A + 6A = 2, & A = \dfrac{2}{15} \\ -25B + 6B = -1, & B = \dfrac{1}{19} \\ -25C + 6C = 0, & C = 0 \end{cases}$
      • $y(t) = c_1\cos(\sqrt{6}\,t) + c_2\sin(\sqrt{6}\,t) + \dfrac{2}{15}e^{3t} + \dfrac{1}{19}\cos 5t$
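
    Example 2.3 can be cross-checked symbolically. The sketch below (Python with sympy, assumed to be available) first lets dsolve handle the full equation and then repeats the method of undetermined coefficients explicitly, recovering $A = 2/15$, $B = 1/19$, $C = 0$.

      import sympy as sp

      t = sp.symbols('t')
      y = sp.Function('y')

      # Example 2.3: y'' + 6y = 2*exp(3t) - cos(5t)
      ode = sp.Eq(y(t).diff(t, 2) + 6*y(t), 2*sp.exp(3*t) - sp.cos(5*t))
      print(sp.dsolve(ode, y(t)))
      # expected: C1*sin(sqrt(6)*t) + C2*cos(sqrt(6)*t) + 2*exp(3*t)/15 + cos(5*t)/19

      # undetermined coefficients done by hand
      A, B, C = sp.symbols('A B C')
      yp = A*sp.exp(3*t) + B*sp.cos(5*t) + C*sp.sin(5*t)
      residual = sp.expand(yp.diff(t, 2) + 6*yp - (2*sp.exp(3*t) - sp.cos(5*t)))
      eqs = [residual.coeff(sp.exp(3*t)), residual.coeff(sp.cos(5*t)), residual.coeff(sp.sin(5*t))]
      print(sp.solve(eqs, [A, B, C]))      # {A: 2/15, B: 1/19, C: 0}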

    Lagrange method. The Lagrange method, or method of variation of arbitrary constants, is a more general way of solving inhomogeneous differential equations, used especially when the free term does not have a finite number of linearly independent derivatives. For free terms such as $\tan x$ or $x^{-n}$, a particular solution must be found by the Lagrange method. The method can even be applied to equations with variable coefficients, although apart from the Cauchy-Euler equation this is done less often, since the complementary solution is usually not expressible in elementary functions.

    • Let's assume that the solution has the following form. Its derivative is given in the second line.
      • $y(x) = v_1(x)y_1(x) + v_2(x)y_2(x)$
      • $y' = v_1'y_1 + v_1y_1' + v_2'y_2 + v_2y_2'$
    • Since the proposed solution contains two unknown functions, an additional condition must be imposed. We choose this additional condition in the following form:
      • $v_1'y_1 + v_2'y_2 = 0$
      • $y' = v_1y_1' + v_2y_2'$
      • $y'' = v_1'y_1' + v_1y_1'' + v_2'y_2' + v_2y_2''$
    • Now we can obtain the second equation. After substituting and regrouping, the terms with $v_1$ and the terms with $v_2$ can be collected together; they cancel because $y_1$ and $y_2$ are solutions of the corresponding homogeneous equation. As a result we obtain the following system of equations:
      • $\begin{aligned} v_1'y_1 + v_2'y_2 &= 0 \\ v_1'y_1' + v_2'y_2' &= f(x) \end{aligned}$
    • This system can be written as a matrix equation $A\mathbf{x} = \mathbf{b}$, whose solution is $\mathbf{x} = A^{-1}\mathbf{b}$. The inverse of a $2\times 2$ matrix is found by dividing by the determinant, swapping the diagonal elements, and changing the sign of the off-diagonal elements. In fact, the determinant of this matrix is the Wronskian.
      • $\begin{pmatrix} v_1' \\ v_2' \end{pmatrix} = \frac{1}{W}\begin{pmatrix} y_2' & -y_2 \\ -y_1' & y_1 \end{pmatrix}\begin{pmatrix} 0 \\ f(x) \end{pmatrix}$
    • The expressions for $v_1$ and $v_2$ are given below. As in the order reduction method, an arbitrary constant appears during integration, which incorporates the complementary solution into the general solution of the differential equation.
      • $v_1(x) = -\int\frac{1}{W}f(x)y_2(x)\,\mathrm{d}x$
      • $v_2(x) = \int\frac{1}{W}f(x)y_1(x)\,\mathrm{d}x$
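
    The Wronskian formulas above translate directly into a short symbolic computation. The sketch below (Python with sympy; the equation $y'' + y = \tan x$ and its homogeneous solutions $\cos x$, $\sin x$ are an assumed example) builds a particular solution by variation of constants and verifies it.

      import sympy as sp

      x = sp.symbols('x')

      # assumed example: y'' + y = tan(x), with homogeneous solutions y1 = cos(x), y2 = sin(x)
      y1, y2 = sp.cos(x), sp.sin(x)
      f = sp.tan(x)

      W = sp.simplify(y1*y2.diff(x) - y2*y1.diff(x))   # Wronskian, equals 1 here
      v1 = -sp.integrate(f*y2/W, x)
      v2 = sp.integrate(f*y1/W, x)

      yp = sp.simplify(v1*y1 + v2*y2)
      print(yp)                                        # a particular solution
      print(sp.simplify(yp.diff(x, 2) + yp - f))       # should simplify to 0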



Practical use

Differential equations establish a relationship between a function and one or more of its derivatives. Because such relationships are extremely common, differential equations are widely used in many fields, and since we live in four dimensions, these are often partial differential equations. This section covers some of the most important equations of this type.

  • Exponential growth and decay. Radioactive decay, compound interest, rates of chemical reactions, drug concentration in the blood, unlimited population growth, the Newton-Richmann law (Newton's law of cooling): there are many real-world systems in which the rate of growth or decay at any moment is proportional to the quantity at that moment, or is well approximated by such a model. This is why the solution of this differential equation, the exponential function, is one of the most important functions in mathematics and the other sciences. More generally, for constrained population growth the system may include additional terms that limit growth. In the equation below, the constant $k$ can be either positive or negative.
    • $\frac{\mathrm{d}y}{\mathrm{d}x} = ky$
  • Harmonic vibrations. In both classical and quantum mechanics the harmonic oscillator is one of the most important physical systems, thanks to its simplicity and its wide use in approximating more complex systems such as the simple pendulum. In classical mechanics harmonic vibrations are described by an equation that relates the position of a point mass to its acceleration through Hooke's law; damping and driving forces can also be taken into account. In the expression below, $\dot{x}$ is the time derivative of $x$, $\beta$ is a parameter describing the damping force, $\omega_0$ is the angular frequency of the system, and $F(t)$ is a time-dependent driving force. The harmonic oscillator is also present in electromagnetic oscillatory circuits, where it can be realized with greater accuracy than in mechanical systems. (A numerical sketch of this equation is given after this list.)
    • $\ddot{x} + 2\beta\dot{x} + \omega_0^2 x = F(t)$
  • Bessel's equation. The Bessel differential equation is used in many areas of physics, including the solution of the wave equation, Laplace's equation, and the Schrödinger equation, especially in the presence of cylindrical or spherical symmetry. This second-order differential equation with variable coefficients is not a Cauchy-Euler equation, so its solutions cannot be written as elementary functions. The solutions of the Bessel equation are the Bessel functions, which are well studied because they appear in so many fields. In the expression below, $\alpha$ is a constant that corresponds to the order of the Bessel function.
    • $x^2\frac{\mathrm{d}^2 y}{\mathrm{d}x^2} + x\frac{\mathrm{d}y}{\mathrm{d}x} + (x^2 - \alpha^2)y = 0$
  • Maxwell's equations. Together with the Lorentz force, Maxwell's equations form the basis of classical electrodynamics. They are four partial differential equations for the electric field $\mathbf{E}(\mathbf{r}, t)$ and the magnetic field $\mathbf{B}(\mathbf{r}, t)$. In the expressions below, $\rho = \rho(\mathbf{r}, t)$ is the charge density, $\mathbf{J} = \mathbf{J}(\mathbf{r}, t)$ is the current density, and $\epsilon_0$ and $\mu_0$ are the electric and magnetic constants, respectively.
    • $\begin{aligned} \nabla\cdot\mathbf{E} &= \frac{\rho}{\epsilon_0} \\ \nabla\cdot\mathbf{B} &= 0 \\ \nabla\times\mathbf{E} &= -\frac{\partial\mathbf{B}}{\partial t} \\ \nabla\times\mathbf{B} &= \mu_0\mathbf{J} + \mu_0\epsilon_0\frac{\partial\mathbf{E}}{\partial t} \end{aligned}$
  • Schrödinger equation. In quantum mechanics the Schrödinger equation is the fundamental equation of motion: it describes how particles move in terms of the evolution of the wave function $\Psi = \Psi(\mathbf{r}, t)$ in time. The evolution is governed by the Hamiltonian $\hat{H}$, an operator that describes the energy of the system. A well-known example is the Schrödinger equation for a single non-relativistic particle in a potential $V(\mathbf{r}, t)$. Many systems are also described by the time-independent Schrödinger equation, in which the left-hand side is replaced by $E\Psi$, where $E$ is the energy of the particle. In the expressions below, $\hbar$ is the reduced Planck constant.
    • $i\hbar\frac{\partial\Psi}{\partial t} = \hat{H}\Psi$
    • $i\hbar\frac{\partial\Psi}{\partial t} = \left(-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}, t)\right)\Psi$
  • Wave equation. Physics and engineering cannot be imagined without waves; they appear in all kinds of systems. In general, waves are described by the equation below, in which $u = u(\mathbf{r}, t)$ is the unknown function and $c$ is an experimentally determined constant. d'Alembert was the first to discover that for the one-dimensional case the solution of the wave equation is any function with argument $x - ct$, which describes a wave of arbitrary shape propagating to the right. The general solution for the one-dimensional case is a linear combination of such a function with a second function with argument $x + ct$, which describes a wave propagating to the left. This solution is shown in the second line.
    • $\frac{\partial^2 u}{\partial t^2} = c^2\nabla^2 u$
    • $u(x, t) = f(x - ct) + g(x + ct)$
  • Navier-Stokes equations. The Navier-Stokes equations describe the movement of fluids. Because fluids are present in virtually every field of science and technology, these equations are extremely important for predicting weather, designing aircraft, studying ocean currents, and solving many other applied problems. The Navier-Stokes equations are nonlinear partial differential equations, and in most cases they are very difficult to solve because the nonlinearity leads to turbulence, and obtaining a stable solution by numerical methods requires partitioning into very small cells, which requires significant computing power. For practical purposes in hydrodynamics, methods such as time averaging are used to model turbulent flows. Even more basic questions such as the existence and uniqueness of solutions to nonlinear partial differential equations are challenging, and proving the existence and uniqueness of a solution to the Navier-Stokes equations in three dimensions is among the mathematical problems of the millennium. Below are the incompressible fluid flow equation and the continuity equation.
    • $\frac{\partial\mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} - \nu\nabla^2\mathbf{u} = -\nabla h, \quad \frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0$
  • Many differential equations simply cannot be solved using the above methods, especially those mentioned in the last section. This applies to cases where the equation contains variable coefficients and is not a Cauchy-Euler equation, or when the equation is nonlinear, except in a few very rare cases. However, the above methods can solve many important differential equations that are often encountered in various fields of science.
  • Unlike differentiation, which lets you find the derivative of any elementary function, many integrals cannot be expressed in elementary functions, so do not waste time trying to evaluate an integral where this is impossible; check a table of integrals first. If the solution of a differential equation cannot be expressed in elementary functions, it can sometimes be left in integral form, and in that case it does not matter whether the integral can be evaluated analytically.
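
  As an illustration of the practical side, the damped, driven oscillator from the list above can be integrated numerically. The sketch below uses Python with scipy's solve_ivp; the parameter values and the driving force are arbitrary assumptions chosen only to produce a trajectory.

    import numpy as np
    from scipy.integrate import solve_ivp

    # damped, driven oscillator: x'' + 2*beta*x' + omega0^2 * x = F(t)
    beta, omega0 = 0.2, 2.0          # assumed illustrative values

    def F(t):
        return np.cos(1.5*t)         # assumed driving force

    def rhs(t, state):
        x, v = state                 # state = (x, x')
        return [v, F(t) - 2*beta*v - omega0**2*x]

    sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0], max_step=0.01)
    print(sol.y[0][-5:])             # last few positions along the trajectory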

Warnings

  • The appearance of a differential equation can be misleading. For example, below are two first-order differential equations. The first can easily be solved by the methods described in this article, yet the seemingly minor change of $y$ to $y^2$ in the second equation makes it nonlinear and very difficult to solve (a numerical sketch of both is given after the equations).
    • $\frac{\mathrm{d}y}{\mathrm{d}x} = x^2 + y$
    • $\frac{\mathrm{d}y}{\mathrm{d}x} = x^2 + y^2$
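
  Even when no closed form is available, both of the equations above can still be integrated numerically. The sketch below (Python with scipy; the initial condition $y(0) = 0$ and the interval are assumptions) treats them identically and, for the linear case, compares the result with the exact value $2e - 5$ that follows from the closed-form solution $y = 2e^x - x^2 - 2x - 2$.

    import numpy as np
    from scipy.integrate import solve_ivp

    def linear(x, y):
        return x**2 + y              # solvable in closed form

    def nonlinear(x, y):
        return x**2 + y**2           # Riccati-type, no elementary general solution

    for rhs in (linear, nonlinear):
        sol = solve_ivp(rhs, (0.0, 1.0), [0.0], max_step=0.001)
        print(rhs.__name__, sol.y[0][-1])

    print('exact linear value at x = 1:', 2*np.e - 5)
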