
Extremum of a Function of Several Variables

The concept of an extremum of a function of several variables

Let the function z = f(x, y) be defined in some domain D and let M₀(x₀, y₀) be an interior point of this domain.

Definition. If there exists a number δ > 0 such that for all (Δx, Δy), not both zero, satisfying |Δx| < δ, |Δy| < δ, the inequality f(x₀ + Δx, y₀ + Δy) < f(x₀, y₀) holds, then the point M₀(x₀, y₀) is called a point of local maximum of the function f(x, y); if instead f(x₀ + Δx, y₀ + Δy) > f(x₀, y₀) for all such (Δx, Δy), then M₀(x₀, y₀) is called a point of local minimum. In other words, M₀(x₀, y₀) is a maximum or minimum point of f(x, y) if there exists a δ-neighborhood of the point M₀(x₀, y₀) such that at all points M(x, y) of this neighborhood the increment of the function Δf = f(x, y) − f(x₀, y₀) preserves its sign.

Examples. 1. For the function z = x² + y², the point O(0, 0) is a minimum point (Fig. 17). 2. For the function z = 1 − x² − y², the point O(0, 0) is a maximum point (Fig. 18). 3. For a third function (Fig. 19), the point O(0, 0) is a local maximum point: there is a neighborhood of the point O(0, 0), for example a circle of small radius, at every point of which, other than O(0, 0), the value of the function f(x, y) is less than its value at O(0, 0).

We will consider only points of strict maximum and minimum of functions, i.e., points for which the strict inequality f(M) < f(M₀) or the strict inequality f(M) > f(M₀) holds for all points M(x, y) from some punctured δ-neighborhood of the point M₀. The value of the function at a maximum point is called a maximum, and the value of the function at a minimum point a minimum, of this function. The maximum and minimum points of a function are called the extremum points of the function, and the maxima and minima themselves are called its extrema.

Theorem 11 (necessary condition for an extremum). If the function z = f(x, y) has an extremum at a point, then at this point each partial derivative ∂z/∂x and ∂z/∂y either vanishes or does not exist.

Indeed, let the function z = f(x, y) have an extremum at the point M₀(x₀, y₀). Give the variable y the value y₀. Then the function z = f(x, y₀) is a function of the one variable x. Since at x = x₀ it has an extremum (a maximum or a minimum, Fig. 20), its derivative with respect to x at (x₀, y₀) either equals zero or does not exist. Similarly one verifies that the derivative with respect to y at (x₀, y₀) either equals zero or does not exist.

Points at which f′x = 0 and f′y = 0, or at which these derivatives do not exist, are called critical points of the function z = f(x, y). Points at which f′x = f′y = 0 are also called stationary points of the function.

Theorem 11 expresses only necessary conditions for an extremum, which are not sufficient. For example, the function z = x² − y² has derivatives that vanish at O(0, 0), but it has no extremum there: it is equal to zero at the point O(0, 0) and takes both positive and negative values at points M(x, y) arbitrarily close to O(0, 0) — positive at points (x, 0) and negative at points (0, y) for arbitrarily small x and y. A point of this type is called a minimax (saddle) point (Fig. 21).

Sufficient conditions for an extremum of a function of two variables are expressed by the following theorem.

Theorem 12 (sufficient conditions for an extremum of a function of two variables). Let the point M₀(x₀, y₀) be a stationary point of the function f(x, y), and suppose that in some neighborhood of the point M₀, including the point M₀ itself, the function f(x, y) has continuous partial derivatives up to the second order inclusive. Put A = f″xx(x₀, y₀), B = f″xy(x₀, y₀), C = f″yy(x₀, y₀), and let D = AC − B² be the corresponding determinant. Then: 1) at the point M₀(x₀, y₀) the function f(x, y) has a maximum if D > 0 and A < 0 at this point; 2) at the point M₀(x₀, y₀) the function f(x, y) has a minimum if D > 0 and A > 0; 3) at the point M₀(x₀, y₀) the function f(x, y) has no extremum if D(x₀, y₀) < 0.
If D(x₀, y₀) = 0, then at the point M₀(x₀, y₀) the function f(x, y) may or may not have an extremum. In this case further investigation is required.

We confine ourselves to proving assertions 1) and 2) of the theorem. Write the second-order Taylor formula for the function f(x, y):

Δf = f(x₀ + Δx, y₀ + Δy) − f(x₀, y₀) = ½[f″xx Δx² + 2f″xy ΔxΔy + f″yy Δy²],  (1)

where the second-order derivatives are evaluated at the point (x₀ + θΔx, y₀ + θΔy), 0 < θ < 1, and the first-order terms are absent because M₀ is a stationary point. It is clear from (1) that the sign of the increment Δf is determined by the sign of the trinomial on the right-hand side, i.e., by the sign of the second differential d²f. Denote for brevity A = f″xx(x₀, y₀), B = f″xy(x₀, y₀), C = f″yy(x₀, y₀).

Suppose that at the point M₀(x₀, y₀) we have AC − B² > 0. Then, owing to continuity, the inequality f″xx f″yy − (f″xy)² > 0 is preserved, and the derivative f″xx(x, y) retains its sign, in some neighborhood of the point M₀. In the region where AC − B² > 0 the sign of the trinomial AΔx² + 2BΔxΔy + CΔy² coincides with the sign of A (under this condition the trinomial and A cannot have different signs). Since the sign of the sum AΔx² + 2BΔxΔy + CΔy², evaluated at the point (x₀ + θΔx, y₀ + θΔy), determines the sign of the difference Δf, we arrive at the following conclusion: if at the stationary point (x₀, y₀) the function f(x, y) satisfies the conditions AC − B² > 0 and A < 0, then for all sufficiently small |Δx| and |Δy| the inequality Δf < 0 holds; thus at the point (x₀, y₀) the function f(x, y) has a maximum. But if at the stationary point (x₀, y₀) the conditions AC − B² > 0 and A > 0 are satisfied, then for all sufficiently small |Δx| and |Δy| the inequality Δf > 0 is true, which means that the function f(x, y) has a minimum at the point (x₀, y₀).

Examples. 1. Investigate the function for an extremum. Using the necessary conditions for an extremum, we look for the stationary points of the function: we find the partial derivatives f′x and f′y and equate them to zero. Solving the resulting system of equations, we obtain the stationary point M₀. Now we use Theorem 12: D > 0 at M₀, hence there is an extremum at this point, and since A > 0 it is a minimum.
If we transform the function to a form in which it is written as a constant plus a sum of squares, it is easy to see that the right-hand side is smallest at the stationary point, which therefore gives the absolute minimum of this function.

2. Investigate the function for an extremum. We find the stationary points of the function, for which we compose the system of equations f′x = 0, f′y = 0; solving it, we find the stationary point M. Since D < 0 at M, by virtue of Theorem 12 there is no extremum at the point M.

3. Investigate the function for an extremum. From the system of equations f′x = 0, f′y = 0 we obtain the stationary point M₀(0, 0). Further, D = 0 at this point, so Theorem 12 gives no answer to the question of the presence or absence of an extremum. We therefore proceed as follows: the function is positive at all points other than M₀(0, 0) and vanishes there, so that, by the definition, at the point M₀(0, 0) the function has an absolute minimum. By an analogous argument one establishes that a second function of this kind has a maximum at its stationary point, while a third has no extremum at its stationary point.

Let a function of n independent variables be differentiable at a point M₀. The point M₀ is called a stationary point of the function if all of its first-order partial derivatives vanish at M₀.

Theorem 13 (sufficient conditions for an extremum). Let the function f be defined and have continuous second-order partial derivatives in some neighborhood of the point M₀(x₁⁰, …, x_n⁰), which is a stationary point of f. If the quadratic form (the second differential of the function f at the point M₀)

d²f = Σᵢ Σⱼ f″xᵢxⱼ(M₀) dxᵢ dxⱼ  (4)

is positive definite (negative definite), then M₀ is a point of minimum (respectively, a point of maximum) of the function f. If the quadratic form (4) is indefinite, then there is no extremum at the point M₀.

15.2. Conditional extremum

So far we have been concerned with finding local extrema of a function in the entire domain of its definition, when the arguments of the function are not bound by any additional conditions. For example, the (unconditional) maximum of the function z = 1 − x² − y² (Fig. 23) is equal to one and is attained at the point (0, 0); it corresponds to the point M, the vertex of the paraboloid. Let us add the constraint equation y = 1/2.
Then the conditional maximum is obviously equal to 3/4. It is attained at the point (0, 1/2), and it corresponds to the vertex M₁ of the parabola that is the line of intersection of the paraboloid with the plane y = 1/2. In the case of the unconditional maximum we have the largest applicate among all the applicates of the surface z = 1 − x² − y²; in the case of the conditional maximum, only among the applicates at the points of the paraboloid lying over the points of the line y = 1/2 of the xOy plane.

One of the methods of finding the conditional extremum of a function in the presence of a constraint is the following. Let the constraint equation φ(x, y) = 0 define y as a single-valued differentiable function of the argument x: y = ψ(x). Substituting the function ψ(x) for y, we obtain a function of one argument, Φ(x) = f(x, ψ(x)), in which the constraint condition has already been taken into account. The (unconditional) extremum of the function Φ(x) is the desired conditional extremum.

Example. Find the extremum of the function z = x² + y² under the condition x + y = 1. From the constraint equation y = 1 − x, so z = x² + (1 − x)² and z′ = 4x − 2; x = 1/2 is the critical point, and since z″ = 4 > 0, it delivers a conditional minimum of the function z (Fig. 24).

Let us indicate another way of solving the problem of the conditional extremum, called the method of Lagrange multipliers. Let there be a point of conditional extremum of the function f(x, y) in the presence of the constraint φ(x, y) = 0. Let us assume that the constraint equation defines a unique continuously differentiable function y = ψ(x) in some neighborhood of the point x₀.
Under this assumption we obtain that the derivative with respect to x of the function f(x, ψ(x)) at the point x₀ must be equal to zero or, equivalently, that the differential of f(x, y) at the point M₀(x₀, y₀) must vanish:

f′x dx + f′y dy = 0.

From the constraint equation we have

φ′x dx + φ′y dy = 0.  (5)

Multiplying (5) by an undetermined number λ, adding it to the previous equality, and using the arbitrariness of dx, we obtain

f′x + λφ′x = 0,  (6)
f′y + λφ′y = 0.  (7)

Equalities (6) and (7) express the necessary conditions for an unconditional extremum at the point M₀ of the function

F(x, y) = f(x, y) + λφ(x, y),

called the Lagrange function. Thus, a point of conditional extremum of the function f(x, y), if φ′y(x₀, y₀) ≠ 0, is necessarily a stationary point of the Lagrange function, where λ is some numerical coefficient. From here we obtain a rule for finding conditional extrema: in order to find the points that can be points of conditional extremum of the function f(x, y) in the presence of the constraint φ(x, y) = 0, 1) we compose the Lagrange function F(x, y) = f(x, y) + λφ(x, y); 2) equating the derivatives F′x and F′y of this function to zero and adding the constraint equation to the resulting equations, we obtain the system of three equations

f′x + λφ′x = 0,  f′y + λφ′y = 0,  φ(x, y) = 0,  (8)

from which we find λ and the coordinates x, y of the possible extremum points.

The question of the existence and nature of the conditional extremum is settled on the basis of studying the sign of the second differential of the Lagrange function

d²F = F″xx dx² + 2F″xy dx dy + F″yy dy²

for the system of values x₀, y₀, λ obtained from (8), under the condition that dx and dy are related by φ′x dx + φ′y dy = 0 and dx² + dy² ≠ 0. If d²F < 0, then at the point (x₀, y₀) the function f(x, y) has a conditional maximum; if d²F > 0, a conditional minimum. In particular, if at the stationary point (x₀, y₀) the determinant D for the function F(x, y) is positive, then at the point (x₀, y₀) there is a conditional maximum of the function f(x, y) if A = F″xx(x₀, y₀) < 0, and a conditional minimum of the function f(x, y) if A > 0.

Example. Let us again turn to the conditions of the previous example: find the extremum of the function z = x² + y² provided that x + y = 1. We will solve the problem using the Lagrange multiplier method.
The Lagrange function in this case has the form

F(x, y) = x² + y² + λ(x + y − 1).

To find the stationary points we compose the system

F′x = 2x + λ = 0,  F′y = 2y + λ = 0,  x + y = 1.

From the first two equations of the system we obtain x = y. Then from the third equation of the system (the constraint equation) we find x = y = 1/2, the coordinates of the point of a possible extremum; in this case λ = −1. Here d²F = 2dx² + 2dy² > 0, so the point (1/2, 1/2) is a conditional minimum point of the function z = x² + y² under the condition x + y = 1.

The absence of an unconditional extremum for the Lagrange function F(x, y) does not yet mean the absence of a conditional extremum for the function f(x, y) in the presence of the constraint.

Example. Find the extremum of the function z = xy under the condition y − x = 0. We compose the Lagrange function

F(x, y) = xy + λ(y − x)

and write out the system for determining λ and the coordinates of the possible extremum points:

F′x = y − λ = 0,  F′y = x + λ = 0,  y − x = 0.

From the first two equations we obtain x + y = 0, and together with the constraint equation this gives x = y = λ = 0. Thus the corresponding Lagrange function has the form F(x, y; 0) = xy. At the point (0, 0) the function F(x, y; 0) has no unconditional extremum, but the conditional extremum of the function z = xy for y = x does exist: indeed, in this case z = x², from which it is clear that at the point (0, 0) there is a conditional minimum.

The method of Lagrange multipliers carries over to the case of functions of any number of arguments. Let the extremum of the function f(x₁, x₂, …, x_n) be sought in the presence of the constraint equations

φ₁(x₁, …, x_n) = 0, …, φ_m(x₁, …, x_n) = 0.  (9)

We compose the Lagrange function

F = f + λ₁φ₁ + λ₂φ₂ + … + λ_mφ_m,

where λ₁, λ₂, …, λ_m are undetermined constant factors. Equating to zero all first-order partial derivatives of the function F and adding the constraint equations (9) to the equations obtained, we get a system of n + m equations, from which we determine λ₁, …, λ_m and the coordinates x₁, x₂, …, x_n of the possible points of the conditional extremum.
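A sketch of the general scheme with n = 3 and m = 2, on illustrative data (the function and both constraints below are my choices, not examples from the text):

```python
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)

f = x**2 + y**2 + z**2          # function of n = 3 arguments (illustrative)
g1 = x + y + z - 1              # constraint equations, m = 2 (illustrative)
g2 = x - y

# Lagrange function F = f + lambda1*g1 + lambda2*g2
F = f + l1*g1 + l2*g2

# n + m = 5 equations: all first-order partials of F plus the constraints
eqs = [sp.diff(F, v) for v in (x, y, z)] + [g1, g2]
sol = sp.solve(eqs, [x, y, z, l1, l2], dict=True)[0]
print(sol[x], sol[y], sol[z])   # 1/3 1/3 1/3
```

The system here is linear, so sympy returns the single candidate point x = y = z = 1/3 directly; for nonlinear constraints the same call may return several candidates that must each be examined.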
The question of whether the points found by the Lagrange method are really conditional extremum points can often be resolved on the basis of considerations of a physical or geometric nature.

15.3. The largest and smallest values of continuous functions

Let it be required to find the largest (smallest) value of a function z = f(x, y) continuous in some closed bounded domain D. By Theorem 3, in this domain there is a point (x₀, y₀) at which the function takes its largest (smallest) value. If the point (x₀, y₀) lies inside the domain D, then the function f has a maximum (minimum) there, so that in this case the point of interest to us is contained among the critical points of the function f(x, y). However, the function f(x, y) can also reach its largest (smallest) value on the boundary of the domain. Therefore, in order to find the largest (smallest) value taken by the function z = f(x, y) in a bounded closed domain D, it is necessary to find all the maxima (minima) of the function attained inside this domain, as well as the largest (smallest) value of the function on the boundary of this domain. The largest (smallest) of all these numbers will be the desired largest (smallest) value of the function z = f(x, y) in the domain D. Let us show how this is done in the case of a differentiable function.

Example. Find the largest and smallest values of the function z = x² + y² in the square |x| ≤ 1, |y| ≤ 1. We find the critical points of the function inside the domain D. To do this, we compose the system of equations z′x = 2x = 0, z′y = 2y = 0. From here we get x = y = 0, so that the point O(0, 0) is the critical point of the function, with z(0, 0) = 0. Let us now find the largest and smallest values of the function on the boundary Γ of the domain D. On the part of the boundary where x = 1 we have z = 1 + y², so that y = 0 is a critical point, and since z″yy = 2 > 0, at this point the function z = 1 + y² has a minimum equal to one. At the ends of this segment of Γ, at the points (1, −1) and (1, 1), we have z = 2.
Using symmetry considerations, we obtain the same results for the other parts of the boundary. Finally: the smallest value of the function z = x² + y² in the domain D is equal to zero and is attained at the interior point O(0, 0) of the domain, and the largest value of this function, equal to two, is attained at four points of the boundary (Fig. 25).

Exercises

Find the domains of definition of the following functions. Find the partial derivatives of the functions and their total differentials. Find the derivatives of the composite functions. 34. Using the formula for the derivative of a composite function of two variables, find the required derivatives of the functions. 35. Using the formula for the derivative of a composite function of two variables, find the total derivatives of the functions. Find the derivatives of the implicit functions. 40. Find the slope of the tangent to the curve at its point of intersection with the straight line x = 3. 41. Find the points at which the tangent to the curve is parallel to the x-axis. In the following problems, find z′x and z′y. Write the equations of the tangent plane and the normal to the surface. 49. Write the equations of the tangent planes to the surface x² + 2y² + 3z² = 21 parallel to the plane x + 4y + 6z = 0. Find the first three to four terms of the expansion by the Taylor formula: 50. in a neighborhood of the point (0, 0). Using the definition of the extremum of a function, investigate the following functions for an extremum. Using the sufficient conditions for the extremum of a function of two variables, investigate the functions for an extremum. 84. Find the largest and smallest values of the function z = x² − y² in a closed disk. 85. Find the largest and smallest values of the function z = x²y(4 − x − y) in the triangle bounded by the lines x = 0, y = 0, x + y = 6. 86.
Determine the dimensions of an open rectangular pool of the smallest surface area, given that its volume equals V. 87. Find the dimensions of the rectangular parallelepiped of maximum volume for a given total surface area S.

Answers

1. A square formed by segments of straight lines, including its sides. 3. A family of concentric rings. 4. The whole plane except the points of certain straight lines; the part of the plane located above the parabola y = −x². 8. The whole plane except certain straight lines. The radicand is non-negative in two cases, which is equivalent to an infinite series of inequalities; the domain of definition consists of the shaded squares (Fig. 26). The function is defined at isolated points. 9. a) Lines parallel to a given line; b) concentric circles centered at the origin. 10. a) Parabolas; b) hyperbolas. 12. Planes. 13. For some values of the parameter, one-sheeted hyperboloids of revolution about the Oz axis; for others, two-sheeted hyperboloids of revolution about the same axis; the two families of surfaces are separated by a cone. 17. a) There is no limit; b) 0. 18. Setting y = kx, we find that lim z depends on k, so that the given function has no limit at the point (0, 0). 19. a) The point (0, 0); b) the point (0, 0). 20. a) The line of discontinuity is the circle x² + y² = 1; b) the line of discontinuity is the straight line y = x. 21. a) The lines of discontinuity are the coordinate axes Ox and Oy; b) ∅ (the empty set). 22. All points (m, n), where m and n are integers.

Definition 1. A function z = f(x, y) is said to have a local maximum at a point M₀(x₀, y₀) if there exists a neighborhood of the point M₀ such that for any point M with coordinates (x, y) from this neighborhood the inequality f(M) < f(M₀) is fulfilled. In this case Δf = f(M) − f(M₀) < 0, i.e., the increment of the function is negative.

Definition 2. A function z = f(x, y) is said to have a local minimum at a point M₀(x₀, y₀) if there exists a neighborhood of the point M₀ such that for any point M with coordinates (x, y) from this neighborhood the inequality f(M) > f(M₀) is fulfilled. In this case Δf = f(M) − f(M₀) > 0, i.e., the increment of the function is positive.

Definition 3. Local minimum and local maximum points are called extremum points.
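The sign-of-the-increment definitions can be checked numerically; here is a sketch for the sample function f = x² + y² (an illustrative choice) at the point (0, 0):

```python
import numpy as np

f = lambda x, y: x**2 + y**2   # sample function with a minimum at (0, 0)

# Random points in a punctured neighborhood of (0, 0)
rng = np.random.default_rng(0)
dx, dy = rng.uniform(-0.1, 0.1, size=(2, 1000))
mask = (dx != 0) | (dy != 0)

# Increment of the function: Delta f = f(M) - f(M0)
increments = f(dx[mask], dy[mask]) - f(0.0, 0.0)

print(bool(np.all(increments > 0)))  # True: Delta f > 0, so (0, 0) is a local minimum
```

Every increment is dx² + dy² > 0, matching Definition 2.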

Conditional Extrema

When searching for extrema of a function of many variables, problems often arise related to the so-called conditional extremum. This concept can be explained by the example of a function of two variables.

Let a function z = f(x, y) and a line L in the plane 0xy be given. The task is to find on the line L a point P(x, y) at which the value of the function is the largest or smallest in comparison with the values of this function at the points of the line L located near the point P. Such points P are called conditional extremum points of the function on the line L. Unlike an ordinary extremum point, the function value at a conditional extremum point is compared with the function values not at all points of some of its neighborhood, but only at those that lie on the line L.

It is quite clear that an ordinary extremum point (one also says an unconditional extremum) is also a conditional extremum point for any line passing through this point. The converse, of course, is not true: a conditional extremum point need not be an ordinary extremum point. Let me explain this with a simple example. The graph of the function is the upper hemisphere (Appendix 3 (Fig. 3)).

This function has a maximum at the origin; it corresponds to the vertex M of the hemisphere. If the line L is the straight line passing through the points A and B (its equation is x + y − 1 = 0), then it is geometrically clear that for the points of this line the maximum value of the function is reached at the point lying midway between the points A and B. This is the conditional extremum (maximum) point of the function on the given line; it corresponds to the point M₁ on the hemisphere, and it can be seen from the figure that there can be no question of any ordinary extremum here.

Note that in the final part of the problem of finding the largest and smallest values ​​of a function in a closed region, we have to find the extremal values ​​of the function on the boundary of this region, i.e. on some line, and thereby solve the problem for a conditional extremum.

Let us now proceed to the practical search for the conditional extremum points of a function Z = f(x, y) provided that the variables x and y are related by the equation φ(x, y) = 0. This relation will be called the constraint equation. If from the constraint equation y can be expressed explicitly in terms of x, y = ψ(x), we get a function of one variable, Z = f(x, ψ(x)) = Φ(x).

Having found the value of x at which this function reaches an extremum, and then determining the corresponding values of y from the constraint equation, we obtain the desired conditional extremum points.

So, in the above example, from the constraint equation x + y − 1 = 0 we have y = 1 − x, whence z = √(1 − x² − (1 − x)²) = √(2x − 2x²).

It is easy to check that z reaches its maximum at x = 0.5; but then from the constraint equation y = 0.5, and we get exactly the point P found from geometric considerations.
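This is easy to confirm numerically (assuming, as the figure suggests, the unit hemisphere z = √(1 − x² − y²)):

```python
import numpy as np

# Along the constraint y = 1 - x, the hemisphere z = sqrt(1 - x^2 - y^2)
# becomes a function of x alone.
x = np.linspace(0.0, 1.0, 100001)
z = np.sqrt(np.clip(1.0 - x**2 - (1.0 - x)**2, 0.0, None))

x_best = x[np.argmax(z)]
print(round(x_best, 3), round(z.max(), 3))  # 0.5 0.707
```

The maximum lands at x = 0.5 with z = √0.5 ≈ 0.707, matching the midpoint of the segment AB.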

The conditional extremum problem is solved very simply even when the constraint equation can be represented by parametric equations x=x(t), y=y(t). Substituting the expressions for x and y into this function, we again come to the problem of finding the extremum of a function of one variable.

If the constraint equation has a more complex form, and we can neither express one variable explicitly in terms of the other nor replace the equation by parametric ones, then the problem of finding a conditional extremum becomes more difficult. We continue to assume that in the expression of the function z = f(x, y) the variables x and y are related by the constraint equation φ(x, y) = 0, which defines y as an implicit function of x. The total derivative of the function z = f(x, y) with respect to x is then equal to

dz/dx = f′x(x, y) + f′y(x, y)·y′,

where y′ is the derivative found by the rule of differentiation of the implicit function: y′ = −φ′x(x, y)/φ′y(x, y). At the points of the conditional extremum the total derivative found must be equal to zero; this gives one equation relating x and y. Since they must also satisfy the constraint equation, we get a system of two equations with two unknowns:

f′x(x, y) + f′y(x, y)·y′ = 0,  φ(x, y) = 0.
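A symbolic sketch of this implicit-differentiation step (the circle x² + y² = 1 below is an illustrative constraint, not one from the text):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

phi = x**2 + y**2 - 1                     # illustrative constraint phi(x, y) = 0

# Differentiate phi(x, y(x)) = 0 totally with respect to x and solve for y'
total = sp.diff(phi, x)                   # 2*x + 2*y(x)*y'(x)
y_prime = sp.solve(sp.Eq(total, 0), sp.diff(y, x))[0]
print(y_prime)                            # -x/y(x)
```

The result agrees with the rule y′ = −φ′x/φ′y, since here φ′x = 2x and φ′y = 2y.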

Let us transform this system into a much more convenient one by writing the first equation as a proportion and introducing a new auxiliary unknown λ:

f′x(x, y)/φ′x(x, y) = f′y(x, y)/φ′y(x, y) = −λ

(a minus sign is placed in front for convenience). It is easy to pass from these equalities to the following system:

f′x(x, y) + λφ′x(x, y) = 0,  f′y(x, y) + λφ′y(x, y) = 0,  (*)

which, together with the constraint equation φ(x, y) = 0, forms a system of three equations with the unknowns x, y, and λ.

These equations (*) are easiest to remember using the following rule: in order to find points that can be points of the conditional extremum of the function

Z = f(x, y) with the constraint equation φ(x, y) = 0, you need to form the auxiliary function

F(x, y) = f(x, y) + λφ(x, y),

where λ is some constant, and write down the equations for finding the extremum points of this function.

The indicated system of equations delivers, as a rule, only necessary conditions, i.e., not every pair of x and y values that satisfies this system is necessarily a conditional extremum point. I will not give sufficient conditions for conditional extremum points; very often the specific content of the problem itself suggests what the found point is. The described technique for solving problems on a conditional extremum is called the method of Lagrange multipliers.
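As a concrete sketch of this rule, here is the method applied symbolically to f = x² + y² with the constraint x + y − 1 = 0 (an illustrative choice of f; the constraint line is the same as in the hemisphere example above):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2            # function to extremize (illustrative choice)
phi = x + y - 1            # constraint phi(x, y) = 0

# Auxiliary (Lagrange) function F = f + lam*phi
F = f + lam * phi

# Necessary conditions: F_x = 0, F_y = 0, together with the constraint
system = [sp.diff(F, x), sp.diff(F, y), phi]
sols = sp.solve(system, [x, y, lam], dict=True)
print(sols)  # one candidate: x = 1/2, y = 1/2, lam = -1
```

Whether the candidate point really is a conditional extremum must still be decided separately, exactly as the text warns.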

A sufficient condition for an extremum of a function of two variables

1. Let the function z = f(x, y) be defined in some neighborhood of the point M₀(x₀, y₀) and have there continuous second-order partial derivatives (pure and mixed).

2. Denote by D the second-order determinant composed of these derivatives at the point M₀:

D = f″xx(x₀, y₀)·f″yy(x₀, y₀) − (f″xy(x₀, y₀))².


Theorem

If the point with coordinates (x₀, y₀) is a stationary point of the function z = f(x, y), then:

A) when D > 0, it is a local extremum point: a local maximum for f″xx(x₀, y₀) < 0, a local minimum for f″xx(x₀, y₀) > 0;

B) when D < 0, the point is not a local extremum point;

C) when D = 0, both cases are possible.

Proof

We write the Taylor formula for the function f(x, y) at the point (x₀, y₀), limiting ourselves to the terms of the second order:

f(x₀ + Δx, y₀ + Δy) = f(x₀, y₀) + f′x(x₀, y₀)Δx + f′y(x₀, y₀)Δy + ½[f″xx Δx² + 2f″xy ΔxΔy + f″yy Δy²],

where the second-order derivatives are evaluated at an intermediate point (x₀ + θΔx, y₀ + θΔy), 0 < θ < 1.

Since, according to the condition of the theorem, the point is stationary, the first-order partial derivatives vanish there, i.e., f′x(x₀, y₀) = 0 and f′y(x₀, y₀) = 0. Then

Δf = f(x₀ + Δx, y₀ + Δy) − f(x₀, y₀) = ½[f″xx Δx² + 2f″xy ΔxΔy + f″yy Δy²].

Denote A = f″xx(x₀, y₀), B = f″xy(x₀, y₀), C = f″yy(x₀, y₀).

Then the increment of the function takes the form

2Δf = A′Δx² + 2B′ΔxΔy + C′Δy²,

where A′, B′, C′ denote the second-order derivatives evaluated at the intermediate point.

Due to the continuity of the second-order partial derivatives (pure and mixed) at the point (x₀, y₀), according to the condition of the theorem, we can write A′ = A + α, B′ = B + β, C′ = C + γ, where α, β, γ → 0 as Δx → 0 and Δy → 0; hence

2Δf = (A + α)Δx² + 2(B + β)ΔxΔy + (C + γ)Δy².

1. Let D = AC − B² > 0. Then A ≠ 0 (otherwise D = −B² ≤ 0), i.e., either A > 0 or A < 0.

2. Multiply the trinomial AΔx² + 2BΔxΔy + CΔy² by A:

A(AΔx² + 2BΔxΔy + CΔy²) = A²Δx² + 2ABΔxΔy + ACΔy².

3. Complete the first two terms on the right to the full square of a sum:

A(AΔx² + 2BΔxΔy + CΔy²) = (AΔx + BΔy)² + (AC − B²)Δy².

4. The expression obtained is non-negative, since AC − B² > 0, and it vanishes only for Δx = Δy = 0; hence for (Δx, Δy) ≠ (0, 0) the trinomial has the sign of A, and for sufficiently small Δx, Δy so does 2Δf = (A + α)Δx² + 2(B + β)ΔxΔy + (C + γ)Δy².

5. Therefore, if D > 0 and A > 0, then Δf > 0 in a punctured neighborhood of the point and, according to the definition, the point is a local minimum point.

6. If D > 0 and A < 0, then Δf < 0, and, according to the definition, the point with coordinates (x₀, y₀) is a local maximum point.

2. Now let D = AC − B² < 0. Consider the quadratic trinomial A + 2Bt + Ct²; its discriminant 4(B² − AC) = −4D is positive.

3. Since the discriminant is positive, there are values t₁ and t₂ at which the trinomial A + 2Bt + Ct² takes values of opposite signs.

4. The total increment of the function at the point, in accordance with the expression obtained in part I, we write in the form

2Δf = (A + α)Δx² + 2(B + β)ΔxΔy + (C + γ)Δy².

5. Due to the continuity of the second-order partial derivatives, by the condition of the theorem, α, β, γ → 0 as Δx, Δy → 0; therefore there exists a neighborhood of the point in which, along a direction Δy = tΔx, the sign of the bracketed expression coincides with the sign of the trinomial A + 2Bt + Ct² whenever the latter is non-zero.

6. Consider a δ-neighborhood of the point. Choose any Δx ≠ 0 small enough that the point (x₀ + Δx, y₀ + t₁Δx) lies in this neighborhood; substituting Δy = t₁Δx into the formula for the increment of the function, we get

2Δf = Δx²[(A + α) + 2(B + β)t₁ + (C + γ)t₁²].

7. Since A + 2Bt₁ + Ct₁² ≠ 0, for sufficiently small Δx the sign of Δf is the sign of A + 2Bt₁ + Ct₁².

8. Arguing similarly for the root t₂, we get that in any δ-neighborhood of the point there is a point at which Δf has the opposite sign; therefore Δf does not preserve its sign in any neighborhood of the point, and hence there is no extremum at the point.
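The classification proved above is mechanical to apply; the sketch below runs it on an illustrative function f = x³ + y³ − 3xy (my choice, not one of the lecture's examples):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 + y**3 - 3*x*y   # illustrative function

# Stationary points: solve f_x = 0, f_y = 0
pts = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)

classification = {}
for p in pts:
    if not (p[x].is_real and p[y].is_real):
        continue
    # A = f_xx, B = f_xy, C = f_yy at the stationary point; D = AC - B^2
    A = sp.diff(f, x, 2).subs(p)
    B = sp.diff(f, x, y).subs(p)
    C = sp.diff(f, y, 2).subs(p)
    D = A*C - B**2
    if D > 0:
        kind = 'minimum' if A > 0 else 'maximum'
    elif D < 0:
        kind = 'no extremum'
    else:
        kind = 'inconclusive'
    classification[(p[x], p[y])] = kind

print(classification)  # (0, 0): 'no extremum', (1, 1): 'minimum'
```

At (0, 0) one gets D = −9 < 0 (a saddle), and at (1, 1) D = 27 > 0 with A = 6 > 0 (a minimum), in agreement with cases B) and A) of the theorem.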

Conditional extremum of a function of two variables

When searching for extrema of a function of two variables, problems often arise related to the so-called conditional extremum. This concept can be explained by the example of a function of two variables.

Let a function and a line L be given on the plane 0xy. The task is to find such a point P(x, y) on the line L at which the value of the function is the largest or smallest compared to the values of this function at the points of the line L located near the point P. Such points P are called conditional extremum points of the function on the line L. In contrast to the usual extremum point, the value of the function at the conditional extremum point is compared with the values of the function not at all points of some of its neighborhood, but only at those that lie on the line L.

It is quite clear that the point of the usual extremum (one also says the unconditional extremum) is also a point of conditional extremum for any line passing through this point. The converse, of course, is not true: a conditional extremum point need not be an ordinary extremum point. Let us illustrate what has been said with an example.

Example #1. The graph of the function z = √(1 − x² − y²) is the upper hemisphere (Fig. 2).

Fig. 2.

This function has a maximum at the origin; it corresponds to the vertex M of the hemisphere. If the line L is the straight line passing through the points A and B (its equation is x + y − 1 = 0), then it is geometrically clear that for the points of this line the maximum value of the function is reached at the point lying midway between the points A and B. This is the conditional extremum (maximum) point of the function on this line; it corresponds to the point M₁ on the hemisphere, and it can be seen from the figure that there can be no question of any ordinary extremum here.

Note that in the final part of the problem of finding the largest and smallest values ​​of a function in a closed region, one has to find the extremal values ​​of the function on the boundary of this region, i.e. on some line, and thereby solve the problem for a conditional extremum.

Definition 1. One says that the function z = f(x, y) has a conditional (or relative) maximum (minimum) at a point M₀(x₀, y₀) satisfying the equation φ(x, y) = 0 if for every point M(x, y) in some neighborhood of M₀ that satisfies the equation φ(x, y) = 0 the inequality f(M) ≤ f(M₀) (respectively, f(M) ≥ f(M₀)) is fulfilled.

Definition 2. An equation of the form φ(x, y) = 0 is called a constraint equation.

Theorem

If the functions f(x, y) and φ(x, y) are continuously differentiable in a neighborhood of a point M₀(x₀, y₀), the partial derivative φ′y(M₀) ≠ 0, and the point M₀ is a point of conditional extremum of the function f(x, y) with respect to the constraint equation φ(x, y) = 0, then the second-order determinant composed of the first derivatives of f and φ is equal to zero at this point:

| f′x  f′y |
| φ′x  φ′y | = 0,  i.e., f′x φ′y − f′y φ′x = 0 at M₀.

Proof

1. Since, according to the condition of the theorem, the partial derivative φ′y(M₀) ≠ 0 and the value of the function φ(x₀, y₀) = 0, in some rectangle about the point M₀ the constraint equation defines y as an implicit function y = ψ(x).

2. The composite function of one variable z = f(x, ψ(x)) has a local extremum at the point x₀, therefore its differential vanishes there; indeed, by the invariance property of the form of the first-order differential,

df = f′x dx + f′y dy = 0.  (2)

3. The constraint equation can be differentiated in the same way, which gives

dφ = φ′x dx + φ′y dy = 0.  (3)

4. Multiply equation (2) by φ′y and equation (3) by −f′y and add them:

(f′x φ′y − f′y φ′x) dx = 0.

Since dx is arbitrary, it follows that f′x φ′y − f′y φ′x = 0 at M₀, i.e., the determinant of the theorem vanishes. Q.E.D.

Corollary

In practice, the search for the conditional extremum points of a function of two variables is carried out by solving the system of equations

f′x(x, y) φ′y(x, y) − f′y(x, y) φ′x(x, y) = 0,  φ(x, y) = 0.

So, in the above Example #1, from the constraint equation x + y − 1 = 0 we have y = 1 − x. From here it is easy to check that z reaches a maximum at x = 0.5; but then from the constraint equation y = 0.5. We get the point P found geometrically.

Example #2. Find the conditional extremum points of the function with respect to the constraint equation.

Let us find the partial derivatives of the given function and of the constraint equation:

Let's make a second-order determinant:

Let's write down the system of equations for finding conditional extremum points:

hence, there are four conditional extremum points of the function with coordinates: .

Example #3. Find the extremum points of the function.

Equating the partial derivatives to zero, we find one stationary point, the origin. Here D < 0; therefore, the point (0, 0) is not an extremum point. The graph of the function is a hyperbolic paraboloid (Fig. 3), and the figure also shows that the point (0, 0) is not an extremum point.
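A quick numerical sketch, taking z = x·y as an assumed concrete hyperbolic paraboloid (an illustrative choice):

```python
# Hyperbolic paraboloid z = x*y (illustrative choice).
f = lambda x, y: x * y

eps = 1e-3
# Along y = x the function is positive; along y = -x it is negative:
up = f(eps, eps)       # eps^2 > 0
down = f(eps, -eps)    # -eps^2 < 0
print(up > 0, down < 0)  # True True

# Hessian determinant: D = f_xx*f_yy - f_xy^2 = 0*0 - 1^2 = -1 < 0, a saddle.
```

The increment changes sign in every neighborhood of (0, 0), so by definition the origin is not an extremum point.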

Fig. 3.

The largest and smallest value of a function in a closed area

1. Let the function be defined and continuous in a bounded closed domain D.

2. Let the function have finite partial derivatives in this region, except for individual points of the region.

3. In accordance with the Weierstrass theorem, in this region there are points at which the function takes its largest and its smallest values.

4. If these points are interior points of the region D, then it is obvious that they will have a maximum or a minimum.

5. In this case, the points of interest to us are among the points suspicious for an extremum (the critical points).

6. However, the function can also take on the maximum or minimum value on the boundary of the region D.

7. In order to find the largest (smallest) value of the function in the region D, you need to find all the interior points suspicious for an extremum, calculate the value of the function at them, then compare these with the values of the function at the boundary points of the region; the largest (smallest) of all the values found will be the largest (smallest) in the closed region D.

8. The method of finding a local maximum or minimum was considered earlier, in Sections 1.2 and 1.3.

9. It remains to consider the method of finding the maximum and minimum values ​​of the function on the boundary of the region.

10. In the case of a function of two variables, the area usually turns out to be bounded by a curve or several curves.

11. Along such a curve (or several curves), the variables x and y either depend on one another, or both depend on one parameter.

12. Thus, on the boundary, the function turns out to be dependent on one variable.

13. The method of finding the largest value of a function of one variable was discussed earlier.

14. Let the boundary of the region D be given by the parametric equations x = x(t), y = y(t).

Then on this curve the function of two variables becomes a composite function of the parameter t: z = f(x(t), y(t)). For such a function, the largest and smallest values are determined by the method used for a function of one variable.
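The boundary-parametrization step can be illustrated with a minimal numerical sketch. The function $f(x,y)=x+3y$ on the disk $x^2+y^2\le 10$ is my own illustrative choice, not an example from the text.

```python
import math

# Hypothetical example: f(x, y) = x + 3y on the closed disk x^2 + y^2 <= 10.
# grad f = (1, 3) never vanishes, so there are no interior critical points,
# and the extrema must lie on the boundary circle x^2 + y^2 = 10.
def f(x, y):
    return x + 3 * y

# Parametrize the boundary: x = sqrt(10) cos t, y = sqrt(10) sin t, t in [0, 2*pi).
# On the boundary, f becomes a function of the single parameter t.
r = math.sqrt(10)
values = [f(r * math.cos(2 * math.pi * k / 100000),
            r * math.sin(2 * math.pi * k / 100000)) for k in range(100000)]
largest, smallest = max(values), min(values)
# Analytically, the extrema are +-sqrt(1**2 + 3**2) * sqrt(10) = +-10.
```

Comparing these boundary values with the (here absent) interior critical values is exactly the procedure of step 7 above.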

Extrema of functions of several variables. A necessary condition for an extremum. A sufficient condition for an extremum. Conditional extremum. The method of Lagrange multipliers. Finding the largest and smallest values.

Lecture 5

Definition 5.1. A point $M_0(x_0, y_0)$ is called a maximum point of the function $z = f(x, y)$ if $f(x_0, y_0) > f(x, y)$ for all points $(x, y)$ from some neighborhood of the point $M_0$.

Definition 5.2. A point $M_0(x_0, y_0)$ is called a minimum point of the function $z = f(x, y)$ if $f(x_0, y_0) < f(x, y)$ for all points $(x, y)$ from some neighborhood of the point $M_0$.

Remark 1. Maximum and minimum points are called extremum points of a function of several variables.

Remark 2. The extremum point for a function of any number of variables is defined in a similar way.

Theorem 5.1 (necessary conditions for an extremum). If $M_0(x_0, y_0)$ is an extremum point of the function $z = f(x, y)$, then at this point the first-order partial derivatives of this function are equal to zero or do not exist.

Proof.

Let us fix the value of the variable $y$, setting $y = y_0$. Then the function $f(x, y_0)$ is a function of the single variable $x$, for which $x = x_0$ is an extremum point. Therefore, by Fermat's theorem, $f'_x(x_0, y_0)$ equals zero or does not exist. The same assertion is proved for $f'_y(x_0, y_0)$.

Definition 5.3. Points belonging to the domain of a function of several variables at which the partial derivatives of the function are equal to zero or do not exist are called stationary points of this function.

Comment. Thus, the extremum can be reached only at stationary points, but it is not necessarily observed at each of them.

Theorem 5.2 (sufficient conditions for an extremum). Let, in some neighborhood of the point $M_0(x_0, y_0)$, which is a stationary point of the function $z = f(x, y)$, this function have continuous partial derivatives up to the 3rd order inclusive. Denote $A = f''_{xx}(M_0)$, $B = f''_{xy}(M_0)$, $C = f''_{yy}(M_0)$. Then:

1) $f(x, y)$ has a maximum at the point $M_0$ if $AC - B^2 > 0$, $A < 0$;

2) $f(x, y)$ has a minimum at the point $M_0$ if $AC - B^2 > 0$, $A > 0$;

3) there is no extremum at the critical point if $AC - B^2 < 0$;

4) if $AC - B^2 = 0$, additional investigation is needed.

Proof.

Let us write the second-order Taylor formula for the function $f(x, y)$, keeping in mind that at a stationary point the first-order partial derivatives are equal to zero:

$$\Delta f = f(x_0+\Delta x,\, y_0+\Delta y) - f(x_0, y_0) = \frac{1}{2}\left(A\,\Delta x^2 + 2B\,\Delta x\,\Delta y + C\,\Delta y^2\right) + o(\Delta\rho^2),$$

where $\Delta\rho = \sqrt{\Delta x^2 + \Delta y^2}$. If the angle between the segment $M_0M$, where $M(x_0+\Delta x,\, y_0+\Delta y)$, and the axis $Ox$ is denoted by $\varphi$, then $\Delta x = \Delta\rho\cos\varphi$, $\Delta y = \Delta\rho\sin\varphi$. In this case, the Taylor formula takes the form:

$$\Delta f = \frac{1}{2}\Delta\rho^2\left(A\cos^2\varphi + 2B\cos\varphi\sin\varphi + C\sin^2\varphi\right) + o(\Delta\rho^2).$$

Let $A \neq 0$. Then we can multiply and divide the expression in parentheses by $A$. We get:

$$\Delta f = \frac{\Delta\rho^2}{2A}\left((A\cos\varphi + B\sin\varphi)^2 + (AC - B^2)\sin^2\varphi\right) + o(\Delta\rho^2). \qquad (5.1)$$

Consider now four possible cases:

1) $AC - B^2 > 0$, $A < 0$. Then the expression in parentheses in (5.1) is positive, so $\Delta f < 0$ for sufficiently small $\Delta\rho$. Therefore, in some neighborhood of $M_0$, $f(x_0 + \Delta x,\, y_0 + \Delta y) < f(x_0, y_0)$, i.e. $M_0$ is a maximum point.

2) Let $AC - B^2 > 0$, $A > 0$. Then $\Delta f > 0$ for sufficiently small $\Delta\rho$, and $M_0$ is a minimum point.

3) Let $AC - B^2 < 0$, $A > 0$. Consider the increment of the arguments along the ray $\varphi = 0$. Then it follows from (5.1) that $\Delta f > 0$, that is, the function increases when moving along this ray. If instead we move along the ray with $\tan\varphi_0 = -A/B$, then the expression in parentheses reduces to $(AC - B^2)\sin^2\varphi_0 < 0$, hence $\Delta f < 0$: the function decreases along this ray. So the point $M_0$ is not an extremum point.

3') When $AC - B^2 < 0$, $A < 0$, the proof of the absence of an extremum is carried out similarly to the previous case.

3'') If $AC - B^2 < 0$, $A = 0$, then $B \neq 0$ and the expression in parentheses becomes $\sin\varphi\,(2B\cos\varphi + C\sin\varphi)$. For sufficiently small $\varphi$, the expression $2B\cos\varphi + C\sin\varphi$ is close to $2B$, that is, it retains a constant sign, while $\sin\varphi$ changes sign as $\varphi$ passes through zero. This means that the increment of the function changes sign in any neighborhood of the stationary point, which is therefore not an extremum point.

4) If $AC - B^2 = 0$ and $A \neq 0$, then the expression in parentheses reduces to $(A\cos\varphi + B\sin\varphi)^2$, so the sign of the increment is determined by the sign of $A$, except along the ray where $A\cos\varphi + B\sin\varphi = 0$, where the higher-order terms decide. Further investigation is therefore needed to settle the question of the existence of an extremum.

Example. Let us find the extremum points of the function $z = x^2 - 2xy + 2y^2 + 2x$. To find the stationary points, we solve the system $\begin{cases} 2x - 2y + 2 = 0, \\ -2x + 4y = 0. \end{cases}$ So the stationary point is $(-2, -1)$. Here $A = 2$, $B = -2$, $C = 4$. Then $AC - B^2 = 4 > 0$; consequently, an extremum is attained at the stationary point, namely a minimum (since $A > 0$).
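The test of Theorem 5.2 and this worked example can be checked with a short sketch of my own; the names `classify`, `A`, `B`, `C` are mine, not the text's.

```python
# A sketch of the second-derivative test of Theorem 5.2, applied to the
# worked example z = x^2 - 2xy + 2y^2 + 2x.
# A, B, C are the second partial derivatives f_xx, f_xy, f_yy at the point.
def classify(A, B, C):
    d = A * C - B * B
    if d > 0:
        return "minimum" if A > 0 else "maximum"
    if d < 0:
        return "no extremum"
    return "inconclusive"

# Stationary point: z_x = 2x - 2y + 2 = 0, z_y = -2x + 4y = 0  =>  x = 2y,
# so 4y - 2y + 2 = 0  =>  y = -1, x = -2.
x, y = -2.0, -1.0
assert (2 * x - 2 * y + 2, -2 * x + 4 * y) == (0.0, 0.0)

kind = classify(2.0, -2.0, 4.0)              # AC - B^2 = 4 > 0, A > 0
z_min = x**2 - 2 * x * y + 2 * y**2 + 2 * x  # value at the minimum point
```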

Definition 5.4. If the arguments of the function $f(x_1, x_2, \ldots, x_n)$ are bound by additional conditions in the form of $m$ equations ($m < n$):

$$\varphi_1(x_1, x_2, \ldots, x_n) = 0, \quad \varphi_2(x_1, x_2, \ldots, x_n) = 0, \quad \ldots, \quad \varphi_m(x_1, x_2, \ldots, x_n) = 0, \qquad (5.2)$$

where the functions $\varphi_i$ have continuous partial derivatives, then equations (5.2) are called constraint equations.

Definition 5.5. An extremum of the function $f(x_1, x_2, \ldots, x_n)$ under conditions (5.2) is called a conditional extremum.

Comment. We can offer the following geometric interpretation of the conditional extremum of a function of two variables: let the arguments of the function $f(x, y)$ be related by the equation $\varphi(x, y) = 0$, defining some curve in the plane $Oxy$. Raising from each point of this curve a perpendicular to the plane $Oxy$ until it meets the surface $z = f(x, y)$, we obtain a spatial curve lying on the surface above the curve $\varphi(x, y) = 0$. The problem is to find the extremum points of the resulting curve, which, of course, in the general case do not coincide with the unconditional extremum points of the function $f(x, y)$.

Let us derive the necessary conditions for a conditional extremum of a function of two variables, first introducing the following definition:

Definition 5.6. The function

$$L(x_1, x_2, \ldots, x_n) = f(x_1, x_2, \ldots, x_n) + \lambda_1\varphi_1(x_1, x_2, \ldots, x_n) + \lambda_2\varphi_2(x_1, x_2, \ldots, x_n) + \ldots + \lambda_m\varphi_m(x_1, x_2, \ldots, x_n), \qquad (5.3)$$

where the $\lambda_i$ are some constants, is called the Lagrange function, and the numbers $\lambda_i$ are called indefinite Lagrange multipliers.

Theorem 5.3 (necessary conditions for a conditional extremum). A conditional extremum of the function $z = f(x, y)$ in the presence of the constraint equation $\varphi(x, y) = 0$ can be attained only at stationary points of the Lagrange function $L(x, y) = f(x, y) + \lambda\varphi(x, y)$.

Proof. The constraint equation defines an implicit dependence of $y$ on $x$, so we assume that $y$ is a function of $x$: $y = y(x)$. Then $z$ is a composite function of $x$, and its critical points are determined by the condition

$$\frac{dz}{dx} = f'_x + f'_y\,y' = 0. \qquad (5.4)$$

It follows from the constraint equation that

$$\varphi'_x + \varphi'_y\,y' = 0. \qquad (5.5)$$

We multiply equality (5.5) by some number $\lambda$ and add it to (5.4). We get:

$$(f'_x + \lambda\varphi'_x) + (f'_y + \lambda\varphi'_y)\,y' = 0.$$

The last equality must hold at stationary points, from which it follows:

$$\begin{cases} f'_x + \lambda\varphi'_x = 0, \\ f'_y + \lambda\varphi'_y = 0, \\ \varphi(x, y) = 0. \end{cases} \qquad (5.6)$$

We obtain a system of three equations in three unknowns $x$, $y$, and $\lambda$, the first two equations being the conditions for a stationary point of the Lagrange function. Eliminating the auxiliary unknown $\lambda$ from system (5.6), we find the coordinates of the points at which the original function can have a conditional extremum.

Remark 1. The presence of a conditional extremum at the found point can be checked by studying the second-order partial derivatives of the Lagrange function by analogy with Theorem 5.2.

Remark 2. The points at which a conditional extremum of the function $f(x_1, x_2, \ldots, x_n)$ under conditions (5.2) can be attained may be found as solutions of the system (5.7)

Example. Find the conditional extremum of the function $z = xy$ given that $x + y = 1$. Compose the Lagrange function $L(x, y) = xy + \lambda(x + y - 1)$. System (5.6) then looks like this:

$$\begin{cases} y + \lambda = 0, \\ x + \lambda = 0, \\ x + y = 1. \end{cases}$$

Whence $-2\lambda = 1$, $\lambda = -0.5$, $x = y = -\lambda = 0.5$. On the constraint line, $z = xy = x(1 - x) = \frac{1}{4} - \left(x - \frac{1}{2}\right)^2 \le \frac{1}{4}$; therefore, at the found stationary point, the function $z = xy$ has a conditional maximum, $z_{\max} = \frac{1}{4}$.

Let us first consider the case of a function of two variables. The conditional extremum of the function $z=f(x,y)$ at the point $M_0(x_0;y_0)$ is the extremum of this function attained under the condition that the variables $x$ and $y$ in the vicinity of this point satisfy the constraint equation $\varphi(x,y)=0$.

The name "conditional" extremum is due to the fact that the additional condition $\varphi(x,y)=0$ is imposed on the variables. If one variable can be expressed in terms of the other from the constraint equation, then the problem of determining the conditional extremum reduces to the problem of the ordinary extremum of a function of one variable. For example, if $y=\psi(x)$ follows from the constraint equation, then substituting $y=\psi(x)$ into $z=f(x,y)$ gives a function of one variable $z=f\left(x,\psi(x)\right)$. In the general case, however, this method is of little use, so a new algorithm is required.

Method of Lagrange multipliers for functions of two variables.

The method of Lagrange multipliers consists in composing, to find the conditional extremum, the Lagrange function $F(x,y)=f(x,y)+\lambda\varphi(x,y)$ (the parameter $\lambda$ is called the Lagrange multiplier). The necessary extremum conditions are given by a system of equations from which the stationary points are determined:

$$\left\{\begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0.\end{aligned}\right.$$

The nature of the extremum is determined by the sign of $d^2F=F''_{xx}dx^2+2F''_{xy}dxdy+F''_{yy}dy^2$. If at a stationary point $d^2F > 0$, then the function $z=f(x,y)$ has a conditional minimum at this point; if $d^2F < 0$, a conditional maximum.

There is another way to determine the nature of the extremum. From the constraint equation we get $\varphi'_x dx+\varphi'_y dy=0$, i.e. $dy=-\frac{\varphi'_x}{\varphi'_y}dx$ (assuming $\varphi'_y\neq 0$), so at any stationary point we have:

$$d^2F=F''_{xx}dx^2+2F''_{xy}dxdy+F''_{yy}dy^2=F''_{xx}dx^2+2F''_{xy}dx\left(-\frac{\varphi'_x}{\varphi'_y}dx\right)+F''_{yy}\left(-\frac{\varphi'_x}{\varphi'_y}dx\right)^2=\\ =-\frac{dx^2}{\left(\varphi'_y\right)^2}\cdot\left(-\left(\varphi'_y\right)^2 F''_{xx}+2\varphi'_x\varphi'_y F''_{xy}-\left(\varphi'_x\right)^2 F''_{yy}\right)$$

The second factor (in parentheses) can be represented as the determinant

$$H=\left|\begin{array}{ccc} 0 & \varphi'_x & \varphi'_y\\ \varphi'_x & F''_{xx} & F''_{xy}\\ \varphi'_y & F''_{xy} & F''_{yy}\end{array}\right|,$$

which is the bordered Hessian of the Lagrange function. If $H > 0$, then $d^2F < 0$, which indicates a conditional maximum. Similarly, for $H < 0$ we have $d^2F > 0$, i.e. a conditional minimum of the function $z=f(x,y)$.

A note on the form of the determinant $H$.

$$H=-\left|\begin{array}{ccc} 0 & \varphi'_x & \varphi'_y\\ \varphi'_x & F''_{xx} & F''_{xy}\\ \varphi'_y & F''_{xy} & F''_{yy}\end{array}\right|$$

In this situation, the rule formulated above changes as follows: if $H > 0$, then the function has a conditional minimum, and for $H < 0$ we get a conditional maximum of the function $z=f(x,y)$. When solving problems, such nuances should be taken into account.

Algorithm for studying a function of two variables for a conditional extremum

  1. Compose the Lagrange function $F(x,y)=f(x,y)+\lambda\varphi(x,y)$.
  2. Solve the system $\left\{\begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0.\end{aligned}\right.$
  3. Determine the nature of the extremum at each of the stationary points found in the previous paragraph. To do this, use any of the following methods:
    • Compose the determinant $H$ and find out its sign
    • Taking into account the constraint equation, calculate the sign of $d^2F$
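Step 3 of the algorithm can be sketched as a small routine computing the determinant $H$; the function name `bordered_H` and the test values below are my own assumptions.

```python
# A sketch of the H-sign test for a function of two variables.
# phi_x, phi_y are the constraint derivatives; Fxx, Fxy, Fyy are the second
# derivatives of the Lagrange function, all evaluated at a stationary point.
def bordered_H(phi_x, phi_y, Fxx, Fxy, Fyy):
    # H = | 0      phi_x  phi_y |
    #     | phi_x  Fxx    Fxy   |
    #     | phi_y  Fxy    Fyy   |   (expanded along the first row)
    return (-phi_x * (phi_x * Fyy - Fxy * phi_y)
            + phi_y * (phi_x * Fxy - Fxx * phi_y))

# For z = xy with x + y = 1 (the example earlier in the text):
# F = xy + lam*(x + y - 1), so Fxx = 0, Fxy = 1, Fyy = 0, phi_x = phi_y = 1.
H_max = bordered_H(1, 1, 0, 1, 0)   # H > 0  ->  conditional maximum
# For f = x^2 + y^2 on the same line (my own extra check): Fxx = Fyy = 2, Fxy = 0.
H_min = bordered_H(1, 1, 2, 0, 2)   # H < 0  ->  conditional minimum
```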

Lagrange multiplier method for functions of n variables

Suppose we have a function of $n$ variables $z=f(x_1,x_2,\ldots,x_n)$ and $m$ constraint equations ($n > m$):

$$\varphi_1(x_1,x_2,\ldots,x_n)=0; \; \varphi_2(x_1,x_2,\ldots,x_n)=0,\ldots,\varphi_m(x_1,x_2,\ldots,x_n)=0.$$

Denoting the Lagrange multipliers as $\lambda_1,\lambda_2,\ldots,\lambda_m$, we compose the Lagrange function:

$$F(x_1,x_2,\ldots,x_n,\lambda_1,\lambda_2,\ldots,\lambda_m)=f+\lambda_1\varphi_1+\lambda_2\varphi_2+\ldots+\lambda_m\varphi_m$$

The necessary conditions for the presence of a conditional extremum are given by a system of equations from which the coordinates of stationary points and the values ​​of the Lagrange multipliers are found:

$$\left\{\begin{aligned} & \frac{\partial F}{\partial x_i}=0; \quad (i=\overline{1,n})\\ & \varphi_j=0; \quad (j=\overline{1,m}) \end{aligned}\right.$$

Whether the function has a conditional minimum or a conditional maximum at the found point can be determined, as before, from the sign of $d^2F$. If at the found point $d^2F > 0$, then the function has a conditional minimum; if $d^2F < 0$, a conditional maximum. Alternatively, one can consider the following matrix:

The matrix $L$ is obtained by bordering the Hessian of the Lagrange function, i.e. the determinant $\left| \begin{array}{ccccc} \frac{\partial^2F}{\partial x_1^2} & \frac{\partial^2F}{\partial x_1\partial x_2} & \frac{\partial^2F}{\partial x_1\partial x_3} & \ldots & \frac{\partial^2F}{\partial x_1\partial x_n}\\ \frac{\partial^2F}{\partial x_2\partial x_1} & \frac{\partial^2F}{\partial x_2^2} & \frac{\partial^2F}{\partial x_2\partial x_3} & \ldots & \frac{\partial^2F}{\partial x_2\partial x_n}\\ \frac{\partial^2F}{\partial x_3\partial x_1} & \frac{\partial^2F}{\partial x_3\partial x_2} & \frac{\partial^2F}{\partial x_3^2} & \ldots & \frac{\partial^2F}{\partial x_3\partial x_n}\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ \frac{\partial^2F}{\partial x_n\partial x_1} & \frac{\partial^2F}{\partial x_n\partial x_2} & \frac{\partial^2F}{\partial x_n\partial x_3} & \ldots & \frac{\partial^2F}{\partial x_n^2} \end{array} \right|$, with the rows and columns of the first-order derivatives of the constraint functions $\varphi_j$. We use the following rule:

    • If the signs of the corner minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ of the matrix $L$ coincide with the sign of $(-1)^m$, then the stationary point under study is a conditional minimum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$.
    • If the signs of the corner minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ alternate, and the sign of the minor $H_{2m+1}$ coincides with the sign of the number $(-1)^{m+1}$, then the stationary point under study is a conditional maximum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$.

Example #1

Find the conditional extremum of the function $z(x,y)=x+3y$ under the condition $x^2+y^2=10$.

The geometric interpretation of this problem is as follows: it is required to find the largest and smallest value of the applicate (the $z$-coordinate) of the plane $z=x+3y$ at the points of its intersection with the cylinder $x^2+y^2=10$.

It is somewhat difficult to express one variable in terms of another from the constraint equation and substitute it into the function $z(x,y)=x+3y$, so we will use the Lagrange method.

Denoting $\varphi(x,y)=x^2+y^2-10$, we compose the Lagrange function:

$$F(x,y)=z(x,y)+\lambda\varphi(x,y)=x+3y+\lambda(x^2+y^2-10);\\ \frac{\partial F}{\partial x}=1+2\lambda x; \quad \frac{\partial F}{\partial y}=3+2\lambda y.$$

Let us write down the system of equations for determining the stationary points of the Lagrange function:

$$\left\{\begin{aligned} & 1+2\lambda x=0;\\ & 3+2\lambda y=0;\\ & x^2+y^2-10=0.\end{aligned}\right.$$

If we assume $\lambda=0$, then the first equation becomes: $1=0$. The resulting contradiction says that $\lambda\neq 0$. Under the condition $\lambda\neq 0$, from the first and second equations we have: $x=-\frac(1)(2\lambda)$, $y=-\frac(3)(2\lambda)$. Substituting the obtained values ​​into the third equation, we get:

$$\left(-\frac{1}{2\lambda}\right)^2+\left(-\frac{3}{2\lambda}\right)^2-10=0;\\ \frac{1}{4\lambda^2}+\frac{9}{4\lambda^2}=10; \quad \lambda^2=\frac{1}{4}; \quad \left[\begin{aligned} & \lambda_1=-\frac{1}{2};\\ & \lambda_2=\frac{1}{2}.\end{aligned}\right.\\ \begin{aligned} & \lambda_1=-\frac{1}{2}; \; x_1=-\frac{1}{2\lambda_1}=1; \; y_1=-\frac{3}{2\lambda_1}=3;\\ & \lambda_2=\frac{1}{2}; \; x_2=-\frac{1}{2\lambda_2}=-1; \; y_2=-\frac{3}{2\lambda_2}=-3.\end{aligned}$$

So, the system has two solutions: $x_1=1;\; y_1=3;\; \lambda_1=-\frac{1}{2}$ and $x_2=-1;\; y_2=-3;\; \lambda_2=\frac{1}{2}$. Let us find out the nature of the extremum at each stationary point: $M_1(1;3)$ and $M_2(-1;-3)$. To do this, we calculate the determinant $H$ at each of the points.

$$\varphi'_x=2x;\; \varphi'_y=2y;\; F''_{xx}=2\lambda;\; F''_{xy}=0;\; F''_{yy}=2\lambda.\\ H=\left|\begin{array}{ccc} 0 & \varphi'_x & \varphi'_y\\ \varphi'_x & F''_{xx} & F''_{xy}\\ \varphi'_y & F''_{xy} & F''_{yy}\end{array}\right|= \left|\begin{array}{ccc} 0 & 2x & 2y\\ 2x & 2\lambda & 0\\ 2y & 0 & 2\lambda\end{array}\right|= 8\cdot\left|\begin{array}{ccc} 0 & x & y\\ x & \lambda & 0\\ y & 0 & \lambda\end{array}\right|$$

At the point $M_1(1;3)$ we get: $H=8\cdot\left|\begin{array}{ccc} 0 & x & y\\ x & \lambda & 0\\ y & 0 & \lambda\end{array}\right|= 8\cdot\left|\begin{array}{ccc} 0 & 1 & 3\\ 1 & -1/2 & 0\\ 3 & 0 & -1/2\end{array}\right|=40 > 0$, so at the point $M_1(1;3)$ the function $z(x,y)=x+3y$ has a conditional maximum, $z_{\max}=z(1;3)=10$.

Similarly, at the point $M_2(-1;-3)$ we find: $H=8\cdot\left|\begin{array}{ccc} 0 & x & y\\ x & \lambda & 0\\ y & 0 & \lambda\end{array}\right|= 8\cdot\left|\begin{array}{ccc} 0 & -1 & -3\\ -1 & 1/2 & 0\\ -3 & 0 & 1/2\end{array}\right|=-40$. Since $H < 0$, at the point $M_2(-1;-3)$ we have a conditional minimum of the function $z(x,y)=x+3y$, namely $z_{\min}=z(-1;-3)=-10$.

I note that instead of calculating the value of the determinant $H$ at each point, it is much more convenient to expand it in general form. In order not to clutter up the text with details, I will hide this method under a note.

The determinant $H$ in general form.

$$H=8\cdot\left|\begin{array}{ccc} 0 & x & y\\ x & \lambda & 0\\ y & 0 & \lambda\end{array}\right| =8\cdot\left(-\lambda y^2-\lambda x^2\right) =-8\lambda\cdot\left(y^2+x^2\right).$$

In principle, it is already obvious which sign $H$ has. Since none of the points $M_1$ or $M_2$ coincides with the origin, then $y^2+x^2>0$. Therefore, the sign of $H$ is opposite to the sign of $\lambda$. You can also complete the calculations:

$$\begin{aligned} &H(M_1)=-8\cdot\left(-\frac{1}{2}\right)\cdot\left(3^2+1^2\right)=40;\\ &H(M_2)=-8\cdot\frac{1}{2}\cdot\left((-3)^2+(-1)^2\right)=-40.\end{aligned}$$
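The general-form expansion above can be double-checked numerically; the helper names below are my own.

```python
# Verify that | 0 x y ; x lam 0 ; y 0 lam | expands to -lam*(x^2 + y^2),
# so that H = 8 * det = -8*lam*(x^2 + y^2).
def bordered_det(lam, x, y):
    # expansion along the first row:
    # 0*(lam*lam) - x*(x*lam - 0) + y*(0 - lam*y)
    return -x * (x * lam) + y * (-lam * y)

def H(lam, x, y):
    return 8 * bordered_det(lam, x, y)

for lam, x, y in [(-0.5, 1, 3), (0.5, -1, -3)]:
    assert H(lam, x, y) == -8 * lam * (x**2 + y**2)

H1 = H(-0.5, 1, 3)    # at M1(1; 3):   H > 0, conditional maximum
H2 = H(0.5, -1, -3)   # at M2(-1;-3):  H < 0, conditional minimum
```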

The question about the nature of the extremum at the stationary points $M_1(1;3)$ and $M_2(-1;-3)$ can be solved without using the determinant $H$. Find the sign of $d^2F$ at each stationary point:

$$d^2F=F''_{xx}dx^2+2F''_{xy}dxdy+F''_{yy}dy^2=2\lambda\left(dx^2+dy^2\right)$$

I note that the notation $dx^2$ means exactly $dx$ raised to the second power, i.e. $\left(dx\right)^2$. Hence we have $dx^2+dy^2>0$, so for $\lambda_1=-\frac{1}{2}$ we get $d^2F < 0$. Consequently, the function has a conditional maximum at the point $M_1(1;3)$. Similarly, at the point $M_2(-1;-3)$ we get a conditional minimum of the function $z(x,y)=x+3y$. Note that to determine the sign of $d^2F$ there was no need to take into account the relation between $dx$ and $dy$, since the sign of $d^2F$ is obvious without additional transformations. In the following example, determining the sign of $d^2F$ will already require taking that relation into account.

Answer: at the point $(-1;-3)$ the function has a conditional minimum, $z_{\min}=-10$. At the point $(1;3)$ the function has a conditional maximum, $z_{\max}=10$.

Example #2

Find the conditional extremum of the function $z(x,y)=3y^3+4x^2-xy$ under the condition $x+y=0$.

The first way (the method of Lagrange multipliers)

Denoting $\varphi(x,y)=x+y$, we compose the Lagrange function: $F(x,y)=z(x,y)+\lambda\varphi(x,y)=3y^3+4x^2-xy+\lambda(x+y)$.

$$\frac{\partial F}{\partial x}=8x-y+\lambda; \quad \frac{\partial F}{\partial y}=9y^2-x+\lambda.\\ \left\{\begin{aligned} & 8x-y+\lambda=0;\\ & 9y^2-x+\lambda=0;\\ & x+y=0.\end{aligned}\right.$$

Solving the system, we get: $x_1=0$, $y_1=0$, $\lambda_1=0$ and $x_2=\frac{10}{9}$, $y_2=-\frac{10}{9}$, $\lambda_2=-10$. We have two stationary points: $M_1(0;0)$ and $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$. Let us find out the nature of the extremum at each stationary point using the determinant $H$.

$$H=\left|\begin{array}{ccc} 0 & \varphi'_x & \varphi'_y\\ \varphi'_x & F''_{xx} & F''_{xy}\\ \varphi'_y & F''_{xy} & F''_{yy}\end{array}\right|= \left|\begin{array}{ccc} 0 & 1 & 1\\ 1 & 8 & -1\\ 1 & -1 & 18y\end{array}\right|=-10-18y$$

At the point $M_1(0;0)$: $H=-10-18\cdot 0=-10 < 0$, so $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$, $z_{\min}=0$. At the point $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$: $H=10 > 0$, so at this point the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.

We investigate the nature of the extremum at each of the points by a different method, based on the sign of $d^2F$:

$$d^2F=F''_{xx}dx^2+2F''_{xy}dxdy+F''_{yy}dy^2=8dx^2-2dxdy+18ydy^2$$

From the constraint equation $x+y=0$ we have: $d(x+y)=0$, $dx+dy=0$, $dy=-dx$.

$$d^2F=8dx^2-2dxdy+18ydy^2=8dx^2-2dx(-dx)+18y(-dx)^2=(10+18y)dx^2$$

Since $d^2F\Bigr|_{M_1}=10\,dx^2 > 0$, $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$. Similarly, $d^2F\Bigr|_{M_2}=-10\,dx^2 < 0$, i.e. $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$ is a conditional maximum point.
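The coefficient $(10+18y)$ governing the sign of $d^2F$ can be checked at both stationary points with a few lines (my own sketch):

```python
# In Example #2, substituting dy = -dx (from x + y = 0) into
# d^2F = 8 dx^2 - 2 dx dy + 18 y dy^2 gives (8 + 2 + 18y) dx^2.
def d2F_coeff(y):
    return 8 + 2 + 18 * y

c1 = d2F_coeff(0.0)          # at M1(0, 0):       10 > 0, conditional minimum
c2 = d2F_coeff(-10.0 / 9.0)  # at M2(10/9,-10/9): -10 < 0, conditional maximum
```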

Second way

From the constraint equation $x+y=0$ we get: $y=-x$. Substituting $y=-x$ into the function $z(x,y)=3y^3+4x^2-xy$, we obtain some function of the variable $x$. Let's denote this function as $u(x)$:

$$ u(x)=z(x,-x)=3\cdot(-x)^3+4x^2-x\cdot(-x)=-3x^3+5x^2. $$

Thus, we reduced the problem of finding the conditional extremum of a function of two variables to the problem of determining the extremum of a function of one variable.

$$u'(x)=-9x^2+10x;\\ -9x^2+10x=0; \; x\cdot(-9x+10)=0;\\ x_1=0; \; y_1=-x_1=0;\\ x_2=\frac{10}{9}; \; y_2=-x_2=-\frac{10}{9}.$$

We obtained the points $M_1(0;0)$ and $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$. Further investigation is known from the course of differential calculus of functions of one variable: examining the sign of $u''(x)$ at each stationary point, or checking the sign change of $u'(x)$ at the found points, we arrive at the same conclusions as with the first method. For example, let us check the sign of $u''(x)$:

$$u''(x)=-18x+10;\\ u''(M_1)=10; \; u''(M_2)=-10.$$

Since $u''(M_1)>0$, $M_1$ is a minimum point of the function $u(x)$, with $u_{\min}=u(0)=0$. Since $u''(M_2)<0$, $M_2$ is a maximum point of the function $u(x)$, with $u_{\max}=u\left(\frac{10}{9}\right)=\frac{500}{243}$.

The values of the function $u(x)$ under the given constraint condition coincide with the values of the function $z(x,y)$, i.e. the found extrema of the function $u(x)$ are the desired conditional extrema of the function $z(x,y)$.

Answer: at the point $(0;0)$ the function has a conditional minimum, $z_{\min}=0$. At the point $\left(\frac{10}{9};-\frac{10}{9}\right)$ the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.
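The one-variable reduction of the second way can be verified numerically (a sketch of my own):

```python
# Check of the reduction u(x) = -3x^3 + 5x^2 from Example #2.
def u(x):
    return -3 * x**3 + 5 * x**2

def du(x):                 # u'(x) = -9x^2 + 10x
    return -9 * x**2 + 10 * x

# Stationary points of u: x = 0 and x = 10/9; value at the maximum point:
u_max = u(10.0 / 9.0)      # = 500/243
```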

Let's consider one more example, in which we find out the nature of the extremum by determining the sign of $d^2F$.

Example #3

Find the maximum and minimum values of the function $z=5xy-4$ if the variables $x$ and $y$ are positive and satisfy the constraint equation $\frac{x^2}{8}+\frac{y^2}{2}-1=0$.

Compose the Lagrange function: $F=5xy-4+\lambda\left(\frac{x^2}{8}+\frac{y^2}{2}-1\right)$. Find the stationary points of the Lagrange function:

$$F'_x=5y+\frac{\lambda x}{4}; \; F'_y=5x+\lambda y.\\ \left\{\begin{aligned} & 5y+\frac{\lambda x}{4}=0;\\ & 5x+\lambda y=0;\\ & \frac{x^2}{8}+\frac{y^2}{2}-1=0;\\ & x > 0; \; y > 0.\end{aligned}\right.$$

All further transformations are carried out taking into account $x > 0$, $y > 0$ (this is stipulated in the condition of the problem). From the second equation we express $\lambda=-\frac{5x}{y}$ and substitute the found value into the first equation: $5y-\frac{5x}{y}\cdot\frac{x}{4}=0$, $4y^2-x^2=0$, $x=2y$. Substituting $x=2y$ into the third equation, we get: $\frac{4y^2}{8}+\frac{y^2}{2}-1=0$, $y^2=1$, $y=1$.

Since $y=1$, we get $x=2$ and $\lambda=-10$. The nature of the extremum at the point $(2;1)$ is determined from the sign of $d^2F$.

$$F''_{xx}=\frac{\lambda}{4}; \; F''_{xy}=5; \; F''_{yy}=\lambda.$$

Since $\frac(x^2)(8)+\frac(y^2)(2)-1=0$, then:

$$d\left(\frac{x^2}{8}+\frac{y^2}{2}-1\right)=0; \; d\left(\frac{x^2}{8}\right)+d\left(\frac{y^2}{2}\right)=0; \; \frac{x}{4}dx+ydy=0; \; dy=-\frac{x\,dx}{4y}.$$

In principle, here you can immediately substitute the coordinates of the stationary point $x=2$, $y=1$ and the parameter $\lambda=-10$, thus obtaining:

$$F''_{xx}=-\frac{5}{2}; \; F''_{yy}=-10; \; dy=-\frac{dx}{2}.\\ d^2F=F''_{xx}dx^2+2F''_{xy}dxdy+F''_{yy}dy^2=-\frac{5}{2}dx^2+10dx\cdot\left(-\frac{dx}{2}\right)-10\cdot\left(-\frac{dx}{2}\right)^2=\\ =-\frac{5}{2}dx^2-5dx^2-\frac{5}{2}dx^2=-10dx^2.$$

However, in other conditional-extremum problems there may be several stationary points. In such cases it is better to represent $d^2F$ in general form and then substitute the coordinates of each of the found stationary points into the resulting expression:

$$d^2F=F''_{xx}dx^2+2F''_{xy}dxdy+F''_{yy}dy^2=\frac{\lambda}{4}dx^2+10\cdot dx\cdot\frac{-x\,dx}{4y}+\lambda\cdot\left(-\frac{x\,dx}{4y}\right)^2=\\ =\frac{\lambda}{4}dx^2-\frac{5x}{2y}dx^2+\lambda\cdot\frac{x^2dx^2}{16y^2}=\left(\frac{\lambda}{4}-\frac{5x}{2y}+\frac{\lambda x^2}{16y^2}\right)\cdot dx^2$$

Substituting $x=2$, $y=1$, $\lambda=-10$, we get:

$$d^2F=\left(\frac{-10}{4}-\frac{10}{2}-\frac{10\cdot 4}{16}\right)\cdot dx^2=-10dx^2.$$

Since $d^2F=-10\cdot dx^2 < 0$, the point $(2;1)$ is a conditional maximum point of the function $z=5xy-4$, with $z_{\max}=10-4=6$.

Answer: at the point $(2;1)$ the function has a conditional maximum, $z_{\max}=6$.
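As an independent check of this answer, the positive quarter of the ellipse can be parametrized, reducing $z$ to a function of one parameter (my own sketch; the parametrization $x=2\sqrt{2}\cos t$, $y=\sqrt{2}\sin t$ is an assumption consistent with the constraint):

```python
import math

# On x^2/8 + y^2/2 = 1 with x, y > 0, put x = 2*sqrt(2)*cos t, y = sqrt(2)*sin t
# for t in (0, pi/2).  Then z = 5xy - 4 = 20 sin t cos t - 4 = 10 sin 2t - 4,
# which is maximal at t = pi/4, i.e. at x = 2, y = 1.
def z(t):
    x = 2 * math.sqrt(2) * math.cos(t)
    y = math.sqrt(2) * math.sin(t)
    return 5 * x * y - 4

z_max = max(z(math.pi * k / 200000) for k in range(1, 100000))
```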

In the next part, we will consider the application of the Lagrange method for functions of a larger number of variables.