
Mathematical expectation

This article covers the definition of the mathematical expectation of discrete and continuous random variables, the sample and conditional expectation, its properties and calculation, the estimation of the expectation and the variance, the distribution function, formulas, and worked examples.


The mathematical expectation: definition

One of the most important concepts in mathematical statistics and probability theory, characterizing the distribution of values or probabilities of a random variable. It is usually expressed as the weighted average of all possible values of the random variable. It is widely used in technical analysis, in the study of number series, and in the study of continuous, long-running processes. It is important in risk assessment and in predicting price indicators when trading in financial markets, and it is used in developing strategies and methods of game tactics in the theory of gambling.

The mathematical expectation is the mean value of a random variable, as considered in probability theory through its probability distribution.

The mathematical expectation is a measure of the mean value of a random variable in probability theory. The mathematical expectation of a random variable x is denoted M(x).


The mathematical expectation is, in probability theory, the weighted average of all possible values that a random variable can take.

The mathematical expectation is the sum of the products of all possible values of a random variable and the probabilities of these values.

The mathematical expectation is the average benefit from a particular decision, provided that such a decision can be considered within the framework of the theory of large numbers and over a long distance.


The mathematical expectation is, in gambling theory, the amount of winnings that a player can earn or lose, on average, for each bet. In the language of gamblers, this is sometimes called the "player's edge" (if positive for the player) or the "house edge" (if negative for the player).

The mathematical expectation is the probability of a win multiplied by the average profit, minus the probability of a loss multiplied by the average loss.
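Written as a formula (the symbols below are ours, introduced for clarity):

```latex
E = p_{\mathrm{win}} \cdot \overline{W} \; - \; p_{\mathrm{loss}} \cdot \overline{L}
```

where p_win and p_loss are the probabilities of winning and losing, and W̄ and L̄ are the average win and the average loss.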


Mathematical expectation of a random variable in mathematical theory

One of the important numerical characteristics of a random variable is the mathematical expectation. Let us introduce the concept of a system of random variables. Consider a set of random variables X1, X2, ..., Xn that are the results of the same random experiment. If (x1, ..., xn) is one of the possible values of the system, then the event (X1 = x1, ..., Xn = xn) corresponds to a certain probability satisfying the Kolmogorov axioms. A function P(x1, ..., xn) defined for all possible values of the random variables is called the joint distribution law. This function allows one to calculate the probabilities of any events determined by these variables. In particular, the joint distribution law of two random variables X and Y, which take values from the sets {x1, ..., xn} and {y1, ..., ym}, is given by the probabilities P(X = xi, Y = yj).


The term "expectation" was introduced by Pierre Simon Marquis de Laplace (1795) and originated from the concept of "expected value of payoff", which first appeared in the 17th century in the theory of gambling in the works of Blaise Pascal and Christian Huygens. However, the first complete theoretical understanding and evaluation of this concept was given by Pafnuty Lvovich Chebyshev (mid-19th century).


The distribution law of a random variable (the distribution function together with the distribution series or the probability density) completely describes the behavior of the random variable. But in a number of problems it is enough to know a few numerical characteristics of the quantity under study (for example, its average value and the possible deviation from it) in order to answer the question posed. The main numerical characteristics of random variables are the mathematical expectation, the variance, the mode, and the median.

The mathematical expectation of a discrete random variable is the sum of the products of its possible values and their corresponding probabilities. It is sometimes called the weighted average, since it is approximately equal to the arithmetic mean of the observed values of the random variable over a large number of experiments. From the definition of the mathematical expectation it follows that its value is not less than the smallest possible value of the random variable and not more than the largest. The mathematical expectation of a random variable is a non-random (constant) quantity.


The mathematical expectation has a simple physical meaning: if a unit mass is distributed along a straight line, either by placing point masses at certain points (for a discrete distribution) or by "smearing" it with a certain density (for an absolutely continuous distribution), then the point corresponding to the mathematical expectation is the coordinate of the "center of gravity" of the line.


The average value of a random variable is a certain number that is, as it were, its "representative" and replaces it in rough approximate calculations. When we say "the average lamp operating time is 100 hours" or "the average point of impact is shifted 2 m to the right of the target", we are indicating a certain numerical characteristic of a random variable that describes its location on the numerical axis, i.e., a description of position.

Among the characteristics of position in probability theory, the most essential role is played by the mathematical expectation of a random variable, which is sometimes called simply the average value of the random variable.


Consider a random variable X with possible values x1, x2, …, xn and probabilities p1, p2, …, pn. We need to characterize by some number the position of the values of the random variable on the x-axis, taking into account that these values have different probabilities. For this purpose it is natural to use the so-called "weighted average" of the values xi, where each value xi is taken into account with a "weight" proportional to the probability of that value. We thus calculate the mean of the random variable X, which we denote M|X|:

M|X| = (x1·p1 + x2·p2 + … + xn·pn) / (p1 + p2 + … + pn) = x1·p1 + x2·p2 + … + xn·pn,

since the sum of the probabilities p1 + p2 + … + pn = 1.
This weighted average is called the mathematical expectation of the random variable. We have thus introduced one of the most important concepts of probability theory - the concept of mathematical expectation. The mathematical expectation of a random variable is the sum of the products of all its possible values and the probabilities of these values.
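A minimal sketch of this definition in Python (the function name and interface are ours, for illustration):

```python
def expectation(values, probs):
    """Mathematical expectation of a discrete random variable:
    the sum of value * probability over all possible values."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(values, probs))

# A fair six-sided die: faces 1..6, each with probability 1/6.
print(expectation([1, 2, 3, 4, 5, 6], [1 / 6] * 6))  # 3.5
```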

The mathematical expectation of a random variable X is connected by a peculiar dependence with the arithmetic mean of the observed values of the random variable over a large number of experiments. This dependence is of the same type as the dependence between frequency and probability, namely: with a large number of experiments, the arithmetic mean of the observed values of a random variable approaches (converges in probability to) its mathematical expectation. From the relationship between frequency and probability one can deduce, as a consequence, a similar relationship between the arithmetic mean and the mathematical expectation. Indeed, consider a random variable X characterized by its distribution series:


Let N independent experiments be performed, in each of which the variable X takes a certain value. Suppose the value x1 appeared m1 times, the value x2 appeared m2 times, and in general the value xi appeared mi times. Let us calculate the arithmetic mean of the observed values of X, which, in contrast to the mathematical expectation M|X|, we denote M*|X|:

M*|X| = (x1·m1 + x2·m2 + … + xn·mn) / N = x1·(m1/N) + x2·(m2/N) + … + xn·(mn/N),

where mi/N = pi* is the frequency (statistical probability) of the value xi.
As the number of experiments N increases, the frequencies pi* will approach (converge in probability to) the corresponding probabilities pi. Therefore the arithmetic mean of the observed values M*|X| will, as the number of experiments increases, approach (converge in probability to) the mathematical expectation M|X|. The connection between the arithmetic mean and the mathematical expectation formulated above constitutes the content of one of the forms of the law of large numbers.

We already know that all forms of the law of large numbers state the fact that certain averages are stable over a large number of experiments. Here we are talking about the stability of the arithmetic mean of a series of observations of the same quantity. With a small number of experiments the arithmetic mean of their results is random; with a sufficient increase in the number of experiments it becomes "almost non-random" and, stabilizing, approaches a constant value - the mathematical expectation.


The stability of averages over a large number of experiments is easy to verify experimentally. For example, when weighing a body in a laboratory on an accurate balance, we obtain a new value with each weighing; to reduce the observation error, we weigh the body several times and use the arithmetic mean of the values obtained. It is easy to see that with a further increase in the number of experiments (weighings), the arithmetic mean reacts to this increase less and less, and with a sufficiently large number of experiments it practically ceases to change.

It should be noted that the most important characteristic of the position of a random variable - the mathematical expectation - does not exist for all random variables. One can construct examples of random variables for which the mathematical expectation does not exist, because the corresponding sum or integral diverges (a classic example is the Cauchy distribution). For practice, however, such cases are not of significant interest: the random variables we usually deal with have a limited range of possible values and certainly have an expectation.


In addition to the most important characteristic of position - the mathematical expectation - other position characteristics are sometimes used in practice, in particular the mode and the median of the random variable.


The mode of a random variable is its most probable value. Strictly speaking, the term "most probable value" applies only to discontinuous (discrete) quantities; for a continuous quantity, the mode is the value at which the probability density is maximal.


If the distribution polygon (distribution curve) has more than one maximum, the distribution is said to be "polymodal".



Sometimes there are distributions that have a minimum in the middle rather than a maximum. Such distributions are called "antimodal".


In the general case the mode and the mathematical expectation of a random variable do not coincide. In the particular case when the distribution is symmetric and modal (i.e., has a mode) and a mathematical expectation exists, the expectation coincides with the mode and with the center of symmetry of the distribution.

Another characteristic of position is often used - the so-called median of a random variable. This characteristic is usually used only for continuous random variables, although it can be formally defined for a discontinuous variable as well. Geometrically, the median is the abscissa of the point at which the area bounded by the distribution curve is divided in half.


In the case of a symmetric modal distribution, the median coincides with the mean and the mode.

Mathematical expectation is the average value of a random variable - a numerical characteristic of the probability distribution of the random variable. In the most general way, the mathematical expectation of a random variable X(w) is defined as the Lebesgue integral with respect to the probability measure P in the original probability space:

M(X) = ∫ X(w) P(dw).


The mathematical expectation can also be calculated as the Lebesgue integral of x with respect to the probability distribution pX of the variable X:

M(X) = ∫ x pX(dx).


In a natural way, one can define the concept of a random variable with infinite mathematical expectation. A typical example is provided by the return times in some random walks.

With the help of the mathematical expectation, many numerical and functional characteristics of a distribution are defined (as mathematical expectations of the corresponding functions of the random variable), for example the generating function, the characteristic function, and the moments of any order, in particular the variance and the covariance.

Mathematical expectation is a characteristic of the location of the values of a random variable (the average value of its distribution). In this capacity, the mathematical expectation serves as a "typical" distribution parameter, and its role is similar to that of the static moment - the coordinate of the center of gravity of a mass distribution - in mechanics. The mathematical expectation differs from the other characteristics of location by which a distribution is described in general terms - the median and the mode - by the great value that it, together with the corresponding scattering characteristic, the variance, has in the limit theorems of probability theory. The meaning of mathematical expectation is revealed most fully by the law of large numbers (Chebyshev's inequality) and by the strengthened law of large numbers.

Mathematical expectation of a discrete random variable

Let there be some random variable that can take one of several numerical values (for example, the number of points in a die roll can be 1, 2, 3, 4, 5, or 6). Often in practice the question arises for such a variable: what value does it take "on average" over a large number of trials? What will be our average return (or loss) from each risky transaction?


Let's say there is some kind of lottery and we want to understand whether it is profitable to participate in it (or even to participate repeatedly, regularly). Suppose every fourth ticket wins, the prize is 300 rubles, and the price of any ticket is 100 rubles. Over an infinitely large number of participations, this is what happens. In three-quarters of the cases we lose, and every three losses cost 300 rubles. In every fourth case we win 200 rubles (the prize minus the ticket price). So over four participations we lose 100 rubles on average, and over one, an average of 25 rubles. In total, the average rate of our ruin will be 25 rubles per ticket.

We throw a die. If it is not crooked (with no shifted center of gravity, etc.), how many points will we get on average per throw? Since each outcome is equally likely, we simply take the arithmetic mean and get 3.5. Since this is an AVERAGE, there is no need to be indignant that no particular throw can give 3.5 points - the die has no face with such a number!

Now let's summarize our examples:


A discrete random variable is described by its distribution table: the value X can take one of n possible values (listed in the top row of the table), and there can be no others; under each possible value, its probability is written. The mathematical expectation M(X) is then given by the formula M(X) = x1·p1 + x2·p2 + … + xn·pn. The meaning of this quantity is that with a large number of trials (a large sample) the average value will tend to this very mathematical expectation.

Let's go back to the same die. The mathematical expectation of the number of points in a throw is 3.5 (calculate it yourself using the formula if you don't believe it). Say you threw it a couple of times: 4 and 6 came up. The average is 5, which is far from 3.5. You threw it once more and got 3, so the average is (4 + 6 + 3) / 3 = 4.33..., still some way from the mathematical expectation. Now do a crazy experiment - roll the die 1000 times! Even if the average is not exactly 3.5, it will be close to it.
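You do not need a physical die for this "crazy experiment"; a quick simulation sketch:

```python
import random

# Roll a fair die 1000 times; by the law of large numbers the
# average should come out close to the expectation M(X) = 3.5.
rolls = [random.randint(1, 6) for _ in range(1000)]
print(sum(rolls) / len(rolls))  # e.g. 3.478 - close to 3.5
```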

Let's calculate the mathematical expectation for the lottery described above. The distribution table of the net result per ticket looks like this:

Value of X: −100 (a losing ticket); +200 (a winning ticket)
Probability: 0.75; 0.25

Then the mathematical expectation is, as we established above:

M(X) = 0.75·(−100) + 0.25·(+200) = −75 + 50 = −25 rubles per ticket.


Another point: calculating this "on the fingers", without the formula, would be difficult if there were more options. Say there were 75% losing tickets, 20% tickets with a small prize, and 5% tickets with a large prize.
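A sketch of that three-outcome case (the prize amounts are assumed here purely for illustration, since they are not given above):

```python
# Net outcomes per ticket (prize minus the 100-ruble ticket price)
# and their probabilities; the prize amounts are assumed for illustration.
outcomes = [-100, 100, 900]   # lose; small prize of 200; big prize of 1000
probs = [0.75, 0.20, 0.05]

ev = sum(x * p for x, p in zip(outcomes, probs))
print(ev)  # -10.0: on average this lottery loses 10 rubles per ticket
```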

Now some properties of mathematical expectation.

The mathematical expectation of a constant C is that constant itself: M(C) = C. This is easy to prove: a constant takes its single value with probability 1, so M(C) = C·1 = C.


A constant multiplier can be taken out of the expectation sign, that is:

M(C·X) = C·M(X).


This is a special case of the linearity property of the mathematical expectation.

Another consequence of the linearity of the mathematical expectation:

M(X + Y) = M(X) + M(Y),

that is, the mathematical expectation of a sum of random variables is equal to the sum of their mathematical expectations.

Let X and Y be independent random variables. Then:

M(X·Y) = M(X)·M(Y).

This is also easy to prove: X·Y is itself a random variable, and if the original variables could take n and m values respectively, then X·Y can take n·m values. The probability of each value is computed from the fact that the probabilities of independent events multiply. As a result we get:

M(X·Y) = Σi Σj xi·yj·pi·qj = (Σi xi·pi)·(Σj yj·qj) = M(X)·M(Y).
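These properties are easy to check empirically; a small simulation sketch with two independent fair dice:

```python
import random

# Empirically verify linearity and the product rule for independent variables.
N = 100_000
xs = [random.randint(1, 6) for _ in range(N)]  # die X
ys = [random.randint(1, 6) for _ in range(N)]  # die Y, independent of X

def mean(v):
    return sum(v) / len(v)

print(mean([x + y for x, y in zip(xs, ys)]))  # ~7.0   = M(X) + M(Y)
print(mean([3 * x for x in xs]))              # ~10.5  = 3 * M(X)
print(mean([x * y for x, y in zip(xs, ys)]))  # ~12.25 = M(X) * M(Y)
```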


Mathematical expectation of a continuous random variable

Continuous random variables have a characteristic called the distribution density (probability density). It describes, in essence, the situation in which a random variable takes some values from the set of real numbers more often and others less often. For example, consider a density whose graph is a bell-shaped curve centered at zero.

Here X is the random variable and f(x) is its distribution density. Judging by such a graph, in experiments the value of X will most often be a number close to zero, while the chances of exceeding 3 or falling below −3 are rather purely theoretical. The mathematical expectation of a continuous random variable with density f(x) is defined by the integral

M(X) = ∫ x·f(x) dx,

taken over the whole range of values of X.


Let there be, for example, a uniform distribution on the segment [0; 1]: f(x) = 1 for x in [0; 1] and f(x) = 0 outside it. Then

M(X) = ∫ from 0 to 1 of x dx = 1/2 = 0.5.

This is quite consistent with the intuitive understanding: if we generate many random real numbers uniformly distributed on [0; 1], their arithmetic mean should be about 0.5.
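A Monte Carlo check of this value:

```python
import random

# The mean of many Uniform[0, 1] samples should approach
# the theoretical expectation of 0.5.
samples = [random.random() for _ in range(1_000_000)]
print(sum(samples) / len(samples))  # e.g. 0.50012
```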

The properties of the mathematical expectation - linearity, etc. - that hold for discrete random variables hold here as well.

The relationship of mathematical expectation with other statistical indicators

In statistical analysis, along with the mathematical expectation, there is a system of interdependent indicators that reflect the homogeneity of phenomena and the stability of processes. Variation indicators often have no independent meaning and are used for further data analysis. An exception is the coefficient of variation, which characterizes the homogeneity of the data and is a valuable statistical characteristic in its own right.


The degree of variability or stability of processes in statistics can be measured by several indicators.

The most important indicator characterizing the variability of a random variable is the variance (dispersion), which is most closely and directly related to the mathematical expectation. This parameter is actively used in other types of statistical analysis (hypothesis testing, analysis of cause-and-effect relationships, etc.). Like the mean linear deviation, the variance reflects the extent to which the data spread around the mean value.


It is useful to translate the language of symbols into the language of words. It turns out that the variance is the mean square of deviations. That is, first the mean value is calculated, then the difference between each original value and the mean is taken, squared, the squares are summed, and the sum is divided by the number of values in the population:

D = ((x1 − x̄)² + (x2 − x̄)² + … + (xn − x̄)²) / n.

The difference between an individual value and the mean reflects the measure of deviation. It is squared so that all deviations become exclusively positive numbers and so that positive and negative deviations do not cancel each other when summed. Then, given the squared deviations, we simply compute the arithmetic mean. Mean - square - of deviations: the key to the "magic" word "dispersion" is just these three words.

However, in its pure form - unlike, say, the arithmetic mean or an index - the variance is not used directly. It is rather an auxiliary, intermediate indicator needed for other types of statistical analysis. It does not even have a normal unit of measurement: judging by the formula, it is the square of the unit of the original data.

Let us measure a random variable N times - for example, we measure the wind speed ten times - and we want to find the average value. How is the average value related to the distribution function?

Or we roll a die a large number of times. The number of points that appears on each roll is a random variable that can take any natural value from 1 to 6. The arithmetic mean of the points scored over all rolls is also a random variable, but for large N it tends to a quite specific number - the mathematical expectation Mx. In this case Mx = 3.5.

How does this value come about? Suppose that in N trials, 1 point came up n1 times, 2 points came up n2 times, and so on. Then the fraction of outcomes in which one point came up is n1/N, and for large N this frequency approaches the probability p1 = 1/6. Similarly for the outcomes in which 2, 3, 4, 5, and 6 points came up. The arithmetic mean of the points is then

(1·n1 + 2·n2 + … + 6·n6) / N ≈ 1·p1 + 2·p2 + … + 6·p6 = 3.5.


Let us now assume that we know the distribution law of the random variable x, that is, we know that the random variable x can take the values ​​x1, x2, ..., xk with probabilities p1, p2, ..., pk.

The mathematical expectation Mx of the random variable x is:

Mx = x1·p1 + x2·p2 + … + xk·pk.


The mathematical expectation is not always a reasonable estimate of a random variable. For example, to estimate the average salary it is more reasonable to use the median, that is, the value such that the number of people receiving less than the median salary equals the number receiving more.

For the median x1/2, the probability p1 that the random variable x is less than x1/2 and the probability p2 that it is greater than x1/2 are the same and equal to 1/2. The median is not uniquely determined for all distributions.


The standard deviation in statistics is the degree to which observational data or sets of data deviate from the AVERAGE value. It is denoted by σ or s. A small standard deviation indicates that the data are grouped around the mean, and a large one indicates that the initial data lie far from it. The standard deviation is equal to the square root of the quantity called the variance, which is the mean of the sum of the squared differences between the initial data and the mean. Thus, the standard deviation of a random variable is the square root of its variance:

σ = √D.


Example. Under test conditions when shooting at a target, calculate the variance and standard deviation of a random variable:
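Since the data table for this example is not reproduced above, the sketch below uses an assumed score distribution purely to show the computation:

```python
# Hypothetical target-shooting results: points scored and their probabilities.
values = [0, 8, 9, 10]
probs = [0.1, 0.2, 0.3, 0.4]

m = sum(x * p for x, p in zip(values, probs))             # expectation
d = sum((x - m) ** 2 * p for x, p in zip(values, probs))  # variance
sigma = d ** 0.5                                          # standard deviation
print(m, d, sigma)  # 8.3, 8.21, ~2.87
```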


Variation is the fluctuation, the variability of the value of an attribute among units of the population. Distinct numerical values of an attribute occurring in the population under study are called variants of values. The insufficiency of the average value for a complete characterization of the population makes it necessary to supplement the averages with indicators that allow one to assess how typical these averages are by measuring the variability (variation) of the trait under study. The coefficient of variation is calculated by the formula:

V = (σ / x̄) · 100%.


The range of variation (R) is the difference between the maximum and minimum values of the trait in the population under study:

R = xmax − xmin.

This indicator gives only the most general idea of the variability of the trait under study, since it shows the difference only between the extreme values of the variants. Its dependence on the extreme values of the trait makes the range of variation unstable and random in character.


The average linear deviation is the arithmetic mean of the absolute (modulo) deviations of all values of the analyzed population from their average value:

d̄ = (|x1 − x̄| + |x2 − x̄| + … + |xn − x̄|) / n.
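All of the variation indicators just listed can be computed in a few lines; a sketch for an assumed sample:

```python
data = [3, 7, 7, 19, 24, 25, 30]  # assumed sample, for illustration only

n = len(data)
mean = sum(data) / n
r = max(data) - min(data)                     # range of variation
mld = sum(abs(x - mean) for x in data) / n    # average linear deviation
var = sum((x - mean) ** 2 for x in data) / n  # variance
sigma = var ** 0.5                            # standard deviation
cv = sigma / mean * 100                       # coefficient of variation, %
print(r, mld, var, sigma, cv)
```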


Mathematical expectation in gambling theory

The mathematical expectation is the average amount of money a gambler can win or lose on a given bet. This is a very significant concept for a player, because it is fundamental to the assessment of most game situations. Mathematical expectation is also the best tool for analyzing basic card layouts and game situations.

Let's say you are playing a coin-flipping game with a friend, making an equal $1 bet each time, no matter what comes up. Tails - you win, heads - you lose. The chances of tails are one to one, and you are betting $1 against $1. Thus your mathematical expectation is zero, because, mathematically speaking, you cannot know whether you will be ahead or behind after two flips or after 200.


Your hourly gain is zero. The hourly payout is the amount of money you expect to win in an hour. You can flip a coin 500 times within an hour, but you will neither win nor lose, because your odds are neither positive nor negative. From the point of view of a serious player, such a betting system is not bad - but it is simply a waste of time.

But suppose someone wants to bet $2 against your $1 in the same game. Then you immediately have a positive expectation of 50 cents from each bet. Why 50 cents? On average you win one bet and lose the other: you bet the first dollar and lose $1, you bet the second and win $2. You have bet $1 twice and are ahead by $1, so each of your one-dollar bets gave you 50 cents.


If the coin is flipped 500 times in one hour, your hourly gain will already be $250, because on average you lost $1 250 times and won $2 250 times. $500 minus $250 equals $250, which is the total win. Note that the expected value, the amount you win on average on a single bet, is 50 cents: you won $250 by betting a dollar 500 times, which equals 50 cents per bet.
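The arithmetic of this example in a few lines:

```python
# Coin game where a friend stakes $2 against your $1 on a fair coin.
p_win, win_amount = 0.5, 2.0    # you win $2 half of the time
p_loss, loss_amount = 0.5, 1.0  # you lose $1 the other half

ev_per_bet = p_win * win_amount - p_loss * loss_amount
print(ev_per_bet)        # 0.5   -> 50 cents per bet
print(ev_per_bet * 500)  # 250.0 -> hourly gain over 500 flips
```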

Mathematical expectation has nothing to do with short-term results. Your opponent, who decided to bet $2 against your $1, could beat you on the first ten tosses in a row, but you, with a 2-to-1 betting advantage, all else being equal, make 50 cents on every $1 bet under any circumstances. It does not matter whether you win or lose one bet or several, as long as you have enough cash to comfortably cover the swings. If you keep betting the same way, then over a long period of time your winnings will approach the sum of the expected values of the individual flips.


Every time you make a best bet (a bet that is profitable in the long run) when the odds are in your favor, you are bound to win something on it, whether or not you lose the particular hand. Conversely, if you make a worse bet (a bet that is unprofitable in the long run) when the odds are not in your favor, you lose something, whether you win or lose the hand.

You bet with the best outcome if your expectation is positive, and it is positive if the odds are in your favor. Betting with the worst outcome, you have a negative expectation, which happens when the odds are against you. Serious players bet only with the best outcome; with the worst, they fold. What does it mean for the odds to be in your favor? It means you may end up winning more than the true odds warrant. The true odds of tails are 1 to 1, but you are getting 2 to 1 because of the betting ratio. In this case the odds are in your favor: you definitely get the best outcome, with a positive expectation of 50 cents per bet.


Here is a more complex example of mathematical expectation. A friend writes down the numbers from one to five and bets $5 against your $1 that you will not pick his number. Should you agree to such a bet? What is the expectation here?

On average you will be wrong four times out of five, so the odds against guessing the number are 4 to 1: in a single attempt you are likely to lose a dollar. However, you are paid 5 to 1 while the odds of losing are 4 to 1, so the odds are in your favor; you can take the bet and hope for the best outcome. If you make this bet five times, on average you will lose $1 four times and win $5 once. For all five attempts you thus earn $1, which gives a positive mathematical expectation of 20 cents per bet.
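The same computation as a sketch:

```python
# Guess-a-number game: one number out of five; a correct guess
# pays $5, a wrong one loses the $1 stake.
p_win, p_loss = 1 / 5, 4 / 5
ev = p_win * 5 - p_loss * 1
print(ev)  # 0.2 -> a positive expectation of 20 cents per bet
```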


A player who is going to win more than he bets, as in the example above, is catching the odds. Conversely, he ruins the odds when he expects to win less than he bets. A bettor can have either a positive or a negative expectation, depending on whether he catches or ruins the odds.

If you bet $50 to win $10 with a 4-to-1 chance of winning, you get a negative expectation of $2: on average you win $10 four times and lose $50 once, which gives a loss of $10 over five bets, i.e., $2 per bet. But if you bet $30 to win $10 with the same 4-to-1 odds of winning, you have a positive expectation of $2: you again win $10 four times and lose $30 once, for a profit of $10 over five bets. These examples show that the first bet is bad and the second is good.


Mathematical expectation is at the center of any game situation. When a bookmaker encourages football fans to bet $11 to win $10, he has a positive expectation of 50 cents for every $10. When a casino pays even money on the pass line at craps, the house's positive expectation is approximately $1.40 for every $100, since the game is structured so that everyone who bets on this line loses 50.7% of the time and wins 49.3% of the time, on average. Undoubtedly, it is this seemingly minimal positive expectation that brings huge profits to casino owners around the world. As Vegas World casino owner Bob Stupak noted, "one-thousandth of a percent of negative expectation, over a long enough distance, will ruin the richest man in the world".
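A quick check of the quoted house edge:

```python
# Even-money pass-line bet at craps: the bettor wins 49.3% of the time.
p_win, p_loss = 0.493, 0.507
ev_per_dollar = p_win * 1 - p_loss * 1
print(ev_per_dollar)        # -0.014 -> about -$1.40 per $100 wagered
print(ev_per_dollar * 100)  # -1.4
```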


Mathematical expectation when playing poker

The game of poker is the most revealing and instructive example of the use of the theory and properties of mathematical expectation.


Expected value in poker is the average benefit from a particular decision, provided that such a decision can be considered within the framework of the theory of large numbers over a long distance. Successful poker consists of always accepting moves with a positive mathematical expectation.

The mathematical meaning of expectation in poker is that we constantly encounter random variables when making decisions (we do not know which cards are in the opponent's hand or which cards will come on subsequent betting rounds). We must consider each decision from the standpoint of the theory of large numbers, which says that with a sufficiently large sample the average value of a random variable tends to its mathematical expectation.


Among the particular formulas for calculating the mathematical expectation, the following is most applicable in poker:

EV = (probability of winning) × (amount won) − (probability of losing) × (amount lost).

When playing poker, the mathematical expectation can be calculated for both bets and calls. In the first case, fold equity should be taken into account; in the second, the pot odds. When evaluating the mathematical expectation of a particular move, remember that a fold always has zero mathematical expectation. Thus, discarding the cards will always be a more profitable decision than any move with negative expectation.
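A sketch of the call case using pot odds (the numbers describe an assumed spot, not one from the article):

```python
# EV of a call: win the current pot with probability p_win,
# otherwise lose the amount of the call.
pot = 100.0      # chips already in the pot
to_call = 20.0   # price of the call
p_win = 0.25     # estimated probability of winning the hand

ev_call = p_win * pot - (1 - p_win) * to_call
print(ev_call)   # 10.0 -> positive, so calling beats folding (EV = 0)
```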

Expectation tells you what you can expect (profit or loss) for every dollar you risk. Casinos make money because the mathematical expectation of all the games practiced in them is in the casino's favor. Over a sufficiently long series of games one can expect the customer to lose his money, since the "probability" is in the casino's favor. However, professional casino players limit their games to short periods of time, thereby increasing the odds in their favor. The same goes for investing: if your expectation is positive, you can earn more money by making many trades in a short period of time. Your expectation is your probability of a win multiplied by your average profit, minus your probability of a loss multiplied by your average loss.


Poker can also be considered in terms of mathematical expectation. You may assume that a certain move is profitable, but in some cases it may not be the best one, because another move is more profitable. Say you hit a full house in five-card draw poker. Your opponent bets. You know that if you raise, he will call, so raising looks like the best tactic. But if you do raise, the two remaining players will certainly fold, whereas if you just call, you are quite sure the two players behind you will do the same. When you raise you win one unit, and by just calling you win two. So calling gives you the higher positive expected value and is the better tactic.

The mathematical expectation can also give an idea of which poker tactics are less profitable and which are more profitable. For example, if you play a particular hand and believe your average loss, antes included, is 75 cents, then you should play that hand, because this is better than folding when the ante is $1.


Another important reason to understand the essence of mathematical expectation is that it gives you a sense of calm regardless of whether you won a bet or not: if you made a good bet or folded in time, you know that you have earned or saved a certain amount of money that a weaker player would not have been able to save. It is much harder to fold if you are frustrated that your opponent has drawn a better hand. That said, the money you save by not playing, instead of betting, is added to your nightly or monthly winnings.

Just remember that if you swapped hands, your opponent would call you, and, as you will see in the article on the Fundamental Theorem of Poker, this is just one of your advantages. You should be glad when this happens. You can even learn to enjoy losing a hand, because you know that other players in your shoes would have lost much more.


As discussed in the coin-game example at the beginning, the hourly rate of return is related to the expected value, and this concept is especially important for professional players. When you are going to play poker, you should mentally estimate how much you can win in an hour of play. In most cases you will need to rely on your intuition and experience, but you can also use some mathematical calculations. For example, if you are playing draw lowball and you see three players each bet $10 and then draw two cards, which is a very bad tactic, you can reckon that every time they bet $10 they lose about $2. Each of them does this eight times an hour, which means that all three lose about $48 per hour. You are one of the four remaining players, who are approximately equal, so these four players (you among them) must share that $48, each making a profit of $12 per hour. Your hourly rate in this case is simply your share of the amount of money lost by the three bad players per hour.

Over a long period of time, a player's total winnings are the sum of his mathematical expectations in the individual hands. The more hands you play with positive expectation, the more you win; conversely, the more hands you play with negative expectation, the more you lose. As a result, you should prefer a game that can maximize your positive expectation or negate your negative one, so that you can maximize your hourly gain.


Positive mathematical expectation in game strategy

If you know how to count cards, you may have an advantage over the casino, provided they do not notice it and throw you out. Casinos love drunk gamblers and cannot stand card counters. An advantage allows you to win more times than you lose over the long run. Good money management, using expectation calculations, can help you extract more from your advantage and reduce your losses. Without an advantage, you would be better off giving the money to charity. In stock-exchange play, the advantage is given by the system of play, which creates profit greater than the losses, price spreads, and commissions. No amount of money management will save a bad gaming system.

A positive expectation is defined as a value greater than zero. The larger this number, the stronger the statistical expectation. If the value is less than zero, the mathematical expectation is negative, and the greater the modulus of the negative value, the worse the situation. If the result is zero, the expectation is break-even. You can win only when you have a positive mathematical expectation and a reasonable playing system; playing on intuition leads to ruin.


Mathematical expectation and stock trading

Mathematical expectation is a widely demanded and popular statistic in exchange trading in the financial markets. First of all, this parameter is used to analyze the success of trading: the larger the value, the more reason to consider the trading under study successful. Of course, the analysis of a trader's work cannot be carried out with this parameter alone; however, the calculated value, combined with other methods of assessing the quality of the work, can significantly increase the accuracy of the analysis.


The mathematical expectation is often calculated in trading-account monitoring services, which makes it possible to quickly evaluate the work performed on a deposit. An exception is strategies that "sit out" losing trades: a trader may be lucky for some time, and therefore his record may show no losses at all. In that case it is impossible to judge by the expectation alone, because the risks used in the work will not be taken into account.

In trading on the market, mathematical expectation is most often used when predicting the profitability of a trading strategy or when predicting a trader's income based on the statistics of his previous trades.

In terms of money management, it is very important to understand that when making trades with negative expectation there is no money-management scheme that can reliably bring high profits. If you continue to play the exchange under these conditions, then regardless of how you manage your money, you will lose your entire account, no matter how big it was at the beginning.

This axiom is true not only for games or trades with negative expectation; it is also true for games with even odds. Therefore, the only case in which you have a chance to benefit in the long run is when making deals with a positive mathematical expectation.


The difference between negative expectation and positive expectation is the difference between life and death. It does not matter how positive or how negative the expectation is; what matters is whether it is positive or negative. Therefore, before considering money management, you must find a game with a positive expectation.

If you do not have such a game, then no amount of money management in the world will save you. On the other hand, if you have a positive expectation, then it is possible, through proper money management, to turn it into an exponential growth function. It does not matter how small the positive expectation is! In other words, it does not matter how profitable a trading system is on a one-contract basis. If you have a system that wins $10 per contract per trade (after fees and slippage), money-management techniques can be used to make it more profitable than a system that shows an average profit of $1,000 per trade (after deduction of commissions and slippage).


What matters is not how profitable the system was, but how confidently it can be said that the system will show at least a minimal profit in the future. Therefore, the most important preparation a trader can make is to ensure that the system will show a positive expected value in the future.

In order to have a positive expected value in the future, it is very important not to limit the degrees of freedom of your system. This is achieved not only by eliminating or reducing the number of parameters to be optimized, but also by reducing the number of system rules as much as possible. Every parameter you add, every rule you introduce, every tiny change you make to the system reduces the number of degrees of freedom. Ideally, you want to build a fairly primitive and simple system that will consistently bring a small profit in almost any market. Again, it is important to understand that it does not matter how profitable a system is, as long as it is profitable. The money you earn in trading will be earned through effective money management.

A trading system is simply a tool that gives you a positive mathematical expectation so that money management can be applied. Systems that work (show at least a minimal profit) in only one or a few markets, or that have different rules or parameters for different markets, will most likely not work in real time for long. The problem with most technically oriented traders is that they spend too much time and effort optimizing the various rules and parameter values of a trading system. This produces completely opposite results. Instead of wasting energy and computer time on increasing the profits of the trading system, direct your energy to increasing the reliability of obtaining a minimal profit.

Knowing that money management is just a numbers game that requires the use of positive expectations, a trader can stop searching for the "holy grail" of stock trading. Instead, he can start testing his trading method, find out whether it is logically sound and whether it gives positive expectations. Correct money-management methods, applied to any, even very mediocre, trading methods, will do the rest of the work.


For any trader to be successful in his work, he needs to solve three crucial tasks:

– to ensure that the number of successful transactions exceeds the inevitable mistakes and miscalculations;

– to set up his trading system so that the opportunity to earn money arises as often as possible;

– to achieve a stable positive result from his operations.

And here, for us working traders, mathematical expectation can be a great help. This term is one of the key ones in probability theory. It can be used to give an average estimate of some random value. The mathematical expectation of a random variable is like a center of gravity, if one imagines all possible probabilities as points with different masses.


In relation to a trading strategy, the mathematical expectation of profit (or loss) is most often used to evaluate its effectiveness. This parameter is defined as the sum of the products of given levels of profit and loss and the probabilities of their occurrence. For example, suppose the developed trading strategy assumes that 37% of all operations will bring profit and the remaining 63% will be unprofitable. At the same time, the average income from a successful transaction will be $7, and the average loss will be $1.4. Let us calculate the mathematical expectation of trading with this system:

M = 0.37 × $7 − 0.63 × $1.4 = $2.59 − $0.882 ≈ $1.708.

What does this number mean? It says that, following the rules of this system, we will on average receive $1.708 from each closed transaction. Since the resulting efficiency estimate is above zero, such a system can be used for real work. If the mathematical expectation turns out to be negative as a result of the calculation, this already indicates an average loss, and such trading will lead to ruin.

The amount of profit per trade can also be expressed as a relative value, in %. For example:

– percentage of income per trade: 5%;

– percentage of successful trading operations: 62%;

– percentage of loss per trade: 3%;

– percentage of unsuccessful transactions: 38%.

M = 0.62 × 5% − 0.38 × 3% = 3.1% − 1.14% = 1.96%.

That is, the average transaction will bring 1.96%.
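Both versions of the calculation, as a sketch:

```python
def trade_ev(p_win, avg_win, p_loss, avg_loss):
    """Expected result per trade: p_win * avg_win - p_loss * avg_loss."""
    return p_win * avg_win - p_loss * avg_loss

print(trade_ev(0.37, 7.0, 0.63, 1.4))  # 1.708 (dollars per trade)
print(trade_ev(0.62, 5.0, 0.38, 3.0))  # 1.96  (percent per trade)
```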

It is possible to develop a system that, despite a predominance of losing trades, will give a positive result, since its expectation M > 0.

However, expectation alone is not enough. It is difficult to make money if the system gives very few trading signals: in that case its profitability will be comparable to bank interest. Let each operation bring in only 0.5 dollars on average - but what if the system assumes 1,000 transactions per year? That will be a very serious amount in a relatively short time. It follows logically that another hallmark of a good trading system is a short holding period.



Point estimates and their properties

In order for statistical estimates to give a good approximation of the estimated parameters, they must be unbiased, efficient, and consistent.

An estimate θ* of a parameter θ is called unbiased if its mathematical expectation is equal to the estimated parameter for any sample size: M(θ*) = θ.

An estimate of a parameter is called biased if its mathematical expectation is not equal to the estimated parameter: M(θ*) ≠ θ.

An estimate of a parameter is called efficient if, for a given sample size, it has the smallest variance.

An estimate θ* of a parameter θ is called consistent if, as the sample size n grows, it tends in probability to the estimated parameter, i.e., for any ε > 0

lim (n→∞) P(|θ* − θ| < ε) = 1.

Samples of different sizes yield different values of the arithmetic mean and the statistical variance. Therefore, the arithmetic mean and the statistical variance are themselves random variables, which have a mathematical expectation and a variance.

Let us calculate the mathematical expectation of the arithmetic mean and of the variance. Denote by m the mathematical expectation of the random variable X. Here the following are considered as random variables: X1, whose values are the first values obtained in different samples of size n from the general population; X2, whose values are the second values obtained in different samples of size n; ...; Xn, whose values are the n-th values obtained in different samples of size n. All these random variables are distributed according to the same law and have the same mathematical expectation m, so

M(x̄) = M((X1 + X2 + … + Xn)/n) = (M(X1) + … + M(Xn))/n = n·m/n = m. (1)

From formula (1) it follows that the arithmetic mean is an unbiased estimate of the mathematical expectation, since the mathematical expectation of the arithmetic mean is equal to the mathematical expectation of the random variable itself. This estimate is also consistent. The efficiency of this estimate depends on the type of distribution of the random variable X: if, for example, X is normally distributed, estimating the expectation by the arithmetic mean will be efficient.

Let us now find a statistical estimate of the variance.

The expression for the statistical variance can be transformed as follows:

D* = (1/n) Σ (xi − x̄)² = (1/n) Σ xi² − x̄². (2)

Let us now find the mathematical expectation of the statistical variance:

M(D*) = (1/n) Σ M(xi²) − M(x̄²). (3)

Given that

M(xi²) = D + m² and M(x̄²) = D/n + m², (4)

we get from (3):

M(D*) = D + m² − D/n − m² = ((n − 1)/n)·D. (6)

It can be seen from formula (6) that the mathematical expectation of the statistical variance differs from the variance D by the factor (n − 1)/n, i.e., it is a biased estimate of the population variance. This is because instead of the true value m, which is unknown, the statistical mean x̄ is used in estimating the variance.

Therefore, we introduce the corrected statistical variance:

s² = (n/(n − 1))·D* = (1/(n − 1)) Σ (xi − x̄)². (7)

Then the mathematical expectation of the corrected statistical variance is

M(s²) = (n/(n − 1))·M(D*) = (n/(n − 1))·((n − 1)/n)·D = D,

i.e., the corrected statistical variance is an unbiased estimate of the population variance. The resulting estimate is also consistent.
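A small simulation sketch illustrating this bias for uniform samples:

```python
import random

# Empirical check of formula (6): the uncorrected sample variance
# underestimates the true variance by the factor (n - 1) / n.
# For Uniform[0, 1] the true variance is 1/12 ~ 0.0833.
n, trials = 5, 20_000

biased_sum, corrected_sum = 0.0, 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)
    biased_sum += ss / n
    corrected_sum += ss / (n - 1)

print(biased_sum / trials)     # ~0.0667 = (n - 1)/n * 1/12
print(corrected_sum / trials)  # ~0.0833 = 1/12, the unbiased estimate
```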

PURPOSE OF THE LECTURE: to introduce the concept of estimating an unknown distribution parameter and give a classification of such estimates; to obtain point and interval estimates of the mathematical expectation and the variance.

In practice, in most cases the distribution law of a random variable is unknown, and from the results of observations x1, x2, ..., xn one must estimate its numerical characteristics (for example, the mathematical expectation, the variance, or other moments) or an unknown parameter θ that defines the distribution law (distribution density) of the random variable under study. Thus, for an exponential or a Poisson distribution it is enough to estimate one parameter, while for a normal distribution two parameters are to be estimated: the mathematical expectation and the variance.

Types of estimates

Let a random variable X have probability density f(x; θ), where θ is an unknown distribution parameter. As a result of an experiment, the values x1, x2, ..., xn of this random variable are obtained. To make an estimate essentially means to associate with the sample values of the random variable a certain value of the parameter θ, i.e., to create some function of the observation results, θ* = θ*(x1, x2, ..., xn), whose value is taken as the estimate of the parameter θ. The index n indicates the number of experiments performed.

Any function of the observation results is called a statistic. Since the results of observations are random variables, a statistic is itself a random variable. Therefore, the estimate θ* of the unknown parameter θ should be considered a random variable, and its value calculated from experimental data of a given size n as one of the possible values of this random variable.

Estimates of distribution parameters (numerical characteristics of a random variable) are divided into point and interval estimates. A point estimate of a parameter is determined by a single number, and its accuracy is characterized by the variance of the estimate. An interval estimate is an estimate determined by two numbers - the ends of an interval covering the estimated parameter with a given confidence level.

Classification of point estimates

For a point estimate of an unknown parameter θ to be the best in terms of accuracy, it must be consistent, unbiased, and efficient.

An estimate θ* of a parameter θ is called consistent if it converges in probability to the estimated parameter, i.e.

lim (n→∞) P(|θ* − θ| < ε) = 1 for any ε > 0. (8.8)

Based on the Chebyshev inequality, it can be shown that a sufficient condition for relation (8.8) is the equality

lim (n→∞) M[(θ* − θ)²] = 0.

Consistency is an asymptotic characteristic of the estimate as n → ∞.

An estimate θ* is called unbiased (an estimate without systematic error) if its mathematical expectation is equal to the estimated parameter, i.e.

M(θ*) = θ. (8.9)

If equality (8.9) is not satisfied, the estimate is called biased, and the difference b = M(θ*) − θ is called the bias of the estimate. If equality (8.9) is satisfied only as n → ∞, then the corresponding estimate is called asymptotically unbiased.

It should be noted that while consistency is an almost obligatory condition for all estimates used in practice (inconsistent estimates are used extremely rarely), the property of unbiasedness is only desirable. Many commonly used estimates do not have the unbiased property.

In the general case, the accuracy of estimating a parameter θ from experimental data x1, ..., xn is characterized by the mean square error

M[(θ* − θ)²],

which can be brought to the form

M[(θ* − θ)²] = D(θ*) + b²,

where D(θ*) is the variance of the estimate and b² is the square of the estimation bias.

If the estimate is unbiased, then b = 0 and the mean square error reduces to the variance D(θ*). Different estimates may differ in the mean square of the error. Naturally, the smaller this error, the more closely the values of the estimate are grouped around the estimated parameter. Therefore, it is always desirable that the estimation error be as small as possible, i.e., that the condition

M[(θ* − θ)²] = min (8.10)

be satisfied.

An estimate satisfying condition (8.10) is called an estimate with minimum squared error.

An estimate θ* is called efficient if its mean squared error is not greater than the mean squared error of any other estimate, i.e.

M[(θ* − θ)²] ≤ M[(θ̃ − θ)²],

where θ̃ is any other estimate of the parameter θ.

It is known that the variance of any unbiased estimate of a single parameter θ satisfies the Cramer–Rao inequality

D(θ*) ≥ 1 / ( n · M[(∂ ln f(X; θ) / ∂θ)²] ),

where f(x; θ) is the conditional probability density of the obtained values of the random variable given the true value of the parameter θ.

Thus, an unbiased estimate θ* for which the Cramer–Rao inequality becomes an equality will be efficient, i.e., such an estimate has minimum variance.

Point estimates of mathematical expectation and variance

Consider a random variable X with mathematical expectation m and variance D; both parameters are assumed to be unknown. Over the random variable X, n independent experiments are performed, giving the results x1, x2, ..., xn. It is necessary to find consistent and unbiased estimates of the unknown parameters m and D.

As the estimates of m and D one usually chooses, respectively, the statistical (sample) mean and the statistical (sample) variance:

m* = x̄ = (1/n) Σ xi; (8.11)

D* = (1/n) Σ (xi − x̄)². (8.12)

The expectation estimate (8.11) is consistent according to the law of large numbers (Chebyshev's theorem):

lim (n→∞) P(|x̄ − m| < ε) = 1.

The mathematical expectation of this estimate is

M(x̄) = (1/n) Σ M(xi) = (1/n)·n·m = m.

Therefore, the estimate x̄ is unbiased.

The dispersion of the estimate of the mathematical expectation is

D(x̄) = D/n.

If the random variable X is distributed according to the normal law, then the estimate x̄ is also efficient.

The mathematical expectation of the variance estimate is

M(D*) = (1/n) Σ M(xi²) − M(x̄²).

At the same time, since

M(xi²) = D + m² and M(x̄²) = D/n + m²,

we get

M(D*) = ((n − 1)/n)·D. (8.13)

Thus, D* is a biased estimate, although it is consistent and efficient.

It follows from formula (8.13) that, in order to obtain an unbiased estimate, the sample variance (8.12) should be modified as follows:

s² = (n/(n − 1))·D* = (1/(n − 1)) Σ (xi − x̄)²,

which is considered "better" than estimate (8.12), although for large n these estimates are almost equal to each other.
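A simulation sketch of the related fact that D(x̄) = D/n, i.e., that the sample mean concentrates around m as n grows:

```python
import random

# For Uniform[0, 1], D = 1/12; with n = 10 the variance of the
# sample mean should be about (1/12) / 10 = 0.00833.
n, trials = 10, 20_000
means = []
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    means.append(sum(xs) / n)

grand_mean = sum(means) / trials
var_of_mean = sum((m - grand_mean) ** 2 for m in means) / trials
print(var_of_mean)  # ~0.00833
```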

Methods for obtaining estimates of distribution parameters

Often in practice, from an analysis of the physical mechanism that generates a random variable X, one can draw a conclusion about the distribution law of this random variable. However, the parameters of this distribution are unknown, and they must be estimated from the results of the experiment, usually presented as a finite sample x1, x2, ..., xn. To solve such a problem, two methods are most often used: the method of moments and the maximum likelihood method.

Method of moments. The method consists in equating the theoretical moments to the corresponding empirical moments of the same order.

The empirical initial moments of order k are determined by the formulas:

a*k = (1/n) Σ xi^k,

and the corresponding theoretical initial moments of order k by the formulas:

a_k = Σ xi^k·pi for discrete random variables,

a_k = ∫ x^k·f(x; θ) dx for continuous random variables,

where θ is the estimated distribution parameter.

To obtain estimates of the parameters of a distribution containing two unknown parameters θ1 and θ2, a system of two equations is composed:

a1 = a*1, μ2 = m*2,

where a1 and a*1 are the theoretical and empirical initial moments of the first order, and μ2 and m*2 are the theoretical and empirical central moments of the second order. The solution of this system of equations gives the estimates θ*1 and θ*2 of the unknown distribution parameters θ1 and θ2.

Equating the theoretical and empirical initial moments of the first order, we obtain that the estimate of the mathematical expectation of a random variable X having an arbitrary distribution is the sample mean, i.e., m* = x̄. Then, equating the theoretical and empirical central moments of the second order, we obtain that the estimate of the variance of a random variable X having an arbitrary distribution is determined by the formula

D* = (1/n) Σ (xi − x̄)².

In a similar way one can find estimates of theoretical moments of any order.

The method of moments is simple and does not require complex calculations, but the estimates obtained by this method are often inefficient.

Maximum likelihood method. Point estimation of unknown distribution parameters by the maximum likelihood method reduces to finding the maximum of a function of one or more estimated parameters.

Let X be a continuous random variable which, as a result of n trials, took the values x1, x2, ..., xn. To obtain an estimate of the unknown parameter θ, we must find the value θ* at which the probability of realizing the obtained sample is maximal. Since x1, x2, ..., xn are mutually independent quantities with the same probability density f(x; θ), the likelihood function is the following function of the argument θ:

L(x1, ..., xn; θ) = f(x1; θ)·f(x2; θ)·…·f(xn; θ).

The maximum likelihood estimate of the parameter θ is the value θ* at which the likelihood function reaches its maximum, i.e., the solution of the equation

dL/dθ = 0,

which obviously depends on the test results x1, ..., xn.

Since the functions L and ln L reach their maximum at the same values of θ, to simplify the calculations one often uses the logarithmic likelihood function and looks for the root of the corresponding equation

d(ln L)/dθ = 0,

which is called the likelihood equation.

If several parameters θ1, ..., θm of the distribution f(x; θ1, ..., θm) need to be estimated, then the likelihood function depends on all of these parameters. To find the estimates of the distribution parameters, it is necessary to solve the system of likelihood equations

∂(ln L)/∂θi = 0, i = 1, ..., m.

The maximum likelihood method gives consistent and asymptotically efficient estimates. However, the estimates obtained by the maximum likelihood method are sometimes biased, and, in addition, finding them often requires solving rather complicated systems of equations.
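As an illustration (this example is ours, not the lecture's): for the exponential density f(x; λ) = λ·e^(−λx), the likelihood equation d(ln L)/dλ = n/λ − Σ xi = 0 gives λ* = n / Σ xi, the reciprocal of the sample mean. A sketch:

```python
import random

# Maximum likelihood estimate of the rate of an exponential distribution.
true_lam = 2.0
xs = [random.expovariate(true_lam) for _ in range(10_000)]

lam_hat = len(xs) / sum(xs)  # solution of the likelihood equation
print(lam_hat)               # close to the true value 2.0
```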

Interval parameter estimates

The accuracy of point estimates is characterized by their dispersion. At the same time, there is no information about how close the obtained estimates are to the true values ​​of the parameters. In a number of tasks, it is required not only to find for the parameter suitable numerical value, but also evaluate its accuracy and reliability. It is necessary to find out what errors the parameter replacement can lead to. its point estimate and with what degree of confidence can we expect that these errors will not go beyond known limits.

Such problems are especially relevant for a small number of experiments, when the point estimate $\hat{\theta}$ is largely random and the approximate replacement of $\theta$ by $\hat{\theta}$ can lead to significant errors.

A more complete and reliable way to estimate the parameters of distributions is to determine not a single point value, but an interval that, with a given probability, covers the true value of the estimated parameter.

Let an unbiased estimate $\hat{\theta}$ of the parameter $\theta$ be obtained from the results of $n$ experiments. It is necessary to evaluate the possible error. A sufficiently large probability $\beta$ is chosen (for example, $\beta = 0.95$), such that an event with this probability can be considered practically certain, and a value $\varepsilon > 0$ is found for which

$$P\left(\left|\hat{\theta} - \theta\right| < \varepsilon\right) = \beta. \qquad (8.15)$$

In this case, the range of practically possible values of the error arising when $\theta$ is replaced by $\hat{\theta}$ will be $\pm\varepsilon$; errors of large absolute value will appear only with the small probability $\alpha = 1 - \beta$.

Expression (8.15) means that with probability $\beta$ the unknown value of the parameter $\theta$ falls into the interval

$$I_\beta = \left(\hat{\theta} - \varepsilon;\; \hat{\theta} + \varepsilon\right). \qquad (8.16)$$

The probability $\beta$ is called the confidence level, and the interval $I_\beta$ that covers the true value of the parameter with probability $\beta$ is called the confidence interval. Note that it is incorrect to say that the parameter value lies within the confidence interval with probability $\beta$. The wording used ("covers") emphasizes that although the estimated parameter is unknown, it has a constant value and therefore has no spread: it is not a random variable.
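As a minimal sketch of these definitions (an added example, assuming a normal population with known standard deviation, so the interval is $\tilde{m} \pm z\,\sigma/\sqrt{n}$ with $z$ the $(1+\beta)/2$ quantile of the standard normal law):

```python
import numpy as np
from scipy.stats import norm

def confidence_interval_mean(sample, sigma, beta=0.95):
    """Confidence interval for the expectation of a normal population
    with known standard deviation sigma."""
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    z = norm.ppf((1.0 + beta) / 2.0)  # quantile of the standard normal law
    eps = z * sigma / np.sqrt(n)      # half-width of the interval
    x_bar = sample.mean()
    return x_bar - eps, x_bar + eps

rng = np.random.default_rng(2)
sample = rng.normal(loc=10.0, scale=2.0, size=100)
print(confidence_interval_mean(sample, sigma=2.0))  # interval covering 10
```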

Basic properties of point estimates

In order for an estimate to be of practical value, it must have the following properties.

1. A parameter estimate $\hat{\theta}$ is called unbiased if its mathematical expectation is equal to the estimated parameter, i.e.

$$M(\hat{\theta}) = \theta. \qquad (22.1)$$

If equality (22.1) is not satisfied, then the estimate either overestimates the value ($M(\hat{\theta}) > \theta$) or underestimates it ($M(\hat{\theta}) < \theta$). It is natural to take unbiased estimates as approximations of an unknown parameter, so as not to make a systematic error towards overestimation or underestimation.

2. An estimate $\hat{\theta}_n$ of a parameter is called consistent if it obeys the law of large numbers, i.e., converges in probability to the estimated parameter as the number of experiments (observations) increases without bound, so that the following equality holds:

$$\lim_{n\to\infty} P\left(\left|\hat{\theta}_n - \theta\right| < \varepsilon\right) = 1, \qquad (22.2)$$

where $\varepsilon > 0$ is an arbitrarily small number.

For (22.2) to hold, it is sufficient that the variance of the estimate tend to zero as $n \to \infty$, i.e.

$$\lim_{n\to\infty} D(\hat{\theta}_n) = 0, \qquad (22.3)$$

and, moreover, that the estimate be unbiased. It is easy to pass from formula (22.3) to (22.2) by using Chebyshev's inequality.

So, the consistency of an estimate means that with a sufficiently large number of experiments, and with arbitrarily high certainty, the deviation of the estimate from the true value of the parameter is less than any preassigned value. This justifies increasing the sample size.

Since $\hat{\theta}$ is a random variable whose value varies from sample to sample, the measure of its dispersion around the mathematical expectation is characterized by the variance $D(\hat{\theta})$. Let $\hat{\theta}_1$ and $\hat{\theta}_2$ be two unbiased estimates of the parameter $\theta$, i.e. $M(\hat{\theta}_1) = \theta$ and $M(\hat{\theta}_2) = \theta$, with variances $D(\hat{\theta}_1)$ and $D(\hat{\theta}_2)$; if $D(\hat{\theta}_1) < D(\hat{\theta}_2)$, then $\hat{\theta}_1$ is taken as the estimate.

3. An unbiased estimate that has the smallest variance among all possible unbiased estimates of the parameter calculated from samples of the same size is called an efficient estimate.

In practice, when estimating parameters, it is not always possible to satisfy requirements 1, 2, and 3 simultaneously. However, the choice of an estimate should always be preceded by a critical examination of it from all points of view. When choosing practical methods for processing experimental data, one should be guided by the formulated properties of estimates.
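To make properties 1–3 concrete, here is a small simulation sketch (an added illustration): for a normal population both the sample mean and the sample median are unbiased estimates of the expectation, but the mean has the smaller variance, i.e. it is the more efficient of the two:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 50, 20000

# Many samples from N(0, 1); compare two unbiased estimates of the expectation.
samples = rng.normal(loc=0.0, scale=1.0, size=(trials, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print("bias of mean:   %+.4f" % means.mean())    # both biases close to 0
print("bias of median: %+.4f" % medians.mean())
print("var of mean:    %.4f" % means.var())      # about 1/n = 0.02
print("var of median:  %.4f" % medians.var())    # about pi/(2n) ~ 0.031, larger
```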

Estimation of the mathematical expectation and variance for the sample

The most important characteristics of a random variable are the mathematical expectation and variance. Consider the question of which sample characteristics best estimate the mathematical expectation and variance in terms of unbiasedness, efficiency, and consistency.

Theorem 23.1. The arithmetic mean $\bar{x}$, calculated from $n$ independent observations of a random variable $X$ with mathematical expectation $M(X) = m$, is an unbiased estimate of this parameter.

Proof.

Let $x_1, x_2, \ldots, x_n$ be $n$ independent observations of the random variable $X$. By hypothesis $M(X) = m$, and since the $x_i$ are random variables with the same distribution law, $M(x_i) = m$ for each $i$. By definition, the arithmetic mean is

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$

Consider the mathematical expectation of the arithmetic mean. Using the properties of the mathematical expectation, we have

$$M(\bar{x}) = M\!\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) = \frac{1}{n}\sum_{i=1}^{n} M(x_i) = \frac{nm}{n} = m,$$

i.e. $M(\bar{x}) = m$. By virtue of (22.1), $\bar{x}$ is an unbiased estimate. ∎

Theorem 23.2. The arithmetic mean $\bar{x}$, calculated from $n$ independent observations of a random variable $X$ with $M(X) = m$ and $D(X) = \sigma^2$, is a consistent estimate of this parameter.

Proof.

Let $x_1, x_2, \ldots, x_n$ be $n$ independent observations of the random variable $X$. Then, by virtue of Theorem 23.1, we have $M(\bar{x}) = m$.

For the arithmetic mean, write Chebyshev's inequality:

$$P\left(\left|\bar{x} - m\right| < \varepsilon\right) \ge 1 - \frac{D(\bar{x})}{\varepsilon^2}.$$

Using the variance properties 4.5 and (23.1), we have

$$D(\bar{x}) = D\!\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} D(x_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n},$$

since $D(x_i) = \sigma^2$ by the hypothesis of the theorem.

So, the variance of the arithmetic mean is $n$ times less than the variance of the random variable $X$. Then

$$P\left(\left|\bar{x} - m\right| < \varepsilon\right) \ge 1 - \frac{\sigma^2}{n\varepsilon^2} \longrightarrow 1 \quad (n \to \infty),$$

which means that $\bar{x}$ is a consistent estimate. ∎
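The relation $D(\bar{x}) = \sigma^2/n$ used in the proof is easy to verify numerically; a minimal sketch (an added example):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, trials = 4.0, 20000  # population variance D(X) = 4

# Empirical check of D(x_bar) = D(X) / n for several sample sizes.
for n in (10, 100, 1000):
    means = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n)).mean(axis=1)
    print(f"n = {n:4d}: var of mean = {means.var():.5f}, D/n = {sigma2 / n:.5f}")
```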

Comment 1. We accept without proof a result that is very important for practice. If $X \sim N(a, \sigma)$, then the unbiased estimate $\bar{x}$ of the mathematical expectation $a$ has the minimum possible variance, equal to $\sigma^2/n$; therefore $\bar{x}$ is an efficient estimate of the parameter $a$.

Let's move on to the estimate for the variance and check it for consistency and unbiasedness.

Theorem 23.3. If a random sample consists of $n$ independent observations of a random variable $X$ with $M(X) = m$ and $D(X) = \sigma^2$, then the sample variance

$$D_B = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2 \qquad (23.3)$$

is not an unbiased estimate of $D(X)$, the general variance.

Proof.

Let $x_1, \ldots, x_n$ be $n$ independent observations of the random variable $X$. By hypothesis, $M(x_i) = m$ and $D(x_i) = \sigma^2$ for all $i$. Transform formula (23.3) of the sample variance:

$$D_B = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2 = \frac{1}{n}\sum_{i=1}^{n} x_i^2 - \bar{x}^2.$$

Let us simplify this expression by taking mathematical expectations. Since $M(x_i^2) = \sigma^2 + m^2$ and, taking into account (23.1), $M(\bar{x}^2) = D(\bar{x}) + m^2 = \sigma^2/n + m^2$, whence

$$M(D_B) = (\sigma^2 + m^2) - \left(\frac{\sigma^2}{n} + m^2\right) = \frac{n-1}{n}\,\sigma^2 \ne \sigma^2,$$

so the sample variance systematically underestimates the general variance. ∎

Let a random variable $X$ with unknown mathematical expectation $m$ and variance $D$ be subjected to $n$ independent experiments that yielded the results $x_1, x_2, \ldots, x_n$. Let us construct consistent and unbiased estimates for the parameters $m$ and $D$.

As an estimate of the mathematical expectation, take the arithmetic mean of the experimental values:

$$\tilde{m} = \frac{1}{n}\sum_{i=1}^{n} x_i. \qquad (2.9.1)$$

According to the law of large numbers, this estimate is consistent: as $n$ grows, the quantity $\tilde{m}$ converges in probability to $m$. The same estimate is also unbiased, since

$$M(\tilde{m}) = \frac{1}{n}\sum_{i=1}^{n} M(x_i) = m. \qquad (2.9.2)$$

The variance of this estimate is

$$D(\tilde{m}) = \frac{D}{n}. \qquad (2.9.3)$$

It can be shown that for a normal distribution this estimate is efficient. For other distribution laws this may not be the case.

Let us now estimate the variance. First choose for this estimate the formula for the statistical dispersion

$$D^* = \frac{1}{n}\sum_{i=1}^{n} (x_i - \tilde{m})^2. \qquad (2.9.4)$$

Let us check the consistency of this variance estimate. Expanding the brackets in formula (2.9.4),

$$D^* = \frac{1}{n}\sum_{i=1}^{n} x_i^2 - \tilde{m}^2.$$

As $n \to \infty$, the first term converges in probability to $M(X^2) = D + m^2$ and the second to $m^2$. Thus, our estimate converges in probability to the variance $D$; hence it is consistent.

Let us check the unbiasedness of the estimate $D^*$ for the quantity $D$. To do this, substitute expression (2.9.1) into formula (2.9.4) and take into account that the random variables $x_1, \ldots, x_n$ are independent:

$$D^* = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \frac{1}{n}\sum_{j=1}^{n} x_j\right)^2. \qquad (2.9.5)$$

Let us pass in formula (2.9.5) to the fluctuations of the random variables,

$$\dot{x}_i = x_i - m.$$

Expanding the brackets, we get

$$D^* = \frac{1}{n}\sum_{i=1}^{n} \dot{x}_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} \dot{x}_i\right)^2. \qquad (2.9.6)$$

Let us calculate the mathematical expectation of the quantity (2.9.6), taking into account that $M(\dot{x}_i^2) = D$ and $M(\dot{x}_i \dot{x}_j) = 0$ for $i \ne j$:

$$M(D^*) = D - \frac{D}{n} = \frac{n-1}{n}\,D. \qquad (2.9.7)$$

Relation (2.9.7) shows that the quantity calculated by formula (2.9.4) is not an unbiased estimator of the variance: its mathematical expectation is not equal to $D$, but somewhat less. This estimate leads to a systematic error towards underestimation. To eliminate the bias, it is necessary to introduce a correction, multiplying the value $D^*$ by $\frac{n}{n-1}$. The corrected statistical variance can then serve as an unbiased estimate of the variance:

$$\tilde{D} = \frac{n}{n-1}\,D^* = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \tilde{m})^2. \qquad (2.9.8)$$

This estimate is just as consistent as the estimate $D^*$, since the factor $\frac{n}{n-1} \to 1$ as $n \to \infty$.

In practice, instead of estimate (2.9.8), it is sometimes more convenient to use an equivalent estimate expressed through the second initial statistical moment:

$$\tilde{D} = \frac{n}{n-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_i^2 - \tilde{m}^2\right). \qquad (2.9.9)$$

Estimates (2.9.8) and (2.9.9) are not efficient. It can be shown that in the case of a normal distribution they are asymptotically efficient (as $n \to \infty$, their variance tends to the minimum possible value).
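A minimal numeric sketch (an added illustration) comparing the biased estimate (2.9.4) with the corrected estimate (2.9.8); in NumPy the two correspond to ddof=0 and ddof=1:

```python
import numpy as np

rng = np.random.default_rng(5)
true_d, n, trials = 9.0, 10, 50000

samples = rng.normal(loc=0.0, scale=3.0, size=(trials, n))  # D = 9

d_star = samples.var(axis=1, ddof=0)   # formula (2.9.4), divisor n
d_tilde = samples.var(axis=1, ddof=1)  # formula (2.9.8), divisor n - 1

print("mean of D*: %.3f" % d_star.mean())   # about (n-1)/n * D = 8.1, biased low
print("mean of D~: %.3f" % d_tilde.mean())  # about 9.0, unbiased
```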

Thus, we can formulate the following rules for processing statistical material of limited volume. If in $n$ independent experiments a random variable $X$ takes the values $x_1, x_2, \ldots, x_n$, and its mathematical expectation $m$ and variance $D$ are unknown, then to determine these parameters one should use the approximate estimates

$$\tilde{m} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \tilde{D} = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \tilde{m})^2. \qquad (2.9.10)$$
