
Basic categories of probability theory. Making managerial decisions under risk

Many of you studied probability theory and statistics at school or college. You have no doubt seen a graph like the one in Figure 4-1.

Figure 4-1. Normal (Gaussian) distribution of female height

Figure 4-1 depicts the so-called normal distribution; here it shows the distribution of women by height. The horizontal axis shows height in inches, and the two vertical axes show two kinds of probability.

1. Frequency (probability density) plot - the shaded area is read against the left vertical axis and shows how often a particular height occurs. In our example, the average height is 5 feet 4 inches. The probability that a woman's height will be close to this average is higher than the probability that her height will differ significantly from it. The higher the curve at the center of the graph, the more likely that height is; the areas to the left and right show less likely heights. For example, the height of the curve at 70 inches is much lower than at 64 inches, so a woman is much less likely to be 5'10" than the average 5'4".

2. Cumulative probability curve - the thin line starts at 0 percent and rises to 100 percent (read against the right vertical axis). This curve shows the cumulative probability that a woman will be no taller than a given height. For example, the line almost reaches 100 percent at 70 inches; the exact figure at 70 inches is 99.18 percent, which means that less than one percent of women are 5'10" or taller.

This graph, like others of its kind, is built from fairly involved mathematical formulas, but its essence is simple: the further a height is from the center, which marks the average value, the less likely you are to meet a woman of that height.
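As a quick check of the 99.18 percent figure, here is a minimal Python sketch. The mean of 64 inches comes from the text; the standard deviation of 2.5 inches is our assumption, chosen because it reproduces the quoted number, not a value given in the original.

```python
# Hedged sketch: mean = 64 in (from the text), sigma = 2.5 in (assumed).
from math import erf, sqrt

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """Cumulative probability P(X <= x) for a normal distribution."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

p = normal_cdf(70.0, mu=64.0, sigma=2.5)
print(f"P(height <= 70 in) = {p:.4f}")      # ~0.9918, i.e. 99.18 percent
print(f"P(height >  70 in) = {1 - p:.4f}")  # less than 1 percent of women
```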

Why are probability calculations done in such a complicated way? You can skip the long formulas and build a similar graph using a simple method. Go to a place where you can meet many women, such as a student dorm, randomly select 100 of them, and measure their height. Divide the measurements into 1-inch intervals and count the number of women in each interval. The result will most likely be approximately 16 women 64 inches tall, 15 each at 63 and 65 inches, 12 each at 62 and 66 inches, 8 each at 61 and 67 inches, 4 each at 60 and 68 inches, 2 each at 59 and 69 inches, and 1 each at 58 and 70 inches. If you make a bar graph of the number of women at each height, it will look something like the one shown in Figure 4-2.


Figure 4-2. Histogram of female height distribution

Copyright 2006 Trading Blox, all rights reserved.

The type of graph in Figure 4-2 is called a histogram. It graphically shows how often a particular value (in our case, a woman's height) occurs compared with other values, and it has the same shape as the normal distribution plot in Figure 4-1, but with one advantage: you can create it without any complex mathematical formulas. You only need to be able to count and categorize.
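Building such a histogram really is just counting and categorizing. Here is a minimal Python sketch using the counts imagined in the dorm example above (the numbers come from the text; nothing else is assumed).

```python
from collections import Counter

# Height in inches -> number of women, from the text's imagined survey.
counts = Counter({64: 16, 63: 15, 65: 15, 62: 12, 66: 12, 61: 8, 67: 8,
                  60: 4, 68: 4, 59: 2, 69: 2, 58: 1, 70: 1})

assert sum(counts.values()) == 100  # all 100 women are accounted for

# Print a text histogram: no formulas, just counting and categorizing.
for height in sorted(counts):
    print(f"{height:2d} in | {'#' * counts[height]}")
```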

A histogram of this kind can also be built from your own trade data to give you an idea of what the future may hold; such a graph lets you think in terms of probabilities rather than predictions. Figure 4-3 is a histogram of monthly returns from a twenty-year test of the Donchian trend system, a simplified version of the Turtle system that is easier to test over a long dataset.

Figure 4-3. Distribution of monthly results

Copyright 2006 Trading Blox, all rights reserved.

The histogram in Figure 4-3 is divided into 2 percent bins. One column shows the number of months in which the return was positive and fell between 0 and 2 percent, the next column covers the range from 2 to 4 percent, and so on. Notice how the shape of the histogram resembles the normal distribution of heights discussed earlier. The significant difference is that the graph is skewed to the right. This skew toward positive months is what is meant by a skewed distribution, or a "heavy tail".

The histogram in Figure 4-4 depicts the distribution of the trades themselves. The left side shows losing trades, the right side winning ones. Note that the graph has two count scales, one on the left and one on the right, while the percentages on the central vertical scale run from 0 to 100 percent. The cumulative lines move from 0 to 100 percent outward from the center of the chart.

The numbers on the left and right scales show the number of trades in each 20 percent interval. For example, the 100 percent mark on the losing side corresponds to 3,746 trades, meaning that over the 22 years of the test there were 3,746 losing trades. For winning trades, the corresponding figure is 1,854 trades (equal to 100 percent).

Trades are grouped into columns according to their profit divided by the amount risked on the trade. This ratio, known as the R-multiple, was devised by trader Chuck Branscombe as a convenient way to compare trades made with different systems and in different markets (the R-multiple was popularized by Van Tharp in his book Trade Your Way to Financial Freedom).

Figure 4–4 Distribution of trades according to Donchian, R-multiple™

Copyright 2006 Trading Blox, all rights reserved.

An example will illustrate the concept. If you buy an August gold contract at $450 with a stop at $440 (in case the market moves against you), you are risking $1,000 (the $10 difference between $450 and $440 times 100 ounces, the size of one contract). If the trade makes a $5,000 profit, it is called a 5R trade, because the $5,000 profit is five times the $1,000 you risked. In Figure 4-4, winning trades are grouped in 1R intervals and losing trades in 0.5R intervals.
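A minimal sketch of the R-multiple calculation, assuming the gold trade above is closed at $500 (an exit price we invent to produce the text's $5,000 profit):

```python
def r_multiple(entry: float, stop: float, exit_price: float,
               contract_size: int = 100) -> float:
    """Profit on the trade divided by the amount risked at entry."""
    risk = abs(entry - stop) * contract_size       # $1,000 in the example
    profit = (exit_price - entry) * contract_size  # $5,000 if exited at $500
    return profit / risk

# The gold example from the text: entry $450, stop $440, 100-ounce contract.
print(r_multiple(entry=450, stop=440, exit_price=500))  # 5.0, i.e. a 5R trade
```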

It may seem strange that losing trades so greatly outnumber winning ones. In fact, this is common for trend-following systems. However, although losing trades are numerous, most losses are roughly equal to the predetermined entry risk of 1R. By contrast, winning trades return many times the entry risk, with 43 trades yielding at least 10 times the entry risk.

The Turtles never knew which trade would succeed and which would fail. We only had an approximate picture of the shape of the distribution of possible outcomes, which should have resembled the ones shown in the figures above. We believed that each trade could be profitable, but we understood that it would most likely be a loser. We knew that some trades would bring in 4R or 5R, a few would bring in 12R, and very few would make 20R or even 30R. But the Turtles knew for certain that the winning trades would be large enough to cover the losses from the unsuccessful ones and still leave a profit.

Therefore, when trading, we did not stake our emotional state on the outcome of any single trade, because we knew it would most likely be a loser. We reasoned in terms of probabilities, and this gave us the confidence to make decisions in the face of high risk and uncertainty.

Basic concepts of the theory

  • Probability
  • Probability space
  • Random variable
  • De Moivre-Laplace local theorem
  • Distribution function
  • Expected value
  • Variance of a random variable
  • Independence
  • Conditional probability
  • Law of large numbers
  • Central limit theorem

Probability theory

  • Introduction
  • The main provisions of the theory
  • Conclusion

Probability theory arose in the middle of the 17th century in connection with problems of calculating players' chances of winning in games of chance. The Chevalier de Méré, a passionate dice player, tried to get rich by inventing new rules of play. He offered to roll a die four times in a row and bet that a six (6 points) would come up at least once. For greater confidence in winning, de Méré turned to his friend, the French mathematician Pascal, with a request to calculate the probability of winning this game. We present Pascal's reasoning. A die is a regular cube, with the numbers 1, 2, 3, 4, 5 and 6 (the number of points) marked on its six faces. When a die is thrown "at random", the appearance of any given number of points is a random event; it depends on many unaccounted-for influences: the initial position and initial velocity of different parts of the die, the movement of the air along its path, the roughness and elastic forces at the points of impact with the surface, and so on. Since these influences are chaotic, there is no reason, by symmetry, to prefer one number of points over another (unless, of course, the die itself is irregular or the thrower has some exceptional dexterity).

Therefore, when a die is thrown, there are six equally possible, mutually exclusive cases, and the probability of any given number of points should be taken to equal 1/6 (or 100/6 %). When a die is thrown twice, the result of the first throw has no effect on the result of the second, so there are 6 · 6 = 36 equally possible cases in all. Of these 36 cases, a six appears at least once in 11 cases, and in 5 · 5 = 25 cases a six never comes up.

The chances of a six appearing at least once are thus 11 out of 36; in other words, the probability of the event A, that a six appears at least once in two throws of a die, equals 11/36, i.e. the ratio of the number of cases favorable to event A to the number of all equally possible cases. The probability that a six never appears, i.e. the probability of the event opposite to A, is 25/36. With three throws of a die, the number of all equally possible cases is 36 · 6 = 216, and with four throws 216 · 6 = 1296; the number of cases in which a six does not appear even once is 25 · 5 = 125 for three throws and 125 · 5 = 625 for four. Therefore, the probability that a six is never thrown in four throws equals 625/1296, and the probability of the opposite event, the appearance of at least one six, i.e. the probability of de Méré winning, equals 671/1296, which is slightly more than one half.

Thus, de Méré was more likely to win than to lose.
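Pascal's exact count can be checked both analytically and by simulation; a short Python sketch:

```python
from fractions import Fraction
import random

# Exact: the chance of no six in four throws is (5/6)^4 = 625/1296.
p_no_six = Fraction(5, 6) ** 4
p_win = 1 - p_no_six
print(p_win, float(p_win))   # 671/1296 ≈ 0.5177, better than even odds

# A quick Monte Carlo check of de Méré's bet.
trials = 100_000
wins = sum(any(random.randint(1, 6) == 6 for _ in range(4))
           for _ in range(trials))
print(wins / trials)         # ≈ 0.518
```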

Pascal's reasoning and all his calculations rest on the classical definition of probability as the ratio of the number of favorable cases to the number of all equally possible cases.

It is important to note that these calculations, and the very concept of probability as a numerical characteristic of a random event, refer to mass phenomena. The statement that the probability of rolling a six with one toss of a die equals 1/6 has the following objective meaning: in a large number of throws, the proportion of sixes will on average equal 1/6. Thus, in 600 throws a six may appear 93, or 98, or 105 times, and so on, but over a large number of series of 600 throws, the average number of sixes per series will be very close to 100.

The ratio of the number of occurrences of an event to the number of trials is called the frequency of the event. For homogeneous mass phenomena, the frequencies of events behave stably, i.e. they fluctuate only slightly around average values, which are taken as the probabilities of those events (the statistical definition of probability).

In the 17th and 18th centuries probability theory developed slowly, since its scope of application, owing to the low level of natural science, was limited to a small range of issues (insurance, gambling, demography). From the 19th century to the present day, driven by the demands of practice, probability theory has developed continuously and rapidly, finding applications in ever more diverse fields of science, technology, and economics (the theory of observational errors, ballistics, statistics, molecular and atomic physics, chemistry, meteorology, planning, statistical quality control in production, etc.).

Probability theory is a branch of mathematics that studies the patterns of mass random events with stable frequencies.

The main provisions of the theory

Probability theory is a science that studies the laws of mass random phenomena. The same patterns, only in the narrower subject area of socio-economic phenomena, are studied by statistics. The two sciences share a common methodology and are closely interrelated; practically any conclusion drawn by statistics is regarded as probabilistic.

The probabilistic nature of statistical studies manifests itself particularly clearly in the sampling method, since any conclusion drawn from sample results is assessed with a given probability.

With the development of the market, probability and statistics are gradually merging; this is especially evident in risk management, inventory management, securities portfolios, and so on. Abroad, probability theory and mathematical statistics are applied very widely. In our country they are as yet widely used only in product quality management, so spreading and implementing the methods of probability theory in practice remains an urgent task.

As already mentioned, the concept of the probability of an event is defined for mass phenomena or, more precisely, for homogeneous mass operations. A homogeneous mass operation consists of multiple repetitions of similar single operations, or, as they are called, trials. Each individual trial consists in creating a certain set of conditions essential for the given mass operation, and in principle it must be possible to reproduce this set of conditions an unlimited number of times.

Example 1. When a die is thrown "at random", the only essential condition is that the die is thrown onto the table; all other circumstances (initial speed, air pressure and temperature, the color of the table, etc.) are not taken into account.

Example 2. A shooter repeatedly fires at a certain target from a given distance from a standing position; each individual shot is a trial in a mass shooting operation under the given conditions. If the shooter is allowed to change position between shots ("standing", "lying", "kneeling"), the previous conditions change significantly, and one should then speak only of a mass shooting operation from a given distance.

The possible outcomes of a single operation, or trial S, are called random events. A random event is an event that may or may not occur when S is tested. Instead of "occur" they also say "come", "appear", "take place".

So, when a die is thrown, random events include: rolling a given number of points, rolling an odd number of points, rolling a number of points not exceeding three, and so on.

When shooting, a random event is a hit on the target (the shooter can either hit the target or miss), and the opposite random event is a miss. This example shows clearly that the concept of a random event in probability theory should not be understood in the everyday sense of "pure chance": for a good shooter, hitting the target is the rule rather than chance as commonly understood.

Suppose that in a certain number n of trials, event A occurred m times, i.e. m outcomes of the single operation were "successful", in the sense that the event A of interest took place, and n − m outcomes were "unsuccessful": event A did not happen.

The probability of the event A, or the probability of a "successful" outcome of a single operation, is the average value of the frequency, i.e. the average value of the ratio of the number of "successful" outcomes to the number of all performed single operations (trials).

It goes without saying that if the probability of an event equals m/n, then in n trials event A can occur more than m times or fewer than m times; it occurs m times only on average, and in most series of n trials the number of occurrences of event A will be close to m, especially if n is a large number.

Thus, the probability P(A) is some constant number between zero and one:

0 ≤ P(A) ≤ 1

Sometimes it is expressed as a percentage: P(A) · 100% is the average percentage of occurrences of event A. Of course, it should be remembered that we are speaking of some definite mass operation, i.e. the conditions S under which the trials are carried out are fixed; if they are changed significantly, the probability of event A may change as well: it becomes the probability of event A in a different mass operation, with different trial conditions. In what follows we will assume, without stating it each time, that we are speaking of a definite mass operation; if the trial conditions change, this will be specially noted.
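The statistical definition can be watched in action: the frequency m/n of rolling a six fluctuates around 1/6 and settles down as the number of trials grows. A minimal simulation:

```python
import random

def frequency_of_six(n: int) -> float:
    """Frequency m/n of the event 'a six is rolled' in n throws."""
    m = sum(random.randint(1, 6) == 6 for _ in range(n))
    return m / n

for n in (60, 600, 60_000, 600_000):
    print(n, round(frequency_of_six(n), 4))  # drifts toward 1/6 ≈ 0.1667
```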

Two events A and B are said to be equivalent if in every trial they either both occur or both do not occur. In this case one writes

A = B

and makes no distinction between these events. The probabilities of equivalent events are obviously the same: if A = B, then P(A) = P(B). The converse, of course, is not true: from P(A) = P(B) it does not at all follow that A = B.

An event that necessarily occurs in every trial is called certain.

We agree to denote it by the letter D.

For a certain event, the number of its occurrences m equals the number of trials n; hence its frequency is always equal to one, and the probability of a certain event should be taken equal to one:

P(D) = 1

An event that obviously cannot happen is called impossible.

We agree to denote it by the letter H.

For an impossible event, m = 0, therefore, its frequency is always zero, i.e., the probability of an impossible event should be considered equal to zero:

P(H) = 0

The greater the probability of an event, the more often it occurs, and vice versa, the lower the probability of an event, the less often it occurs. When the probability of an event is close to one or equal to one, then it occurs in almost all trials. They say about such an event that it is practically certain, that is, that one can certainly count on its occurrence.

Conversely, when the probability is zero or very small, then the event occurs extremely rarely; such an event is said to be practically impossible.

How small must the probability of an event be for it to be practically impossible? A general answer cannot be given here, since everything depends on how important this event is.

For example, if the probability that a light bulb is defective is 0.01, one can live with that. But if 0.01 is the probability that a potent botulinum toxin has formed in a can of preserved food, one cannot accept it, since then roughly one can in a hundred would poison someone, and human lives would be at risk.

Like any science, probability theory and mathematical statistics operate with a number of main categories:

  • Events;
  • Probability;
  • Randomness;
  • Probability distribution, etc.

An event is an arbitrary subset of the set of all possible outcomes. Events can be:

  • Certain;
  • Impossible;
  • Random.

A certain event is an event that is sure to occur when the given conditions are met.

An impossible event is an event that certainly will not occur under the given conditions.

Random events are those that may or may not occur under the given conditions.

Events are called the only possible ones if the occurrence of one of them is a certain event.

Events are called equally possible if none of them is objectively more likely than the others.

Events are called incompatible if the appearance of one of them excludes the possibility of the appearance of the other in the same trial.

Many people, on encountering the term "probability theory", are frightened, thinking it is something overwhelming and very complex. But it is really not so tragic. Today we will consider the basic concepts of probability theory and learn to solve problems through specific examples.

The science

What does the branch of mathematics called "probability theory" study? It studies the patterns of random events and random quantities. Scientists first became interested in the subject back in the seventeenth century, when they studied games of chance. The basic concept of probability theory is the event: any fact that is ascertained by experience or observation. But what is experience? It is another basic concept of probability theory, and it means that the given set of circumstances was created not by chance but for a specific purpose. In observation, by contrast, the researcher does not take part in the experiment but simply witnesses the events, without influencing what is happening in any way.

Events

We have learned that the basic concept of probability theory is the event, but we have not yet considered the classification. All events fall into the following categories:

  • Certain.
  • Impossible.
  • Random.

No matter what kind of events are observed or created in the course of an experiment, they are all subject to this classification. Let us get acquainted with each type separately.

Certain events

A certain event is a circumstance preceded by the full set of measures necessary to guarantee it. To understand the essence better, it helps to give a few examples; physics, chemistry, economics, and higher mathematics all rely on this notion. Probability theory includes the important concept of the certain event. Here are some examples:

  • We work and receive remuneration in the form of wages.
  • We passed the exams well and won the competitive selection, and for this we are rewarded with admission to an educational institution.
  • We deposited money in a bank; when we need it, we will get it back.

Such events are certain: if we have fulfilled all the necessary conditions, we will definitely get the expected result.

Impossible events

Let us now consider more elements of probability theory and move on to the next type of event: the impossible. To begin with, we state the most important rule: the probability of an impossible event is zero.

It is impossible to deviate from this formulation when solving problems. To clarify, here are examples of such events:

  • Water froze at a temperature of plus ten degrees Celsius (this is impossible).
  • A lack of electricity has no effect on production (just as impossible as the previous example).

More examples are unnecessary, since those described above reflect the essence of this category very clearly: an impossible event never happens in the course of an experiment, under any circumstances.

Random events

In studying the elements of probability theory, special attention should be given to this type of event, for these are the events the science actually studies. As a result of an experiment, something may or may not happen, and the trial can be repeated an unlimited number of times. Typical examples:

  • Tossing a coin is an experiment, or trial; heads coming up is an event.
  • Drawing a ball out of a bag blindly is a trial; drawing a red ball is the event; and so on.

There can be an unlimited number of such examples, but the essence should by now be clear. To summarize and systematize the knowledge gained about events, a table is given below; probability theory studies only the last of the types presented.

Title | Definition | Example
Certain | Events that occur with a 100% guarantee, subject to certain conditions. | Admission to an educational institution after passing the entrance exam well.
Impossible | Events that will never happen under any circumstances. | Snow falling at an air temperature of plus thirty degrees Celsius.
Random | An event that may or may not occur during an experiment/trial. | A hit or a miss when throwing a basketball at the hoop.

Laws

Probability theory is a science that studies the possibility of events occurring. Like other sciences, it has its own rules. There are the following laws of probability theory:

  • Convergence of sequences of random variables.
  • The law of large numbers.

When calculating the probability of a complex event, you can decompose it into simple events to reach the result more easily and quickly. Note that these laws are easily proved with the help of certain theorems. Let's start with the first law.

Convergence of sequences of random variables

Note that there are several types of convergence:

  • Convergence in probability.
  • Almost sure convergence.
  • Mean-square (RMS) convergence.
  • Convergence in distribution.

Admittedly, it is hard to get to the bottom of this on the fly, so here are some definitions to help. Let's start with the first kind. A sequence of random variables X_n is said to converge in probability to a random variable X if, for any ε > 0, the probability that X_n lies within ε of X tends to one as n tends to infinity.

Let's move on to the next one: almost sure convergence. The sequence is said to converge almost surely to a random variable X if, as n tends to infinity, the probability that X_n tends to X equals one.

The next type is mean-square (RMS) convergence: X_n converges to X in mean square if the expected value of (X_n − X)² tends to zero as n grows. One practical use of mean-square convergence is that the study of vector random processes can be reduced to the study of their coordinate random processes.

The last type remains; let us analyze it briefly so we can proceed directly to solving problems. Convergence in distribution has another name, "weak" convergence, and we explain why below. Weak convergence is the convergence of the distribution functions at every point of continuity of the limiting distribution function.

As promised: weak convergence differs from all of the above in that the random variables need not be defined on a common probability space. This is possible because the condition is formulated exclusively in terms of distribution functions.
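A small simulation can make convergence in probability concrete. Below, the sample mean of n fair-die rolls plays the role of X_n and the constant 3.5 plays the role of X: the probability of landing within ε of 3.5 climbs toward one as n grows (the ε of 0.1 and the repetition count are arbitrary choices for the demonstration).

```python
import random

def prob_within(n: int, eps: float = 0.1, reps: int = 2000) -> float:
    """Estimate P(|mean of n die rolls - 3.5| < eps) by repetition."""
    hits = 0
    for _ in range(reps):
        mean = sum(random.randint(1, 6) for _ in range(n)) / n
        hits += abs(mean - 3.5) < eps
    return hits / reps

for n in (10, 100, 1000):
    print(n, prob_within(n))  # the estimates climb toward 1.0
```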

Law of Large Numbers

Excellent assistants in proving this law will be theorems of probability theory, such as:

  • Chebyshev's inequality.
  • Chebyshev's theorem.
  • Generalized Chebyshev's theorem.
  • Markov's theorem.

If we were to cover all these theorems, the discussion could stretch over several dozen pages. Our main task is to apply probability theory in practice, and we invite you to do so right now. But before that, let us consider the axioms of probability theory; they will be our main aids in solving problems.

Axioms

We already met the first axiom when we talked about the impossible event. Let's recall it: the probability of an impossible event is zero. We gave a vivid and memorable example: snow falling at an air temperature of plus thirty degrees Celsius.

The second is as follows: a certain event occurs with probability equal to one. In mathematical language: P(B) = 1.

Third: a random event may or may not occur, and its probability always ranges from zero to one. The closer the value is to one, the greater the chance; as the value approaches zero, the probability becomes very low. In mathematical language: 0 < P(C) < 1.

Consider the last, fourth axiom, which reads: the probability of the sum of two incompatible events equals the sum of their probabilities. In mathematical language: P(A + B) = P(A) + P(B).

The axioms of probability theory are the simplest rules that are easy to remember. Let's try to solve some problems, based on the knowledge already gained.

Lottery ticket

To begin with, consider the simplest example: a lottery. Imagine you have bought one lottery ticket for luck. What is the probability of winning at least twenty rubles? In total, a thousand tickets take part in the draw, of which one carries a prize of five hundred rubles, ten carry one hundred rubles, fifty carry twenty rubles, and one hundred carry five rubles. Problems in probability theory are built around finding the chance of such luck. Let's work through the solution together.

If we denote a win of five hundred rubles by the letter A, then the probability of A is 0.001. How did we get it? You just divide the number of "lucky" tickets by the total number of tickets (in this case, 1/1000).

B is a win of one hundred rubles; its probability equals 0.01, found on the same principle as before (10/1000).

C is a win of twenty rubles; its probability is 0.05 (50/1000).

The remaining tickets are of no interest to us, since their prizes are smaller than the amount specified in the problem. Applying the fourth axiom (the three events are incompatible, since a ticket carries only one prize), the probability of winning at least twenty rubles is P(A) + P(B) + P(C). We have already found each probability, so it remains to add them, giving 0.061. This number answers the question posed.
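The same solution in a few lines of Python, using exact fractions:

```python
from fractions import Fraction

tickets = 1000
winners = {500: 1, 100: 10, 20: 50, 5: 100}  # prize in rubles -> ticket count

# Fourth axiom: the events "win 500", "win 100", "win 20" are incompatible
# (a ticket carries one prize), so their probabilities simply add.
p = sum(Fraction(count, tickets)
        for prize, count in winners.items() if prize >= 20)
print(p, float(p))  # 61/1000 = 0.061
```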

Card deck

Problems in probability theory can be more complex; take, for example, the following task. Before you is a deck of thirty-six cards. Your task is to draw two cards in a row, without returning the first to the pile; both the first and the second card must be aces, the suit does not matter.

First we find the probability that the first card is an ace: divide four by thirty-six. We set that card aside. We then draw the second card; it will be an ace with probability three thirty-fifths. The probability of the second event depends on which card we drew first, i.e. on whether it was an ace or not. It follows that event B depends on event A.

The next step is to find the probability that both occur, i.e. we multiply A and B. Their product is found as follows: multiply the probability of one event by the conditional probability of the other, calculated under the assumption that the first event happened, i.e. that the first card drawn was an ace.

To make everything clear, let us introduce a notation for the conditional probability of an event: it is calculated assuming that event A has occurred, and is written P(B|A).

Let's continue the solution of our problem: P(A · B) = P(A) · P(B|A), or equivalently P(A · B) = P(B) · P(A|B). The probability is (4/36) · (3/35) = 12/1260 = 1/105 ≈ 0.0095, or roughly 0.01. The probability of drawing two aces in a row is thus about one in a hundred. The value is very small, from which it follows that the occurrence of this event is extremely unlikely.
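The corrected arithmetic is easy to confirm in Python, exactly and by simulation:

```python
from fractions import Fraction
import random

# Exact: P(A and B) = P(A) * P(B|A) = 4/36 * 3/35.
p = Fraction(4, 36) * Fraction(3, 35)
print(p, float(p))   # 1/105 ≈ 0.0095, about one chance in a hundred

# Monte Carlo check with a 36-card deck containing four aces.
deck = ["ace"] * 4 + ["other"] * 32
trials = 200_000
hits = sum(random.sample(deck, 2) == ["ace", "ace"] for _ in range(trials))
print(hits / trials)  # ≈ 0.0095
```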

Forgotten number

We propose to analyze a few more kinds of problems studied by probability theory. You have already seen solutions to some in this article; now let us try the following problem: a boy forgot the last digit of his friend's phone number, but since the call was very important, he began trying digits one by one. We need to calculate the probability that he gets through in no more than three attempts. The solution is simple if the rules, laws, and axioms of probability theory are known.

Before looking at the solution, try to solve it yourself. We know that the last digit can be anything from zero to nine, ten values in all. The probability of hitting the right one on a single attempt is 1/10.

Next, we need to consider the ways the event can unfold. Suppose the boy guessed right immediately and dialed the correct digit first; the probability of this is 1/10. Second option: the first call misses and the second hits; the probability is 9/10 multiplied by 1/9, which again gives 1/10. Third option: the first and second calls go to the wrong number, and only the third reaches the friend; the probability is 9/10 times 8/9 times 1/8, again 1/10. By the condition of the problem, other options do not interest us, so it remains to add the results: 3/10. Answer: the probability that the boy needs no more than three calls is 0.3.
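A sketch that enumerates the same three scenarios with exact fractions:

```python
from fractions import Fraction

def p_success_on(k: int, digits: int = 10) -> Fraction:
    """Probability the correct digit is dialed exactly on the k-th attempt."""
    p = Fraction(1, 1)
    for miss in range(k - 1):
        p *= Fraction(digits - 1 - miss, digits - miss)  # another wrong digit
    return p * Fraction(1, digits - (k - 1))             # correct digit now

p_at_most_three = sum(p_success_on(k) for k in (1, 2, 3))
print(p_at_most_three)  # 3/10, matching the answer above
```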

Cards with numbers

There are nine cards in front of you, each bearing a number from one to nine, with no repeats. The cards are placed in a box and mixed thoroughly. You need to calculate the probability that the card drawn shows

  • an even number;
  • a two-digit number.

Before moving on to the solution, let us agree that m is the number of favorable cases and n is the total number of outcomes. Find the probability that the number is even. It is easy to count that there are four even numbers among the nine, so m = 4, while the total number of outcomes is n = 9. The probability is therefore 4/9, or about 0.44.

Consider the second case: the number of outcomes is again nine, but there can be no favorable outcomes at all, i.e. m equals zero. The probability that the drawn card bears a two-digit number is therefore also zero.
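Both answers follow from a direct count of favorable cases:

```python
cards = list(range(1, 10))      # nine cards numbered 1..9
n = len(cards)                  # total number of outcomes: 9

m_even = sum(1 for c in cards if c % 2 == 0)    # 4 favorable cases
m_two_digit = sum(1 for c in cards if c >= 10)  # 0 favorable cases

print(m_even / n)       # 4/9 ≈ 0.44
print(m_two_digit / n)  # 0.0: an impossible event
```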

Probability is an intermediate category that makes a gradual or smooth transition from necessity to chance and from chance to necessity. A low probability is closer to chance; a high probability is closer to necessity. With one of its "ends" probability abuts chance and passes into it; at the other "end" it passes into necessity.

Speaking of the origins of the category of "probability", we should first of all mention Aristotle. More than once in his writings he pointed out that there is an intermediate category between chance and necessity. True, Aristotle did not designate this category by any single, specific term. He usually used the expression "for the most part" in the context of a comparison with chance (that which happens only sometimes) and necessity (that which always takes place). In the Prior Analytics he spoke of the intermediate between the contingent and the necessary as "possible in one sense", contrasting it with the contingent as "possible in another sense" (32b 4-23). In the same work the term "probable" (70a 3-10) is encountered, used in a meaning close to the expression "for the most part". Here are some texts:

“The accidental, or incidental, is that which is inherent in something and of which it can be truly asserted, but which is inherent neither by necessity nor for the most part.”

"And so, since among existing things some are the same always and by necessity (necessity not in the sense of violence, but in the sense of that which cannot be otherwise), while others are so not by necessity and not always, but for the most part, this is the beginning and this is the cause of the incidental; for that which exists not always and not for the most part we call accidental, or incidental. Thus, if bad weather and cold come in the summer, we say that this happened by chance, and not when heat sets in, because the latter happens in summer always or in most cases, while the former does not. And that a man is pale is something incidental (for this happens neither always nor in most cases)" (pp. 183-185; 1026b 27-35). "Therefore, since not everything exists or becomes necessarily and always, but most things hold for the most part, there must necessarily be the incidental (otherwise everything would be by necessity); so the cause of the incidental is matter, which can be otherwise than it is for the most part. First of all we must find out whether there really is nothing that exists neither always nor for the most part, or whether this is impossible. In fact, besides these there is that which can be one way or another: the incidental. But whether only that exists which holds in most cases and nothing exists always, or whether there is something eternal, must be considered later; that there is no science of the incidental is obvious, for every science is of that which is always or for the most part. Indeed, how else could one learn it or teach it to another? For a thing must be defined as holding always or for the most part; for example, honey water is useful to a fever patient in most cases. As for what runs counter to this, one will not be able to state when it brings no benefit, for example on a new moon, for even "on a new moon" means something that happens always or for the most part" (p. 184; 1027a 8-27).

"... the accidental, or incidental, is that which does happen, but not always, not by necessity, and not for the most part."

The random is "that whose cause is indeterminate, which occurs not for the sake of anything, and neither always, nor for the most part, nor according to any law."

"Of the accidental, or incidental, there is no knowledge through demonstration. For the accidental is neither that which happens necessarily, nor that which happens for the most part, but something that happens apart from both."

"As for demonstrations and knowledge of what happens often, for example of a lunar eclipse, it is clear that, insofar as they are such, they are always the same; insofar as they are not always the same, they are particular."

"So, some events are universal (for they always, and in all cases, either are in such a state or occur so), while others are not always so, but only in most cases; for example, not all men grow a beard, but only the majority do."

"And since some things exist by necessity, others for the most part, and still others as it happens, the interlocutor always provides an opportunity for attack if he passes off what exists by necessity as happening for the most part, or what happens for the most part (or its opposite) as existing by necessity. Indeed, if he presents what exists by necessity as happening for the most part, it is clear that he asserts it is not inherent in everything, although in fact it is inherent in everything, so that he is mistaken. The same holds if he passes off the opposite of what happens for the most part as existing by necessity, for the opposite of what happens for the most part is always called the rarer thing: for example, if people are for the most part bad, the good are the rarer, so the interlocutor is still more mistaken if he says that people are necessarily good. In the same way people are mistaken when they pass off the accidental as existing by necessity or as happening for the most part. And when the interlocutor has not specified whether he speaks of a thing as happening for the most part or as existing by necessity, and the thing in fact holds for the most part, one may argue with him as though he had said that it exists by necessity. For example, if, without specifying, he asserts that the disinherited are bad people, one may argue with him as though he had asserted that they are bad by necessity."

"... it is obvious that not everything exists and happens by virtue of necessity; some things depend on chance, and of them the affirmation is no truer than the negation; other things, although they happen one way rather than another and for the most part, can nevertheless happen otherwise."

"... some events always occur in the same way, and others for the most part; it is therefore obvious that for neither of these can chance or the accidental be considered the cause, neither for what happens of necessity and always, nor for what happens for the most part."

"For the spontaneous and the accidental take place contrary to what is or happens always or as a rule."

“After all, what is generated by nature arises either always or for the most part in the same way, and what deviates from this is spontaneous or accidental” (my italics everywhere - L.B.).

From these texts it is clear that for Aristotle the category "for the most part" is no less important than necessity and chance. He almost always thinks in triads: "necessary, for the most part, accidental". Therefore, those researchers who, in analyzing Aristotle's work, limit themselves to the pair of categories "necessity and accident" are mistaken. This contradicts the historical record, not to mention that it distorts Aristotle's position on the dialectic of the necessary, the probable, and the accidental. His position on this question is perhaps far more balanced and dialectical than that of many philosophers who lived after him, including Hegel. For the Greek thinker it was quite clear that an intermediate link exists between the necessary and the accidental. Granted, he did not study it as carefully as the categories of the necessary and the accidental, but he left ample evidence of how he understood the intermediate category. Here is another text in which the philosopher, speaking of the possible in two senses, by the possible in the first sense clearly means the probable:

"... Let us say again that 'to be possible' is used in a double sense: in one sense possible is that which usually happens but is not necessary, as, for example, that a man turns gray, or grows fat, or loses weight, or in general what is naturally inherent in him (for all this is not connected with necessity, since a man does not exist forever, though while he exists all this happens either of necessity or usually). In another sense 'to be possible' means the indefinite, that which can be thus or not thus, for example, that a living being walks, or that an earthquake occurs while it is walking, and in general everything that happens by chance, for by nature none of this can happen one way any more than the opposite way. Therefore the premises about each of these kinds of possibility are convertible, though not in the same way: the premise about what happens by nature converts into the premise about what is not inherent by necessity (thus a man may fail to turn gray), while the premise about the indefinite converts into the premise that the thing can equally be either way. Of the indefinite there is neither science nor demonstrative syllogism, for lack of a firmly established middle term; of what happens by nature there are both. And it is about what is possible in this sense that reasoning and inquiry, generally speaking, take place."

In two cases, Aristotle speaks directly about the probable, gives a definition of the probable:

“The probable is a plausible premise, for that which is known to happen or not happen, to be or not to be, for the most part in such and such a way, is probable; for example, that the envious hate or that lovers love.”

"Probable is that which happens for the most part, and not simply that which happens, as some define it, but that which may happen in another way; it is related to that in relation to which it is probable, as the general is to the particular" .

Both definitions of the probable fully correspond to the expressions used in the earlier texts: "for the most part", "in most cases", "usually". Thus, by the intermediate category (between the necessary and the contingent) Aristotle clearly meant the probable.

In our philosophical literature, at least two authors point out that Aristotle had already explored the problem of probability. Here is what V.I. Kuptsov writes: "The concepts of possibility, probability, and chance, firmly rooted in everyday language from time immemorial, served man as an imperfect but still effective means of cognizing reality... Already among ancient thinkers they became the subject of systematic research. Especially remarkable in this regard are the works of Aristotle, who examines in detail various types of indefinite statements and problematic conclusions, analyzing their role in the cognitive process. Phenomena, for Aristotle, are greatly diverse in the character of their realization. Some of them 'always arise in the same way, others for the most part', while still others are entirely individual; yet even in phenomena that 'have not happened by chance, much happens by chance' (Aristotle. Physics. M., 1937, p. 38)." And now the opinion of A.S. Kravets: "The history of the problem of probability can be traced far back into the past. Aristotle was already interested in it. In the Rhetoric he analyzed certain probabilistic inferences and tried to define the concept of probability" (here A.S. Kravets cites the definition of probability quoted above - L.B.). "In this definition," he writes further, "Aristotle is already attempting to connect probability with the categories of necessity, chance, possibility, the general and the particular."

V.I. Kuptsov and A.S. Kravets thus tried to restore historical justice and paid tribute to Aristotle as the first thinker to investigate the objective status of probability.

Unfortunately, another great categoriologist, Hegel, practically ignored this category. E.P. Sitkovsky writes about this:

“P.L. Lavrov in his work “Hegelism” (1858) says that the Hegelian “Encyclopedia of Philosophical Sciences” really covered almost everything, especially the Hegelian “Logic”. But then he adds: "However, not quite. As an example of a gap, one can cite the theory of probability, a rather remarkable science not only in practical, but also in metaphysical terms." Lavrov even indicates that section of Hegelian logic in which the concept of probability should be introduced, namely the department of essence, the subdivision "Phenomenon" (see P.L. Lavrov. Philosophy and sociology, vol. 1, M., 1965, p. 172) .

Probability is a concept by which the degree of realizability of a possibility or chance is determined. The concept of probability plays a large role in modern mathematics, economic statistics, sociology, and elsewhere. Its metaphysical significance lies in its close connection with the dialectical categories of possibility and chance, with the concepts of law and regularity (especially statistical regularity), with the concept of necessity (whose manifestation is chance), and with the category of reality (since a possibility is always considered in the perspective of its transition into reality). In ordinary usage the concept of probability often merges with the concept of possibility; the very distinction between abstract and real possibility contains an element of probability (greater or lesser, depending on the nature of the possibility). Perhaps that is precisely why Hegel bypassed the concept of probability...

In any case, the concept of probability actually carries a metaphysical (as P.L. Lavrov put it) load and should be represented in the logic of categories. Whether it should appear in the subdivisions "Phenomenon" or "Reality", or perhaps in the subdivision dealing with quantity; whether it should appear as an independent category or as a particular scientific concept enlisted to facilitate and clarify the analysis of other categories: these are secondary issues. In formal logic, as is known, predicaments (categories) and predicables are distinguished, the latter usually being considered derivative concepts obtained from the categories. The categorical value of probability may be estimated as that of a predicable.

The reason Hegel ignored the category of probability is that he thought according to the scheme of the triad "thesis-antithesis-synthesis" (or "affirmation-negation-negation of the negation"), in which there was no place for an intermediate link. Synthesis ("negation of the negation") has the character of a combination of categories, as a result of which a new category arises. In our version of categorical logic, the Hegelian synthesis corresponds mainly to organic synthesis, the mutual mediation of opposite categories. However, alongside synthesis, our version gives an important place to intermediate, transitional states from one opposite category to another. Hegel, carried away by the "synthetic" idea, did not notice that there is an intermediate link between opposite determinations. Aristotle, by the way, understood this well; on the other hand, he had a "weakness" with respect to the "synthetic" representation. Compared with Hegel, Aristotle seems an eclectic.

So, the idea of intermediate categories was not characteristic of Hegel. As a result, he "overlooked" the fact that there is a smooth transition between chance and necessity, and that this transition is expressed in a special category: probability. Following Hegel, Marxist philosophers for a long time essentially ignored the categorical status of probability and found no place for it in the system of categories. V.I. Koryukin and M.N. Rutkevich, noting in 1963 that "as a philosophical category, probability is much 'younger' than as a logical and mathematical concept", merely raised the question of the need to "consider" it "as a category of dialectics and to analyze the application of this category in various areas of knowledge in order to attempt, on this basis, a more general, philosophical definition of probability".

In the last three decades the Hegelian neglect of the category of probability has gradually been overcome, and the task of determining the status of this category within the system of philosophical categories has been posed ever more clearly. Much has already been done along this path. Philosophers increasingly realize that probability is a bridge between chance and necessity. While not completely covering these categories, it nonetheless "captures" part of their "territory": it embraces statistical (probable) contingency and statistical (probable) necessity, which are the poles of probability. In this respect probability can be represented, or defined, as the unity of statistical chance and statistical necessity.

Above we already gave the definition of probability theory offered by one of its creators, B. Pascal. In his opinion, it connects the "indeterminacy of chance" with the "exactness of mathematical proofs", and not merely connects but "reconciles these seemingly contradictory elements". How rightly said! Indeed, probability reconciles chance and necessity. More and more philosophers and scientists are coming to such an understanding of probability. M.M. Rozental writes directly: "probability is an expression of the connection between necessity and chance". V.I. Koryukin and M.N. Rutkevich give a similar interpretation. They write: "A random event (which may occur, but may also not occur) is always a possible event, and this 'accidental' possibility is not alien to necessity. It is in the concept of probability that we express the degree of necessity contained in an event that may occur (but may not occur, and is therefore random)."

“Radioactive decay,” they explain, “is a wonderful example of an objectively probabilistic process... The probability P of decay for any given atom within t years is expressed by the formula P = 1 − e^(−λt), where the constant λ = 0.000486.”

The law of radioactive decay is statistical. With an equal probability of decay over a given period for every atom, some atoms decay and others do not, and the proportion of decayed atoms in the total number is expressed exactly by the above formula. That N atoms decay in time t is a necessity; but that precisely these atoms decay, and not others, is, in relation to the general necessity governing the behavior of the "collective", an accident. Of course, each act of decay of a radium nucleus is causally determined. Probability is a quantitative characteristic that lets us judge to what extent the general necessity is embodied in the individual behavior of a given nucleus, characterizing the possibility of its decay.
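A minimal sketch of the decay formula, using the constant quoted in the passage (we take λ to be the per-year decay constant, as the text's "t years" suggests):

```python
from math import exp, log

lam = 0.000486  # per-year decay constant quoted in the text

def p_decay(t_years: float) -> float:
    """Probability that a single atom decays within t years: 1 - e^(-lam*t)."""
    return 1.0 - exp(-lam * t_years)

print(p_decay(100))  # ≈ 0.047: about 4.7% of the atoms decay in a century
print(log(2) / lam)  # ≈ 1426 years: the half-life implied by this lambda
```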

They give another example of probability in a statistical process where, unlike radioactive decay, the causes of individual deviations from the statistical averages, i.e. the necessity of a particular order, are well known.

Suppose we have a vessel of gas, for example nitrogen, at a temperature of 148 °C. The average speed of nitrogen molecules at this temperature is calculated by the formula v = √(8RT/(πμ)), where μ is the molar mass, and comes to approximately 570 m/s. In accordance with the statistical distribution found by Maxwell, some molecules have much larger speeds (5.4% of the molecules have v > 1000 m/s) and some much smaller ones (0.6% of the molecules have very low speeds). Is it necessary or accidental that a given molecule has a speed greater than 1000 m/s? The answer inevitably turns out to be twofold. There is a certain degree of necessity, i.e. a probability, of any given molecule acquiring such a speed; in our example this probability is 0.054. This probability reflects the presence of a general (statistical) necessity in a possible individual event.
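The reconstructed mean-speed formula can be verified numerically (R and the molar mass of N2 are standard constants, not values from the text):

```python
from math import pi, sqrt

R = 8.314         # J/(mol*K), universal gas constant
M = 0.028         # kg/mol, molar mass of nitrogen N2
T = 148 + 273.15  # the text's 148 °C converted to kelvins

# Mean speed of the Maxwell distribution: v = sqrt(8RT / (pi*M)).
v_mean = sqrt(8 * R * T / (pi * M))
print(f"{v_mean:.0f} m/s")  # ≈ 564 m/s, close to the text's ~570 m/s
```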

L.B. Bazhenov and N.V. Pilipenko write to the same effect. "A statistical law," L.B. Bazhenov holds, "expresses an objective necessity in its inseparable connection with chance." According to N.V. Pilipenko, in statistical regularities "necessity and chance are in unity and interrelation". He explains:

"Their interrelation in statistical laws follows from a peculiar interweaving of large and small causes in the objects of statistical aggregates. Necessity is the result of the qualitative homogeneity of objects and follows from the action of fundamental causes. Randomness is a consequence of the disordered character of the interaction of objects, of the susceptibility of each of them to the action of small causes. It depends both on the general properties of the statistical aggregate and on the individual features of a single object within a series of identical, similar objects...

The mechanism of the emergence of necessity and chance in probabilistic-statistical ... natural and social systems and the relationship of these categories is not yet clear in its entirety. However, its common features can be imagined if we consider the relationship between the system and its components (elements)...

The components or elements included in the structure of a system have, on the one hand, an individual, and on the other, a systemic nature. As individual components of the system they display random properties, and as interacting elements of a single whole they display systemic (necessary) properties."

Now about the position of scientists on this question. E.S. Wentzel writes that the subject of probability theory is "the specific regularities observed in random phenomena. Practice shows that, observing masses of homogeneous random phenomena in the aggregate, we usually find in them quite definite regularities, a kind of stability characteristic of mass random phenomena." She gives the following example and comments:

“A vessel contains some volume of gas, consisting of a very large number of molecules. Each molecule per second experiences many collisions with other molecules, repeatedly changes the speed and direction of motion; the trajectory of each individual molecule is random. It is known that the gas pressure on the wall of the vessel is due to the combination of impacts of molecules on this wall. It would seem that if the trajectory of each individual molecule is random, then the pressure on the vessel wall should also change in a random and uncontrolled way; However, it is not. If the number of molecules is large enough, then the gas pressure is practically independent of the trajectories of individual molecules and obeys a well-defined and very simple pattern.

Random features inherent in the movement of each individual molecule are mutually compensated in the mass; as a result, despite the complexity and entanglement of the individual random phenomenon, we obtain a very simple regularity valid for the mass of random phenomena. We note that it is precisely the mass character of the random phenomena that ensures the fulfillment of this regularity: with a limited number of molecules, random deviations from the regularity, the so-called fluctuations, begin to tell...

Similar specific, so-called "statistical" patterns are always observed when we are dealing with a mass of homogeneous random phenomena. The patterns that manifest themselves in this mass turn out to be practically independent of the individual characteristics of individual random phenomena included in the mass. These individual features in the mass, as it were, cancel each other out, level out, and the average result of the mass of random phenomena turns out to be practically no longer random. It is this stability of mass random phenomena, repeatedly confirmed by experience, that serves as the basis for the application of probabilistic (statistical) research methods.

E.S. Wentzel has shown well here that probability forms at the junction of mass randomness and statistical stability, the regularity inherent in those random events. As a result of countless collisions of gas molecules, irreversible processes occur en masse: in each individual case the direct process (say, the movement of a molecule in one direction at a certain speed) does not reverse itself, i.e. is not replaced by the inverse process (the movement of the molecule in the opposite direction at the same speed). However, when a great number of molecular collisions occur, the direct and inverse movements as it were cancel each other out, neutralize one another, and we obtain a pseudo-reversible process, the familiar statistical stability. The pseudo-reversibility of such processes is due, first, to the fact that to each direct process there corresponds no inverse process in the strict sense (as there is, for example, in the orbital motion of the planets): only after many collisions can a molecule reverse its direction of motion and end up in the same place; and second, to the fact that there is no complete neutralization, no complete mutual cancellation, of direct and inverse processes: the overall gas process moves in one direction, which is expressed in one or another degree of statistical stability. Thus, at the macro level there is irreversibility, more precisely statistical irreversibility. It "makes its way" through a mass of random processes that to some extent damp and neutralize one another. Of statistical necessity (regularity) one can say that it is the necessity (regularity) of pseudo- or quasi-reversible processes resting on mass irreversible processes. (Correspondingly, a non-statistical necessity, or law, can be said to be the necessity, the law, of strictly reversible processes, such as the orbital motion of the planets.)

A.N. Kolmogorov writes: "The statistical description of a set of objects occupies an intermediate position between the individual description of each of the objects of the set, on the one hand, and the description of the set according to its general properties, which do not at all require its division into separate objects, on the other." As we can see, Kolmogorov directly points to the intermediate nature of the probabilistic-statistical approach.

We find an interesting piece of reasoning in the work of the mathematician A. Renyi. "The other day, putting books in order," he writes, "I came across the Meditations of Marcus Aurelius and accidentally opened the page where he writes about two possibilities: either the world is a huge chaos, or order and regularity reign in it. Which of these two mutually exclusive possibilities is realized, a thinking person must decide for himself... And although I have read these lines many times, now for the first time I wondered why, in fact, Marcus Aurelius believed that either chance or order and regularity dominated in the world. Why did he think that these two possibilities exclude each other? It seems to me that in reality the two statements do not contradict each other; moreover, they operate simultaneously: chance dominates in the world, and order and regularity operate at the same time... That is why I attach such importance to the clarification of the concept of probability and am interested in the questions inextricably linked with it..."

A. Renyi connects probability with the fact that randomness and order, regularity act simultaneously in the world. Thus, he indirectly indicates that probability is based on the unity of chance and necessity.

M. Born wrote: "Nature, like human affairs, seems subject to both necessity and chance. And yet even chance is not completely arbitrary, for there are laws of chance formulated in the mathematical theory of probability... Our philosophy is dualistic; nature is governed, as it were, by an intricate tangle of laws of cause and laws of chance."

Further, he wrote: "I mean patterns of a completely different type, where we are dealing with a large number of objects, namely statistical or, more precisely, stochastic laws. (The term "stochastic" is used nowadays when a system consisting of many particles changes its state as a result of random influences and interactions.)

To correctly explain these patterns, one should apply the theory of probability developed by Pascal to better understand games in which chance plays a major role. Starting with a description of gambling, this mathematical discipline has illuminated many other types of human activity in a new way. Currently, it is used in the insurance business, for the study of production processes, in the distribution and regulation of traffic flows, and in many other areas. It is also used in many branches of knowledge, for example, in stellar astronomy, genetics, epidemiology, the study of the distribution of plant and animal species, etc.

In physics, statistical methods are closely related to the atomistic concept...

The movement of an atom in a gas is a process in which regularity and chance are combined. Physics has successfully used the combination of these two features in the construction of a remarkable edifice called the statistical theory of heat" (italics mine - LB).

According to M. Born, the probabilistic-statistical approach is based on a combination, as he himself puts it, of regularity and chance. Comments, as they say, are unnecessary.

L.V. Tarasov writes of "the dialectical unity of the necessary and the accidental, which, incidentally, is expressed through probability."

Among philosophers one sometimes encounters an idea of probability as a "degree of possibility" or a "quantitative measure of possibility." This representation captures only the fact that probability can be greater or lesser, that it is calculable (by the methods of probability theory). It says nothing, however, about the nature of probability. After all, one can speak of greater or lesser chance, and of greater or lesser necessity as well. In general, any categorical definition can somehow be characterized from a quantitative point of view. For example, a calculus of contradictions has not yet been created, although it has long been known that contradictions have their minima and maxima. We dare to assert that such a calculus will eventually be created. All objective categorical definitions have a quantitative side, and therefore inevitable mathematization awaits them.

The statements of philosophers and scientists quoted above reveal the nature of probability as an intermediate category linking chance and necessity. Only in the coordinates of these categories is its content determined, and only there can it be characterized as greater or lesser in degree.

A.S. Kravets, in his book The Nature of Probability, gave a meaningful analysis of this category and showed that it "removes" the opposition of chance and necessity. "In any random sequence," he writes, "despite its irregularity and disorder, there is a quite stable distribution of elements. In the chaotic sequence of random events a certain regularity is captured (usually called stochastic regularity), which is qualitatively different from rigid schemes of determination and is the objective basis of probabilistic laws. Analyzing the nature of probabilistic laws, we will see a deep connection between chance and necessity."

According to A.S. Kravets, "the probabilistic structure has three specific properties: 1) the unity of irregularity and stability in the class of events; 2) the unity of autonomy and dependence of events; 3) the unity of disorder and order in the class of events." Concerning probability as a unity of irregularity and stability in a class of events, he writes:

"In the most general terms, irregularity can be characterized as the absence of regularity, i.e., a stable lawfulness of the process of realizing events. We say, for example, that events can be realized in such and such an order. If the sequence of events is irregular, then this means that those but the events can in principle be realized in some other order.If we now assume that events will develop according to our second plan, then irregularity means that this plan can again be easily violated, etc. Irregularity is a constant violation and non-observance of any predetermined rules for the implementation of events ...

The irregularity of behavior is inherent in every probabilistic system. A system whose behavior is characterized by regularity, on the contrary, obeys the laws of rigid determination. If, for example, we randomly throw a metal needle onto a ruled plane, the needle will land in the various zones irregularly, and we can only calculate the probability of its hitting a particular zone. But if the plane is placed between the poles of a magnet, the process immediately becomes regular, and the fall of the needle will obey a definite, unambiguous law.

Irregularity, therefore, is expressed in the variability of the behavior of the observed objects, in the high dynamism of probabilistic systems...

However, the irregularity found in the behavior of probabilistic systems is by no means absolute. In the disorder of individual events a certain regularity of the set of events as a whole is realized, a certain cumulative stability of this set. Although in each separate case "anything" can happen (naturally, only within the limits of the possible), nevertheless, in a large set of random events certain stable groups of such events are always reproduced. The irregularity of the realization of individual events turns out to be limited by the stability of their set as a whole, owing to which the relations between events acquire a certain regular, repetitive character. In practice this is usually recorded in the form of stable (tending to some constant value) relative frequencies of the realization of certain events.

The amazing stability of the parameters of probabilistic systems, well known to us from statistical reference books (the number of deaths per year, the number of divorced couples per year, the number of boys among all newborns in a year, the amount of precipitation per year, etc.), is a manifestation of objective laws that prescribe a certain framework for chance. It is the stable type of relations between the elements that form a probabilistic system, the stable nature of the changes continuously taking place in it, that makes it possible to derive a certain probabilistic law of the system's behavior. Thus, in the behavior of a probabilistic system a dialectical unity is revealed: of variability, which in each individual case breaks the ossified and unchanging course of processes, and of stability, which guides this variability as a whole along a certain channel of regular tendencies.
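The "stable, tending to some constant value, relative frequencies" mentioned here can be observed directly in a simulation. A minimal Python sketch, assuming a random event with a true probability of 0.3 (an arbitrary illustrative value):

```python
import random

random.seed(42)
P_TRUE = 0.3  # assumed objective probability of the event
successes = 0
for n in range(1, 100_001):
    if random.random() < P_TRUE:
        successes += 1
    if n in (10, 100, 1_000, 10_000, 100_000):
        # individual outcomes are irregular, but the relative frequency
        # settles near P_TRUE as the mass of events grows
        print(f"n = {n:>6}: relative frequency = {successes / n:.4f}")
```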

Now about probability as a unity of autonomy and dependence of events:

"The idea of ​​probability is organically connected with the idea of ​​independence of observed events. Both classical and frequent approaches to the definition of probability are based on the idea that the realization of events occurs in an independent way, as a result of which their probabilities turn out to be independent of each other.

With the development of theoretical and probabilistic concepts, the role of the principle of autonomy in the cognition of material systems was more and more clearly realized. Each new step in expanding the scope of application of theoretical and probabilistic concepts dealt a crushing blow to the metaphysical picture of the world, according to which the world is a strictly determined system of events. In such a system, everything is equally essential, everything has the same significance for the fate of the universe - a speck of dust and a planet, the life of an individual and the fate of a people. In the rigid and ossified world of unambiguous determination, any event is predetermined by previous events, there is no place for autonomous phenomena in it, there are no accidents, the whole is strictly determined by its parts (p. 60).

The autonomy of phenomena is one of the fundamental properties of objective reality, no less fundamental than their interdependence (p. 62).

In science, the recognition of the principle of autonomy of systems came along with the approval of probabilistic-statistical methods for their study and the establishment of probabilistic laws for the behavior of objects. Autonomy expresses an essential feature of a probabilistic connection, and the very concept of probability is directly based on the idea of a set of independent events. In probabilistic concepts, the idea of autonomy is not some kind of additional appendage; it is one of the fundamental methodological principles, one of the defining axioms of probability theory" (p. 63).

"Initially, the concept of absolutely independent event. However, it soon became clear that the mathematical models obtained in this way were inapplicable to many phenomena, the study of which faced natural science. I had to return to the idea of ​​dependence again, but this time on a new, probabilistic basis. A new concept was developed, adequate to the situations under study - the concept of probabilistic dependence.

It is amazing how unexpectedly dialectics makes its way into knowledge! In the period of rigid determinism, which recognized only the unambiguous interdependence of phenomena, the idea of local autonomy was tacitly assumed as a necessary condition for identifying rigid causal connections. Indeed, from the entire infinite set of connections in the Universe one can single out some rigid, strictly unambiguous connection only under one important condition, namely, that the selected local group of phenomena does not depend on all other phenomena in the Universe. Thus, mechanistic determinism, while explicitly denying the idea of autonomy, at the same time implicitly recognized it literally at every step, in relation to each individual connection.

With the probabilistic-statistical method of consideration, on the contrary, one starts from the assumption of the autonomy of the phenomena under study and only then is forced to limit this autonomy and formulate the idea of a probabilistic dependence. A probabilistic dependence is qualitatively different from a dependence of the strictly deterministic type: it excludes a rigid, unambiguous connection between phenomena, allowing only a connection between the probabilities of their realization. Initially, the idea of probabilistic dependence was formulated in relation to elementary random events, which led to the development of the concept of conditional probability. Then this idea was generalized to random variables, which led to the introduction of the concept of a conditional probability distribution law. Finally, the idea of probabilistic dependence was developed in relation to the concept of random functions, which led to the emergence of the theory of probabilistic (stochastic) processes. A special branch arose in probability theory, correlation analysis, which studies the mathematical properties of probabilistic dependencies" (pp. 64-65).

On probability as a combination of disorder and order, A.S. Kravets writes:

"The third feature of the relationships that develop in the class of random events is a characteristic combination of disorder and order. Order is usually understood as a certain regular pattern of events, some of their consistency in space and time, a certain regular relationship between their volumetric and other parameters, consistency between their functions etc. All systems have order to one degree or another, however, probabilistic systems, along with order, are also characterized by some randomness.Sometimes, to justify the probabilistic approach, appropriate hypotheses about the lack of order in the system under study are specially introduced.A probabilistic system is distinguished by the absence of rigid links between elements, the autonomy of elements, the irregular nature of relations, etc. ...

In physics, the disorder in the relations between the elements of a probabilistic system is reflected in the idea of "molecular chaos" or "molecular disorder." "A feature of the movement that bears the name of heat," noted J. Maxwell, "is that it is completely random" (J.K. Maxwell, Articles and Speeches, M.-L., 1940, p. 125).

However, the presence of disorder in a system should not be considered evidence of the absence of any regularity in the relations between its elements. The concepts of order and disorder are correlative. Disorder, being the dialectical opposite of order, does not mean the absence of any objective regularity in the behavior of the elements of the system, but the presence of a specific probabilistic regularity, just as irregularity does not express the absence of any regularity in the realization of events at all, but only the presence of a specific stochastic regularity, a certain stable tendency reproduced in the set of events as a whole.

So, in systems, disorder is always associated with probabilistic patterns ...

Absolute order and absolute disorder are the limits of the spectrum of possible structures, of the possible organization of systems. Absolute order is usually observed in a rigidly determined system, where any autonomy of subsystems is excluded. On the contrary, absolute disorder characterizes systems of subsystems that are independent and equal in the probabilistic sense. However, in objective reality these two limiting cases are realized quite rarely and represent, rather, a kind of idealization. The majority of real systems lie between these extreme cases...

Thus, systems that obey probabilistic patterns are characterized by a specific structure that qualitatively distinguishes them from systems subject to rigid forms of determination... The existence of systems that have a specific probabilistic structure is the objective basis of probabilistic representations" (pp. 66-68).

A.S. Kravets draws the correct conclusion that probability is of an intermediate nature, but, like any specialist immersed in his field of study, he somewhat exaggerates the importance of probability, treating non-probabilistic chance and necessity only as limiting cases which "are realized quite rarely and represent, rather, a kind of idealization." It can be said in advance, a priori, that any intermediate states are possible and exist only owing to the presence of pronounced extreme states. If the latter do not exist, neither do the former. It is absurd to say that they are "rather a kind of idealization." If we deny the reality of the extreme states, we thereby saw off the branch on which we sit, i.e., we will be forced to deny the reality of the intermediate states as well. Intermediate states are intermediate precisely because they lie somewhere between the extreme states, and their existence depends on the existence of those states. Probability is intermediate because chance and necessity, the poles of the interdependence, exist. Located between them, probability does not absorb them but connects them, effecting the transition from one pole of the interdependence to the other. This is its meaning and purpose.

On the intermediate and dual character of probability, A.S. Kravets writes elsewhere in the book:

"To understand the nature of probability, it is essential that it is always associated with the analysis of relations given on a certain set of events. The concept of probability makes no sense outside the consideration of a set of events... However, the concept of probability also makes no sense if it is not related to some element or subset of the original set of elements. In its essence, probability is a structural characteristic of the behavior of an element in a series of identical, similar elements that form an integral system... Probability is precisely the characteristic that connects an individual element with the system as a whole and makes it possible to identify stable relations between the elements of the system. In other words, probability is a kind of quantitative measure of irregularity, autonomy, and disorder, occupying an intermediate position between the parameters of the system as a whole and the parameters of the set of autonomous elements (events, outcomes, expected phenomena). This is the dual nature of probability."

A.S. Kravets concludes:

"An important philosophical conclusion follows from the analysis of probabilistic structures: about the complexity and deeply dialectical character of the structure of our world. Philosophical conceptions that absolutize the "original" order of the external world, the rigid connection of phenomena in the Universe, and the unambiguous connection of objects are, apparently, just as arbitrary and one-sided as conceptions that depict the world as primordial and eternal chaos, absolutizing the independence of phenomena. From the absolutization of interdependence and order follow fatalistic conceptions such as Laplacian determinism, while the absolutization of world disorder leads to finalist conceptions such as "the heat death of the Universe."

However, the actual physical picture of the world can neither be squeezed entirely into the Procrustean bed of absolute determinism nor immersed in an amorphous fog of ideas about a chaotic universe."

The intermediate nature of probability is also indicated by the fact that probabilistic stability can be closer to chance, i.e., more particular, or closer to necessity, i.e., more general. Probabilistic stability of the first kind is usually classed as empirical statistical regularity; that of the second kind, as theoretical statistical regularity. Some scientists and philosophers even doubt whether particular statistical stabilities can in all cases be called empirical regularities, and to some extent they are right. Probabilistic stability passes "smoothly" into purely random processes of an indefinite nature: the narrower the area they cover, the more they resemble pure accidents and the less reason there is to call them empirical regularities. (For more on this, see paragraph 3522.3, "Statistical regularity," below.)

Making managerial decisions under risk

Essay on the course "Development of management decisions"

Prepared by:

Zavyazkina Marina Vyacheslavovna

student of group GMU-551

Reviewed by:

Andreeva Yulia Andreevna,

Senior Lecturer

Yekaterinburg, 2012


Introduction

2. Classification of risks

3. Assessment of the degree of risk probability

4. Risk management in making managerial decisions

5. Risk management in public administration

Conclusion

List of used sources and literature


Introduction

Managers at various levels often have to prepare management decisions (SD) under conditions of insufficient or unreliable information, high staff turnover, unscrupulous suppliers or buyers, frequent changes in legislation, market conditions, etc. As a result, unintentional errors in the text of an SD are possible. Unforeseen situations that make precise implementation difficult can also arise while an SD is being carried out. Therefore, the actual results of an SD do not always coincide with the planned ones, and may even be their opposite. Thus, SD is characterized by uncertainty and risk.

The aim of this work is a comprehensive analysis of managerial decision-making under risk. To achieve this aim, the following tasks are set:

· Describe the concept of "risk" in terms of management decisions;

· Consider different types of risks, their classification;

· Identify ways to assess the degree of probability of risk;

· Analyze options for risk management when making managerial decisions, including in the field of public administration.


Risk is a possible danger of losses arising from the specifics of certain natural phenomena and of the activities of human society. It is both a historical and an economic category. Decision-making under risk thus means choosing a decision option under conditions where each action leads to one of many possible particular outcomes, and each outcome has a calculated or expertly determined probability of occurrence.

As a historical category, risk is a possible danger recognized by man. This indicates that risk is historically associated with the entire course of social development. As an economic category, risk is an event that may or may not occur. If such an event occurs, three economic outcomes are possible:

negative (loss, damage, loss);

zero;

positive (gain, benefit, profit).

While the concept of "uncertainty" is usually associated with the preparation of an SD, "risk" is associated with its implementation, that is, with its results.

Risk is closely related to uncertainty; moreover, the two can turn into each other. Risk turns into uncertainty when several SDs follow one after another: the risks of the preceding SDs become uncertainties for the subsequent ones. In a situation of risk, the probability of one or another change in the environment can be calculated using probability theory; in a situation of uncertainty, probability values cannot be obtained.

Risk determines the ratio of the two polar results that the implementation of an SD can produce: negative (complete failure) and positive (achievement of the plan). Typically, risk is assessed discretely, either as a ratio of a pair of numbers (for example, 2:8 or 50:50) or as a percentage of a negative outcome (for example, 0.01%). For example, a risk of 2:8 means that the decision will fail to be implemented only two times out of ten; a 10% risk means that a positive outcome of the decision is not guaranteed in 10% of cases; a 50:50 risk means an equal probability of both a negative and a positive outcome of the process. At a low level of uncertainty, the risk grows only slightly and can often be neglected. Medium and high levels of uncertainty significantly increase the risk of a negative result. An extremely high level of uncertainty leaves no hope for a positive result.
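Since decision-making under risk assumes a calculated or expertly determined probability for each outcome, the negative, zero, and positive results listed above can be folded into a single expected value. A minimal Python sketch; the probabilities and monetary payoffs are invented for the illustration:

```python
def expected_outcome(outcomes):
    """Expected value of a decision whose outcomes have known
    probabilities; outcomes is a list of (probability, payoff) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in outcomes)

# negative (loss), zero, and positive outcomes of a hypothetical decision
decision = [(0.2, -100_000), (0.3, 0.0), (0.5, 80_000)]
print(expected_outcome(decision))  # 20000.0 -> favorable on average
```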

Classification of risks

The classification of risks should be understood as their distribution into specific groups according to certain criteria in order to achieve set goals. A scientifically grounded classification of risks makes it possible to determine clearly the place of each risk in the overall system. It creates opportunities for the effective application of appropriate risk management methods and techniques, since each risk has its own system of management techniques.

Fig.1 - Classification of risks

The classification system of risks includes groups, categories, types, subtypes, and varieties of risks. Depending on the possible result (the risk event), risks can be divided into two large groups:

1. Pure risks mean the possibility of obtaining a negative or zero result. These include natural, environmental, political, and transport risks, as well as part of commercial risks (property, production, and trading risks);

2. Speculative risks are expressed in the possibility of obtaining both positive and negative results. These include financial risks, which are part of commercial risks.

Depending on the main cause of occurrence (the basic or natural risk), risks are divided into the following categories:

· natural risks– risks associated with the manifestation of the elemental forces of nature (earthquake, flood, storm, fire, epidemic, etc.);

· environmental risks – risks associated with pollution of the environment;

· political risks – risks associated with the political situation in the country and the activities of the state. Political risks arise when the conditions of the production and trade process are violated for reasons not directly dependent on the economic entity. Political risks include:

- the impossibility of carrying out economic activities due to military operations, revolution, aggravation of the internal political situation in the country, nationalization, confiscation of goods and enterprises, embargoes, or the refusal of the new government to fulfill the obligations assumed by its predecessors, etc.;

- the introduction of a deferment (moratorium) on external payments for a certain period due to emergency circumstances (strike, war, etc.);

- unfavorable changes in tax legislation;

- prohibition or restriction of the conversion of the national currency into the currency of payment (in this case the obligation to exporters may be fulfilled in the national currency, which has a limited scope of use);

· transport risks – risks associated with the transportation of goods by road, sea, river, rail, air, etc.;

· commercial risks – the danger of losses in the course of financial and economic activity; they mean the uncertainty of the result of a given commercial transaction.

On a structural basis, commercial risks are divided into the following categories:

· property risks – risks associated with the probability of the entrepreneur losing property due to theft, sabotage, negligence, overloading of technical and technological systems, etc.;

· production risks – risks associated with losses from production stoppages caused by various factors, above all the loss of or damage to fixed and working capital (equipment, raw materials, transport, etc.), as well as risks associated with introducing new equipment and technology into production;

· trading risks – risks associated with losses due to delayed payments, refusal to pay during the period of transportation of goods, non-delivery of goods, etc. Financial risks are associated with the probability of losing financial resources (i.e., cash). They include:

· risks associated with the purchasing power of money:

- inflationary risk – the risk that, as inflation rises (money depreciates and prices correspondingly grow), the monetary income received depreciates in terms of real purchasing power faster than it grows;

- deflationary risk – the risk that, as deflation grows (prices fall and the purchasing power of money correspondingly rises), the price level falls, the economic conditions of entrepreneurship deteriorate, and incomes decline;

- currency risks – the danger of currency losses associated with a change in the exchange rate of one foreign currency against another in the course of foreign economic, credit, and other foreign exchange transactions;

- liquidity risks – risks associated with the possibility of losses in the sale of securities or other goods due to a change in the assessment of their quality and use value;

· risks associated with investing capital (investment risks):

- the risk of lost profits – the risk of indirect (collateral) financial damage (lost profit) as a result of failing to carry out some activity (for example, insurance, hedging, investment, etc.);

- risk of profitability reduction – the risk arising from a decrease in the amount of interest and dividends on portfolio investments, deposits, and loans, as well as on portfolio investments associated with the formation of an investment portfolio through the acquisition of securities and other assets. This may include interest-rate risks: the risk of losses by commercial banks, credit institutions, investment institutions, and selling companies as a result of the interest rates they pay on borrowed funds exceeding the rates on the loans they grant, and the risks of losses that investors may incur due to changes in dividends on shares and in interest rates on the market for bonds, certificates, and other securities;

- credit risk – the risk of non-payment by the borrower of the principal and interest due to the creditor; also the risk of an event in which an issuer of debt securities is unable to pay the interest on them or the principal amount of the debt;

- risks of direct financial losses – exchange risks, representing the danger of losses from exchange transactions (the risk of non-payment on commercial transactions, the risk of non-payment of a brokerage firm's commission fees, etc.);

- selective risk – the risk of an incorrect choice of types of capital investment or of the type of securities for investment in comparison with other types of securities when forming an investment portfolio;

- the risk of bankruptcy – the danger of a complete loss of the entrepreneur's own capital and of his inability to pay his obligations, resulting from a wrong choice of capital investment.

In addition to the above classification, risks can be classified according to other criteria. According to their consequences, it is customary to divide risks into three categories (a short illustrative sketch follows this list):

· acceptable risk – the risk of a decision as a result of which the enterprise is threatened with a loss of profit; within this zone entrepreneurial activity retains its economic expediency, i.e., losses occur, but they do not exceed the expected profit;

· critical risk – the risk at which the enterprise is threatened with a loss of revenue; in other words, the critical risk zone is characterized by the danger of losses that clearly exceed the expected profit and, in the extreme case, can lead to the loss of all the funds the enterprise has invested in the project;

· catastrophic risk – the risk at which the enterprise becomes insolvent; losses can reach a value equal to the entire property status of the enterprise. This group also includes any risk associated with a direct danger to human life or with the occurrence of environmental disasters.
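These three categories can be read as thresholds on the size of the possible loss relative to the expected profit and the enterprise's assets. A minimal Python sketch of such a rule; the function name, thresholds, and figures are illustrative assumptions, not taken from the source:

```python
def risk_category(possible_loss, expected_profit, enterprise_assets):
    """Classify a risk by its consequences, following the
    acceptable / critical / catastrophic scheme described above."""
    if possible_loss <= expected_profit:
        return "acceptable"    # losses do not exceed the expected profit
    if possible_loss <= enterprise_assets:
        return "critical"      # invested funds and revenue are threatened
    return "catastrophic"      # losses reach the enterprise's net worth

for loss in (50_000, 300_000, 2_000_000):
    print(loss, risk_category(loss, expected_profit=80_000,
                              enterprise_assets=1_000_000))
```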

Obviously, the above classifications are interconnected, and the second is the more general one.

Summarizing the above, it should be noted that there is a large number of risk classifications, depending on the specifics of a company's activities. There are no established criteria by which all risks could be classified unambiguously, for a number of reasons: the specifics of the activities of economic entities, the various manifestations of risks, and their various sources.

Assessment of the degree of risk probability

When making management decisions, it is necessary to assess the degree of risk and determine its magnitude. The degree of risk is the probability of the occurrence of a loss event, together with the amount of possible damage from it. The risk assessment can be:

· objective, based on the results of objective research;

· subjective, based on expert opinion;

· objective-subjective, based both on the results of objective research and on expert assessments.

Risk is an action taken in the hope of a happy outcome, on the principle of "lucky or unlucky." The entrepreneur is forced to take on risk above all by the uncertainty of the economic situation, i.e., by the unknown conditions of the political and economic environment surrounding a particular activity and the prospects for changes in those conditions. The greater the uncertainty of the economic situation when a decision is made, the greater the degree of risk.

The uncertainty of the economic situation is due to the following factors: lack of complete information, chance, opposition.

Randomness largely determines the uncertainty of the economic situation. Randomness is what happens differently under similar conditions and therefore cannot be foreseen or predicted in advance. However, given a large number of observations of random events, one finds that certain patterns operate in the world of randomness. The mathematical apparatus for studying these patterns is provided by probability theory. Random events become the subject of probability theory only when certain numerical characteristics, their probabilities, are associated with them.

There are several ways to calculate the probability of a risk. Of these, the most accurate probability estimates can be obtained using Chebyshev's inequality.

Chebyshev's inequality makes it possible to find an upper bound on the probability that a random variable X deviates, in either direction, from its mean value by more than β.

Chebyshev's inequality is expressed by the following formula:

$$P\left(\left|X-\bar{X}\right|>\beta\right)<\frac{1}{N\beta^{2}}\sum_{i=1}^{N}\left(X_{i}-\bar{X}\right)^{2}$$

In this formula:

$X$ is the random variable;

$\bar{X}$ is the mean value of the random variable;

$X_{i}$ is the value of the random variable in the $i$-th observation;

$\beta$ is a given number;

$N$ is the total number of observations of the random variable.

If the probability of deviation of the random variable X in only one direction (for example, upward) is required, the result obtained by this formula should be divided by 2.
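As an illustration, here is a minimal Python sketch of the bound above, with the variance estimated from the observations; the sample data and the function name are invented for the example:

```python
def chebyshev_bound(xs, beta):
    """Upper bound on P(|X - mean| > beta) given by Chebyshev's
    inequality, with the variance estimated from the sample xs."""
    n = len(xs)
    mean = sum(xs) / n
    variance = sum((x - mean) ** 2 for x in xs) / n
    return variance / beta ** 2

# hypothetical monthly losses of a project, in thousands
observations = [12, 15, 9, 14, 11, 13, 10, 16]
bound = chebyshev_bound(observations, beta=5)
print(f"P(|X - mean| > 5) < {bound:.3f}")      # 0.210
# one-sided deviation, halved as the text above suggests
# (strictly valid only for symmetric distributions)
print(f"one-sided bound:  < {bound / 2:.3f}")  # 0.105
```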

If the probability cannot be assessed by any formal method, a qualitative risk assessment scale can be used (P denotes probability).

Table 1. Qualitative risk assessment

