
Gödel's incompleteness theorem in simple terms. Interesting facts and useful tips

Any system of mathematical axioms, starting from a certain level of complexity, is either internally inconsistent or incomplete.

In 1900, the International Congress of Mathematicians was held in Paris, at which David Hilbert (1862–1943) presented, in the form of theses, the 23 most important problems, in his opinion, that the theoreticians of the coming twentieth century were to solve. Number two on his list was one of those simple questions whose answer seems obvious until you dig a little deeper. In modern terms, it was the question: is mathematics self-sufficient? Hilbert's second problem came down to the need to prove rigorously that the system of axioms - the basic statements taken in mathematics as a foundation without proof - is perfect and complete, that is, that it allows one to describe mathematically everything that exists. It was necessary to prove that it is possible to set up a system of axioms such that, firstly, they are mutually consistent and, secondly, one can draw from them a conclusion about the truth or falsity of any statement.

Let's take an example from school geometry. In standard Euclidean planimetry (plane geometry), it can be proved unconditionally that the statement "the sum of the angles of a triangle is 180°" is true and the statement "the sum of the angles of a triangle is 137°" is false. In essence, in Euclidean geometry any statement is either true or false, and there is no third option. And at the beginning of the twentieth century, mathematicians naively believed that the same situation must hold in any logically consistent system.

And then, in 1931, a bespectacled Viennese mathematician named Kurt Gödel published a short article that simply overturned the entire world of so-called "mathematical logic". After long and complex mathematical and theoretical preambles, he established, literally, the following. Take any statement of the form "Assumption #247 is logically unprovable in this system of axioms" and call it "statement A". Gödel simply proved the following amazing property of any system of axioms:

"If a statement A can be proved, then a non-A statement can be proved."

In other words, if it is possible to prove the validity of the statement "Assumption 247 is not provable", then it is also possible to prove the validity of the statement "Assumption 247 is provable". That is, returning to the formulation of Hilbert's second problem: if a system of axioms is complete (that is, any statement in it can be proved), then it is inconsistent.

The only way out of this situation is to accept an incomplete system of axioms. That is, we have to put up with the fact that, in the context of any logical system, we will be left with "type A" statements that are certainly either true or false - and we can judge their truth only outside the framework of the axiomatics we have adopted. If there are no such statements, then our axiomatics is contradictory, and within its framework there will inevitably be formulations that can be both proved and refuted.

This is the formulation of Gödel's first, or weak, incompleteness theorem: "Any formal system of axioms contains undecidable assumptions." But Gödel did not stop there; he formulated and proved the second, or strong, Gödel incompleteness theorem: "The logical completeness (or incompleteness) of any system of axioms cannot be proved within the framework of that system. To prove or disprove it, additional axioms (a strengthening of the system) are required."

It would be more comforting to think that Gödel's theorems are abstract and concern not us but only areas of rarefied mathematical logic, but in fact it turned out that they bear directly on the structure of the human brain. The English mathematician and physicist Roger Penrose (born 1931) showed that Gödel's theorems can be used to argue for fundamental differences between the human brain and a computer. The point of his reasoning is simple. A computer operates strictly logically and is not able to determine whether statement A is true or false if it goes beyond the scope of the axiomatics, and such statements, according to Gödel's theorem, inevitably exist. A person, faced with such a logically unprovable and irrefutable statement A, is always able to determine its truth or falsity, drawing on everyday experience. In this, at least, the human brain outperforms a computer shackled by pure logic circuits. The human brain is able to grasp the full depth of the truths contained in Gödel's theorems, but a computer never can. Therefore, the human brain is anything but a computer. It is capable of making decisions, and it will pass the Turing test.

I wonder if Hilbert had any idea how far his questions would take us?

Kurt Gödel, 1906–1978

Austrian, later American, mathematician. Born in Brünn (now Brno, Czech Republic). He graduated from the University of Vienna, where he remained as a teacher in the department of mathematics (from 1930 a professor). In 1931 he published the theorem that later received his name. A purely apolitical person, he took extremely hard the murder of his friend and departmental colleague by a Nazi student and fell into a deep depression, relapses of which haunted him to the end of his life. In the 1930s he emigrated to the United States, but returned to his native Austria and married. In 1940, at the height of the war, he was forced to flee to America in transit through the USSR and Japan. For some time he worked at the Institute for Advanced Study in Princeton. Sadly, the scientist's psyche could not withstand the strain, and he died of starvation in a psychiatric clinic, refusing food because he was convinced that someone intended to poison him.

I have long wondered what the sensational Gödel theorem actually is - and how it can be useful in life. And finally I managed to figure it out.

The most popular formulation of the theorem is:
"Any system of mathematical axioms, starting from a certain level of complexity, is either internally inconsistent or incomplete."

I would translate it into ordinary, non-mathematical language like this (an axiom is an initial position of a theory, accepted within the framework of that theory as true without requiring proof, and used as a basis for proving its other propositions). In life, axioms are the principles followed by a person, a society, a scientific school, or a state. Among representatives of religion, axioms are called dogmas. Consequently, any of our principles, any system of views, starting from a certain level, becomes either internally contradictory or incomplete. To be convinced of the truth of a certain statement, one will have to go beyond the given system of views and build a new one. But it, too, will be imperfect. That is, the PROCESS OF KNOWLEDGE IS INFINITE. The world cannot be fully known until we reach the primary source.

"... if we consider the ability to reason logically the main characteristic of the human mind, or at least its main tool, then Gödel's theorem directly indicates the limited capabilities of our brain. Agree that it is very difficult for a person brought up on faith in the infinite power of thought to accept the thesis of the limits of that power... Many experts believe that the formal-computational, "Aristotelian" processes underlying logical thinking are only a part of human consciousness. Its other domain, fundamentally "non-computational", is responsible for such manifestations as intuition, creative insight and understanding. And if the first half of the mind falls under Gödel's restrictions, the second half is free from such limits... The physicist Roger Penrose went even further. He suggested the existence of certain quantum effects of a non-computational nature that ensure the realization of creative acts of consciousness... One of the numerous consequences of the Penrose hypothesis may be, in particular, the conclusion that artificial intelligence cannot, in principle, be created on the basis of modern computing devices, even if the advent of quantum computers leads to a grandiose breakthrough in computing technology. The fact is that any computer can only model, ever more precisely, the formal-logical, "computational" activity of human consciousness, while the "non-computational" abilities of the intellect remain inaccessible to it."

One important consequence of Gödel's theorem is the conclusion that one cannot think in extremes. Within any existing theory there is always a statement that can neither be proved nor disproved. Or, in other words, for any statement there is always a paired one that refutes it.

The next conclusion. Good and evil are just two sides of the same coin, neither of which can exist without the other. This follows from the principle that in the Universe there is a single source of everything: of good and evil, love and hate, life and death.

Any declaration of the completeness of a system is false. One cannot rely on dogmas, because sooner or later they will be refuted.

In this sense, modern religions are in a critical position: the dogmas of the church oppose the development of our ideas about the world. They try to squeeze everything into a framework of rigid concepts. But this leads to a shift from monotheism, from the single source of all natural processes, toward paganism, where there are forces of good and forces of evil, where there is a god of good somewhere far away in heaven, and there is a devil (a god of evil) who long ago laid his paw on everything on Earth. This approach leads to the division of all people into us and them, righteous and sinners, believers and heretics, friends and enemies.

Here is another short text that accessibly conveys the essence of what follows from Gödel's theorem:
"It seems to me that this theorem carries an important philosophical meaning. Only two options are possible:

a) The theory is incomplete, i.e., in the terms of the theory one can formulate a question to which neither a positive nor a negative answer can be derived from the axioms/postulates of the theory. At the same time, answers to all such questions can be given within the framework of a more comprehensive theory, in which the old one will be a special case. But this new theory will have its own "unanswerable questions", and so on ad infinitum.

b) The theory is complete but contradictory. Any question can be answered, but some questions can be answered both "yes" and "no" at the same time.

Scientific theories belong to the first type. They are consistent, but that means they do not describe everything. There can be no "final" scientific theory. Any theory is incomplete and fails to describe something, even if we do not yet know what. One can only create ever more comprehensive theories. For me personally, this is a reason for optimism, because it means that the progress of science will never stop.

"Almighty God" belongs to the second type. An almighty god is the answer to any question. And this automatically means that it leads to logical absurdity. Paradoxes like the "heavy stone" can be invented by the batch.

In general, scientific knowledge is true (consistent), but at any given moment it does not describe everything. At the same time, nothing prevents us from pushing the boundaries of the known further and further, to infinity, and sooner or later any unknown becomes known. Religion claims a full description of the world "right now", and is therefore automatically incorrect (absurd)."

At one time, when I was just starting my adult life, I worked as a programmer. And there was a principle: if too many corrections have been made to a program, it must be rewritten from scratch. This principle, in my opinion, corresponds to Gödel's theorem: as a program grows more complex, it becomes inconsistent, and it will not work correctly.

Another example from life. We live in an era when officials say that the main principle of existence should be the law - that is, the legal system. But as soon as legislation becomes more complex and rule-making flourishes, laws begin to contradict one another. Which is what we are now seeing. It is impossible to create a legal system that would prescribe all aspects of life and, at the same time, be fair to everyone, because the limitations of our understanding of the world will always show through, and at some point human laws will come into conflict with the laws of the universe. We understand many things intuitively, and it is also intuitively that we must judge the actions of other people. It is enough for a state to have a constitution and, relying on its articles, to regulate relations in society. But sooner or later the constitution will have to be changed.

The Unified State Exam is another example of the fallacy of our ideas about human capabilities. We try to test the computational capabilities of the brain in an exam, while schools have stopped developing intuitive abilities. But a human being is not a biorobot. It is impossible to create a scoring system capable of revealing all the possibilities inherent in a person - in his consciousness, his subconscious and his psyche.

Nearly 100 years ago, Gödel took an incredible step in understanding the laws of the universe. And we have still not managed to make use of it, treating the theorem as a highly specialized mathematical problem for a narrow circle of people dealing with abstract topics among themselves. Together with quantum theory and the teachings of Christ, Gödel's theorem enables us to escape from the captivity of false dogmas and to overcome the crisis that still persists in our worldview. And time is running out.

This is one of the most famous theorems of mathematical logic, lucky and unlucky at the same time. In this it resembles Einstein's special theory of relativity. On the one hand, almost everyone has heard something about them. On the other hand, in the popular interpretation Einstein's theory, as is well known, "says that everything in the world is relative", while Gödel's incompleteness theorem (hereinafter simply TGN), in a roughly equally free folk formulation, "proves that there are things incomprehensible to the human mind". Some try to adapt it as an argument against materialism, while others, on the contrary, use it to prove that there is no god. The funny thing is not only that both sides cannot be right at the same time, but that neither side bothers to figure out what the theorem actually says.

So what does it say? Below I will try to describe it "on my fingers". My exposition will, of course, be not rigorous but intuitive, and I ask mathematicians not to judge me strictly. It is possible that for non-mathematicians (to whom, in fact, I also belong) there will be something new and useful in what follows.

Mathematical logic is indeed a rather complicated science and, most importantly, an unfamiliar one. It requires careful and strict maneuvers, in which it is important not to confuse what has really been proved with what is "clear anyway". Nevertheless, I hope that to understand the following "outline of the proof of TGN" the reader will need only school-level mathematics and computer science, logical thinking skills, and 15-20 minutes of time.

Simplifying somewhat, TGN asserts that in sufficiently complex languages there exist unprovable statements. But in this phrase almost every word needs explanation.

Let's start by trying to figure out what a proof is. Take some school problem in arithmetic. Suppose, for example, that we are required to prove the correctness of the following uncomplicated formula: "∀x: (x+1)² = x² + 2x + 1" (I remind you that the symbol ∀ is read "for any" and is called the "universal quantifier"). It can be proved by transforming it identically, say, like this:

(x+1)² = x² + 2x + 1
(x+1)(x+1) = x² + 2x + 1
x² + x + x + 1 = x² + 2x + 1
x² + 2x + 1 = x² + 2x + 1
TRUE

The transition from one formula to another occurs according to certain known rules. The transition from the 4th formula to the 5th occurred, say, because every number is equal to itself - such is an axiom of arithmetic. And the whole procedure of proving thus translates the formula into the Boolean value TRUE. The result could also be FALSE - if we refuted some formula; in that case we would have proved its negation. One can imagine a program (and such programs have actually been written) that would prove such (and more complex) statements without human intervention.
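Such a mechanical check is easy to sketch. Assuming the identity in question is something like (x+1)² = x² + 2x + 1 (an illustrative choice), a program can verify it by reducing both sides to a common normal form - here, lists of polynomial coefficients - after which equality follows from "every number is equal to itself":

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = degree)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# Left side: (x + 1)^2, where x + 1 is the coefficient list [1, 1]
lhs = poly_mul([1, 1], [1, 1])
# Right side: x^2 + 2x + 1 as the coefficient list [1, 2, 1]
rhs = [1, 2, 1]

# Both sides reduce to the same normal form, so the identity holds for any x.
print(lhs == rhs)  # True
```

Reducing both sides to one normal form is exactly the role played by the chain of identity transformations in the proof above.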

Let's state the same thing a little more formally. Suppose we have a set of strings of characters of some alphabet, and there are rules by which a subset of so-called statements can be distinguished from it - that is, grammatically meaningful phrases, each of which is true or false. We can say that there is a function that assigns to each statement one of two values: TRUE or FALSE (that is, maps the statements into a Boolean set of two elements).

Let us call such a pair - a set of statements and a function from statements to Boolean values - a "language of statements". Note that in the everyday sense the concept of a language is somewhat broader: for example, the Russian phrase "Well, come here!" is neither true nor false, that is, from the point of view of mathematical logic it is not a statement.

For what follows we need the notion of an algorithm. I will not give a formal definition of it here - that would lead us rather far astray. I will limit myself to an informal one: an "algorithm" is a sequence of unambiguous instructions (a "program") that in a finite number of steps converts input data into a result. The requirement of a finite number of steps is fundamentally important: if the program hangs on some input data, then it does not describe an algorithm. For simplicity, and as applied to our case, the reader may consider an algorithm to be a program written in any programming language known to him that, for any input data from a given class, is guaranteed to complete its work with a Boolean result.
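To make the distinction concrete, here is a sketch with two hypothetical functions of my own devising: the first halts on every natural number and is therefore an algorithm in the above sense; the second may loop forever and is not:

```python
def is_perfect_square(n: int) -> bool:
    """A genuine algorithm: for every natural n it halts with True or False."""
    i = 0
    while i * i <= n:
        if i * i == n:
            return True
        i += 1
    return False

def search_counterexample(prop) -> bool:
    """NOT an algorithm in the above sense: if prop(n) happens to be true
    for every n, this loop never terminates, so a Boolean result is not
    guaranteed for all inputs."""
    n = 0
    while True:
        if not prop(n):
            return False  # halts only if a counterexample exists
        n += 1
```

Calling `search_counterexample(lambda n: n >= 0)` would hang forever, which is precisely why such a procedure does not qualify as a "proving algorithm".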

Let us ask ourselves: does every truth function have a "proving algorithm" (or, in short, a "deductive") equivalent to it, that is, one that translates each statement into exactly the same Boolean value as the function does? The same question can be formulated more concisely: is every function over the set of statements computable? As you can already guess, it follows from the validity of TGN that no, not every one - there exist non-computable functions of this type. In other words, not every true statement can be proved.

It may very well be that this statement provokes an internal protest in you. This is due to several circumstances. First, when we are taught school mathematics, a false impression sometimes arises of the almost complete identity of the phrases "the theorem is true" and "the theorem can be proved or verified". But, if you think about it, this is not at all obvious. Some theorems are proved quite simply (for example, by enumerating a small number of cases), and some with great difficulty. Consider, for example, Fermat's famous Last Theorem:

x^n + y^n ≠ z^n  for any natural x, y, z and n > 2,

the proof of which was found only three and a half centuries after its first formulation (and it is far from elementary). One must distinguish between the truth of a statement and its provability. It does not follow from anywhere that there are no true but unprovable (and not fully verifiable) statements.

The second intuitive argument against TGN is more subtle. Suppose we have some statement that is unprovable (within the given deductive). What prevents us from accepting it as a new axiom? Thereby we complicate our system of proofs slightly, but that is not terrible. This argument would be perfectly correct if there were a finite number of unprovable statements. In practice, however, the following can happen: after postulating a new axiom, you stumble upon a new unprovable statement. Accept it as yet another axiom - and you stumble upon a third. And so on ad infinitum. The deductive is then said to remain incomplete. We can also take forceful measures so that the proving algorithm terminates after a finite number of steps with some result for any statement of the language. But then it will begin to lie - leading to TRUE for incorrect statements, or to FALSE for correct ones. In such cases the deductive is said to be contradictory. Thus, one more formulation of TGN sounds like this: "There exist propositional languages for which a complete consistent deductive is impossible" - hence the name of the theorem.

Sometimes "Gödel's theorem" is used to mean the statement that any theory contains problems that cannot be solved within the theory itself and require its generalization. In a sense this is true, although such a formulation obscures the issue rather than clarifying it.

I note also that if we were talking about familiar functions mapping the set of real numbers into itself, then the "non-computability" of a function would surprise no one (just do not confuse "computable functions" with "computable numbers" - these are different things). Any schoolchild knows that, say, in the case of the function sin x you must be very lucky with the argument for the process of calculating the exact decimal representation of the value of this function to end in a finite number of steps. Most likely you will calculate it using an infinite series, and this calculation will never lead to an exact result, although it can come as close to it as you like - simply because the value of the sine of most arguments is irrational. TGN simply tells us that even among functions whose arguments are strings and whose values are zero or one, non-computable functions also exist, although they are arranged quite differently.

For what follows, let us describe the language of "formal arithmetic". Consider a class of text strings of finite length consisting of Arabic numerals, variables (letters of the Latin alphabet) taking natural values, spaces, signs of arithmetic operations, equality and inequality, the quantifiers ∃ ("there exists") and ∀ ("for any"), and perhaps some other symbols (their exact number and composition are unimportant for us). It is clear that not all such strings are meaningful (for example, a string like "=∀+3" is nonsense). The subset of meaningful expressions from this class (that is, strings that are true or false from the point of view of ordinary arithmetic) will be our set of statements.

Examples of formal arithmetic statements:

2 + 2 = 4, 3 × 5 = 16, ∀x: x + 2 > x, ∃y: y × y = 16

etc. (the second of these is false, the rest are true). Now let us call a "formula with a free parameter" (FSP) a string that becomes a statement if a natural number is substituted into it as that parameter. Examples of FSPs (with parameter x):

x > 5, x + 2 = 8, ∃y: y × y = x

etc. In other words, FSPs are equivalent to functions of a natural argument with a Boolean value.
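In programming terms (a toy illustration, with hypothetical example formulas of my own choosing), an FSP is just a predicate on natural numbers:

```python
# Each FSP, once its free parameter x is filled in, is a statement;
# the FSP itself is therefore a function from naturals to Booleans.
fsp_examples = {
    "x > 5":               lambda x: x > 5,
    "x is even":           lambda x: x % 2 == 0,
    "exists y: y * y = x": lambda x: any(y * y == x for y in range(x + 1)),
}

print(fsp_examples["x > 5"](7))                 # True
print(fsp_examples["exists y: y * y = x"](16))  # True
print(fsp_examples["exists y: y * y = x"](15))  # False
```

The bounded search in the last predicate is enough here because y × y = x forces y ≤ x.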

Let us denote the set of all FSPs by the letter F. It is clear that it can be ordered (for example, we first write out the one-character formulas ordered alphabetically, then the two-character ones, and so on; it is not fundamental for us which alphabet the ordering uses). Thus, every FSP corresponds to its number k in the ordered list, and we will denote it F_k.
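The ordering described above is easy to make concrete. Here is a sketch (using a tiny two-letter alphabet for brevity) of a generator that yields all finite strings first by length, then alphabetically; the position at which a formula appears in such a stream would serve as its number in the list:

```python
from itertools import count, product

def enumerate_strings(alphabet):
    """Yield every finite string over `alphabet`: first by length,
    then alphabetically - the ordering used to number the FSPs."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = enumerate_strings("ab")
first_six = [next(gen) for _ in range(6)]
# ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Filtering this stream down to the meaningful formulas would not change the principle: every FSP still receives a definite natural number.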

Let us now turn to a sketch of the proof of TGN in the following formulation:

  • For the propositional language of formal arithmetic, there is no complete consistent deduction.

We will prove by contradiction.

So, suppose such a deductive exists. Let us describe the following auxiliary algorithm A, which assigns a Boolean value to a natural number n as follows:

1. Take the n-th formula in the ordered list of FSPs.
2. Substitute the number n into it as the parameter, obtaining a statement.
3. Apply the proving algorithm to that statement (by our assumption, it exists and always terminates).
4. If the result is FALSE, output TRUE; if TRUE, output FALSE.

Simply put, the algorithm yields the value TRUE if and only if the result of substituting into an FSP its own number in our list gives a false statement.
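The diagonal step can be sketched in code. The real list of FSPs is infinite and the truth values come from the assumed deductive; in this toy model I replace both with a short hand-picked list of Python predicates, which is enough to show why the auxiliary algorithm cannot coincide with any formula in the list:

```python
# A toy stand-in for the (really infinite) ordered list of FSPs;
# each entry is a predicate on natural numbers.
fsp_list = [
    lambda n: n > 5,
    lambda n: n % 2 == 0,
    lambda n: n == 2,
    lambda n: True,
]

def A(n: int) -> bool:
    """TRUE iff substituting n into the n-th FSP yields a FALSE statement."""
    return not fsp_list[n](n)

# A disagrees with every FSP in the list at the diagonal point:
for k, f in enumerate(fsp_list):
    assert A(k) != f(k)
# So A cannot be the k-th FSP for any k: if it were, A(k) would have to
# equal both f(k) and not f(k) - the contradiction used in the proof.
```

The lemma below is what upgrades this finite observation into the real theorem: it guarantees that A, being an algorithm, must nevertheless appear somewhere in the list.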

Here we come to the only place where I will ask the reader to take my word for it.

Obviously, under the assumption made above, any FSP can be matched with an algorithm that receives a natural number at the input and yields a Boolean value at the output. The converse is less obvious:

  • Lemma: any algorithm that converts a natural number into a Boolean value corresponds to some FSP equivalent to it.

The proof of this lemma would require at least a formal, rather than an intuitive, definition of the notion of an algorithm. However, if you think about it a little, it is quite plausible. Indeed, algorithms are written in algorithmic languages, among which there are such exotic ones as, for example, Brainfuck, which consists of eight one-character words, but in which any algorithm can nevertheless be implemented. It would be strange if the richer language of formal arithmetic formulas that we have described turned out to be poorer - although it is, no doubt, not very suitable for ordinary programming.

After passing this slippery place, we quickly get to the end.

So, we described the algorithm A above. According to the lemma I asked you to take on faith, there exists an FSP equivalent to it. It has some number in the list - say, n. Let us ask ourselves: what is the value of this FSP at n? Suppose it is TRUE. Then, by the construction of the algorithm (and hence of the function equivalent to it), this means that the result of substituting the number n into that FSP is FALSE. The converse is checked in the same way: from FALSE follows TRUE. We have arrived at a contradiction, which means that the original assumption was wrong. Hence, for formal arithmetic there is no complete consistent deductive. Q.E.D.

Here it is appropriate to recall Epimenides, who, as is well known, declared that all Cretans are liars, being himself a Cretan. In a more concise formulation, his statement (known as the "liar paradox") can be stated as: "I am lying". It is precisely such a statement, proclaiming its own falsity, that we used for the proof.

In conclusion, I want to note that TGN does not claim anything particularly surprising. After all, everyone has long been accustomed to the fact that not all numbers can be represented as a ratio of two integers (remember, this statement has a very elegant proof more than two thousand years old?). Nor is every number a root of a polynomial with rational coefficients. And now it turns out that not all functions of a natural argument are computable.

The sketch of the proof given here was for formal arithmetic, but it is not difficult to see that TGN applies to many other propositional languages as well. Of course, not all languages are like that. For example, let us define a language as follows:

  • "Any phrase in Chinese is a true statement if it is contained in the quotation book of Comrade Mao Tse-tung, and a false one if it is not."

Then the corresponding complete and consistent proving algorithm (one might call it a "dogmatic deductive") looks something like this:

  • "Flip through Comrade Mao Tse-tung's quotation book until you find the statement you are looking for. If it is found, it is true; if the quotation book has ended and the statement has not been found, it is false."

Here we are saved by the fact that any quotation book is obviously finite, so the process of "proving" will inevitably end. Thus TGN is inapplicable to the language of dogmatic statements. But we were talking about complex languages, weren't we?
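A "dogmatic deductive" is trivially programmable precisely because the book is finite. A sketch with hypothetical placeholder quotations:

```python
# Truth in this language is simply membership in a fixed, finite quote book.
QUOTE_BOOK = {
    "statement one",
    "statement two",
    "statement three",
}

def dogmatic_deductive(statement: str) -> bool:
    """Complete and consistent: the book is finite, so the search always
    halts with exactly one of True or False for every statement."""
    return statement in QUOTE_BOOK

print(dogmatic_deductive("statement two"))   # True
print(dogmatic_deductive("anything else"))   # False
```

Completeness and consistency here come for free - at the price of the language being too poor for TGN (or for arithmetic) to apply.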



Kurt Gödel

Kurt Gödel, one of the greatest specialists in mathematical logic, was born on April 28, 1906 in Brünn (now Brno, Czech Republic). He graduated from the University of Vienna, where he defended his doctoral dissertation and was an assistant professor in 1933–1938. After the Anschluss he emigrated to the United States. From 1940 to 1963 Gödel worked at the Institute for Advanced Study in Princeton. He held honorary doctorates from Yale and Harvard Universities and was a member of the National Academy of Sciences of the United States and the American Philosophical Society.

In 1951, Kurt Gödel was awarded the highest scientific award in the United States, the Einstein Prize. In an article devoted to the occasion, another of the greatest mathematicians of the era, John von Neumann, wrote: "Kurt Gödel's contribution to modern logic is truly monumental. It is more than just a monument. It is a milestone separating two epochs... It can be said without any exaggeration that Gödel's work fundamentally changed the very subject of logic as a science."

Indeed, even a dry list of Gödel's achievements in mathematical logic shows that their author essentially laid the foundations of entire branches of the science: model theory (1930; the so-called completeness theorem for the narrow predicate calculus, showing, roughly speaking, that the means of "formal logic" suffice to prove every true sentence expressible in its language); constructive logic (1932–1933; results on the reducibility of certain classes of classical sentences to their intuitionistic counterparts, which initiated the systematic use of "embedding operations" that allow such reductions between various logical systems); formal arithmetic (1932–1933; results on the reducibility of classical arithmetic to intuitionistic arithmetic, showing in a certain sense the consistency of the former relative to the latter); the theory of algorithms and recursive functions (1934; the definition of the notion of a general recursive function, which played a decisive role in establishing the algorithmic unsolvability of a number of critical problems of mathematics, on the one hand, and in the realization of logical and mathematical problems on electronic computers, on the other); and axiomatic set theory (1938; the proof of the relative consistency of the axiom of choice and Cantor's continuum hypothesis with the axioms of set theory, which opened a series of key results on the relative consistency and independence of set-theoretic principles).

Godel's incompleteness theorem

Introduction

In 1931, a comparatively short paper appeared in one of the German scientific journals under the rather daunting title "On Formally Undecidable Propositions of Principia Mathematica and Related Systems". Its author was a twenty-five-year-old mathematician from the University of Vienna, Kurt Gödel, who later worked at the Princeton Institute for Advanced Study. This work played a decisive role in the history of logic and mathematics. In the citation accompanying Harvard University's award of an honorary doctorate to Gödel (1952), it was described as one of the greatest achievements of modern logic.

At the time of publication, however, neither the title of Gödel's work nor its content meant anything to most mathematicians. Principia Mathematica, mentioned in its title, is Alfred North Whitehead and Bertrand Russell's monumental three-volume treatise on mathematical logic and the foundations of mathematics; familiarity with the treatise was by no means a necessary condition for successful work in most branches of mathematics. Interest in the questions addressed in Gödel's work had always been the preserve of a very small group of scientists. At the same time, the arguments Gödel gave in his proofs were so unusual for their time that a complete understanding of them required an exceptional command of the subject and familiarity with the literature devoted to these very specific problems.

First incompleteness theorem

Gödel's first incompleteness theorem is arguably the most significant result in mathematical logic. It states:

For an arbitrary consistent, computable formal theory in which basic arithmetical propositions can be proved, one can construct a true arithmetical proposition whose truth cannot be proved within the theory. In other words, any effectively generated theory rich enough to represent arithmetic cannot be both consistent and complete.

Here the word "theory" means a (generally infinite) set of statements, some of which are assumed true without proof (such statements are called axioms), while others (theorems) can be derived from the axioms and are therefore believed (proved) to be true. The phrase "provable in the theory" means "derivable from the axioms and primitives of the theory (the constant symbols of its alphabet) using standard (first-order) logic". A theory is consistent if no contradictory statement can be proved in it. The phrase "can be constructed" means that there is some mechanical procedure (algorithm) that can build the statement from the axioms, the primitives and first-order logic. "Elementary arithmetic" means having the operations of addition and multiplication on the natural numbers. The resulting true but unprovable proposition is often called the "Gödel sentence" for the given theory, but there are infinitely many other propositions in the theory with the same property of being unprovable within it.

The assumption that the theory is computable means that it is in principle possible to implement a computer algorithm (a computer program) which, if allowed to run arbitrarily long, computes a list of all the theorems of the theory. In fact, it suffices to enumerate the list of axioms, since all the theorems can then be effectively derived from it.
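To make "effectively generated" concrete, here is a toy sketch (the string-rewriting system below is invented purely for illustration and has nothing to do with arithmetic): a short program that, given axioms and inference rules, enumerates theorems one by one, exactly in the sense the paragraph above describes.

```python
from collections import deque

# A made-up toy formal system: one axiom "I" and two rewriting rules,
# x -> x + "U" and x -> x + x. Its "theorems" are all derivable strings.
AXIOMS = ["I"]
RULES = [lambda x: x + "U", lambda x: x + x]

def enumerate_theorems(limit):
    """Breadth-first enumeration of everything derivable from the axioms.
    Allowed to run forever, this would list *all* theorems of the toy
    system -- which is what 'computable (effectively generated) theory'
    means: there is a program that eventually prints every theorem."""
    seen, queue, out = set(AXIOMS), deque(AXIOMS), []
    while queue and len(out) < limit:
        theorem = queue.popleft()
        out.append(theorem)
        for rule in RULES:
            derived = rule(theorem)
            if derived not in seen:
                seen.add(derived)
                queue.append(derived)
    return out

print(enumerate_theorems(5))  # → ['I', 'IU', 'II', 'IUU', 'IUIU']
```

Note what the sketch does not provide: a procedure that, given an arbitrary string, decides whether it will ever appear in the list. Enumerating theorems is easy; deciding membership is the hard part, and that is where incompleteness bites.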

The first incompleteness theorem appeared as "Theorem VI" in Gödel's 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I". In Gödel's original formulation it reads:

"The general result on the existence of undecidable propositions is this:

Theorem VI .

For every ω-consistent recursive class κ of FORMULAS there exist recursive CLASS SIGNS r such that neither (v Gen r) nor ¬(v Gen r) belongs to Flg(κ) (where v is the FREE VARIABLE of r)."

The notation Flg comes from the German Folgerungsmenge (set of consequences), and Gen from the German Generalisation (generalization).

Roughly speaking, the Gödel statement G asserts: "G cannot be proved". If G could be proved within the theory, the theory would contain a theorem contradicting itself, and would therefore be inconsistent. But if G is unprovable, then it is true, and the theory is therefore incomplete (the statement G is not derivable in it).
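The self-reference at the heart of G has a well-known programming analogue: a quine, a program whose output is its own source text, built by the same diagonal trick that lets G speak about its own number. A minimal Python sketch (an analogy only, not part of Gödel's proof):

```python
# A quine: the program prints exactly its own source code. The string
# holds a template of the program, and %r splices in the string's own
# representation -- self-reference without infinite regress, just as
# G refers to its own Goedel number.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Running this file prints the two lines above verbatim; feeding the output back in as a program reproduces itself forever.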

The informal argument above is given in ordinary natural language and is therefore not quite mathematically rigorous. To make the proof rigorous, Gödel numbered statements with natural numbers, so that the theory describing numbers comes to contain statements about its own propositions. Questions about the provability of propositions can then be represented as questions about properties of natural numbers, which would have to be computable if the theory were complete. In these terms, the Gödel statement says that there is no number with a certain definite property; a number with that property would constitute a proof of the inconsistency of the theory. If such a number existed, the theory would be inconsistent, contrary to the initial assumption. So, assuming the theory is consistent (as the premise of the theorem requires), no such number exists, and the Gödel statement is true, yet this cannot be proved within the theory (hence the theory is incomplete). An important conceptual point is that one must assume the theory consistent in order to declare the Gödel statement true.
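The numbering step can be made tangible with a small sketch (an illustrative encoding in the spirit of Gödel's, not his exact scheme; the symbol table is invented): each symbol of the formal language gets a code, and a formula s₁s₂…sₙ is mapped to the single number 2^c₁ · 3^c₂ · 5^c₃ · …, which is injective by unique prime factorization.

```python
# Hypothetical symbol codes for a tiny arithmetic alphabet.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
CODES = {v: k for k, v in SYMBOLS.items()}

def primes():
    """Generate 2, 3, 5, 7, ... by trial division (fine for a demo)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    """Encode a formula (string of symbols) as one natural number."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    """Recover the formula by prime factorisation -- statements and
    numbers become interchangeable, which is what lets the theory of
    numbers talk about its own statements."""
    out = []
    for p in primes():
        if g == 1:
            break
        exp = 0
        while g % p == 0:
            g //= p
            exp += 1
        out.append(CODES[exp])
    return "".join(out)

print(godel_number("0=0"))  # 2**1 * 3**3 * 5**1 → 270
print(decode(270))          # → 0=0
```

With such an encoding, "formula X is provable" becomes an arithmetical property of the number of X, which is exactly the bridge Gödel's rigorous proof needs.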

Gödel's second incompleteness theorem

Gödel's second incompleteness theorem reads as follows:

For any formal, recursively enumerable (that is, effectively generated) theory T that can prove basic arithmetical truths and certain statements about formal provability, the theory T proves a statement of its own consistency if and only if T is inconsistent.

In other words, the consistency of a sufficiently rich theory cannot be proved by the means of that theory itself. It may well turn out, however, that the consistency of one particular theory can be established by means of another, more powerful formal theory. But then the question arises of the consistency of that second theory, and so on.

Many have tried to use this theorem to prove that intelligent activity cannot be reduced to computation. For example, back in 1961 the well-known logician John Lucas put forward such a program. His reasoning proved rather vulnerable, though he framed the task more broadly. Roger Penrose takes a somewhat different approach, which he presents in his book in full, "from scratch".

Discussions

The consequences of the theorems bear on the philosophy of mathematics, especially on those formalisms that use formal logic to define their principles. The first incompleteness theorem can be rephrased as follows: "it is impossible to find a comprehensive system of axioms capable of proving all mathematical truths and not a single falsehood". On the other hand, from the standpoint of strict formality this reformulation does not make much sense, since it assumes that the concepts "true" and "false" are defined in an absolute sense rather than relative to each particular system.