
Google on the problem of artificial intelligence: the machine as a perfect person. Can a computer think?

In order to improve people's lives, artificial intelligence must first of all be human-centric. As Fei-Fei Li, chief scientist at Google Cloud, explains, in practice this means improving its communication and collaboration skills, in addition to diversifying the teams that develop these technologies.

Artificial intelligence today

One of the biggest artificial intelligence (AI) companies is Google, a company that many associate with its popular search engine. Over the years its specialists have been developing AI, which can be seen in several current products: Google Assistant, which came to Android smartphones in February; the search engine itself, which AI helps do its job; and the Google Home smart speaker, which is equipped with an AI system. Right now AI looks like a first grader, but in the long run it could reach the point where it becomes smarter than the average person interacting with it.

However, to achieve this goal, AI itself will first have to become more human-like. At least that is what Fei-Fei Li, chief scientist at Google Cloud and director of the Stanford Artificial Intelligence and Vision Lab, thinks. In her opinion, this is how AI will help us improve everyday life and provide psychological comfort when we communicate with it.

In an interview with the MIT Technology Review, Li explained that thinking about the impact of AI on the world around us is a critical part of the analytical process and that working with artificial intelligence has clearly shown that future developments need to be more human-centric.

Development of machine intelligence

“If we evaluate the level AI technology has reached at the present time, then in my opinion the most striking marker is excellent pattern recognition. Modern AI is too narrowly focused; it lacks the contextual awareness and flexible learning inherent in humans. We also want to make technology that makes people's lives better, safer, and more productive, and that requires a very high level of human-machine interaction,” says Li.

We can already see the first signs of such a trend, although its potential has yet to be revealed. For example, in July Google DeepMind demonstrated AI that has “imagination” and is able to analyze information and plan actions without human intervention. The company said its brainchild is “on the verge” of being able to imitate human speech perfectly. Another Google technology, Google Clips, can take pictures on its own, eliminating the need for a person to guess the “perfect moment” to photograph. This fits well with Li's vision, but it also highlights the need for further work toward AI self-reliance.

AI as the ideal human of the future

As Li argues, simply upgrading artificial intelligence and adding new features to it will not be enough. The stronger the connection between machine and person, the higher the risk that immoral elements of society will affect artificial intelligence; what is needed here, therefore, is first of all a creative approach and the collective work of many specialists.

“When you create a technology that is widespread and plays a crucial role for humanity, then, of course, you need to make sure that it carries the values inherent in all people and serves the needs of the entire population of the Earth. If the developers make every effort to ensure that the program is equally useful to everyone without exception, this will indeed allow a revolution in artificial intelligence systems.”

Of course, artificial intelligence is our ticket to the future. But, being just a tool in the hands of a person, it can be used both for good and for selfish, sometimes even illegal purposes. If this technology is destined to change the world, then let's hope that scientists will make the right choice.

Algorithmic artificial intelligence encompasses machine understanding of natural language (question-answer systems and natural-language access to databases), translation from one language to another, pattern recognition, analysis of images of three-dimensional scenes, logical systems for knowledge representation and inference, heuristic programming, theorem proving, decision making, games, databases and knowledge bases, robots, and expert systems.

Preliminary definition of thinking

The brain arose and evolved to ensure the existence of animals, that is, for survival. A simple functional definition of thinking is therefore possible, based on the idea of what thinking is for (in a human or an animal). Thinking is an active process in the living brain aimed at: 1) building in the brain an active hierarchical model of the environment, necessary and sufficient for perceiving the environment and controlling active, purposeful behavior in a multi-extremal environment; 2) carrying out the process of perceiving the environment; 3) carrying out the process of controlling behavior in a multi-extremal environment; 4) carrying out the process of learning; 5) solving non-algorithmic (creative) problems.

One man did not even suspect the absence of his brain, and this did not prevent him from leading a full intellectual life. He was a 44-year-old head of a municipal administration. His general IQ was 75, verbal 84, non-verbal 70 — not very high values, to be sure, but on the whole above the lower limit of the norm, which as a rule is set at 70.

There is a dragon in each of us. Organizationally, the human brain, like that of all higher animals, can be conditionally divided into three functional structures: the reptilian brain, the stem brain (including the spinal cord), and the neocortex. Behavior, including intellectual behavior, is determined by the integrated, joint work of all these structures. Everything we inherited from reptiles is as human as the satisfaction of higher spiritual needs. Many highly “spiritual” teachings are essentially a form of reptilian behavior. To kill the “Dragon,” in E. Schwartz's metaphor, is impossible. With the “Dragon” one can only come to terms.

Classical artificial intelligence is unlikely to be embodied in thinking machines; human ingenuity in this area will apparently be limited to the creation of systems that mimic the work of the brain.

The science of artificial intelligence (AI) is undergoing a revolution. In order to explain its causes and meaning and put it into perspective, we must first turn to history.

In the early 1950s, the traditional and somewhat vague question of whether a machine could think gave way to the more tractable question of whether a machine that manipulated physical symbols according to rules sensitive to their structure could think. This question could be formulated more precisely because formal logic and the theory of computation had advanced significantly during the preceding half-century. Theorists began to appreciate the possibilities of abstract symbol systems that undergo transformations in accordance with definite rules. It seemed that if these systems could be automated, their abstract computational power would manifest itself in a real physical system. Such views contributed to the birth of a well-defined research program on a fairly deep theoretical basis.

Can a machine think?

There were many reasons for answering yes. Historically, one of the first and deepest reasons was a pair of important results in the theory of computation. The first was Church's thesis, which states that every effectively computable function is recursively computable. The term “effectively computable” means that there is some “mechanical” procedure by which the result can be calculated in finite time given the input data. “Recursively computable” means that there is a finite set of operations that can be applied to a given input, and then applied sequentially and repeatedly to the newly obtained results, so as to compute the function in finite time. The concept of a mechanical procedure is not formal but intuitive, and therefore Church's thesis has no formal proof. However, it gets to the heart of what computation is, and many different lines of evidence converge to support it.
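To make the notion concrete, here is a minimal sketch of our own (not from the source) of recursive computability in Python: addition is built from nothing but the successor operation, applied a finite number of times to previously obtained results.

```python
# A minimal illustration of "recursively computable": a finite set of
# operations (here, just the successor) applied sequentially and
# repeatedly to newly obtained results. Function names are illustrative.

def successor(n):
    return n + 1

def add(m, n):
    # Primitive recursion: add(m, 0) = m; add(m, n + 1) = successor(add(m, n)).
    result = m
    for _ in range(n):  # finitely many mechanical steps
        result = successor(result)
    return result

print(add(3, 4))  # -> 7
```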

The second important result was obtained by Alan M. Turing, who showed that any recursively computable function can be computed in finite time by a maximally simplified symbol-manipulating machine, which later came to be called a universal Turing machine. This machine is governed by recursively applicable rules that are sensitive to the identity, order, and location of the elementary symbols serving as input.
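As an illustration of such a rule-governed machine, here is a small Python sketch of our own (a simple Turing-machine simulator, not Turing's original construction); the rule table, which increments a binary number, is purely a hypothetical example.

```python
# A tiny Turing-machine simulator: rules keyed on (state, symbol under
# the head) are applied repeatedly until the machine halts. The rule
# set below, which increments a binary number, is illustrative only.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells)).strip(blank)
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    raise RuntimeError("no halt within the step limit")

# Walk right past the digits, then propagate the carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # -> "1100" (11 + 1 = 12)
```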

A very important corollary follows from these two results: a standard digital computer, provided with the right program, a sufficiently large memory, and sufficient time, can compute any rule-governed function with input and output. In other words, it can exhibit any systematic set of responses to arbitrary influences from the external environment.

To make this concrete: the results discussed above mean that a suitably programmed symbol-manipulating machine (hereinafter, an SM machine) should satisfy the Turing test for the presence of a conscious mind. The Turing test is a purely behavioral test, yet its requirements are very strong. (How valid this test is, we will discuss below, when we meet a second, fundamentally different “test” for the presence of a conscious mind.) According to the original version of the Turing test, the input to the SM machine consists of questions and phrases in natural conversational language, which we type on the keyboard of an input device, and the output consists of the SM machine's answers, printed by an output device. A machine is considered to have passed this test for the presence of a conscious mind if its answers cannot be distinguished from answers typed by a real, intelligent person. Of course, at present no one knows the function by which one could obtain output indistinguishable from the behavior of a rational person. But the results of Church and Turing guarantee us that, whatever this (presumably effective) function may be, a suitably designed SM machine can compute it.

This is a very important conclusion, especially since Turing's description of interaction with the machine by means of a typewriter constitutes no essential limitation. The same conclusion holds even if the SM machine interacts with the world in more complex ways: through an apparatus of direct vision, natural speech, and so on. In the end, the more complex recursive function would still remain Turing-computable. Only one problem would remain: to find the undoubtedly complex function that governs human responses to influences from the external environment, and then write the program (the set of recursively applicable rules) by which the SM machine will compute it. These goals formed the basis of the scientific program of classical artificial intelligence.

The first results were encouraging

SM machines with ingeniously written programs demonstrated a whole range of actions that seemed to belong to the manifestations of mind. They responded to complex commands, solved difficult arithmetic, algebraic, and tactical problems, played checkers and chess, proved theorems, and carried on simple dialogues. Results continued to improve with the advent of larger storage devices, faster machines, and the development of more powerful and sophisticated programs. Classical, or “programmed,” AI was a vibrant and successful field of science in almost every respect. The recurrent denial that SM machines would eventually be able to think seemed biased and uninformed. The evidence in favor of a positive answer to the question posed in the title of this article seemed more than convincing.

Of course, some things remained unclear. First of all, SM machines did not look much like the human brain. However, here too classical AI had a convincing answer ready. First, the physical material of which an SM machine is made has essentially nothing to do with the function it computes; the latter is embodied in the program. Second, the technical details of a machine's functional architecture are also irrelevant, since completely different architectures, designed to run completely different programs, can nevertheless perform the same input-output function.

Therefore, the aim of AI was to find a function that is characteristic of the mind in terms of input and output, and also to create the most efficient of many possible programs in order to calculate this function. At the same time, it was said that the specific way in which the function is calculated by the human brain does not matter. This completes the description of the essence of classical AI and the grounds for a positive answer to the question posed in the title of the article.

Can a machine think? There were also some arguments in favor of a negative answer. During the 1960s, noteworthy negative arguments were relatively rare. The objection was sometimes raised that thinking is not a physical process and that it takes place in an immaterial soul. However, such a dualistic view did not seem convincing enough from either an evolutionary or a logical point of view, and it had no deterrent effect on AI research.

Considerations of a different nature attracted much more attention from AI specialists. In 1972 Hubert L. Dreyfus published a book that was highly critical of showcase demonstrations of intelligence by AI systems. He pointed out that these systems did not adequately model true thinking, and he uncovered a pattern common to all of these failed attempts. In his opinion, the models lacked the enormous stock of unformalized general knowledge about the world that every person has, as well as the common-sense ability to draw on particular components of this knowledge as the demands of a changing environment require. Dreyfus did not deny the fundamental possibility of creating an artificial physical system capable of thinking, but he was highly critical of the idea that this could be achieved solely by manipulating symbols with recursively applied rules.

In the circles of artificial intelligence specialists, as well as among philosophers, Dreyfus's arguments were perceived mostly as short-sighted and biased, dwelling on the inevitable simplifications inherent in a still very young field of research. Perhaps these shortcomings really existed, but they were surely temporary. The time would come when more powerful machines and better programs would make it possible to get rid of them. It seemed that time was working for artificial intelligence. Thus these objections had no noticeable impact on further research in the field of AI.

However, it turned out that time was working for Dreyfus: in the late 1970s and early 1980s, increases in the speed and memory of computers did little to improve their “mental abilities.” It turned out, for example, that pattern recognition in machine vision systems requires an unexpectedly large amount of computation. To obtain practically reliable results, more and more computer time had to be spent, far exceeding the time a biological vision system needs to perform the same tasks. Such a slow simulation process was alarming: after all, in a computer signals propagate about a million times faster than in the brain, and the clock frequency of a computer's central processor exceeds the frequency of any oscillations found in the brain by about the same factor. And yet, on realistic tasks, the tortoise easily outruns the hare.

In addition, solving realistic problems requires that the computer program have access to an extremely large database. Building such a database is already a rather complex problem in itself, but it is exacerbated by another circumstance: how to provide access to specific, context-dependent fragments of this database in real time. As databases became more and more capacious, the problem of access grew more complicated. Exhaustive search took too long, and heuristic methods were not always successful. Fears similar to those expressed by Dreyfus began to be shared even by some experts working in the field of artificial intelligence.

Around this time (1980), John Searle presented a landmark critical argument that called into question the most fundamental assumption of the classical AI research program: the idea that the correct manipulation of structured symbols, by the recursive application of rules sensitive to their structure, could constitute the essence of the conscious mind.

Searle's main argument was based on a thought experiment in which he demonstrates two very important points. First, he describes an SM machine that (as we are to understand) implements a function which, on input and output, is capable of passing the Turing test in the form of a conversation conducted exclusively in Chinese. Second, the internal structure of the machine is such that, whatever behavior it exhibits, the observer has no doubt that neither the machine as a whole nor any part of it understands Chinese. All it contains is a person who speaks only English, following the rules written in an instruction manual for manipulating the symbols that enter and exit through a mail slot in the door. In short, the system passes the Turing test despite having no genuine understanding of Chinese or of the actual semantic content of the messages (see J. Searle's article “Is the Brain's Mind a Computer Program?”).

The general conclusion from this is that any system that simply manipulates physical symbols according to structure-sensitive rules will at best be a poor parody of a real conscious mind, since it is impossible to generate “real semantics” simply by turning the crank of “empty syntax.” It should be noted here that Searle is putting forward a non-behavioral test for the presence of consciousness: the elements of a conscious mind must have real semantic content.

There is a temptation to reproach Searle with the inadequacy of his thought experiment on the grounds that the system he proposes, working through its rules like someone fumbling with a Rubik's cube, would operate absurdly slowly. Searle, however, insists that speed plays no role here: he who thinks slowly still thinks correctly. Everything necessary for the reproduction of thinking according to the conception of classical AI is, in his opinion, present in the “Chinese room.”

Searle's article elicited lively responses from AI experts, psychologists, and philosophers. On the whole, however, it was met with even more hostility than Dreyfus's book. In his article, published simultaneously in this issue of the journal, Searle considers a number of critical arguments against his conception. In our opinion, many of those replies are legitimate, especially the ones whose authors eagerly “take the bait,” claiming that, although the system consisting of the room and its contents is terribly slow, it nevertheless understands Chinese.

We like these answers, but not because we think the Chinese room understands Chinese. We agree with Searle that it does not. The attraction of these arguments is that they reflect a refusal to accept the all-important third axiom of Searle's argument: “Syntax by itself does not constitute semantics and is not sufficient for the existence of semantics.” This axiom may be true, but Searle cannot justifiably claim to know that for certain. Moreover, to assume it is true is to beg the question of whether the classical AI research program is sound, since that program rests on the very interesting assumption that if we can only set in motion an appropriately structured process, a kind of internal dance of syntactic elements correctly connected with inputs and outputs, then we can obtain the same states and manifestations of mind that are inherent in man.

That Searle's third axiom really does beg this question becomes apparent when we compare it directly with his own first conclusion: “Programs are not the essence of mind, and their presence is not sufficient for the presence of mind.” It is not difficult to see that the third axiom already carries 90 percent of the weight of this nearly identical conclusion. That is why Searle's thought experiment is specifically designed to support the third axiom. This is the whole point of the Chinese room.

Although the Chinese room example makes axiom 3 attractive to the uninitiated, we do not think it proves the validity of this axiom, and to demonstrate its failure we offer a parallel example of our own as an illustration. Often a single good example that refutes a disputed claim clarifies the situation far better than an entire book full of logical juggling.

There have been many examples in the history of science of skepticism like that which we see in Searle's reasoning. In the 18th century, the Irish bishop George Berkeley considered it inconceivable that compression waves in air could in themselves be the essence of sound phenomena or a factor sufficient for their existence. The English poet and painter William Blake and the German naturalist Johann Goethe considered it unthinkable that small particles of matter could in themselves be the essence of light or a factor sufficient for its objective existence. Even in this century there have been people who could not imagine that inanimate matter by itself, however complex its organization, could be the essence of life or a condition sufficient for it. Clearly, what people can or cannot imagine often has nothing to do with what actually exists or does not exist in reality. This is true even of people with a very high level of intelligence.

To see how these historical lessons apply to Searle's reasoning, let us construct an artificial parallel to his logic and reinforce the parallel with a thought experiment.

Axiom 1. Electricity and magnetism are physical forces.

Axiom 2. An essential property of light is luminosity.

Axiom 3. Forces by themselves are not the essence of the luminosity effect and are not sufficient for its presence.

Conclusion 1. Electricity and magnetism are not the essence of light and are not sufficient for its presence.

Let us assume that this reasoning had been published shortly after James Clerk Maxwell suggested in 1864 that light and electromagnetic waves were identical, but before the systematic parallels between the properties of light and the properties of electromagnetic waves had been fully appreciated by the scientific world. The above reasoning might then have seemed a convincing objection to Maxwell's bold hypothesis, especially if accompanied by the following commentary in support of Axiom 3.

Consider a dark room in which a person holds a permanent magnet or a charged object. If the person moves the magnet up and down, then, according to Maxwell's theory of artificial luminance (AL), a propagating sphere of electromagnetic waves will emanate from the magnet and the room will become brighter. But, as anyone who has tried playing with magnets or charged balls knows well, their forces (or any other forces, for that matter), even when these objects are in motion, create no glow at all. It therefore seems unthinkable that we could achieve a real luminous effect simply by manipulating forces!

Yet oscillating electromagnetic forces are light, even though a magnet moved by a person produces no glow. Likewise, the manipulation of symbols according to certain rules may constitute intelligence, even though the rule-governed system found in Searle's Chinese room seems to lack real understanding.

What could Maxwell answer if this challenge were thrown to him?

First, he might insist that the “luminous room” experiment misleads us about the properties of visible light, because the frequency of the magnet's oscillation is extremely low, about 10^15 times lower than necessary. To this might come the impatient reply that frequency plays no role here, that the room with the oscillating magnet already contains everything necessary for the luminous effect, in full accordance with Maxwell's own theory.

Maxwell, in turn, might “take the bait,” claiming quite rightly that the room is already full of luminance, but of a nature and strength that a person is unable to see. (Because of the low frequency with which the person moves the magnet, the wavelength of the generated electromagnetic waves is too long and their intensity too low for the human eye to react to them.) However, given the level of understanding of these phenomena at the time (the 1860s), such an explanation would probably have provoked laughter and mockery: “A luminous room? But excuse me, Mr. Maxwell, it is completely dark in there!”

So we see that poor Maxwell has a hard time of it. All he can do is insist on the following three points. First, axiom 3 in the above reasoning is not true: despite seeming intuitively quite plausible, it begs the question. Second, the luminous room experiment shows us nothing interesting about the physical nature of light. And third, in order really to settle the problem of light and the possibility of artificial luminance, we need a research program that will allow us to establish whether, under appropriate conditions, the behavior of electromagnetic waves is completely identical to the behavior of light. Classical artificial intelligence should give the same answer to Searle's reasoning. Although Searle's Chinese room may seem “semantically dark,” he has little ground for insisting that the manipulation of symbols according to certain rules can never produce semantic phenomena, especially since people are still ill informed, limited to the common-sense level of understanding of the very semantic and mental phenomena that need to be explained. Instead of drawing on an understanding of these things, Searle in his reasoning freely exploits people's lack of such understanding.

Having expressed our criticisms of Searle's reasoning, let us return to the question of whether the classical AI program has a real chance of solving the problem of the conscious mind and creating a thinking machine. We believe the prospects here are not bright, but our opinion rests on reasons fundamentally different from those Searle uses. We build on the specific failures of the classical AI research program and on a number of lessons that the biological brain has taught us through a new class of computational models embodying some properties of its structure. We have already mentioned the failures of classical AI on problems that the brain solves quickly and efficiently. Scientists are gradually coming to a consensus that these failures stem from properties of the functional architecture of SM machines, which is simply unsuited to the complex tasks facing it.

What we need to know is this: how does the brain achieve the effect of thinking? Reverse engineering is a widespread technique in industry: when a new device goes on sale, competitors figure out how it works by taking it apart and trying to deduce the principle on which it is based. In the case of the brain, this approach is extraordinarily difficult to carry out, because the brain is the most complex thing on the planet. Nevertheless, neurophysiologists have managed to reveal many properties of the brain at various structural levels. Three anatomical features fundamentally distinguish it from the architecture of traditional electronic computers.

First, the nervous system is a parallel machine, in the sense that signals are processed simultaneously along millions of different pathways. For example, the retina of the eye transmits a complex input signal to the brain not in batches of 8, 16, or 32 elements, like a desktop computer, but as a signal consisting of almost a million individual elements arriving simultaneously at the end of the optic nerve (at the lateral geniculate body), after which they are likewise processed simultaneously, in a single step, by the brain. Second, the elementary “processing device” of the brain, the neuron, is relatively simple. Moreover, its response to an input signal is analog rather than digital, in the sense that the frequency of its output signal varies continuously with its input signals.

Third, in the brain, in addition to axons leading from one group of neurons to another, we often find axons leading in the opposite direction. These returning projections allow the brain to modulate the way sensory information is processed. Even more important is the fact that their existence makes the brain a genuinely dynamic system, in which continuously maintained behavior is characterized both by very high complexity and by relative independence from peripheral stimuli.

Simplified network models have played a useful role in studying the mechanisms of real neural networks and the computational properties of parallel architectures. Consider, for example, a three-layer model consisting of neuron-like elements that have axon-like connections to the elements of the next level. An input stimulus reaches the activation threshold of a given input element, which sends a signal of proportional strength along its “axon” to the numerous “synaptic” endings of the elements of the hidden layer. The overall effect is that a particular pattern of activating signals on the set of input elements generates a particular pattern of signals on the set of hidden elements.

The same can be said of the output elements: the configuration of activating signals at the level of the hidden layer leads to a particular pattern of activation at the level of the output elements. Summing up, we can say that the network under consideration is a device for converting any of a large number of possible input vectors (configurations of activating signals) into a uniquely corresponding output vector. The device is designed to compute a particular function, and which function it computes depends on the global configuration of its synaptic weights.

Neural networks model the main property of the brain microstructure. In this three-layer network, the input neurons (lower left) process the configuration of firing signals (lower right) and pass them along the weighted connections to the hidden layer. The hidden layer elements sum up their multiple inputs to form a new signal configuration. It is passed to the outer layer, which performs further transformations. In general, the network will transform any input set of signals into the corresponding output, depending on the location and relative strength of the connections between neurons.
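The following short Python sketch (ours; the layer sizes, random weights, and sigmoid activation are illustrative assumptions, not taken from the text) shows the transformation just described: an input vector is pushed through two layers of synaptic weights, with each element summing its weighted inputs.

```python
# A minimal sketch of the three-layer network described above. Each
# layer is one matrix of "synaptic weights"; each element sums its
# weighted inputs and passes the sum through a smooth activation.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W_hidden = rng.normal(size=(4, 3))  # 3 input elements -> 4 hidden elements
W_output = rng.normal(size=(2, 4))  # 4 hidden elements -> 2 output elements

def forward(input_vector):
    hidden = sigmoid(W_hidden @ input_vector)  # whole layer in one parallel step
    return sigmoid(W_output @ hidden)          # second parallel step

print(forward(np.array([1.0, 0.0, 0.5])))  # input vector -> output vector
```

Which function the sketch computes is determined entirely by the two weight matrices, just as the text says.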

There are various procedures for fitting the weights, thanks to which a network can be made capable of computing almost any function (i.e., any transformation between vectors). In fact, one can implement in the network a function that one cannot even formulate; it is enough simply to give it a set of examples showing the input-output pairs we would like it to produce. This process, called “training the network,” proceeds by successive adjustment of the weights assigned to the connections, and it continues until the network begins to perform the desired transformation on the input in order to obtain the desired output.
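Here is a self-contained sketch of one such fitting procedure; gradient descent on a squared error is our choice for illustration (the text does not commit to any particular rule), with the XOR input-output pairs serving as the set of examples.

```python
# "Training the network": adjust the weights step by step until the
# network maps the example inputs to the desired outputs. Gradient
# descent is one common fitting procedure among several; XOR is an
# illustrative input-output specification.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(10_000):
    hidden = sigmoid(X @ W1)          # forward pass, layer by layer
    output = sigmoid(hidden @ W2)
    error = output - y
    d_out = error * output * (1 - output)           # gradient at the output
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)  # gradient at the hidden layer
    W2 -= 0.5 * hidden.T @ d_out      # nudge each weight against its gradient
    W1 -= 0.5 * X.T @ d_hid

print(output.round(2).ravel())  # approaches the desired 0, 1, 1, 0
```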

Although this network model greatly simplifies the structure of the brain, it still illustrates several important aspects. First, the parallel architecture provides a huge speed advantage over a traditional computer, since the many synapses at each level perform many small computational operations simultaneously, instead of operating in a very time-consuming sequential mode. This advantage becomes more and more significant as the number of neurons at each level increases. Surprisingly, the speed of information processing does not depend at all on the number of elements involved in the process at each level, nor on the complexity of the function that they calculate. Each level can have four elements, or a hundred million; a synaptic weight configuration can compute simple one-digit sums or solve second-order differential equations. It does not matter. The computation time will be exactly the same.

Second, the parallel nature of the system makes it insensitive to small errors and gives it functional stability; the loss of a few connections, even a noticeable number of them, has a negligible effect on the overall progress of the transformation performed by the rest of the network.

Third, a parallel system stores a large amount of information in distributed form, while providing access to any fragment of this information in a time measured in a few milliseconds. Information is stored in the form of particular configurations of the weights of the individual synaptic connections formed in the course of previous learning. The desired information is “released” as the input vector passes through (and is transformed by) this configuration of connections.

Parallel data processing is not ideal for every kind of computation. For problems with a small input vector that require many millions of rapidly repeated recursive calculations, the brain is utterly helpless, while classical SM machines show themselves at their best. This is a very large and important class of computations, so classical machines will always be needed, indeed necessary. However, there is an equally broad class of computations for which the architecture of the brain is the best technical solution. These are chiefly the computations that living organisms typically face: recognizing the contours of a predator in a “noisy” environment; instantly recalling the correct reaction to its gaze, how to flee when it approaches or to defend oneself when it attacks; distinguishing edible things from inedible ones and sexual partners from other animals; choosing behavior in a complex and constantly changing physical or social environment; and so on.

Finally, it is very important to note that the parallel system described does not manipulate symbols according to structural rules. Rather, symbol manipulation is just one of many other “intelligent” skills that the network may or may not learn. Rule-governed symbol manipulation is not the network's primary mode of functioning. Searle's reasoning is directed against rule-governed SM machines; vector-transformation systems of the kind we have described thus fall outside the scope of his Chinese room argument, even if that argument were valid, which we have other, independent reasons to doubt.

Searle is aware of parallel processors, but, in his opinion, they will also be devoid of real semantic content. To illustrate their inevitable inferiority in this regard, he describes a second thought experiment, this time with a Chinese gym filled with people organized in a parallel network. The further course of his reasoning is similar to the reasoning in the case of the Chinese room.

In our opinion, this second example is not as successful and convincing as the first. To begin with, the fact that not a single element in the system understands Chinese plays no role, because the same is true of the human nervous system: not a single neuron in my brain understands English, although the brain as a whole does. Searle goes on to say that his model (one person per neuron plus one quick-footed boy per synaptic connection) would require at least 10^14 people, since the human brain contains 10^11 neurons, each with an average of 10^3 connections. Thus his system would require the population of 10,000 worlds the size of our Earth. Obviously, a gym is far from able to accommodate a more or less adequate model.

On the other hand, if such a system could nevertheless be assembled on the appropriate cosmic scale, with all its connections accurately modeled, we would have a huge, slow, strangely designed, but still functioning brain. In that case, of course, it is natural to expect that, given the right input, it will think, not that it is incapable of thinking. It cannot be guaranteed that the operation of such a system would represent real thinking, since the theory of vector processing may not adequately reflect the operation of the brain. But neither do we have any a priori guarantee that it would not think. Searle once again mistakenly identifies the current limits of his own (or the reader's) imagination with the limits of objective reality.

The brain

The brain is a kind of computer, although most of its properties are still unknown. It is far from easy to characterize the brain as a computer, and such an attempt should not be taken too lightly. The brain does compute functions, but not in the way the applied problems of classical artificial intelligence are solved. When we speak of the brain as a computer, we do not mean a sequential digital computer that must be programmed and that has a clear separation between software and hardware; nor do we mean a computer that manipulates symbols or follows certain rules. The brain is a computer of a fundamentally different kind.

How the brain captures the semantic content of information is not yet known, but it is clear that this problem goes far beyond linguistics and is not limited to humans as a species. A small patch of freshly turned earth means, to man and coyote alike, that a gopher is somewhere nearby; an echo with certain spectral characteristics indicates to a bat the presence of a moth. To develop a theory of meaning, we need to know more about how neurons encode and transform sensory signals, about the neural basis of memory, learning, and emotion, and about the relation of these factors to the motor system. A neurophysiologically grounded theory of meaning may even require revising our intuitions, which now seem so unshakable to us and which Searle uses so freely in his reasoning. Such revisions are not uncommon in the history of science.

Can science create artificial intelligence using what is known about the nervous system? We see no fundamental obstacle on this path. Searle apparently agrees, but with a caveat: “Any other system capable of generating intelligence must have causal properties (at least) equivalent to the corresponding properties of the brain.” At the end of this article we will consider that statement. We believe Searle is not arguing that a successful AI system must have all the causal properties of the brain, such as the ability to smell bad when rotting, the ability to harbor viruses, or the ability to turn yellow under the action of horseradish peroxidase. Requiring full equivalence would be like requiring that an artificial flying device be able to lay eggs.

He probably had in mind only the requirement that an artificial mind have all the causal properties that, as he put it, belong to a conscious mind. But which ones exactly? And so we are back to the dispute about what does and does not belong to the conscious mind. This is a fair place to argue, but the truth here should be found out empirically: try it and see what happens. Since we know so little about what exactly the thought process and semantics are, any certainty about which properties are relevant here would be premature. Searle hints several times that every level, including biochemistry, must be represented in any machine claiming to be artificial intelligence. That is obviously too strong a requirement. An artificial brain may achieve the same effect without using biochemical mechanisms.

This possibility was demonstrated in the work of Carver Mead at the California Institute of Technology. Mead and his colleagues used analog microelectronic devices to create an artificial retina and an artificial cochlea. (In animals, the retina and cochlea are not mere transducers: complex parallel processing goes on in both systems.) These devices are no longer the mere simulations in a minicomputer at which Searle chuckles; they are real information-processing elements responding in real time to real signals: light in the case of the retina, sound in the case of the cochlea. The circuit designs are based on the known anatomical and physiological properties of the cat's retina and the barn owl's cochlea, and their outputs are extremely close to the known outputs of the organs they model.

These microcircuits use no neurotransmitters, so neurotransmitters appear not to be necessary for achieving the desired results. Of course, we cannot say that the artificial retina sees anything, since its output does not go to an artificial thalamus or cerebral cortex, and so on. Whether a whole artificial brain can be built along the lines of the Mead program is not yet known, but at present we have no evidence that the absence of biochemical mechanisms in a system makes this approach unrealistic.

The nervous system spans a whole range of organization, from neurotransmitter molecules (below) to the entire brain and spinal cord. Intermediate levels contain individual neurons and neural circuits, such as those that implement the selectivity of perception of visual stimuli (in the center), and systems consisting of many circuits, similar to those that serve the functions of speech (top right). Only through research can one establish how closely an artificial system is able to reproduce biological systems that have a mind.

Like Searle, we reject the Turing test as a sufficient criterion for the presence of a conscious mind. On one level, our reasons for doing so are similar to his: we agree that it matters very much how the function defined by input and output is implemented; it is important that the right processes take place in the machine. On another level, we are guided by completely different considerations. Searle bases his position on the presence or absence of semantic content on common-sense intuitions. Our point of view rests on the specific failures of classical SM machines and the specific merits of machines whose architecture is closer to the structure of the brain. A comparison of these different types of machines shows that some computational strategies have a huge and decisive advantage over others with regard to typical mental tasks. These advantages, established empirically, admit of no doubt. The brain obviously exploits these computational advantages systematically. But it is by no means necessarily the only physical system capable of exploiting them. The idea of creating artificial intelligence in a non-biological but essentially parallel machine remains very tempting and quite promising.

Artificial intelligence is a field of science that designs machines, computers, and hardware with intelligence ranging from the simplest to the humanoid. Although the concept of intelligent machines originated in ancient Greek mythology, the modern history of artificial intelligence began with the development of computers. The term was coined in 1956 at the first artificial intelligence conference.

Decades later, scientists continue to explore the still elusive glimpses of machine intelligence, even though the question "can a machine think?" still arouses the widest debate.


It is worth noting that, contrary to popular belief, not all carriers of artificial intelligence are humanoid robots or fantastic operating systems with the voice of Scarlett Johansson. Let's go through the basic skills inherent in AI.

Problem solving

One of the basic qualities of AI is the ability to solve problems. To give the machine this ability, scientists have equipped it with algorithms that mimic human thinking and use the concepts of probability, economics and statistics.

Approaches include models inspired by neural networks in the brain, the power of machine learning and pattern recognition, and statistical approaches that use mathematical tools and languages to solve problems.

Machine learning

Another basic point of AI is the ability of a machine to learn. So far, there is no single approach whereby a computer can be programmed to receive information, acquire knowledge, and tailor behavior accordingly - rather, there are a number of approaches based on algorithms.

One of the important methods of machine learning is so-called deep learning, an AI method based on neural-network theory and consisting of intricate layers of interconnected nodes. Apple's Siri is one example of deep learning in action; Google recently acquired DeepMind, a startup specializing in advanced AI learning algorithms; and Netflix is also investing in deep learning.

Language processing

Natural Language Processing (NLP) gives a machine the ability to read and understand human language, enabling human-machine communication.

Such systems enable computers to translate and communicate through signal processing, parsing, semantic analysis, and pragmatics (language in context).
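As a deliberately toy illustration of those stages (our own sketch, using only the Python standard library; real systems are vastly more sophisticated), the fragment below runs a sentence through tokenization, a parse against a tiny hypothetical grammar, and a crude semantic mapping.

```python
# Toy pipeline: tokenize -> parse against a tiny grammar -> map the
# parse to a simple predicate. Everything here is illustrative.

LEXICON = {"the": "DET", "cat": "NOUN", "dog": "NOUN",
           "sees": "VERB", "sleeps": "VERB"}

def tokenize(sentence):
    return sentence.lower().rstrip(".").split()

def parse(tokens):
    # Accept only "DET NOUN VERB" or "DET NOUN VERB DET NOUN".
    tags = [LEXICON.get(t) for t in tokens]
    if tags in (["DET", "NOUN", "VERB"],
                ["DET", "NOUN", "VERB", "DET", "NOUN"]):
        return tags
    raise ValueError("sentence not covered by the toy grammar")

def semantics(tokens, tags):
    # Build verb(subject) or verb(subject, object) from the parse.
    nouns = [t for t, g in zip(tokens, tags) if g == "NOUN"]
    verb = next(t for t, g in zip(tokens, tags) if g == "VERB")
    return f"{verb}({', '.join(nouns)})"

tokens = tokenize("The cat sees the dog.")
print(semantics(tokens, parse(tokens)))  # -> sees(cat, dog)
```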

Movement and perception

The type of intelligence associated with movement and perception is closely related to robotics, which gives the machine not only cognitive but also sensory intelligence. This is made possible by navigation inputs, localization technology, and sensors such as cameras, microphones, and sonar, combined with object recognition. In recent years we have seen these technologies in many robots and in ocean and space exploration vehicles.

Social intelligence

Emotional and social skills represent another advanced level of artificial intelligence, one that allows the machine to take on even more human qualities. The SEMAINE project, for example, aims to give machines such social skills through what it calls SAL, a “Sensitive Artificial Listener.” This is an advanced dialogue system that, if it can be completed, will be able to perceive a person's facial expressions, gaze, and voice and adjust its behavior accordingly.

Creativity

The ability to think and act creatively is a distinctive human trait that many consider beyond the abilities of computers. Yet as an aspect of human intelligence, creativity applies to artificial intelligence as well.

It is said that machines can be given the ability to produce valuable and innovative ideas through three models: combinational, exploratory, and transformational. Exactly how this will be implemented, we will see in the future. After all, the AARON machine is already producing museum-grade works of art.

Improvisation as a human activity is “a prototype of creative behavior,” says Shelley Carson, a member of the psychology department at Harvard University. In her book Your Creative Brain she writes that at a basic level each of us improvises, since life presents many situations that demand it. On the road, for example, you must instantly make the one right decision to avoid a collision, drawing on your experience. But creative improvisation is something more: it generates new, unexpected ideas.

A painting by AARON



The robot AARON was created by the renowned artist Harold Cohen. At its lowest level, his invention computed algorithms for drawing the lines and shapes from which its pictures were composed. Later came a more advanced robot artist named Action Jackson, which painted canvases resembling the works of Jackson Pollock. And although the debate about the artistic value of such works has not subsided to this day, the fact remains: robots can create.

Moreover, some modern forms of artificial intelligence already seem capable of great things. Siri on the iPhone, for example, not only processes natural human speech but also adapts to each user individually, studying that user's character and habits; and IBM's Watson supercomputer won a million dollars on the quiz show Jeopardy!. Wouldn't such accomplished machines be able to handle improvisation?