
Artificial intelligence. Use of AI in public administration

Artificial intelligence (AI) is the science and technology of creating intelligent machines, especially intelligent computer programs. AI is related to the similar task of using computers to understand human intelligence, but it is not necessarily limited to biologically plausible methods.

What is artificial intelligence

Intelligence (from Latin intellectus: sensation, perception, understanding, concept, reason), or mind, is a quality of the psyche consisting of the ability to adapt to new situations, to learn and remember from experience, to understand and apply abstract concepts, and to use one's knowledge to manage one's environment. Intelligence is the general capacity for cognition and for solving difficulties, which unites all human cognitive abilities: sensation, perception, memory, representation, thinking and imagination.

In the early 1980s, the computer scientists Barr and Feigenbaum proposed the following definition of artificial intelligence (AI):


Later, a number of algorithms and software systems came to be classified as AI, their distinctive property being that they can solve certain problems in the way a person pondering their solution would.

The main properties of AI are understanding language, learning and the ability to think and, importantly, act.

AI is a complex of related technologies and processes that are developing qualitatively and rapidly, for example:

  • natural language text processing
  • expert systems
  • virtual agents (chatbots and virtual assistants)
  • recommendation systems.

Technological directions of AI. Deloitte data

AI Research

  • Main article: Artificial Intelligence Research

Standardization in AI

2018: Development of standards in the field of quantum communications, AI and smart city

On December 6, 2018, the Technical Committee “Cyber-Physical Systems”, based at RVC, together with the Regional Engineering Center “SafeNet”, began developing a set of standards for the markets of the National Technology Initiative (NTI) and the digital economy. By March 2019, technical standardization documents in the fields of quantum communications, AI and the smart city were planned, RVC reported.

Impact of artificial intelligence

Risk to the development of human civilization

Impact on the economy and business

  • The impact of artificial intelligence technologies on the economy and business

Impact on the labor market

Artificial Intelligence Bias

At the heart of everything called the practice of AI (machine translation, speech recognition, natural language processing, computer vision, driving automation and much more) lies deep learning. It is a subset of machine learning characterized by the use of neural network models, of which one can say, with some stretch, that they mimic the workings of the brain, so classifying them as AI is itself a stretch. Any neural network model is trained on large data sets and thus acquires certain “skills,” but how it uses them remains unclear to its creators, which ultimately becomes one of the most important problems for many deep learning applications. The reason is that such a model works with images formally, without any understanding of what it does. Is such a system AI, and can systems built on machine learning be trusted? The implications of the answer to the latter question extend beyond the scientific laboratory, which is why media attention to the phenomenon called AI bias has noticeably intensified.

Artificial Intelligence Technology Market

AI market in Russia

Global AI market

Areas of application of AI

The areas of application of AI are quite broad, covering both familiar technologies and emerging new areas that are still far from mass application; in other words, the entire range of solutions, from vacuum cleaners to space stations. Their diversity can be divided according to the criterion of key points of development.

AI is not a monolithic subject area. Moreover, some technological areas of AI emerge as new sub-sectors of the economy and as separate entities, while simultaneously serving most other sectors of the economy.

Main commercial applications of artificial intelligence technologies

The development of the use of AI leads to the adaptation of technologies in classical sectors of the economy along the entire value chain and transforms them, leading to the algorithmization of almost all functionality, from logistics to company management.

Using AI for Defense and Military Affairs

Use in education

Using AI in business

AI in the electric power industry

  • At the design level: improved forecasting of generation and demand for energy resources, assessment of the reliability of power generating equipment, automation of increased generation when demand surges.
  • At the production level: optimization of preventive maintenance of equipment, increasing generation efficiency, reducing losses, preventing theft of energy resources.
  • At the promotion level: optimization of pricing depending on the time of day and dynamic billing.
  • At the level of service provision: automatic selection of the most profitable supplier, detailed consumption statistics, automated customer service, optimization of energy consumption taking into account the customer’s habits and behavior.

AI in manufacturing

  • At the design level: increasing the efficiency of new product development, automated supplier assessment and analysis of spare parts requirements.
  • At the production level: improving the process of completing tasks, automating assembly lines, reducing the number of errors, reducing delivery times for raw materials.
  • At the promotion level: forecasting the volume of support and maintenance services, pricing management.
  • At the service delivery level: improved route planning for vehicle fleets, forecasting of demand for fleet resources, improving the quality of training of service engineers.

AI in banks

  • Pattern recognition - used, among other things, to recognize customers in branches and present them with specialized offers.

Main commercial areas of application of artificial intelligence technologies in banks

AI in transport

  • The auto industry is on the verge of a revolution: 5 challenges of the era of unmanned driving

AI in logistics

AI in brewing

Use of AI in public administration

AI in forensics

  • Pattern recognition - used, among other things, to identify criminals in public spaces.
  • In May 2018, it became known that the Dutch police were using artificial intelligence to investigate complex crimes.

According to The Next Web, law enforcement agencies began digitizing more than 1,500 reports and 30 million pages related to unsolved cases. Materials dating from 1988 onward are being transferred into computer format, covering crimes that went unsolved for at least three years and whose perpetrator would face more than 12 years in prison.

Solve a complex crime in a day. Police are adopting AI

Once all the content is digitized, it will be connected to a machine learning system that will analyze the records and decide which cases have the most reliable evidence. This should reduce the time it takes to process cases and solve past and future crimes from several weeks to one day.

Artificial intelligence will categorize cases according to their “solvability” and indicate possible results of DNA testing. The plan is then to automate analysis in other areas of forensics, and perhaps even expand into areas such as social science and testimony.

In addition, as one of the system developers, Jeroen Hammer, said, API functions for partners may be released in the future.


The Dutch police have a special unit that specializes in developing new technologies for solving crimes. It was this unit that created the AI system for quickly searching for criminals based on evidence.

AI in the judiciary

Developments in the field of artificial intelligence will help radically change the judicial system, making it fairer and free from corruption schemes. This opinion was expressed in the summer of 2017 by Vladimir Krylov, Doctor of Technical Sciences, technical consultant at Artezio.

The scientist believes that existing solutions in the field of AI can be successfully applied in various spheres of the economy and public life. The expert points out that AI is successfully used in medicine, but in the future it can completely change the judicial system.

“Looking at news reports every day about developments in the field of AI, you are only amazed at the inexhaustible imagination and fruitfulness of researchers and developers in this field. Messages about scientific research are constantly interspersed with publications about new products breaking into the market and reports of amazing results obtained through the use of AI in various fields. If we talk about expected events, accompanied by noticeable hype in the media, in which AI will again become the hero of the news, then I probably won’t risk making technological forecasts. I can assume that the next event will be the appearance somewhere of an extremely competent court in the form of artificial intelligence, fair and incorruptible. This will happen, apparently, in 2020-2025. And the processes that will take place in this court will lead to unexpected reflections and the desire of many people to transfer to AI most of the processes of managing human society.”

The scientist recognizes the use of artificial intelligence in the judicial system as a “logical step” to develop legislative equality and justice. Machine intelligence is not subject to corruption and emotions, can strictly adhere to the legislative framework and make decisions taking into account many factors, including data that characterize the parties to the dispute. By analogy with the medical field, robot judges can operate with big data from government service repositories. It can be assumed that machine intelligence will be able to quickly process data and take into account significantly more factors than a human judge.

Expert psychologists, however, believe that the absence of an emotional component when considering court cases will negatively affect the quality of the decision. The verdict of a machine court may be too straightforward, not taking into account the importance of people’s feelings and moods.

Painting

In 2015, the Google team tested neural networks' ability to create images on their own. The artificial intelligence was first trained on a large number of different pictures. However, when the machine was “asked” to depict something on its own, it turned out that it interpreted the world around us in a somewhat strange way. For example, asked to draw dumbbells, the developers received an image in which the metal was connected by human hands. This probably happened because, at the training stage, the analyzed pictures of dumbbells included hands, and the neural network interpreted this incorrectly.

On February 26, 2016, at a special auction in San Francisco, Google representatives raised about $98 thousand from psychedelic paintings created by artificial intelligence. These funds were donated to charity. One of the machine's most successful pictures is presented below.

A painting painted by Google's artificial intelligence.

Artificial intelligence is one of the most popular topics in the technology world lately. Minds such as Elon Musk, Stephen Hawking and Steve Wozniak are seriously concerned about AI research and argue that its creation puts us in mortal danger. At the same time, science fiction and Hollywood films have given rise to many misconceptions around AI. Are we really in danger, and what inaccuracies do we commit when we imagine Skynet destroying the Earth, general unemployment or, on the contrary, prosperity and a carefree life? Gizmodo has looked into the myths about artificial intelligence. Here is a full translation of its article.

It has been called the most important test of machine intelligence since Deep Blue defeated Garry Kasparov in a chess match 20 years ago. Google AlphaGo defeated grandmaster Lee Sedol at the Go tournament with a crushing score of 4:1, showing how seriously artificial intelligence (AI) has advanced. The fateful day when machines will finally surpass humans in intelligence has never seemed so close. But we seem to be no closer to understanding the consequences of this epoch-making event.

In fact, we cling to serious and even dangerous misconceptions about artificial intelligence. Last year, SpaceX founder Elon Musk warned that AI could take over the world. His words caused a storm of comments from both opponents and supporters of this opinion. For such a monumental future event, there is a surprising amount of disagreement as to whether it will happen and, if so, in what form. This is especially troubling given the incredible benefits humanity could gain from AI and the potential risks. Unlike other human inventions, AI has the potential to change humanity or destroy us.

It's hard to know what to believe. But thanks to early work by computer scientists, neuroscientists, and AI theorists, a clearer picture is beginning to emerge. Here are some common misconceptions and myths about artificial intelligence.

Myth #1: “We will never create AI with intelligence comparable to humans”

Reality: We already have computers that have equaled or exceeded human capabilities at chess, Go, stock trading, and conversation. Computers and the algorithms that run them can only get better. It's only a matter of time before they surpass humans at any task.

New York University research psychologist Gary Marcus said that “literally everyone” who works in AI believes that machines will eventually beat us: “The only real difference between the enthusiasts and the skeptics is the timing estimates.” Futurists like Ray Kurzweil believe this could happen within a few decades; others say it will take centuries.

AI skeptics are not convincing when they say that this is an unsolvable technological problem and that there is something unique about the nature of the biological brain. Our brains are biological machines: they exist in the real world and adhere to the basic laws of physics. There is nothing unknowable about them.

Myth #2: “Artificial intelligence will have consciousness”

Reality: Most imagine that machine intelligence will be conscious and think the way humans think. Moreover, critics like Microsoft co-founder Paul Allen believe that we cannot yet achieve artificial general intelligence (capable of solving any mental problem a human can solve) because we lack a scientific theory of consciousness. But as Imperial College London cognitive robotics specialist Murray Shanahan says, we shouldn't equate the two concepts.

“Consciousness is certainly an amazing and important thing, but I don't believe it's necessary for human-level artificial intelligence. To be more precise, we use the word ‘consciousness’ to refer to several psychological and cognitive attributes that a person ‘comes with’,” explains the scientist.

It is possible to imagine a smart machine that lacks one or more of these features. Ultimately, we may create incredibly intelligent AI that is unable to perceive the world subjectively and consciously. Shanahan argues that mind and consciousness can be combined in a machine, but we must not forget that these are two different concepts.

Just because a machine passes the Turing Test, in which it is indistinguishable from a human, does not mean it is conscious. To us, advanced AI may appear conscious, but it will be no more self-aware than a rock or a calculator.

Myth #3: “We shouldn’t be afraid of AI”

Reality: In January, Facebook founder Mark Zuckerberg said we shouldn't be afraid of AI because it will do an incredible amount of good things for the world. He's half right. We will benefit enormously from AI, from self-driving cars to the creation of new drugs, but there is no guarantee that every AI implementation will be benign.

A highly intelligent system may know everything about a specific task, such as solving a vexing financial problem or hacking an enemy defense system. But outside the boundaries of these specializations it will be deeply ignorant and unconscious. Google's DeepMind system is an expert in Go, but it has no ability or reason to explore areas outside its specialization.

Many of these systems may not be subject to security considerations. A good example is the complex and powerful Stuxnet virus, a militarized worm developed by the Israeli and US militaries to infiltrate and sabotage Iranian nuclear power plants. This virus somehow (deliberately or accidentally) infected a Russian nuclear power plant.

Another example is the Flame program, used for cyber espionage in the Middle East. It's easy to imagine future versions of Stuxnet or Flame going beyond their intended purpose and causing massive harm to sensitive infrastructure. (To be clear, these viruses are not AI, but in the future they may have it, hence the concern).

The Flame virus was used for cyber espionage in the Middle East. Photo: Wired

Myth #4: “Artificial superintelligence will be too smart to make mistakes”

Reality: Richard Loosemore, AI researcher and founder of Surfing Samurai Robots, believes that most AI doomsday scenarios are incoherent. They are always built on the assumption that the AI says: “I know that the destruction of humanity is caused by a flaw in my design, but I am forced to do it anyway.” Loosemore says that if an AI behaves like this while reasoning about our destruction, then such logical contradictions will haunt it all its life. This in turn degrades its knowledge base and makes it too stupid to create a dangerous situation. The scientist also argues that people who say “AI can only do what it is programmed to do” are just as mistaken as their colleagues at the dawn of the computer era, who used this phrase to argue that computers were not capable of demonstrating the slightest flexibility.

Peter McIntyre and Stuart Armstrong, who work at the Future of Humanity Institute at Oxford University, disagree with Loosemore. They argue that AI is largely bound by how it is programmed. McIntyre and Armstrong believe that an AI will not be able to make such mistakes, nor be too stupid to know what we expect from it.

“By definition, artificial superintelligence (ASI) is a subject with intelligence significantly greater than that of the best human brain in any field of knowledge. It will know exactly what we wanted it to do,” says McIntyre. Both scientists believe that AI will only do what it is programmed to do. But if it becomes smart enough, it will understand how different this is from the spirit of the law and from the intentions of the people who wrote it.

McIntyre compared the future situation of humans and AI to the current human-mouse interaction. The mouse's goal is to seek food and shelter, but this often conflicts with the desires of the human who wants a pest-free home. “We're smart enough to understand some of the mice's goals. So the ASI will also understand our desires, but be indifferent to them,” says the scientist.

As the plot of the movie Ex Machina shows, it will be extremely difficult for a person to hold onto a smarter AI

Myth #5: “A simple patch will solve the problem of AI control”

Reality: By creating artificial intelligence smarter than humans, we will face a problem known as the “control problem.” Futurists and AI theorists fall into a state of complete confusion if you ask them how we will contain and limit ASI if one appears. Or how to make sure that he will be friendly towards people. Recently, researchers at the Georgia Institute of Technology naively suggested that AI could adopt human values ​​and social rules by reading simple stories. In reality, it will be much more difficult.

“There have been a lot of simple tricks proposed that could ‘solve’ the whole AI control problem,” says Armstrong. Examples included programming an ASI so that its purpose was to please people, or so that it simply functioned as a tool in the hands of a person. Another option is to integrate the concepts of love or respect into the source code. To prevent AI from adopting a simplistic, one-sided view of the world, it has been proposed to program it to value intellectual, cultural and social diversity.

But these solutions are too simple, like an attempt to cram all the complexity of human likes and dislikes into one superficial definition. Try, for example, to come up with a clear, logical, and workable definition of “respect.” This is extremely difficult.

The machines in The Matrix could easily destroy humanity

Myth #6: “Artificial intelligence will destroy us”

Reality: There is no guarantee that AI will destroy us, or that we will not be able to find a way to control it. As AI theorist Eliezer Yudkowsky said, “AI neither loves nor hates you, but you are made of atoms that it can use for other purposes.”

In his book “Superintelligence: Paths, Dangers, Strategies,” Oxford philosopher Nick Bostrom wrote that true artificial superintelligence, once it emerges, will pose greater risks than any other human invention. Prominent minds like Elon Musk, Bill Gates and Stephen Hawking (the latter of whom warned that AI could be our “worst mistake in history”) have also expressed concern.

McIntyre said that for most of the goals an ASI might have, there would be good reasons to get rid of people.

“An AI can predict, quite correctly, that we don't want it to maximize the profits of a particular company at any cost to customers, the environment and animals. Therefore, it has a strong incentive to ensure that it is not interrupted, interfered with, turned off, or changed in its goals, since this would prevent its original goals from being achieved,” McIntyre argues.

Unless the ASI's goals closely mirror our own, it will have a good reason to prevent us from stopping it. And considering that its level of intelligence significantly exceeds ours, there is nothing we could do about it.

No one knows what form AI will take or how it might threaten humanity. As Musk noted, artificial intelligence can be used to control, regulate and monitor other AI. Or it may be imbued with human values ​​or an overriding desire to be friendly to people.

Myth #7: “Artificial superintelligence will be friendly”

Reality: The philosopher Immanuel Kant believed that reason is strongly correlated with morality. In his study “The Singularity: A Philosophical Analysis,” the philosopher and cognitive scientist David Chalmers took Kant's famous idea and applied it to the emerging artificial superintelligence:

“If this is true... we can expect an intelligence explosion to lead to a moral explosion. We can then expect the emerging ASI systems to be super-moral as well as super-intelligent, which allows us to expect good from them.”

But the idea that advanced AI will be enlightened and kind is not very plausible at its core. As Armstrong noted, there are many smart war criminals. The connection between intelligence and morality does not seem to hold among humans, so he questions whether this principle would operate among other intelligent forms.

“Intelligent people who behave immorally can cause pain on a much larger scale than their dumber counterparts. Intelligence simply gives them the ability to be bad more cleverly; it does not turn them into good people,” says Armstrong.

As MacIntyre explained, a subject's ability to achieve a goal is not relevant to whether the goal is reasonable to begin with. “We will be very lucky if our AIs are uniquely gifted and their level of morality increases along with their intelligence. Relying on luck is not the best approach for something that could shape our future,” he says.

Myth #8: “The risks of AI and robotics are equal”

Reality: This is a particularly common mistake perpetuated by uncritical media and Hollywood films like The Terminator.

If an artificial superintelligence like Skynet really wanted to destroy humanity, it wouldn't use androids with six-barreled machine guns. It would be much more effective to send a biological plague or nanotechnological gray goo. Or simply destroy the atmosphere.

Artificial intelligence is potentially dangerous not because it can affect the development of robotics, but because of how its appearance will affect the world in general.

Myth #9: “The portrayal of AI in science fiction is an accurate representation of the future.”

Many kinds of minds. Image: Eliezer Yudkowsky

Of course, authors and futurists have used science fiction to make fantastic predictions, but the event horizon that ASI establishes is a completely different story. Moreover, the non-human nature of AI makes it impossible for us to know, and therefore predict, its nature and form.

To amuse us stupid humans, science fiction depicts most AIs as being similar to us. “There is a spectrum of all possible minds. Even among humans, you are quite different from your neighbor, but that variation is nothing compared to all the minds that can exist,” says McIntyre.

Most science fiction does not have to be scientifically accurate to tell a compelling story. The conflict usually unfolds between heroes of similar strength. “Imagine how boring a story would be where an AI with no consciousness, joy or hatred, ended humanity without any resistance to achieve an uninteresting goal,” Armstrong narrates, yawning.

Hundreds of robots work at the Tesla factory

Myth #10: “It’s terrible that AI will take all our jobs.”

Reality: The ability of AI to automate much of what we do and its potential to destroy humanity are two very different things. But according to Martin Ford, author of “Rise of the Robots: Technology and the Threat of a Jobless Future,” they are often viewed as a whole. It is fine to think about the distant future of AI, as long as it doesn't distract us from the challenges we'll face in the coming decades. Chief among them is mass automation.

No one doubts that artificial intelligence will replace many existing jobs, from factory worker to the upper echelons of white-collar workers. Some experts predict that half of all US jobs are at risk of automation in the near future.

But this does not mean that we cannot cope with the shock. In general, getting rid of most of our work, both physical and mental, is a quasi-utopian goal for our species.

“AI will destroy a lot of jobs within a couple of decades, but that's not a bad thing,” says Miller. Self-driving cars will replace truck drivers, which will reduce delivery costs and, as a result, make many products cheaper. “If you make your living driving a truck, you will lose; but everyone else, on the contrary, will be able to buy more goods for the same salary. And the money they save will be spent on other goods and services that will create new jobs for people,” says Miller.

In all likelihood, artificial intelligence will create new opportunities for producing goods, freeing people to do other things. Advances in AI will be accompanied by advances in other areas, especially manufacturing. In the future, it will become easier, not harder, for us to meet our basic needs.

Artificial intelligence

Artificial intelligence is a branch of computer science that studies the possibility of providing intelligent reasoning and action using computer systems and other artificial devices. In most cases, the algorithm for solving the problem is unknown in advance.

There is no exact definition of this science, since the question of the nature and status of human intelligence has not been resolved in philosophy. Nor is there an exact criterion for when computers achieve “intelligence,” although at the dawn of artificial intelligence a number of hypotheses were proposed, for example, the Turing test or the Newell-Simon hypothesis. At the moment there are many approaches both to understanding the problem of AI and to creating intelligent systems.

Thus, one of the classifications identifies two approaches to AI development:

top-down, semiotic - creation of symbolic systems that model high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;

bottom-up, biological - the study of neural networks and evolutionary computations that model intelligent behavior based on smaller "non-intelligent" elements.

This science is related to psychology, neurophysiology, transhumanism and others. Like all computer sciences, it uses mathematics. Philosophy and robotics are of particular importance to it.

Artificial intelligence is a very young field of research, launched in 1956. Its historical path resembles a sinusoid, with each “takeoff” initiated by some new idea. At present its development is in decline, giving way to the application of already achieved results in other areas of science, industry, business and even everyday life.

Study approaches

There are different approaches to building AI systems. At the moment, there are 4 quite different approaches:

1. Logical approach. The basis for the logical approach is Boolean algebra. Every programmer is familiar with it and with logical operators from the time they mastered the IF statement. Boolean algebra received its further development in the form of the predicate calculus, in which it was extended by introducing subject symbols, relations between them, and quantifiers of existence and universality. Almost every AI system built on a logical principle is a theorem-proving machine. The source data is stored in a database in the form of axioms, with logical inference rules as relations between them. In addition, each such machine has a goal-generation unit, and the inference system tries to prove this goal as a theorem. If the goal is proven, tracing the applied rules yields the chain of actions needed to achieve it (such systems are known as expert systems). The power of such a system is determined by the capabilities of the goal generator and the theorem-proving machine. A relatively new direction, fuzzy logic, gives the logical approach greater expressiveness. Its main difference is that the truth value of a statement can take, in addition to yes/no (1/0), intermediate values: I don't know (0.5), the patient is more likely alive than dead (0.75), the patient is more likely dead than alive (0.25). This approach is closer to human thinking, since humans rarely answer questions with only yes or no.
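
To make the fuzzy-logic point concrete, here is a minimal sketch in Python of how such intermediate truth values combine, assuming the standard min/max (Zadeh) operators; the patient figures come from the paragraph above, while the variable names are illustrative.

```python
# Minimal sketch of fuzzy truth values using the standard
# Zadeh operators (min/max); variable names are illustrative.

def fuzzy_and(a: float, b: float) -> float:
    """Fuzzy conjunction: a compound claim is only as true as its weakest part."""
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    """Fuzzy disjunction: as true as its strongest part."""
    return max(a, b)

def fuzzy_not(a: float) -> float:
    """Fuzzy negation."""
    return 1.0 - a

# Truth values from the text: 1 = yes, 0 = no, 0.5 = "don't know",
# 0.75 = "the patient is more likely alive than dead".
alive = 0.75
stable = 0.5   # hypothetical second statement

print(fuzzy_and(alive, stable))  # 0.5
print(fuzzy_or(alive, stable))   # 0.75
print(fuzzy_not(alive))          # 0.25 -> "more likely dead than alive"
```

Under these operators the paragraph's example values compose consistently: the negation of 0.75 (“more likely alive”) is exactly the 0.25 (“more likely dead”) mentioned in the text.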

2. Structural approach. By this we mean attempts to build AI by modeling the structure of the human brain. One of the first such attempts was Frank Rosenblatt's perceptron. The main modeled structural unit in perceptrons (as in most other brain-modeling variants) is the neuron. Later, other models arose, known to most under the term neural networks (NNs). These models differ in the structure of individual neurons, in the topology of connections between them, and in learning algorithms. Among the best-known NN variants today are NNs with backpropagation of errors, Hopfield networks, and stochastic neural networks. In a broader sense, this approach is known as connectionism.
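
As a concrete illustration of the structural approach, here is a minimal sketch of Rosenblatt's perceptron learning rule; the AND dataset, learning rate and epoch count are illustrative assumptions, not taken from the original literature.

```python
# Minimal sketch of Rosenblatt's perceptron learning rule.
# The AND dataset, learning rate and epoch count are illustrative.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]   # weights for the two inputs
    b = 0.0          # bias
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: the "neuron" fires if the weighted sum exceeds 0.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Nudge the weights in the direction that reduces the error.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Logical AND: a linearly separable toy problem.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, t in data:
    print(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0)
```

For linearly separable data like AND, the perceptron convergence theorem guarantees this loop finds a separating set of weights; XOR, famously, is not separable, and the single-neuron rule never converges on it.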

3. Evolutionary approach. When building AI systems under this approach, the main attention is paid to constructing an initial model and the rules by which it can change (evolve). The model can be built by a variety of methods: it can be a neural network, a set of logical rules, or any other model. We then run the computer, which evaluates the models, selects the best of them, and generates new models from these according to various rules. Among evolutionary algorithms, the genetic algorithm is considered classic.
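
A minimal sketch of the classic genetic algorithm mentioned above, applied to the toy “OneMax” problem (maximize the number of 1s in a bit string); population size, mutation rate and the selection scheme are illustrative choices.

```python
import random

# Minimal genetic-algorithm sketch on the toy "OneMax" problem:
# maximize the number of 1s in a bit string. All parameters are
# illustrative assumptions.

GENES, POP, GENERATIONS, MUT = 20, 30, 40, 0.02

def fitness(ind):
    return sum(ind)

def crossover(a, b):
    cut = random.randint(1, GENES - 1)   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind):
    # Flip each bit with small probability MUT.
    return [g ^ 1 if random.random() < MUT else g for g in ind]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    # Selection: keep the better half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), best)
```

Evaluation, selection, crossover and mutation are exactly the ingredients the paragraph describes; any of them can be swapped out without changing the overall scheme.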

4. Simulation approach. This approach is classic for cybernetics, with one of its basic concepts being the black box. The object whose behavior is simulated is precisely a “black box”: it does not matter to us what the object and the model have inside or how they function; the main thing is that our model behaves the same way in similar situations. Thus another human ability is modeled here: the ability to copy what others do without going into detail about why it is needed. Often this ability saves a person a lot of time, especially early in life.
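
A minimal sketch of the black-box idea: record the behavior of an observed system and reproduce it by analogy, without modeling its internals. The thermostat-like “box” and its traces below are invented purely for illustration.

```python
# Minimal black-box imitation sketch. The observed system and its
# recorded traces are made-up illustrations.

def observed_controller(temp: float) -> str:
    """The 'black box' we can only watch from outside."""
    return "heat" if temp < 18.0 else "idle"

# Record (situation, action) pairs by observing the box.
traces = [(t, observed_controller(t)) for t in range(0, 35, 2)]

def imitate(temp: float) -> str:
    """Copy the action taken in the most similar recorded situation,
    without asking why the original system chose it."""
    nearest = min(traces, key=lambda pair: abs(pair[0] - temp))
    return nearest[1]

print(imitate(15.0))  # behaves like the box: "heat"
print(imitate(25.0))  # "idle"
```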

Within the framework of hybrid intelligent systems, attempts are made to combine these areas. Expert inference rules can be generated by neural networks, and production rules can be obtained using statistical learning.

A promising new approach called intelligence amplification views the achievement of AI through evolutionary development as a side effect of technology enhancing human intelligence.

Research directions

Analyzing the history of AI, we can highlight such a broad area as reasoning modeling. For many years, the development of this science has moved precisely along this path, and now it is one of the most developed areas in modern AI. Modeling reasoning involves the creation of symbolic systems, the input of which is a certain problem, and the output requires its solution. As a rule, the proposed task has already been formalized, i.e., translated into mathematical form, but either does not have a solution algorithm, or it is too complex, time-consuming, etc. This area includes: proof of theorems, decision making and game theory, planning and dispatching, forecasting.
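
Planning and theorem proving in this sense reduce to search over a formalized state space. Here is a minimal sketch: breadth-first search solving the classic two-jug measuring puzzle, an illustrative stand-in for the formalized problems the paragraph describes.

```python
from collections import deque

# Minimal sketch of reasoning as state-space search: jugs of 3 and
# 5 litres, goal: measure exactly 4. The puzzle is an illustrative
# choice, not tied to any particular system.

CAP = (3, 5)
GOAL = 4

def moves(state):
    a, b = state
    yield ("fill A", (CAP[0], b))
    yield ("fill B", (a, CAP[1]))
    yield ("empty A", (0, b))
    yield ("empty B", (a, 0))
    pour = min(a, CAP[1] - b)      # pour A into B
    yield ("A->B", (a - pour, b + pour))
    pour = min(b, CAP[0] - a)      # pour B into A
    yield ("B->A", (a + pour, b - pour))

def plan(start=(0, 0)):
    # Breadth-first search guarantees a shortest action sequence.
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if GOAL in state:
            return path
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))

print(plan())  # a shortest sequence of actions, e.g. ending with 4 litres in jug B
```

The returned action list is exactly the kind of “solution at the output” the paragraph refers to.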

An important area is natural language processing, which involves analyzing the capabilities of understanding, processing and generating texts in “human” language. In particular, the problem of machine translation of texts from one language to another has not yet been solved. In the modern world, the development of information retrieval methods plays an important role. By its nature, the original Turing test is related to this direction.
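
Since the paragraph singles out information retrieval, here is a minimal bag-of-words sketch: documents ranked by word overlap with a query. The three toy documents are invented; real systems add weighting (for example, TF-IDF) on top of the same idea.

```python
# Minimal bag-of-words retrieval sketch; the documents are invented.

docs = {
    "d1": "machine translation of texts from one language to another",
    "d2": "search engines retrieve documents relevant to a query",
    "d3": "the turing test measures conversational ability",
}

def score(query: str, text: str) -> int:
    # Count words shared between the query and the document.
    return len(set(query.lower().split()) & set(text.lower().split()))

def search(query: str):
    return sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)

print(search("machine translation of language"))  # 'd1' ranks first
```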

According to many scientists, an important property of intelligence is the ability to learn. Thus, knowledge engineering comes to the fore, combining the tasks of obtaining knowledge from simple information, its systematization and use. Advances in this area affect almost every other area of ​​AI research. Here, too, two important subareas cannot be overlooked. The first of them - machine learning - concerns the process of independent acquisition of knowledge by an intelligent system in the process of its operation. The second is associated with the creation of expert systems - programs that use specialized knowledge bases to obtain reliable conclusions on any problem.
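
A minimal sketch of the expert-system mechanism mentioned here: production rules applied to a working memory of facts by forward chaining. The toy medical rules are invented for illustration and are not from any real knowledge base.

```python
# Minimal expert-system sketch: forward chaining over production
# rules. The toy medical rules are invented for illustration.

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts: set) -> set:
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire a rule when all its conditions hold and it adds a new fact.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}))
# -> includes 'flu_suspected' and 'refer_to_doctor'
```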

There are great and interesting achievements in the field of modeling biological systems. Strictly speaking, this includes several independent directions. Neural networks are used to solve fuzzy and complex problems, such as recognizing geometric shapes or clustering objects. The genetic approach is based on the idea that an algorithm can become more efficient if it borrows better characteristics from other algorithms (“parents”). A relatively new approach, where the task is to create an autonomous program - an agent - that interacts with the external environment, is called the agent approach. And if many “not very intelligent” agents are made to interact together properly, the result can be “ant” intelligence.

Pattern recognition problems are already partially solved in other areas. This includes character recognition, handwritten text, speech, and text analysis. Particularly worth mentioning is computer vision, which is related to machine learning and robotics.

In general, robotics and artificial intelligence are often associated with each other. The integration of these two sciences, the creation of intelligent robots, can be considered another area of ​​AI.

Machine creativity stands apart, because the nature of human creativity is even less studied than the nature of intelligence. Nevertheless, this area exists, and it poses the problems of the computer composition of music, of literary works (often poetry or fairy tales), and of artistic creativity.

Finally, there are many applications of artificial intelligence, each of which forms an almost independent field. Examples include programming intelligence in computer games, nonlinear control, and intelligent security systems.

It can be seen that many areas of research overlap. This is typical for any science. But in artificial intelligence, the relationship between seemingly different areas is especially strong, and this is associated with the philosophical debate about strong and weak AI.

At the beginning of the 17th century, Rene Descartes suggested that an animal is a kind of complex mechanism, thereby formulating a mechanistic theory. In 1623, Wilhelm Schickard built the first mechanical digital computer, followed by machines by Blaise Pascal (1643) and Leibniz (1671). Leibniz was also the first to describe the modern binary number system, although before him many great scientists were periodically interested in this system. In the 19th century, Charles Babbage and Ada Lovelace worked on a programmable mechanical computer.

In 1910-1913 Bertrand Russell and A. N. Whitehead published Principia Mathematica, which revolutionized formal logic. In 1941, Konrad Zuse built the first working software-controlled computer. Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity in 1943, which laid the foundation for neural networks.

Current state of affairs

At the moment (2008), the creation of artificial intelligence (in the original sense of the word; expert systems and chess programs do not belong here) faces a shortage of ideas. Almost all approaches have been tried, but no research group has yet come close to producing artificial intelligence.

Some of the most impressive civilian AI systems are:

Deep Blue - defeated the world chess champion. (The match between Kasparov and the supercomputer brought satisfaction to neither computer scientists nor chess players, and the system was not acknowledged by Kasparov, although the original compact chess programs are an integral element of chess creativity. The IBM line of supercomputers then appeared in the brute-force projects Blue Gene (molecular modeling) and in the modeling of the pyramidal cell system at the Swiss Blue Brain Center. This story is an example of the intricate and secretive relationship between AI, business, and national strategic objectives.)

Mycin was one of the early expert systems that could diagnose a small set of diseases, often as accurately as doctors.

20q is a project based on AI ideas, based on the classic game “20 Questions”. It became very popular after appearing on the Internet on the website 20q.net.

Speech recognition. Systems such as ViaVoice are capable of serving consumers.

Robots compete in a simplified form of football in the annual RoboCup tournament.

Application of AI

Banks use artificial intelligence (AI) systems in insurance (actuarial mathematics), in trading on the stock exchange, and in property management. In August 2001, robots beat humans in an impromptu trading competition (BBC News, 2001). Pattern recognition methods (including both more complex specialized methods and neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters, and air defense systems (target identification), as well as in a number of other national security tasks.

Computer game developers are forced to use AI of varying degrees of sophistication. Standard tasks of AI in games are finding a path in two-dimensional or three-dimensional space, simulating the behavior of a combat unit, calculating the correct economic strategy, and so on.

Prospects for AI

Two directions of AI development are visible:

the first is to solve problems associated with bringing specialized AI systems closer to human capabilities and integrating them, as realized by human nature;

the second is the creation of Artificial Intelligence proper, representing the integration of already created AI systems into a unified system capable of solving the problems of humanity.

Connections with other sciences

Artificial intelligence is closely related to transhumanism. And together with neurophysiology and cognitive psychology, it forms a more general science called cognitive science. Philosophy plays a special role in artificial intelligence.

Philosophical questions

The science of “creating artificial intelligence” could not help but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised. On the one hand, they are inextricably linked with this science; on the other, they introduce some chaos into it. Among AI researchers there is still no dominant point of view on the criteria of intelligence or the systematization of the goals and tasks to be solved; there is not even a strict definition of the science.

Can a machine think?

The most heated debate in the philosophy of artificial intelligence is the question of the possibility of thinking created by human hands. The question “Can a machine think?”, which prompted researchers to create the science of simulating the human mind, was posed by Alan Turing in 1950. The two main points of view on this issue are called the hypotheses of strong and weak artificial intelligence.

The term “strong artificial intelligence” was introduced by John Searle, and the approach is characterized in his words:

“Moreover, such a program would not just be a model of the mind; it would, in the literal sense of the word, itself be a mind, in the same sense in which the human mind is a mind.”

In contrast, proponents of weak AI prefer to view programs only as tools that allow them to solve certain problems that do not require the full range of human cognitive abilities.

In his thought experiment “The Chinese Room,” John Searle shows that passing the Turing test is not a criterion for a machine to have a genuine thought process.

Thinking is the process of processing information stored in memory: analysis, synthesis and self-programming.

A similar position is taken by Roger Penrose, who in his book “The Emperor's New Mind” argues for the impossibility of obtaining the thinking process on the basis of formal systems.

There are different points of view on this issue. The analytical approach involves analyzing a person's higher nervous activity down to the lowest, indivisible level (a function of higher nervous activity, an elementary reaction to external irritants (stimuli), the excitation of synapses in a functionally connected set of neurons) and the subsequent reproduction of these functions.

Some experts mistake for intelligence the ability to make a rational, motivated choice under conditions of insufficient information. That is, a program of activity (not necessarily one implemented on modern computers) is considered intellectual simply if it can choose from a certain set of alternatives, for example, where to go in the case of “go left...”, “go right...”, “go straight...”.

Science of knowledge

Also, epistemology - the science of knowledge within the framework of philosophy - is closely related to the problems of artificial intelligence. Philosophers working on this topic are grappling with questions similar to those faced by AI engineers about how best to represent and use knowledge and information.

Attitudes towards AI in society

AI and religion

Among followers of Abrahamic religions, there are several points of view on the possibility of creating AI based on a structural approach.

According to one of them, the brain, whose workings such systems try to imitate, does not participate in the thinking process and is not the source of consciousness or of any other mental activity; on this view, creating AI based on the structural approach is impossible.

According to another point of view, the brain is involved in the thinking process, but in the form of a “transmitter” of information from the soul. The brain is responsible for such “simple” functions as unconditioned reflexes, response to pain, etc. Creating AI based on a structural approach is possible if the system being designed can perform “transfer” functions.

Both positions do not correspond to the data of modern science, because the concept of soul is not considered by modern science as a scientific category.

According to many Buddhists, AI is possible. Thus, the spiritual leader Dalai Lama XIV does not exclude the possibility of the existence of consciousness on a computer basis.

Raelites actively support developments in the field of artificial intelligence.

AI and science fiction

In science fiction literature, AI is most often depicted as a force that attempts to overthrow human power (Omnius, HAL 9000, Skynet, Colossus, The Matrix, and the Replicant) or a serving humanoid (C-3PO, Data, KITT, and KARR, Bicentennial Man). The inevitability of domination of the world by AI that has gotten out of control is disputed by such science fiction writers as Isaac Asimov and Kevin Warwick.

A curious vision of the future is presented in the novel “The Turing Option” by science fiction writer Harry Harrison and scientist Marvin Minsky. The authors discuss the loss of humanity in a person into whose brain a computer was implanted, and the acquisition of humanity by an AI machine into whose memory information from a human brain was copied.

Some science fiction writers, such as Vernor Vinge, have also speculated on the implications of the emergence of AI, which is likely to cause dramatic changes in society. This period is called the technological singularity.

Artificial intelligence

Artificial intelligence (English: Artificial intelligence, AI) is a branch of computer science that studies the possibility of providing intelligent reasoning and action using computer systems and other artificial devices.
In most cases, the algorithm for solving the problem is unknown in advance.
The first research related to artificial intelligence was undertaken almost immediately after the appearance of the first computers.
In 1910-13, Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic. In 1931, Kurt Gödel showed that a sufficiently complex formal system contains statements that nevertheless can be neither proven nor disproved within that system. Thus an AI system that establishes the truth of all statements by deducing them from axioms cannot prove those statements. Since humans can “see” the truth of such statements, AI came to be regarded as secondary. In 1941, Konrad Zuse built the first working program-controlled computer. Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity in 1943, which laid the foundation for neural networks.
In 1954, the American researcher A. Newell decided to write a program for playing chess. He shared this idea with RAND Corporation (www.rand.org) analysts J. Shaw and H. Simon, who offered Newell their help. As a theoretical basis for such a program, it was decided to use the method proposed in 1950 by Claude Shannon, the founder of information theory. A precise formalization of this method was carried out by Alan Turing, who modeled it by hand. A group of Dutch psychologists led by A. de Groot, who studied the playing styles of outstanding chess players, was involved in the work. After two years of joint work, this team created the programming language IPL1 - apparently the first symbolic language for list processing. Soon the first program was written that can be counted among the achievements in the field of artificial intelligence: the Logic Theorist (1956), designed for the automatic proof of theorems in the propositional calculus.
The actual program for playing chess, NSS, was completed in 1957. Its work was based on so-called heuristics (rules that allow one to make a choice in the absence of precise theoretical grounds) and descriptions of goals. The control algorithm tried to reduce the differences between assessments of the current situation and assessments of the goal or one of the subgoals.
In 1960, the same group, based on the principles used in NSS, wrote a program that its creators called GPS (General Problem Solver) - a universal problem solver. GPS could handle a number of puzzles, calculate indefinite integrals, and solve some other problems. These results attracted the attention of computing scientists. Programs appeared for automatically proving theorems in planimetry and solving algebraic problems (formulated in English).
John McCarthy of Stanford was interested in the mathematical foundations of these results and in symbolic computation in general. As a result, in 1963 he developed the LISP language (from List Processing), based on the use of a single list representation for programs and data, the use of expressions to define functions, and a bracket syntax.
Logicians also began to show interest in research in the field of artificial intelligence. In 1964, the work of the Leningrad logician Sergei Maslov, “An Inverse Method for Establishing Deducibility in Classical Predicate Calculus,” was published, in which he first proposed a method for automatically searching for proofs of theorems in the predicate calculus.
A year later, in 1965, J. A. Robinson's work appeared in the United States, devoted to a somewhat different method of automatically searching for proofs of theorems in first-order predicate calculus. The method was called the resolution method, and it served as the starting point for the creation of a new programming language with a built-in inference procedure: the Prolog language (PROLOG), in 1971.
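
The resolution rule itself is small enough to sketch. Below is a minimal propositional version in Python (Robinson's method proper works on first-order clauses with unification, which this sketch omits); the clause contents are illustrative.

```python
# Minimal sketch of the resolution rule for propositional clauses,
# the inference step underlying Robinson's method and Prolog.
# Clauses are frozensets of literals; "~p" is the negation of "p".

def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1: frozenset, c2: frozenset):
    """Yield every resolvent of two clauses: drop a complementary
    pair of literals and merge what remains."""
    for lit in c1:
        if negate(lit) in c2:
            yield (c1 - {lit}) | (c2 - {negate(lit)})

# From (p or q) and (~p or r) we may infer (q or r).
for resolvent in resolve(frozenset({"p", "q"}), frozenset({"~p", "r"})):
    print(sorted(resolvent))   # ['q', 'r']
```

Repeatedly applying this step until the empty clause appears is how a resolution prover refutes the negation of a goal.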
In 1966, in the USSR, Valentin Turchin developed the recursive-function language Refal, intended for describing languages and various kinds of processing on them. Although it was conceived as an algorithmic metalanguage, for the user it was, like LISP and Prolog, a language for processing symbolic information.
At the end of the 1960s, the first game-playing programs appeared, along with systems for elementary text analysis and for solving certain mathematical problems (geometry, integral calculus). In the complex search problems that arose, the number of options tried was sharply reduced by applying all kinds of heuristics and “common sense.” This approach came to be called heuristic programming. Further development of heuristic programming followed the path of complicating the algorithms and improving the heuristics. However, it soon became clear that there was a certain limit beyond which no improvement of heuristics or complication of the algorithm would improve the quality of the system and, most importantly, would not expand its capabilities. A program that plays chess will never play checkers or card games.
Gradually, researchers came to understand that all previously created programs lacked the most important thing: knowledge of the relevant field. Specialists solving problems achieve high results thanks to their knowledge and experience; if programs could access knowledge and apply it, they too would achieve high-quality work.
This understanding, which emerged in the early 70s, essentially meant a qualitative leap in work on artificial intelligence.
Fundamental considerations in this regard were expressed in 1977 at the 5th Joint Conference on Artificial Intelligence by the American scientist E. Feigenbaum.
Already by the mid-1970s, the first applied intelligent systems appeared that use various methods of knowledge representation to solve problems: expert systems. One of the first was the DENDRAL expert system, developed at Stanford University and designed to generate formulas of chemical compounds based on spectral analysis. DENDRAL is currently supplied to customers together with a spectrometer. The MYCIN system is intended for the diagnosis and treatment of infectious blood diseases. The PROSPECTOR system predicts mineral deposits; there are reports that with its help molybdenum deposits were discovered whose value exceeds $100 million. The water quality assessment system implemented on the basis of the Russian SIMER + MIR technology several years ago determined the causes of exceedances of maximum permissible concentrations of pollutants in the Moscow River in the Serebryany Bor area. The CASNET system is designed to diagnose and select treatment strategies for glaucoma, etc.
Currently, the development and implementation of expert systems has become an independent engineering field. Scientific research is concentrated in a number of areas, some of which are listed below.
The theory does not clearly define what exactly count as necessary and sufficient conditions for achieving intelligence, although there are a number of hypotheses on this score, for example, the Newell-Simon hypothesis. Typically, the implementation of intelligent systems is approached precisely from the point of view of modeling human intelligence. Thus, within artificial intelligence there are two main directions:
■ symbolic (semiotic, top-down) is based on modeling high-level human thinking processes, on the representation and use of knowledge;
■ neuro-cybernetic (neural network, bottom-up) is based on the modeling of individual low-level brain structures (neurons).
Thus, the ultimate goal of artificial intelligence is to build a computer intelligent system that would have a level of efficiency in solving informal problems comparable to or superior to a human one.
The most commonly used programming paradigms when building artificial intelligence systems are functional programming and logic programming. They differ from traditional structural and object-oriented approaches to the development of program logic by nonlinear derivation of solutions and low-level tools for supporting the analysis and synthesis of data structures.
We can distinguish two scientific schools with different approaches to the problem of AI: Conventional AI and Computational AI.
In conventional AI, machine self-learning methods based on formalism and statistical analysis are mainly used.
Conventional AI methods:
■ Expert systems: programs that, acting according to certain rules, process a large amount of information and, as a result, issue a conclusion based on it.
■ Case-based reasoning.
■ Bayesian networks: a statistical method for detecting patterns in data. For this purpose, primary information is used, contained either in network structures or in databases (see the sketch after this list).
■ Behavioral approach: a modular method of building AI systems, in which the system is divided into several relatively autonomous behavioral programs that are launched depending on changes in the external environment.
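
As promised above, here is a minimal sketch of the Bayesian reasoning such networks mechanize: updating a belief from evidence with Bayes' rule. The spam-filter numbers are invented for illustration.

```python
# Minimal sketch of Bayesian belief updating; the spam-filter
# probabilities below are invented for illustration.

p_spam = 0.4                  # prior: P(spam)
p_word_given_spam = 0.6       # P("offer" appears | spam)
p_word_given_ham = 0.05       # P("offer" appears | not spam)

# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(round(p_spam_given_word, 3))  # ~0.889: the evidence raises the belief
```

A Bayesian network chains many such updates together over a graph of dependent variables; the single-variable case above is the elementary step.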
Computational AI involves iterative development and training (for example, the selection of parameters in a connectionist network). Learning is empirically based and is associated with non-symbolic AI and soft computing.
Basic methods of computational AI:
■ Neural networks: systems with excellent recognition capabilities.
■ Fuzzy systems: techniques for reasoning under conditions of uncertainty (widely used in modern industrial and consumer control systems)
■ Evolutionary computation: uses concepts traditionally belonging to biology, such as population, mutation and natural selection, to create better solutions to a problem. These methods divide into evolutionary algorithms (for example, genetic algorithms) and swarm intelligence methods (for example, the ant colony algorithm).
Within the framework of hybrid intelligent systems, attempts are made to combine these two directions. Expert inference rules can be generated by neural networks, and production rules can be obtained using statistical learning.
Promising directions of artificial intelligence.
CBR (case-based reasoning) methods are already used in a variety of applied problems - in medicine, project management, for analyzing and reorganizing the environment, for developing consumer goods taking into account the preferences of different consumer groups, etc. We should expect applications of CBR methods for tasks of intelligent information retrieval, e-commerce (offering goods, creating virtual sales agencies), planning behavior in dynamic environments, layout, design, and synthesis of programs.
In addition, we should expect an increasing influence of AI ideas and methods on the machine analysis of natural-language texts (TA). This influence is likely to affect semantic analysis and the related methods of syntactic analysis; in this area it will manifest itself in taking the world model into account at the final stages of semantic analysis and in using knowledge of the domain and of situational information to reduce search at earlier stages (for example, when constructing parse trees).
The second "communication channel" between AI and AT is the use of machine learning methods in AT; the third is the use of case-based reasoning and argumentation-based reasoning to solve some AT problems, such as reducing noise and increasing the relevance of search results.
One of the most important and promising directions of artificial intelligence today is automatic behavior planning. The scope of automatic planning methods covers a wide variety of devices with a high degree of autonomy and goal-directed behavior, from household appliances to unmanned spacecraft for deep space exploration.

Sources used
1. Stuart Russell, Peter Norvig. Artificial Intelligence: A Modern Approach (AIMA), 2nd ed.: trans. from English. - M.: Williams Publishing House, 2005. - 1424 pp., with illus.
2. George F. Luger. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 4th ed.: trans. from English. - M.: Williams Publishing House, 2004.
3. Gennady Osipov, President of the Russian Association of Artificial Intelligence, permanent member of the European Coordinating Committee for Artificial Intelligence (ECCAI), Doctor of Physical and Mathematical Sciences, Professor. Artificial Intelligence: State of Research and a Look to the Future.

Artificial intelligence

Artificial intelligence (AI) - the science and technology of creating intelligent machines, especially intelligent computer programs.

AI is related to the similar task of using computers to understand human intelligence, but is not necessarily limited to biologically plausible methods.

AI is a scientific direction that develops methods allowing an electronic computer to solve intellectual problems of the kind solved by humans. The term "artificial intelligence" refers to a machine's capacity to solve human problems. Artificial intelligence is aimed at increasing the efficiency of various forms of human mental work.

The most common form of artificial intelligence is a computer programmed to respond on a specific topic. Such "expert systems" possess the human ability to perform the analytical work of an expert. Similarly, a word processor can detect spelling errors and can be "taught" new words. Closely adjacent to this scientific discipline is another, whose subject is sometimes called "artificial life". It deals with intelligence at a lower level. For example, a robot can be programmed to navigate in fog, that is, given the ability to physically interact with its environment.

The term "artificial intelligence" was first proposed at a seminar of the same name at Dartmouth College in the USA in 1956. Subsequently, various scientists offered the following definitions of artificial intelligence:

AI is a branch of computer science that is associated with the automation of intelligent behavior;

AI is the science of computation that makes perception, inference and action possible;

AI is an information technology associated with the processes of inference, learning and perception.

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By that time, many prerequisites for its emergence had already formed: philosophers had long debated the nature of man and the process of cognizing the world; neurophysiologists and psychologists had developed a number of theories about the workings of the human brain and of thinking; economists and mathematicians were asking questions about optimal calculation and about representing knowledge of the world in formalized form; finally, the foundation of the mathematical theory of computation - the theory of algorithms - was laid, and the first computers were created.

The main problem of artificial intelligence is the development of methods for representing and processing knowledge.

Artificial intelligence programs include:

Game programs (stochastic games, computer games);

Natural language programs - machine translation, text generation, speech processing;

Recognition programs - recognition of handwriting, images, cards;

Programs for creating and analyzing graphics, paintings, and musical works.

The following areas of artificial intelligence are distinguished:

Expert systems;

Neural networks;

Natural language systems;

Evolutionary methods and genetic algorithms;

Fuzzy sets;

Knowledge extraction systems.

Expert systems are focused on solving specific problems.

Neural networks implement neural network algorithms.

They are divided into:

General-purpose networks, which support about 30 neural network algorithms and are customized to solve specific problems;

Object-oriented networks - used for character recognition, production management, and forecasting situations in foreign exchange markets;

Hybrid networks - used in conjunction with particular software (Excel, Access, Lotus).

Natural language (NL) systems are divided into:

Natural-language interfaces to databases (translation of natural-language queries into SQL queries; a sketch follows this list);

Natural-language search in texts and content-based scanning of texts (used in Internet search engines, for example Google);

Scalable speech recognition tools (portable simultaneous interpreters);

Speech-processing components as service tools in software (for example, the Windows XP OS).
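
A natural-language interface to a database of the kind mentioned in the first item can be sketched as pattern matching over question templates. In the Python sketch below, the patterns and the table and column names are hypothetical; production systems use full syntactic and semantic analysis instead of such templates.
```python
import re

# Toy natural-language-to-SQL interface: match a question against
# simple patterns and fill in an SQL template. The patterns and the
# table/column names are hypothetical, for illustration only.

PATTERNS = [
    (re.compile(r"how many (\w+)", re.I),
     "SELECT COUNT(*) FROM {0};"),
    (re.compile(r"list all (\w+) where (\w+) is (\w+)", re.I),
     "SELECT * FROM {0} WHERE {1} = '{2}';"),
]

def to_sql(question):
    for pattern, template in PATTERNS:
        match = pattern.search(question)
        if match:
            return template.format(*match.groups())
    return None  # question not understood

print(to_sql("How many employees?"))
# SELECT COUNT(*) FROM employees;
print(to_sql("List all orders where status is open"))
# SELECT * FROM orders WHERE status = 'open';
```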

Fuzzy sets - systems that implement logical relationships between data. These software products are used to manage economic objects and to build expert systems and decision support systems.
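
The core idea of fuzzy sets - membership as a degree between 0 and 1 rather than a yes/no judgment - fits in a few lines of Python. The membership functions and the single control rule below are invented for illustration.
```python
# Fuzzy sets in miniature: membership is a degree in [0, 1], and rules
# combine degrees with min/max instead of strict true/false.
# The membership functions and the rule here are invented.

def warm(temp_c):
    """Degree to which a temperature counts as 'warm' (piecewise linear)."""
    return max(0.0, min(1.0, (temp_c - 15) / 10))  # 15 C -> 0, 25 C -> 1

def humid(rel_humidity):
    """Degree to which the air counts as 'humid'."""
    return max(0.0, min(1.0, (rel_humidity - 40) / 40))  # 40% -> 0, 80% -> 1

def fan_speed(temp_c, rel_humidity):
    # Fuzzy rule: IF warm AND humid THEN fan on (AND is modeled as min).
    return min(warm(temp_c), humid(rel_humidity))

print(fan_speed(22, 70))  # degrees 0.7 and 0.75 -> fan at 0.7 of full speed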

Genetic algorithms are methods for analyzing data that cannot be analyzed by standard methods. As a rule, they are used to process large amounts of information and to build predictive models; they are also used for scientific purposes in simulation modeling.

Knowledge extraction systems are used to process data from information repositories.

Some of the most famous AI systems are:

Deep Blue - defeated the world chess champion. The match between Kasparov and the supercomputer brought satisfaction neither to computer scientists nor to chess players, and the system was never acknowledged by Kasparov. IBM's line of supercomputers then showed itself in the brute-force projects Blue Gene (molecular modeling) and the modeling of the pyramidal cell system at the Swiss Blue Brain Center.

Watson - a promising IBM development capable of perceiving human speech and performing probabilistic search using a large number of algorithms. To demonstrate its capabilities, Watson took part in the American game show "Jeopardy!" (the analogue of "Svoya Igra" in Russia), where the system managed to win both games.

MYCIN - one of the early expert systems that could diagnose a small set of diseases, often as accurately as doctors.

20Q - a project based on AI ideas and on the classic game "20 Questions". It became very popular after appearing on the Internet at 20q.net.

Speech recognition. Systems such as ViaVoice are capable of serving consumers.

Robots compete in a simplified form of football in the annual RoboCup tournament.

Banks use artificial intelligence (AI) systems in insurance activity (actuarial mathematics), in trading on the stock exchange, and in property management. Pattern recognition methods (including both more complex specialized methods and neural networks) are widely used in optical and acoustic recognition (including of text and speech), in medical diagnostics, in spam filters, in air defense systems (target identification), and in a number of other national security tasks.

Computer game developers use AI to varying degrees of sophistication. This forms the concept of "Game Artificial Intelligence". Standard tasks of AI in games are finding a path in two-dimensional or three-dimensional space, simulating the behavior of a combat unit, calculating the correct economic strategy, and so on.

The largest scientific and research centers in the field of artificial intelligence:

United States of America (Massachusetts Institute of Technology);

Germany (the German Research Center for Artificial Intelligence, DFKI);

Japan (the National Institute of Advanced Industrial Science and Technology, AIST);

Russia (Scientific Council on Artificial Intelligence Methodology of the Russian Academy of Sciences).

Today, thanks to advances in the field of artificial intelligence, a large number of scientific developments have been created that significantly simplify people's lives. Speech or scanned text recognition, solving computationally complex problems in a short time and much more - all this has become available thanks to the development of artificial intelligence.

Replacing a human specialist with artificial intelligence systems - in particular expert systems - where this is permissible can significantly speed up production and reduce its cost. Artificial intelligence systems are objective: the results of their work do not depend on momentary mood or on the other subjective factors inherent in a person. But despite all of the above, one should not harbor illusions that human labor will be replaced by artificial intelligence in the near future. Experience shows that today artificial intelligence systems achieve the best results when working together with humans. After all, it is man, unlike artificial intelligence, who can think outside the box and creatively, and this has allowed him to develop and move forward throughout his history.

Sources used

1. www.aiportal.ru

3. ru.wikipedia.org

New evolutionary strategy for humanity

As John McCarthy points out: "The problem is that so far we cannot generally determine which computational procedures we want to call intelligent. We understand some mechanisms of intelligence and do not understand others. Therefore, intelligence within this science refers only to the computational component of the ability to achieve goals in the world."

At the same time, there is a point of view according to which intelligence can only be a biological phenomenon.

As T. A. Gavrilova, chairman of the St. Petersburg branch of the Russian Association of Artificial Intelligence, points out, in English the phrase artificial intelligence does not carry the slightly fantastic anthropomorphic coloring that it acquired in its rather unfortunate Russian translation. The word intelligence means "the ability to reason rationally," and not at all "intellect," for which there is the English equivalent intellect.

Participants of the Russian Association of Artificial Intelligence give the following definitions of artificial intelligence:

One of the particular definitions of intelligence, common to a person and a “machine,” can be formulated as follows: “Intelligence is the ability of a system to create programs (primarily heuristic) during self-learning to solve problems of a certain class of complexity and solve these problems.”

The simplest electronic devices are often described as having artificial intelligence merely to indicate the presence of sensors and automatic selection of operating modes. The word "artificial" in this case means that one should not expect the system to find a new mode of operation in a situation not foreseen by its developers.

Prerequisites for the development of artificial intelligence science

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By then, many prerequisites for its emergence had already formed: philosophers had long debated the nature of man and the process of understanding the world; neurophysiologists and psychologists had developed a number of theories about the workings of the human brain and of thinking; economists and mathematicians were asking questions about optimal calculation and about representing knowledge of the world in formalized form; finally, the foundation of the mathematical theory of computation - the theory of algorithms - was laid, and the first computers were created.

The computational speed of the new machines turned out to exceed human capabilities, so the question crept into the scientific community: what are the limits of computer capabilities, and will machines reach the level of human development? In 1950, one of the pioneers of computing, the English scientist Alan Turing, wrote the article "Can a Machine Think?", which describes a procedure - later called the Turing test - by which it would be possible to determine the moment when a machine equals a person in terms of intelligence.

History of the development of artificial intelligence in the USSR and Russia

In the USSR, work in the field of artificial intelligence began in the 1960s. A number of pioneering studies were carried out at Moscow University and the Academy of Sciences, led by Veniamin Pushkin and D. A. Pospelov.

In 1964, the work of Leningrad logician Sergei Maslov, “The Inverse Method for Establishing Derivability in Classical Predicate Calculus,” was published, in which he was the first to propose a method for automatically searching for proofs of theorems in predicate calculus.

Until the 1970s, all AI research in the USSR was carried out within the framework of cybernetics. According to D. A. Pospelov, the sciences of "computer science" and "cybernetics" were conflated at that time owing to a number of academic disputes. Only in the late 1970s did people in the USSR begin to speak of the scientific direction "artificial intelligence" as a branch of computer science. At the same time computer science itself was born, subordinating its ancestor, "cybernetics". In the late 1970s an explanatory dictionary on artificial intelligence, a three-volume reference book on artificial intelligence, and an encyclopedic dictionary of computer science were produced; in the last of these, the sections "Cybernetics" and "Artificial Intelligence" are included, along with other sections, within computer science. The term "computer science" became widespread in the 1980s, while the term "cybernetics" gradually disappeared from circulation, remaining only in the names of institutions that arose during the era of the "cybernetic boom" of the late 1950s and early 1960s. This view of artificial intelligence, cybernetics, and computer science is not shared by everyone, because in the West the boundaries between these sciences are drawn somewhat differently.

Approaches and directions

Approaches to understanding the problem

There is no single answer to the question of what artificial intelligence does. Almost every author writing a book about AI starts from some definition of his own and considers the achievements of the science in its light. Nevertheless, two main approaches to developing AI can be distinguished:

  • descending (English: Top-Down AI), semiotic - the creation of expert systems, knowledge bases, and logical inference systems that simulate high-level mental processes: thinking, reasoning, speech, emotions, creativity, and so on;
  • ascending (English: Bottom-Up AI), biological - the study of neural networks and evolutionary computation that model intelligent behavior on the basis of biological elements, and the creation of corresponding computing systems such as the neurocomputer or biocomputer.

The latter approach, strictly speaking, does not belong to the science of AI in the sense given by John McCarthy - they are united only by a common final goal.

The Turing Test and the Intuitive Approach

An empirical test was proposed by Alan Turing in the article "Computing Machinery and Intelligence", published in 1950 in the philosophical journal Mind. The purpose of this test is to determine the possibility of artificial thinking close to the human kind.

The standard interpretation of this test is as follows: "A person interacts with one computer and one person. Based on the answers to his questions, he must determine whom he is talking to: a person or a computer program. The goal of the computer program is to mislead the person into making the wrong choice." None of the test participants can see the others.

  • The most general approach assumes that AI will be able to exhibit human-like behavior in ordinary situations. This idea generalizes the Turing test approach, which states that a machine will become intelligent when it can carry on a conversation with an ordinary person who cannot tell whether he is talking to a machine (the conversation is conducted in writing).
  • Science fiction writers often propose another approach: AI will arise when a machine becomes capable of feeling and creating. Thus, the owner of Andrew Martin, the robot in "Bicentennial Man", begins to treat him as a person when Andrew creates a toy of his own design. And Data from Star Trek, being capable of communication and learning, dreams of gaining emotions and intuition.

However, the latter approach hardly stands up to criticism on closer examination. For example, it is not difficult to create a mechanism that evaluates certain parameters of the external or internal environment and responds to their unfavorable values. Of such a system we could say that it has feelings: "pain" as a reaction to the triggering of a shock sensor, "hunger" as a reaction to a low battery charge, and so on. And the clusters produced by Kohonen maps, and many other products of "intelligent" systems, can be regarded as a kind of creativity.
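
The mechanism described in this criticism is indeed easy to build. The following Python sketch, with invented thresholds and labels, maps sensor readings to "pain" and "hunger" reactions in a few lines - which is precisely why such reactions alone are a weak criterion for intelligence.
```python
# A trivial reactive system of the kind described above: it maps sensor
# readings to 'feeling'-like labels and reactions, yet contains nothing
# one would call understanding. Thresholds and labels are invented.

def react(shock_sensor_hit, battery_level):
    feelings = []
    if shock_sensor_hit:
        feelings.append(("pain", "retreat"))
    if battery_level < 0.2:
        feelings.append(("hunger", "seek charging station"))
    return feelings

print(react(shock_sensor_hit=True, battery_level=0.1))
# [('pain', 'retreat'), ('hunger', 'seek charging station')]
```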

Symbolic approach

Historically, the symbolic approach was the first in the era of digital machines: after the creation of Lisp, the first language of symbolic computation, its author became confident that it was practically possible to begin implementing intelligence by these means. The symbolic approach makes it possible to operate with weakly formalized representations and their meanings.

The success and efficiency of solving new problems depend on the ability to isolate only the essential information, which requires flexibility in methods of abstraction. A regular program, by contrast, fixes one way of interpreting data, so its work looks biased and purely mechanical; the intellectual problem is then solved only by a person - an analyst or programmer - who cannot entrust it to the machine. The result is a single model of abstraction, a system of constructive entities and algorithms. Flexibility and versatility then translate into significant resource costs for non-typical tasks; that is, the system regresses from intelligence to brute force.

The main feature of symbolic computation is the creation of new rules during program execution. The capabilities of non-intelligent systems, by contrast, stop just short of even identifying newly arising difficulties; such systems do not resolve those difficulties, and, finally, a non-intelligent computer does not improve such abilities on its own.

The disadvantage of the symbolic approach is that such open possibilities are perceived by unprepared people as a lack of tools. This largely cultural problem is partly solved by logic programming.

Logical approach

A logical approach to creating artificial intelligence systems is aimed at creating expert systems with logical models of knowledge bases using a predicate language.

The Prolog language and logic programming system was adopted as a training model for artificial intelligence systems in the 1980s. Knowledge bases written in Prolog represent sets of facts and rules of logical inference, written in the language of logical predicates.

The logical model of knowledge bases makes it possible to record in Prolog not only specific information and data in the form of facts, but also generalized information, by means of rules and procedures of logical inference, including logical rules for defining concepts that express knowledge as both specific and generalized information.
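
Prolog code itself is not reproduced here, but the facts-plus-rules structure of such a knowledge base can be imitated in a few lines of Python. The family facts and the grandparent rule below are the standard textbook illustration; this is a sketch of the idea, not of Prolog's actual resolution mechanism.
```python
# A Prolog-style knowledge base imitated in Python: facts are tuples,
# and a rule derives new facts from existing ones. This sketches the
# facts-plus-rules idea, not Prolog's resolution mechanism.

facts = {
    ("parent", "tom", "bob"),
    ("parent", "bob", "ann"),
}

def derive_grandparents(facts):
    """Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    derived = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(derive_grandparents(facts))
# {('grandparent', 'tom', 'ann')}
```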

In general, research into problems of artificial intelligence within the framework of a logical approach to the design of knowledge bases and expert systems is aimed at the creation, development and operation of intelligent information systems, including issues of teaching students and schoolchildren, as well as training users and developers of such intelligent information systems.

Agent-based approach

The newest approach, developed since the early 1990s, is called the agent-based approach, or the approach based on intelligent (rational) agents. According to this approach, intelligence is the computational part (roughly speaking, the planning part) of the ability to achieve the goals set for an intelligent machine. Such a machine is itself an intelligent agent that perceives the world around it through sensors and can influence objects in the environment through actuators.

This approach focuses on those methods and algorithms that will help the intelligent agent survive in the environment while performing its task. Thus, path finding and decision making algorithms are studied much more carefully here.
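
Path finding, mentioned above as one of the carefully studied problems, is commonly introduced through breadth-first search over a grid of cells. The following Python sketch uses an invented grid and returns the length of the shortest obstacle-free route; real agents combine such search with far richer world models.
```python
from collections import deque

# Breadth-first search on a small grid: the kind of path-finding
# problem studied in the agent-based approach. '#' marks an obstacle.
GRID = ["....#",
        ".##.#",
        "....."]

def shortest_path_length(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # goal unreachable

print(shortest_path_length((0, 0), (2, 4)))  # 6 steps around the walls
```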

Hybrid approach

Main article: Hybrid approach

The hybrid approach assumes that only a synergistic combination of neural and symbolic models achieves the full range of cognitive and computational capabilities. For example, expert inference rules can be generated by neural networks, while production rules can be obtained through statistical learning. Proponents of this approach believe that hybrid information systems will be significantly stronger than the sum of the different concepts taken separately.

Research models and methods

Symbolic modeling of thought processes

Main article: Modeling Reasoning

Analyzing the history of AI, we can identify such a broad area as modeling reasoning. For many years the development of this science has moved precisely along this path, and it is now one of the most developed areas of modern AI. Modeling reasoning involves creating symbolic systems whose input is a certain problem and whose output is its solution. As a rule, the proposed problem has already been formalized, that is, translated into mathematical form, but either has no solution algorithm or that algorithm is too complex or time-consuming. This area includes theorem proving, decision making and game theory, planning and scheduling, and forecasting.

Working with Natural Languages

An important direction is natural language processing, which analyzes the possibilities of understanding, processing, and generating texts in "human" language. Within this direction the goal is natural language processing good enough for a machine to acquire knowledge on its own by reading existing texts available on the Internet. Some direct applications of natural language processing include information retrieval (including deep text mining) and machine translation.

Representation and use of knowledge

The knowledge engineering direction combines the tasks of obtaining knowledge from simple information, systematizing it, and using it. It is historically associated with the creation of expert systems - programs that use specialized knowledge bases to obtain reliable conclusions on a problem.

Producing knowledge from data is one of the basic problems of data mining. There are various approaches to this problem, including approaches based on neural network technology that use procedures for verbalizing neural networks.

Machine learning

Machine learning concerns the process of independent acquisition of knowledge by an intelligent system in the course of its operation. This direction has been central since the very beginning of AI. In 1956, at the Dartmouth summer conference, Ray Solomonoff wrote a paper on a probabilistic unsupervised learning machine, calling it an "Inductive Inference Machine".

Robotics

Main article: Intelligent Robotics

Machine creativity

Main article: Machine creativity

The nature of human creativity is even less studied than the nature of intelligence. Nevertheless, this area exists, and here the problems of computer composition of music, of literary works (often poetry or fairy tales), and of artistic creation are posed. The creation of realistic images is widely used in the film and gaming industries.

The study of the technical creativity of artificial intelligence systems stands apart. The theory of inventive problem solving (TRIZ), proposed by G. S. Altshuller in 1946, marked the beginning of such research.

Adding this capability to any intelligent system makes it possible to demonstrate very clearly what exactly the system perceives and how it understands it. By substituting noise for missing information, or by filtering noise with the knowledge available in the system, abstract knowledge is turned into concrete images that a person perceives easily. This is especially useful for intuitive and poorly formalized knowledge, whose verification in a formal form requires significant mental effort.

Other areas of research

Finally, there are many applications of artificial intelligence, each of which forms an almost independent field. Examples include programming intelligence in computer games, nonlinear control, and intelligent information security systems.

It can be seen that many areas of research overlap. This is typical for any science. But in artificial intelligence, the relationship between seemingly different areas is especially strong, and this is associated with the philosophical debate about strong and weak AI.

Modern artificial intelligence

Two directions of AI development can be distinguished:

  • solving problems associated with bringing specialized AI systems closer to human capabilities and with integrating them, as realized in human nature (see Intelligence Enhancement);
  • creating an artificial mind that integrates already created AI systems into a single system capable of solving humanity's problems (see Strong and weak artificial intelligence).

But at the moment the field of artificial intelligence is drawing in many subject areas that bear a practical rather than a fundamental relationship to AI. Many approaches have been tried, but no research group has yet come close to the emergence of artificial intelligence. Below are just some of the best-known developments in the field of AI.

Application

RoboCup Tournament


Psychology and cognitive science

The methodology of cognitive modeling is designed for analyzing situations and making decisions in ill-defined conditions. It was proposed by Axelrod.

It is based on modeling experts' subjective ideas about the situation and includes: a methodology for structuring the situation - a model representing the expert's knowledge in the form of a signed digraph (cognitive map) (F, W), where F is the set of factors of the situation and W is the set of cause-and-effect relationships between those factors - and methods of situation analysis. At present, the methodology of cognitive modeling is developing toward improving the apparatus for analyzing and modeling situations: models for forecasting the development of a situation and methods for solving inverse problems are proposed here.
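
To make the (F, W) construction concrete, here is a minimal Python sketch of a cognitive map with invented factors and signed weights; real cognitive-modeling tools are far richer, so this only illustrates how a change in one factor propagates along signed cause-and-effect links.
```python
# A toy cognitive map (F, W): F is a set of situation factors, W a set
# of signed cause-and-effect links. A few propagation steps show how an
# initial change spreads. Factors and weights are invented.

# W: (cause, effect) -> influence weight (the sign encodes direction)
W = {
    ("tax_rate", "investment"): -0.5,
    ("investment", "employment"): +0.8,
    ("employment", "tax_revenue"): +0.6,
}

def propagate(changes, steps=3):
    """Push initial factor changes through the weighted links."""
    state = dict(changes)
    for _ in range(steps):
        updates = {}
        for (cause, effect), weight in W.items():
            if cause in state:
                updates[effect] = updates.get(effect, 0.0) + weight * state[cause]
        state.update(updates)
    return state

print(propagate({"tax_rate": 1.0}))
# a tax increase depresses investment, then employment, then revenue
```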

Philosophy

The science of “creating artificial intelligence” could not help but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised.

Philosophical problems of creating artificial intelligence can be divided into two groups, relatively speaking, “before and after the development of AI.” The first group answers the question: “What is AI, is it possible to create it, and, if possible, how to do it?” The second group (ethics of artificial intelligence) asks the question: “What are the consequences of creating AI for humanity?”

The term “strong artificial intelligence” was introduced by John Searle, and the approach is characterized in his words:

Moreover, such a program would not simply be a model of the mind; it would, in the literal sense of the word, itself be a mind, in the same sense in which the human mind is a mind.

At the same time, it is necessary to understand whether a "purely artificial" mind (a "metamind") is possible - one that understands and solves real problems while being devoid of the emotions that are characteristic of a person and necessary for his individual survival.

In contrast, proponents of weak AI prefer to view programs only as tools that allow them to solve certain problems that do not require the full range of human cognitive abilities.

Ethics

Science fiction

The topic of AI is considered from different angles in the works of Robert Heinlein: the hypothesis that AI self-awareness emerges when a structure grows complex beyond a certain critical level and interacts with the outside world and with other bearers of intelligence ("The Moon Is a Harsh Mistress", "Time Enough for Love", the characters Mycroft, Dora, and Aya in the "Future History" series), and the problems of AI development after hypothetical self-awareness, along with certain social and ethical issues ("Friday"). The socio-psychological problems of human interaction with AI are also considered in Philip K. Dick's novel "Do Androids Dream of Electric Sheep?", known also from its film adaptation, Blade Runner.

The works of the science fiction writer and philosopher Stanislaw Lem describe and in many ways anticipate the creation of virtual reality, artificial intelligence, and nanorobots, along with many other problems of the philosophy of artificial intelligence. Particularly worth noting is the futurology of Summa Technologiae. In addition, the adventures of Ijon Tichy repeatedly describe relationships between living beings and machines: the rebellion of an on-board computer with subsequent unexpected events (the 11th voyage), the adaptation of robots to human society ("The Washing Machine Tragedy" from "Memoirs of Ijon Tichy"), the building of absolute order on a planet by processing its living inhabitants (the 24th voyage), and the inventions of Corcoran and Diagoras and a psychiatric clinic for robots (both from "Memoirs of Ijon Tichy"). There is also a whole cycle of novels and stories, The Cyberiad, where almost all the characters are robots - distant descendants of robots that escaped from people (they call people palefaces and consider them mythical creatures).

Movies

Films about artificial intelligence have been made almost since the 1960s, alongside the writing of science fiction stories and novellas. Many stories by world-renowned authors have been filmed and have become classics of the genre; others have become milestones in the development of science fiction cinema, for example The Terminator and The Matrix.


Notes

  1. FAQ from John McCarthy, 2007
  2. M. Andrew. Real life and artificial intelligence // “Artificial Intelligence News”, RAAI, 2000
  3. Gavrilova T. A., Khoroshevsky V. F. Knowledge bases of intelligent systems: a textbook for universities
  4. Averkin A. N., Gaase-Rapoport M. G., Pospelov D. A. Explanatory dictionary on artificial intelligence. - M.: Radio and Communications, 1992. - 256 p.
  5. G. S. Osipov. Artificial Intelligence: State of Research and Looking to the Future
  6. Ilyasov F.N. Artificial and natural intelligence // News of the Academy of Sciences of the Turkmen SSR, series of social sciences. 1986. No. 6. P. 46-54.
  7. Alan Turing, Can Machines Think?
  8. Intelligent machines by S. N. Korsakov
  9. D. A. Pospelov. The formation of computer science in Russia
  10. On the history of cybernetics in the USSR. Essay one, Essay two
  11. Jack Copeland. What is Artificial Intelligence? 2000
  12. Alan Turing, “Computing Machinery and Intelligence,” Mind, vol. LIX, no. 236, October 1950, pp. 433-460.
  13. Natural language processing:
  14. Natural language processing applications include information retrieval (including text mining and machine translation):
  15. Gorban P. A. Neural network extraction of knowledge from data and computer psychoanalysis
  16. Machine learning:
  17. Alan Turing discussed it as a central topic as early as 1950, in his classic paper Computing Machinery and Intelligence.
  18. (pdf scanned copy of the original) (version published in 1957, An Inductive Inference Machine, "IRE Convention Record, Section on Information Theory, Part 2, pp. 56-62)
  19. Robotics:
  20. , pp. 916–932
  21. , pp. 908–915
  22. Blue Brain Project - Artificial Brain
  23. Mild-Mannered Watson Skewers Human Opponents on Jeopardy
  24. 20Q.net Inc
  25. Axelrod R. The Structure of Decision: Cognitive Maps of Political Elites. - Princeton. University Press, 1976
  26. John Searle. Is the Brain's Mind a Computer Program?
  27. Penrose R. The new mind of the king. About computers, thinking and the laws of physics. - M.: URSS, 2005. - ISBN 5-354-00993-6
  28. AI as a global risk factor
  29. ...will lead you into Eternal Life
  30. http://www.rc.edu.ru/rc/s8/intellect/rc_intellect_zaharov_2009.pdf Orthodox view on the problem of artificial intelligence
  31. Harry Harrison. Turing choice. - M.: Eksmo-Press, 1999. - 480 p. - ISBN 5-04-002906-3

Literature

  • The computer learns and reasons (part 1) // The computer gains intelligence = Artificial Intelligence Computer Images / ed. V. L. Stefanyuk. - Moscow: Mir, 1990. - 240 p. - 100,000 copies. - ISBN 5-03-001277-X (Russian); ISBN 705409155 (English)
  • Devyatkov V.V. Artificial intelligence systems / Ch. ed. I. B. Fedorov. - M.: Publishing house of MSTU im. N. E. Bauman, 2001. - 352 p. - (Informatics at a technical university). - 3000 copies. - ISBN 5-7038-1727-7
  • Korsakov S.N. Outlining a new way of research using machines that compare ideas / Ed. A.S. Mikhailova. - M.: MEPhI, 2009. - 44 p. - 200 copies. -