The term artificial intelligence is attributed to John McCarthy, a computer scientist and researcher in the field of cognitive science, who organized the first academic conference on the subject in 1956. The road begins a little earlier, however, and among the founding fathers it is common to name Vannevar Bush, who as early as 1945 proposed a system that would augment human knowledge and understanding, and Alan Turing, who in 1950 wrote an article on the ability of machines to simulate humans and to perform intelligent actions such as playing chess.
Another founding father is Marvin Lee Minsky, who was trained as a doctor of mathematics and was involved in research, inventions, and many developments in the field. It was Minsky who coined the popular definition of artificial intelligence:
“Make the machine behave in a way that would have been considered intelligent had a person behaved that way.”
At the beginning of the study of artificial intelligence, the “symbolic” paradigm ruled, which sought to replicate high-level human thinking. Over the years it was replaced by a “connectionist” paradigm, which seeks to mimic the biological basis of human cognition through artificial neural networks. In the 20th century, however, these paradigms did not live up to the expectations placed on them beyond theoretical or laboratory demonstrations. This led to what is known as the “winter of artificial intelligence,” in which research and investment in the field were scaled back for long periods.
In the last decade, advances in computer science research, as well as in hardware and software development in the areas of computing and communications, along with breakthroughs in cloud computing and big data, have enabled significant progress in artificial intelligence and its subfields, such as machine learning and deep neural networks (these concepts are reviewed in detail below). It has even been claimed that the progress in neural networks is so profound that this field is considered almost synonymous with the term artificial intelligence.
Most Common Applications
The most common applications of artificial intelligence today belong to a subfield called machine learning, which includes statistical algorithms that seek to mimic tasks of human cognition by deriving rules about them through the analysis of large amounts of data. The algorithm in effect “trains” on existing information and, during training, builds a statistical model of its own in order to perform the same task in the future on new data it has not encountered before.
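The “train on examples, then generalize” process described above can be sketched in a few lines of Python. The data and the learner here are invented for illustration only: a toy classifier that derives a single decision threshold from labeled examples and then applies it to values it has never seen, not any specific production algorithm.

```python
# A toy illustration of the "train, then generalize" idea: the learner,
# the data, and the threshold rule are all invented for this sketch.

def train(examples):
    """Derive a decision rule (a single threshold) from labeled data.

    examples: list of (value, label) pairs, where label is 0 or 1.
    Returns the midpoint between the largest 0-labeled value and the
    smallest 1-labeled value (assumes the two classes are separable).
    """
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    return (max(zeros) + min(ones)) / 2

def predict(threshold, value):
    """Apply the learned model to data it has never encountered."""
    return 1 if value >= threshold else 0

# "Training" on existing information builds the statistical model...
model = train([(1.0, 0), (2.0, 0), (4.0, 1), (5.0, 1)])

# ...which then performs the same task on new, unseen data.
print(predict(model, 1.5))  # 0
print(predict(model, 4.5))  # 1
```

Real machine-learning systems replace the single threshold with models of millions of parameters, but the division of labor is the same: rules are derived from data during training, then applied to unseen inputs.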
Artificial intelligence belongs to the broader field of data science, and indeed it requires a great deal of data in order to operate effectively. Hence the current need for big data, which provides the regular flow of data necessary for deriving significant insights with the help of learning algorithms. But artificial intelligence does not merely depend on big data; it is also one of the only effective means of extracting value and knowledge from such quantities of data, whose analysis requires extremely powerful algorithms.
Much of the work of the founding fathers of artificial intelligence forms the theoretical basis of the machine learning algorithms used in many contemporary systems, which enable operations including image recognition and autonomous driving. These systems belong to what is commonly called narrow artificial intelligence or weak artificial intelligence, even though they are sometimes advanced applications. This concept refers to algorithms designed to handle a collection of specific problems, for example games, image recognition, or navigation. It differs from general artificial intelligence, which refers to a system capable of applying intelligence at the human level to a wide variety of tasks. General artificial intelligence does not yet exist, and as of this writing opinions are divided as to the ability to create it, at least in the coming two decades. The artificial intelligence evolving today consists primarily of deep learning applications. While this technology belongs to the category of narrow artificial intelligence, it enables a more accurate form of computerized learning and has made widespread commercial use of artificial intelligence applications more accessible.
Historical Background: The First Three Waves in Artificial Intelligence

It is common to divide the development of artificial intelligence into three waves, based on an analysis of the development of the field’s capabilities. The Defense Advanced Research Projects Agency (DARPA)11 of the United States Department of Defense (DoD) is one of the world leaders in the development of artificial intelligence for security needs. The agency defines artificial intelligence as a “programmed ability to process information,”12 but alongside this simple definition it too divides the subject into three waves, with each wave characterized on a notional “intelligence scale” on which each of four capabilities is measured.
These capabilities are similar to the dimensions of human intelligence:
- Perceiving – the ability to discern world events;
- Learning – the ability to learn things and adapt to different situations;
- Abstracting – the ability to take knowledge discovered at one level and project or apply it at another level;
- Reasoning – the ability to reason or make decisions about things logically.
The First Wave – ‘Handcrafted Knowledge’

The first wave is based on ‘handcrafted knowledge’, in which experts gathered the knowledge that exists on a particular subject and characterized it within a framework of rules that could be adapted to the computer, which in turn could study the implications of those rules.13 To this generation of artificial intelligence belong, for example, logistics software for planning operations such as shipments, tax calculation software, and software that managed to play chess against humans. Many computer programs and applications in use today are based on the first wave of artificial intelligence, and the perceptions at its core are still relevant to narrow uses, as is the case in many smartphone applications or software such as Office. This generation of artificial intelligence does not excel at learning, but according to DARPA it continues to develop and record many achievements, for example in the areas of cyber defense, and is still very relevant today. In DARPA’s terms, the products of the first wave have a medium sensing ability and an ability to explain causality in very narrow aspects; however, they are incapable of learning and have very little ability to deal with uncertainty.
The Second Wave – ‘Categorization’: Statistical Learning

The second wave is ‘categorization’, based on statistical learning. It uses more advanced capabilities that were made possible by the beginning of the use of machine learning, in which the algorithms rely on statistical learning from sample data. In this wave, unlike the previous one, the computer was not taught fixed rules; rather, statistical models were developed for different problems, and the algorithms were then “trained” on many examples until they reached the desired level of accuracy. The products of this wave are what enable voice recognition or face recognition in mobile phones, for example, and “bots” that provide customer service through correspondence in online chats.
As part of this wave, the beginnings of autonomous driving were made possible, as demonstrated in the DARPA challenges of 2004 and 2005. However, this generation of artificial intelligence does not have the ability to understand the rules of causality behind the actions it performs, and it can therefore be biased or manipulated. Thanks to developments recorded in recent years, many uses have been added to this generation of artificial intelligence, including systems for the analysis or translation of text, ‘personal assistant’ software on smartphones, and also the ability to play challenging games such as the Chinese strategy game ‘Go’. The second wave is characterized, according to DARPA, by the ability to catalog things according to nuances and by predictive ability; however, its products lack contextual capabilities and have only minimal capability for logical reasoning.
The Third Wave – ‘Contextual Adaptation’: Explainability

The third wave, currently in its development stages, is an explanatory wave that seeks to enable ‘contextual adaptation’. The algorithms or systems based on this wave attempt to formulate models for themselves that will explain certain subjects. DARPA expects that systems built around contextual models will teach themselves how different models are to be understood. This will allow systems to use models to explain and justify decisions regarding various issues. These capabilities differ significantly from those of most algorithms today, which operate as a “black box” and pose a challenge of explainability regarding how they reach their conclusions (a topic expanded upon in the section dealing with challenges). In this way, these capabilities will allow information to be used in an abstract manner and transferred onward; today, however, the capabilities of such systems are still limited.
The hope is that the products of this wave will be more “human” and, among other things, will allow communication in natural language and will be able to teach and train themselves, as the AlphaGo software trained itself by playing thousands of Go games against itself. They will also include capabilities for gathering information from a number of different sources and for formulating well-explained conclusions.18 According to the agency, this wave should greatly improve capabilities in the areas of sensing, learning, and reasoning, but its products will still have only mediocre capabilities in the area of abstraction.
Technologies
Technologies that are already beginning to appear in the context of this wave include, for example, advanced “smart assistant” technologies that surpass in their abilities the second-generation technologies, including Siri and Alexa. Another example is the Duplex software, which is capable of booking appointments, for example a place at a barbershop or a restaurant, while conducting a coherent voice conversation with a human service representative. Aside from the tasks this software can perform autonomously, it also knows how to identify, and signal to its user, tasks it is unable to perform on its own.
Apart from the division into waves, artificial intelligence is, in general, a concept that refers to the totality of hardware, software, or a combination of the two that is capable of displaying seemingly intelligent behavior. However, almost every study or article from recent years dealing with artificial intelligence, and especially with artificial intelligence and national security, opens with the observation that no single agreed-upon definition of the term artificial intelligence exists today.
The difficulty in formulating one agreed-upon definition of artificial intelligence stems from two principal reasons:
- The first is the different and varied approaches to research in the field.
- The second is the basic difficulty in defining, or agreeing on a definition of, the term ‘intelligence’.
The study of intelligence runs up against boundaries that have not yet been breached in neuroscience (and in philosophy), and the ability to examine these concepts in relation to machines, or to project them onto machines, is therefore limited. Despite this difficulty, various options for a definition will be examined, and a definition will be presented that will serve the discussion in this essay and the policy recommendations formulated on its basis.