A brief history of artificial intelligence development


In a sense, the history of human development is a process of continuously accumulating experience and reasoning from it. The origins of artificial intelligence can be traced back to the 17th century or even earlier, when the idea was framed in terms of inference. The modern discussion of artificial intelligence began in the 1940s, during the Second World War, when machines increasingly began to replace manual labor. People wondered when machines might think in place of human beings. Deciding to what extent a machine can be said to possess human intelligence requires a standard of judgment. Turing, a pioneer of computer science, described artificial intelligence in the most straightforward terms: the Turing test.

 

What is the Turing test? In 1950, the computer scientist Alan Mathison Turing published a paper entitled “Computing Machinery and Intelligence”. The article did not define what artificial intelligence is, but instead proposed a method for testing it. The method works as follows: take a human subject and a machine claiming to have human intelligence; during the test, the tester is separated from both, and communicates with them only through some device (such as a keyboard), asking any questions at all. After asking a number of questions, if the tester can correctly distinguish which is the person and which is the machine, then the machine has not passed the Turing test; if the tester cannot tell which is which, then the machine is said to have human intelligence.

 

Another important milestone in artificial intelligence is the Dartmouth Conference held in 1956. At this conference, the participants advanced the conjecture that every feature of learning or intelligence can be described so precisely that a machine can be made to simulate it. In other words, artificial intelligence needs to go through three stages: feature extraction, model training, and prediction. The participants later proved to be among the most outstanding computer scientists of the 20th century, and most of them went on to win the Turing Award. The conference promoted the development of artificial intelligence across a broader field. Over the next 20 years, humans made breakthroughs in artificial intelligence research, especially in related statistical algorithms (such as neural network algorithms).
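The three stages mentioned above can be sketched in a few lines of code. The toy data, features, and nearest-centroid "model" below are hypothetical illustrations chosen for brevity, not anything described in the article:

```python
# A minimal sketch of the three stages: feature extraction,
# model training, and prediction. Data and features are made up.

def extract_features(text):
    # Stage 1: turn raw input into numeric features
    # (here: string length and digit count -- an arbitrary choice).
    return [len(text), sum(ch.isdigit() for ch in text)]

def train(samples, labels):
    # Stage 2: "train" by computing one feature centroid per class.
    centroids = {}
    for label in set(labels):
        feats = [extract_features(s) for s, l in zip(samples, labels) if l == label]
        centroids[label] = [sum(col) / len(col) for col in zip(*feats)]
    return centroids

def predict(model, sample):
    # Stage 3: assign the class whose centroid is nearest.
    f = extract_features(sample)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(model, key=lambda label: dist(model[label]))

model = train(["hello", "world", "12345", "6789"], ["word", "word", "number", "number"])
print(predict(model, "2468"))  # -> number
```

Real systems replace each stage with far richer machinery, but the pipeline shape is the same.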

 

Of course, artificial intelligence encountered many challenges along the way. After 20 years of rapid development, in the 1970s, as the theoretical algorithms gradually matured, the development of artificial intelligence hit a bottleneck in computing resources: with the exponential growth of computational complexity, even the large machines of the 1970s could not cope. At the same time, the Internet was still in the laboratory stage, and data accumulation was only just beginning. In other words, artificial intelligence was limited for a long time by computing power and the lack of data.

 

In the 21st century, the Internet grew explosively, and more and more images and text appeared on the web. The Internet became a big data warehouse. Many forward-looking companies and individuals turned their attention to data mining, and people began to use formulas and code, line by line, to mine the value behind the data. The protagonist of this code and these formulas is the machine learning algorithm. The accumulated data is like a piece of fertile land that machine learning algorithms are needed to cultivate.

 

Open-source distributed computing architectures represented by Hadoop provide distributed computing support for more enterprises, while efficient deep learning frameworks such as Caffe and TensorFlow are adopted by many enterprises to improve their algorithm models. Applications of artificial intelligence have become popular and have gradually been integrated into our lives. It is worth mentioning that in 2016, AlphaGo defeated a top human Go player in a public match, which brought the artificial intelligence industry to a new height. This success not only validated the practicality of deep learning algorithms, but also confirmed once again that humans are no longer the only carriers of intelligence: any machine can generate intelligence as long as it can receive, store, and analyze information.

 

Looking back at the development of artificial intelligence, it is a history of continuously refining methods of analysis and collecting past experience. Before the advent of machines, human beings could only judge things on the basis of what others shared and their own practice, a cognition of external things limited by the human brain and by existing knowledge. A machine, however, differs from the human brain: it can absorb all available information, and can analyze, summarize, and reason over the data day and night, thus giving rise to artificial intelligence. What is certain is that as human society develops, the accumulation of data and the iteration of algorithms will further advance artificial intelligence as a whole.

 

The engine behind artificial intelligence is the machine learning algorithm. Machine learning is a multidisciplinary research field involving biology, statistics, computer science, and so on. What a machine learning algorithm currently does is this: it abstracts scenes from life into mathematical formulas, and relies on the machine's enormous computing power to generate, through iteration and deduction, models for predicting or classifying new problems. It can also be said that the history of artificial intelligence runs alongside the evolutionary history of machine learning algorithms.
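"Abstracting a scene into a formula and iterating toward a model" can be seen concretely in the simplest possible case: fitting a line y = w·x + b by gradient descent. The synthetic data and learning rate below are illustrative choices, not taken from the article:

```python
# Toy illustration of learning a model through iteration:
# fit y = w*x + b by gradient descent on synthetic data.

data = [(x, 2 * x + 1) for x in range(10)]  # true relationship: w=2, b=1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    # mean-squared-error gradients with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

Each iteration nudges the parameters toward lower error; with enough iterations the "model" recovers the rule hidden in the data, which is the same loop that far larger systems run at scale.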

 

Artificial intelligence is both a science and a computer technology. It is inspired by the way people use their nervous systems and bodily organs to perceive, learn, infer, and act. In general, however, the operating mechanisms of the two are quite different.

 

In the 21st century, AI has made a series of mainstream technologies possible and has had a profound impact on daily human life. Deep learning, a form of machine learning based on multi-layered neural networks, has made it a reality for mobile and household devices to understand human speech, and its algorithms can be used in a range of pattern-dependent recognition applications. Natural language processing, knowledge representation, and inference have allowed machines to outperform humans in certain tasks and have brought new capabilities to web search. Impressive as they are, these technologies are highly confined to specific tasks, and each application requires years of focused research and a very careful, bespoke construction process. In similar application directions, AI technology is expected to grow tremendously in the future, for example in medical diagnosis and intelligent customer service.
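The "multi-layered" structure behind deep learning amounts to stacking simple linear transformations with nonlinearities between them. The weights and inputs below are hypothetical numbers chosen only to show one forward pass through a two-layer network:

```python
# A forward pass through a tiny two-layer network (made-up weights):
# each layer is a matrix-vector product plus bias, with a ReLU between.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, W, b):
    # W[j] is the weight row for output unit j: out_j = W[j] . v + b[j]
    return [sum(wi * vi for wi, vi in zip(row, v)) + bj
            for row, bj in zip(W, b)]

x = [1.0, 2.0]                                      # input features
h = relu(dense(x, [[0.5, -0.5], [1.0, 1.0]], [0.0, -1.0]))  # hidden layer
y = dense(h, [[1.0, 1.0]], [0.0])                   # output layer
print(y)  # -> [2.0]
```

Deep networks simply repeat this layer pattern many times and learn the weights from data rather than fixing them by hand.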

 

A current effort is to train robots to interact with their surrounding environment in a routine and predictable way. Interacting with the environment necessarily requires developing robotic manipulation, which is a direction that current researchers are focusing on.

 

The deep learning revolution has only just begun to affect robotics, in large part because labeling data sets for robot training is much harder than in other fields.


Most traditional machine learning focuses on pattern mining, whereas the key point of reinforcement learning is decision-making. It is a technique that helps AI learn more deeply, allowing machines to further understand the real world and respond better.

 

The framework of reinforcement learning as experience-driven sequential decision-making was proposed decades ago, but such methods long failed to achieve great success in practice, mainly because of the difficulty of representing the sample space. The emergence of deep learning, however, injected a "booster" into reinforcement learning.
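Sequential decision-making from experience can be shown in miniature with tabular Q-learning, one of the classic reinforcement learning algorithms. The corridor environment, rewards, and hyperparameters below are invented for illustration:

```python
# Tabular Q-learning on a 5-cell corridor: the agent earns a reward
# only by reaching the rightmost cell, and must learn to walk right.
import random

random.seed(0)
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):                   # episodes of experience
    s, done = 0, False
    while not done:
        # epsilon-greedy action choice, then a one-step Q update
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[s][act])
        nxt, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES - 1)]
print(policy)  # with enough episodes, the policy moves right in every state
```

No state is ever labeled with the "correct" action; the decision rule emerges purely from rewards observed while acting, which is what distinguishes reinforcement learning from the pattern mining described above.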

 

Reinforcement learning can help reduce the need for labeled data sets, but it requires the system to explore the policy space safely, avoiding errors that could damage the system itself or harm others.

 

Progress in the reliability of machine perception, including computer vision, force, and tactile sensing, will mostly be driven by machine learning, and machine learning will continue to be a key force in the functional evolution of robots.
