Andrey Homich believes that machines could solve many more tasks, but people cannot give them the necessary algorithms

Machine learning is an attempt to simulate intelligence by training computer algorithms. The history of artificial intelligence began with the Turing machine. It was proven that this simple device can compute everything that is computable at all. One problem remained: how to teach the device to compute something useful for a person.

In the 1960s and 1970s, when research directions competing for funding emerged, there were attempts to simulate intelligence as strict logical reasoning. Knowledge bases appeared, programming paradigms began to develop intensively, and landmark ideas arrived: the frame systems of Marvin Minsky (1975), the fuzzy logic of Lotfi Zadeh (1965), and the perceptron of Frank Rosenblatt.

Today our interlocutor is Andrey Homich, a machine learning specialist and software engineer. His main specialization is software for computing systems and distributed systems. Recently he has been working on database problems, computational complexity, numerical optimization methods, applied statistics and artificial intelligence, and metaphysics and philosophy.

What are the results of development of artificial intelligence and machine learning today?

Two directions have developed. The first is knowledge bases and advisory systems in medicine and other narrow fields. It proved impossible to create a strong artificial intelligence this way, but there have been some useful side effects.

The second area is formal neural networks, which have nothing in common with naturally learning organisms. In fact, they are methods of nonlinear approximation by a particular class of functions. They find limited application, and always in conjunction with other technologies, in pattern recognition tasks: digitization of texts, face detection, classification of objects.

What is your vision of the current state of affairs in the field of artificial intelligence?

The fields of knowledge bases and production systems have effectively come to a halt. The development of programming technology has also stopped. I think I have grounds for this view, as a specialist in languages from assembler and C up to modern formal ones such as Java and Go. Machines could solve many more tasks, but people cannot give them the necessary algorithms.

The last major innovations in machine learning are the convolutional network (Yann LeCun, 1989) and competitive learning, first applied in Kohonen networks (Teuvo Kohonen, 1982). Since the 1990s there has been a deadlock in all fields, with results very far from the initially declared objectives.
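The competitive learning mentioned here can be sketched in a few lines. This is a minimal "winner-take-all" illustration, the mechanism Kohonen networks are built on: for each input, the nearest unit wins and moves toward that input, while the rest stay put. The data, unit positions, and learning rate are invented for illustration; a full Kohonen map also updates the winner's neighbors on a grid, which this sketch omits.

```python
# Minimal competitive ("winner-take-all") learning sketch.
units = [[0.0, 0.0], [1.0, 1.0]]   # two competing units
data = [[0.1, 0.2], [0.9, 0.8]]    # two toy "clusters"
lr = 0.5                           # learning rate

for _ in range(20):                # repeated presentations
    for x in data:
        # The winner is the unit nearest to the input.
        winner = min(units, key=lambda u: (u[0] - x[0])**2 + (u[1] - x[1])**2)
        # Only the winner moves toward the input.
        winner[0] += lr * (x[0] - winner[0])
        winner[1] += lr * (x[1] - winner[1])

print(units)  # each unit has settled on one cluster
```

After a few presentations each unit specializes in one region of the input space, which is the "competition" that distinguishes this scheme from ordinary gradient training.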

Cheaper computing power created the opportunity to use demanding optimization methods such as evolutionary algorithms and the Boltzmann machine. In fact, through "brute force" we have somewhat expanded the scope of technologies from the second half of the last century. All the rest is pure marketing: the art of selling the old in bright new packaging.

In the past, we developed modular neural networks and evolutionary-genetic algorithms together. What has your further experience with machine learning and artificial intelligence been?

In practical applications it was mainly credit scoring for banks, optimization of advertising campaigns, forecasting of financial markets, and analysis of text data. In particular, I applied formal neural networks to classify bank clients by bankruptcy risk. I also applied in practice a technology for building databases from partially ordered texts.

What are your ideas for the development of machine learning and artificial intelligence?

The idea is to model complexes of neurons. For example, take the working hypothesis that complexes such as the neocortical columns in the brains of higher animals, including humans, are the right level for simulating intelligence. Such complexes can be represented as relatively universal mini bio-computers. Let them have a small memory and simple software; still, they can play the role of learning Turing bio-machines. A network of such mini-computers removes the limitations of an individual network node.

The "formal neuron" can only compute a weighted sum of its input signals and apply a simple nonlinear converter to it. In contrast to the "formal neuron", we can afford much more functionality for our new computing module. For example, we can create a smart "Chinese room".
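For comparison, the formal neuron just described fits in a few lines. This sketch uses the logistic sigmoid as the "simple nonlinear converter"; the weights and bias are invented for illustration and make the neuron behave roughly like a soft AND gate.

```python
import math

def formal_neuron(inputs, weights, bias):
    """Formal neuron: weighted sum of signals followed by
    a simple nonlinear converter (here, the logistic sigmoid)."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

# With these weights the neuron fires only when both inputs are on.
print(formal_neuron([1, 1], [10.0, 10.0], -15.0))  # close to 1
print(formal_neuron([0, 1], [10.0, 10.0], -15.0))  # close to 0
```

This is the entirety of what a single node in a formal neural network can do, which is the limitation the proposed richer computing module is meant to remove.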

The philosopher John Searle introduced the notion of the "Chinese room" in 1980. He proposed a thought experiment designed to prove that a machine is unable to understand. In short, it is a room with two windows. Chinese texts come in through one window, and texts from the room go out through the other. Inside the room is a man who does not know Chinese but is supplied with instructions for replacing some characters with others to obtain a new text, which he then passes out. With the right instructions, it may seem to outside observers that the person inside the room is holding a meaningful dialogue, but in fact he is simply following the instructions mechanically.
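The mechanics of the original room amount to a lookup table. Here is a toy sketch of that idea; the Chinese phrases and replies in the table are invented purely for illustration, and the point is exactly that the code "replies" without any representation of meaning.

```python
# A toy "Chinese room": incoming text is mechanically replaced
# according to a fixed instruction table; nothing inside the room
# understands the symbols it manipulates.
RULES = {
    "你好": "你好！",
    "你是谁？": "我是一个房间。",
}

def chinese_room(incoming: str) -> str:
    # Follow the instruction if one exists; otherwise a fixed fallback.
    return RULES.get(incoming, "请再说一遍。")

print(chinese_room("你好"))  # appears to answer sensibly
```

To an outside observer the replies may look like dialogue, yet the whole "intelligence" lives in the instruction table, not in the one executing it.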

The paradox is that we see the way out of the AI deadlock in this very experiment. The point is that we regard intellect as a purely utilitarian property of the organism. Intellect, the ability to learn and to react to situations in variable ways, is necessary to the organism for the sake of the prosperity of its population.

"Understanding" is not required for this task. One has to find food, avoid dangers, and reproduce. Solving these challenges means creating an environment for the organism, from an anthill to a human metropolis and space bases. In the process of evolution, nature discovered intellect by chance, as a useful tool for the individual and, indirectly, for its population. It created intellect from what was at hand: cells, molecular compounds, chemical reactions, and physical phenomena.

Thus, intellect is strictly material and is likely amenable to study without descending to the level of quantum mechanics. Once formalized, the "Chinese room" becomes equivalent to the Turing machine. If there is something that cannot be regarded as a "Chinese room", then it cannot be computed at all.

We slightly changed the original experiment. We do not give ready-made instructions to the "tenant" of the Chinese room. He has a notepad, so he can record instructions himself, along with a pencil and an eraser. He can only get confirmation from the outside for "correct" actions. As in reality, deliveries of what he needs may be delayed, and disruptions are possible due to unknown factors.
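The modified experiment can be sketched as trial and error driven only by outside confirmation. This is not the authors' implementation; the symbols, the hidden target behavior, and the confirmation oracle below are all invented for illustration. The tenant starts with an empty notepad and records a rule only after the outside world confirms it.

```python
import random

SYMBOLS = ["A", "B", "C"]
TARGET = {"A": "B", "B": "C", "C": "A"}  # hidden "correct" behavior

def confirm(incoming, reply):
    """The outside world only says whether the action was correct."""
    return TARGET[incoming] == reply

notepad = {}  # the instructions the tenant writes for himself
random.seed(0)
for _ in range(200):
    incoming = random.choice(SYMBOLS)
    # Follow a recorded instruction if there is one, otherwise guess.
    reply = notepad.get(incoming, random.choice(SYMBOLS))
    if confirm(incoming, reply):
        notepad[incoming] = reply  # record the confirmed instruction

print(notepad)  # the self-written instruction table
```

Nothing in the loop "understands" the symbols; the notepad simply fills up with whatever actions the environment rewarded, which is the utilitarian view of intellect described above.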

What are the results of these experiments?

We spent more time creating a prototype of the software. Thanks to this, we were able to solve almost immediately a problem that is very expensive to solve with formal reasoning systems or machine learning systems. The task was to restore gaps in a symbol series with complex dependencies, for example when an element of the series depends on a combination of hundreds or more adjacent symbols. This is comparable to the complexity of the symbolic sequences of natural language.
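To make the task itself concrete (this illustrates the problem, not the authors' method), here is a toy gap-restoration example: a missing symbol is filled in by context statistics gathered from the rest of the series. The series and the single-neighbor context are invented for illustration; in the real task an element may depend on hundreds of adjacent symbols.

```python
from collections import Counter

series = list("ababababab?babababab")  # '?' marks the gap

# Count which symbol typically sits between each (left, right) pair.
votes = Counter()
for i in range(1, len(series) - 1):
    if "?" not in series[i - 1 : i + 2]:
        votes[((series[i - 1], series[i + 1]), series[i])] += 1

i = series.index("?")
context = (series[i - 1], series[i + 1])
candidates = {sym: n for (ctx, sym), n in votes.items() if ctx == context}
series[i] = max(candidates, key=candidates.get)

print("".join(series))  # the gap restored from local context
```

With only a one-symbol context on each side this stays cheap; the cost of conventional methods grows quickly as the relevant context widens, which is the resource problem discussed below.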

Theoretically, of course, you can solve this problem with conventional machine learning. The only question is the availability and cost of resources: data, time, energy, money, and highly skilled narrow-field experts.

In the future, we are ready to solve a wider range of tasks with a greater commercial component. For example, optimization of advertising campaigns in social networks, and the creation of advisory systems that automatically and individually provide digital content to a consumer. The ultimate goal is the creation of a unified industrial technology of artificial intelligence.

Plenty of machine learning platforms are being designed now. What is missing in all these solutions?

There is increasing demand for autonomous self-learning systems. But the creation of a "brain" for such systems remains at the level of "art", of individual exclusive solutions. The industry requires an industrial level.

What are the prospects for machine learning? What will happen next, in your opinion?

Where was the mistake of previous researchers? In my opinion, some of them worked at too high a level of abstraction (formal systems and knowledge bases). Others worked at too low a level: that of individual cells, of neurons.

We need to step back a little toward the starting positions, think critically about the experience gained, and compare it with the accumulated data of the biological sciences.

Interview: Ivan Stepanyan

Read more: Modern science and engineering with Ivan Stepanyan ...