We talk to startups and investors, you get the value.
Together with Roman Doronin, CEO of the EORA group of companies, which develops and implements machine-learning solutions (natural language processing, machine vision, data science), and the guest of the next episode of the audio podcast "Like Clockwork", we will dive into the history of artificial intelligence and learn about its main areas of application and future growth.
Thanks to movies like The Terminator, people think that artificial intelligence is an evil that could destroy humanity. In reality, what the movies show has a very low probability of ever happening.
In 1954, biologists dissected a squid and extracted a brain cell, a "neuron". Studying its structure and composition, they discovered dendrites and an axon. Dendrites are receiving processes that collect impulses from other neurons and pass them to the body of the neuron; the axon is the single transmitting process through which the impulse travels from the cell body to another neuron.
In the course of this research, scientists asked themselves whether intellectual problems could be solved by replicating the architecture of the neuron (after all, curiosity, along with laziness, is what drives human progress). Accordingly, the theory of neural networks began to develop precisely from the moment scientists tried to reproduce the processes occurring in our brains.
In 1956, the Dartmouth workshop was organized, gathering some of the best scientific minds, including researchers from IBM. The workshop was convened to decide the future of machines and how to make them smarter. The event lasted two months, during which the scientists were expected to teach a machine to speak, count, and understand natural language: seven major steps were identified toward this goal, but none were achieved. This is generally considered the beginning of the history of artificial intelligence.
All myths arise from a mistake. For artificial intelligence, the mistake is the widespread misconception that it must be created in the image and likeness of the human brain, and that this will eventually lead it to compete with humans. Another theory holds that machines will teach themselves.
The truth is that a machine cannot learn entirely on its own, even when we talk about the cleverest form of machine learning, reinforcement learning, in which the program improves through its own trial and error. This year the technique has shown outstanding results, and it is already applicable in any limited environment that contains a goal and can be fully described. This means reinforcement learning can be used in games and drones. In my opinion, this type of training will find other applications as well. However, the idea that artificial systems can learn entirely by themselves remains a myth.
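The "limited environment with a goal" idea can be shown with a minimal sketch of tabular Q-learning, one classic reinforcement-learning algorithm. The environment here (a five-cell track with a reward at the right end) and all hyperparameters are invented purely for illustration:

```python
# Toy reinforcement learning: tabular Q-learning on a tiny, fully
# described environment. The agent starts at cell 0 of a 1-D track
# and must learn to walk right to reach the goal at cell 4.
import random

N_STATES, GOAL = 5, 4     # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]        # step left / step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)            # deterministic run for the example
alpha, gamma, eps = 0.5, 0.9, 0.3
for _ in range(500):      # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning update rule
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# Greedy policy after training: which way to step from each cell.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)
```

The environment is "fully described" in exactly the sense above: a finite set of states, a known action set, and a single explicit goal. Outside such closed worlds, this kind of self-improvement does not transfer.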
It is also fiction that they can resemble people and evolve the same way. Even a single neural-network model that could simultaneously look for holes in oil pipelines, retouch faces in photos, hold a conversation, and predict the weather would require enormous computing power.
The next myth holds that such a system can independently step outside the scope of its task. Put simply, it is a mistake to think that Alice will help your child complete their English homework. The system can help only indirectly: it can translate words, but it cannot complete the whole assignment, because it was not designed for that purpose.
First, we need to decide what we mean by this term.
Artificial intelligence is not an anthropomorphic, human-like creature that learns on its own. It is a complex computer program based on the principle of machine learning, a huge technological branch in which a machine is given a large amount of data and taught to find patterns in it: for example, to distinguish a cat from a dog, or to make predictions based on a history of events. So, by loading data ranging from flight schedules and weather to news into the machine, you can predict whether a plane will be delayed.
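As a toy sketch of "finding patterns in data", here is a hand-rolled nearest-neighbour rule that predicts a delay from past records. The features (departure hour, wind speed) and the data are invented for the example; this is not a real model:

```python
# "Learning" a pattern from historical flight records: a new flight is
# classified by majority vote of the k most similar past flights
# (Euclidean distance over the feature vectors).

def predict_delay(history, flight, k=3):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda rec: dist(rec[0], flight))[:k]
    votes = sum(1 for _, delayed in nearest if delayed)
    return "delayed" if votes * 2 > k else "on time"

# (departure_hour, wind_speed_m_s) -> was the flight delayed?
history = [
    ((8, 3), False), ((9, 2), False), ((10, 4), False),
    ((18, 12), True), ((19, 14), True), ((20, 11), True),
]

print(predict_delay(history, (9, 3)))    # morning flight, calm weather
print(predict_delay(history, (19, 13)))  # evening flight, strong wind
```

Nothing here was programmed to "know" about wind or rush hour; the pattern emerges from the data alone, which is the essence of machine learning.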
The next term under the umbrella of artificial intelligence is "machine vision", which is similar in function to human eyes. Artificial intelligence also includes natural language processing and understanding. The moment you google something or ask a voice assistant, the system turns your request into text and sends it to servers that return relevant information. The same Google, for example, has long searched not for keywords but for the meaning embedded in your query.
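A minimal sketch of searching by "meaning" rather than exact keyword match: documents and the query are turned into vectors and compared by cosine similarity. Real search engines use learned embeddings; simple word-count vectors are used here only to keep the example self-contained, and the documents are invented:

```python
# Vector-space retrieval: the best document is the one whose vector
# points in the most similar direction to the query vector.
from collections import Counter
import math

def vectorize(text, vocab):
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "weather forecast rain tomorrow",
    "cheap flights and plane tickets",
    "cat and dog photos",
]
vocab = sorted({w for d in docs for w in d.lower().split()})

def search(query):
    qv = vectorize(query, vocab)
    return max(docs, key=lambda d: cosine(vectorize(d, vocab), qv))

print(search("plane tickets"))
```

Swapping the word-count vectors for neural embeddings is what lets a production system match "plane tickets" to a page that only says "airfare", i.e. match by meaning rather than by shared words.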
Data science also belongs to artificial intelligence: it works with data, interprets it, builds forecasts from databases, identifies dependencies in them, and so on.
As a result, data science, "machine vision", and natural language processing are three areas, each of which is called artificial intelligence.
Machine hearing can also be added to the three main areas of artificial intelligence, since it, too, is based on "machine learning". In fact, every person's phone has microphones, a kind of "ears". And while computer vision is already very widely developed, machine hearing could advance much further.
Machine hearing is an underappreciated technology: with it you can determine where a person is, distinguish people from one another, identify positive or negative emotions, and understand meaning and context from the tone of a voice. Some people, for example, can detect a breakdown, or identify the type of cartridge a particular machine gun fires, by sound alone.
I think scientists will move further in this direction, because voice assistants, as more complex systems, simply need "machine hearing".
In my opinion, artificial intelligence will be in demand everywhere. It should be noted that chatbots are now at the peak of their popularity. Voice assistants such as Alice, Siri, Google, and others are all complex chatbots, because under the hood they work with text.
If we are talking about business solutions, what is needed is automation of communication. For example, in the Dodo Pizza contact center, a robot answers a third of all calls. That is serious help both in scaling a growing company and in speed of service.
Now people are so spoiled that they want a response to complaints and other requests anywhere, anytime, and as fast as possible. Our company EORA has found a solution for this. At the moment, not all communication can be automated, because there are always tricky questions with a catch. We have automated what can be automated, and where that falls short, our employees step in. Channels and social-media accounts, including feeds and comments, as well as sites (for example, Banks.ru), are gathered in one place, after which a special model ranks the message queue by several parameters.
Negative comments are processed within 5–6 hours, and positive ones within 12 hours. The client sees their Instagram story answered in a matter of minutes, and if the review was initially negative, a quick reaction immediately increases that person's loyalty.
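The ranking step can be sketched as follows. This is a hypothetical illustration: the parameters (sentiment and message age) and the ordering rule are invented for the example, not taken from EORA's actual model:

```python
# Ranking an incoming message queue so that negative and older
# messages are answered first.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    sentiment: str    # "negative" | "neutral" | "positive"
    age_hours: float  # time since the message arrived

def priority(msg):
    # Lower tuple sorts first: negativity outweighs age, and within
    # the same sentiment, older messages come first.
    sentiment_rank = {"negative": 0, "neutral": 1, "positive": 2}
    return (sentiment_rank[msg.sentiment], -msg.age_hours)

queue = [
    Message("Great service!", "positive", 1.0),
    Message("My order never arrived!", "negative", 0.5),
    Message("How do I change my address?", "neutral", 3.0),
]
for msg in sorted(queue, key=priority):
    print(msg.sentiment, "->", msg.text)
```

A production model would score messages with a classifier rather than a hand-written table, but the effect is the same: the queue is reordered so the riskiest messages reach a human first.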
Data science is also used very widely in banking for making forecasts, and as recommendation systems in online stores, where the site suggests products to customers based on their preferences.
Finally, one of the most widespread areas of artificial intelligence is "machine vision". One example is monitoring compliance with the speed limit on the roads: artificial intelligence uses a camera to read the car's license plate, runs it through the database, finds the owner, and issues an invoice for the speeding fine.
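That pipeline can be sketched in a few lines, assuming the camera's character-recognition step has already produced text. The plate format, the owners database, and the fine amount below are all invented for illustration:

```python
# Recognize a plate in OCR output, look up the owner, issue a fine.
import re

OWNERS = {"A123BC": "Ivanov I. I.", "X777XX": "Petrov P. P."}
SPEED_LIMIT = 60  # km/h
FINE = 500        # illustrative amount

def issue_fine(ocr_text, measured_speed):
    # Hypothetical plate pattern: one letter, three digits, two letters.
    plate = re.search(r"[A-Z]\d{3}[A-Z]{2}", ocr_text)
    if not plate or measured_speed <= SPEED_LIMIT:
        return None  # no plate found, or no violation
    owner = OWNERS.get(plate.group())
    if owner is None:
        return None  # plate not in the database
    return f"{owner}: {FINE} RUB for {measured_speed} km/h"

print(issue_fine("plate: A123BC", 82))
```

The machine-vision part, detecting the plate in the image and turning pixels into the string fed to `issue_fine`, is where the neural network does its work; everything after that is ordinary database lookup.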
Any text recognition is "machine vision". Every time a person looks into their phone's camera, the face, clothing, and background are segmented, which is why the photos come out so well. As soon as the user opens the camera app, it starts shooting continuously. Even when someone uses a mask on Instagram, a special neural network segments the face and predicts head turns so the mask can follow it in real time.
At the moment, one of the most dangerous technologies is the deepfake. Using it, you can present any information on behalf of, say, a well-known politician. To see what I mean, just type "deepfake politics" into the search bar. There are videos in which presidents of countries speak, if not in their own voices then in very similar ones, and move their lips quite convincingly.
Deepfake combines two concepts: deep learning, that is, the training of neural networks, and fake. It is fake, synthetically produced media content in which the faces and voices, usually of famous people, are superimposed on photo and video material of various kinds using artificial-intelligence technologies.
Also worth mentioning is the relatively recent GPT-3 model introduced by the OpenAI laboratory, which can generate credible news and texts in English that are almost indistinguishable from human writing. For this reason, OpenAI does not give full access to the model, fearing the technology could be used for disinformation.
However, the moment when machines will write texts for us is still very far away. Of course, they already know how to generate text, but when you ask a machine to write about specific events, the result consists exclusively of dry facts and is practically impossible to read.
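For a sense of the basic mechanism of machine text generation, here is a toy Markov chain that produces words from learned word-to-word statistics. It is nothing like GPT-3 in scale or quality, and the tiny corpus is invented, but it shows why naive generation reads so poorly:

```python
# A word-level Markov chain: learn which words follow which in a
# corpus, then generate by repeatedly sampling a plausible next word.
import random
from collections import defaultdict

corpus = ("the plane was delayed because of the weather "
          "the weather was bad because of the storm").split()

# Learn the transition table: word -> list of observed next words.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)  # deterministic run for the example
word, out = "the", ["the"]
for _ in range(8):
    # Fall back to the whole corpus for words with no recorded successor.
    word = random.choice(follows.get(word, corpus))
    out.append(word)
print(" ".join(out))
```

Every generated word is locally plausible (it really did follow the previous word somewhere in the corpus), yet the whole has no global plan or meaning. Large language models push the context window far beyond one word, which is why their output is so much more readable.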
However, the problem lies elsewhere. For example, an internet portal from Voronezh uses a neural network that can endlessly generate news verging on complete absurdity. The editors then post these untruthful materials on the web, and people readily believe them, since there is no fact-checking. This can lead to an uncontrolled amount of fake news.
Yes, work is underway to create such a neural network. But this is a struggle that may become endless: as soon as one side releases an algorithm that can detect a deepfake, the other side develops an algorithm to bypass it. And it will only keep progressing.
Facebook offers grants for the development of anti-deepfake systems in any direction, whether generated text, voice, or video. But how to deal with this technology is not yet entirely clear. And since the whole world now lives in an era of fake news and deepfakes, people who have facts and verified information are literally worth their weight in gold.
If you remember the story of Trump's election, his team used data science in its purest form: they divided people into many groups, identified the traits of these microgroups, and developed separate appeals and materials for each of them, and in this way they shook the system.
But it is precisely by observing social phenomena that you notice more and more how the line of normality is being erased. People are not used to the current environment, which is why psychological problems have multiplied. Society was relatively, or at least predictably, stable for quite a long time, and now technology has outpaced all visible speeds. The world accelerates more every day, and people don't really understand how to go on living.
Take, for example, the dissonance of people who look at themselves and their capabilities and then scroll through the Instagram feed. Afterward, they look at their life again and fall into depression, because they don't know what to do about it. It seems to them that the whole world is living well and they are living poorly, although this is a common cognitive error.
In fact, the prospects are quite attractive. Artificial intelligence is like a snowball flying very fast. We don't yet know how to control it; we are not used to living with it, and we are heading somewhere. It's hard to say whether that is positive or not. On the negative side, humanity has the resources to self-destruct, and that is genuinely terrifying.
Otherwise, we follow the path of healthy laziness, which means the entire routine will be automated. And in my opinion, creative professions will not disappear; on the contrary, they will flourish, because there will be more time and opportunity for them.
Artificial intelligence will make it possible to improve some processes, freeing up people's time. Once there is enough data, it will become possible to increase longevity. Optimal route construction and complex logistics will make it possible to reduce ticket prices further. And the generation of new molecules will lead to the invention of drugs ideally suited to an individual person.
We definitely have a more exciting future ahead of us than any generation that has lived before, because they didn't have even a fraction of the opportunities we have now.
You can listen to the full version of the audio podcast here.