A DOG IN A PICTURE
If you show a dog to a small child and say 'this is a dog', the child will easily recognize dogs for the rest of his or her life. This is thanks to the fantastic abilities of the human brain. How does it work? Nobody knows. It just happens.
Things get more complicated with machines and computers. There is no straightforward algorithm you can feed into a computer so that it recognizes a dog in a picture. Computers needed a vast number of dog photos to work out a recognition pattern. Did Google hire photographers to take as many dog pictures as possible? No, nothing could be further from the truth.
So millions upon millions of pictures were fed into computers. By comparing them with each other detail by detail, pixel by pixel, the computers learned what the distinctive features of a dog are. After examining many images and tuning the resulting algorithms, a machine knows how to recognize whether there is a dog in a picture or not. This process is called deep learning and is based on artificial neural networks. Artificial neural networks are said to simulate processes in the human brain. They are, however, not some sophisticated biotechnology. Artificial neural networks are just layers of simple functions with inputs and outputs. Just programming and computation.
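To make "layers of functions" less abstract, here is a minimal sketch of a single artificial neuron in Python. All the numbers are invented for illustration; the point is only that a neuron is nothing more than a weighted sum of its inputs passed through a simple activation function.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of the inputs,
    plus a bias, squashed by a sigmoid activation into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical example: three input signals and hand-picked weights.
output = neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.3], bias=0.1)
print(round(output, 3))
```

Training a network means nothing more mysterious than adjusting those weights and biases, over and over, until the outputs match the labels on the training pictures.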
Unlike classic algorithms, however, artificial neural networks do not require access to all knowledge about a problem. By comparing the inputs, they can independently find solutions that the programmer did not foresee. Indeed, the AI trainers did not always understand what the computers were doing in the process.
But the result of this work was evident. Computers learned to recognize dogs in pictures. It was admittedly a longer and more resource-consuming process than in the case of a small child, but it happened. Machines learned on their own how to recognize a dog in a picture. The only human contribution was delivering a sufficient number of images and feeding them into the computers.
And it was just the beginning.
FEEDING AND TRAINING THE AI
Today, of course, photos collected from Internet users are still used to train artificial intelligence. And it is not only about recognizing animals or objects; it is also about recognizing emotions. Many people post photos of themselves and their friends tagged #happy or #sad. AI trainers can also use stock photo libraries to pick out the images they need. The photos are labeled appropriately and fed into computers. Work on artificial intelligence that recognizes emotions is already so advanced that a bot conducting a recruitment interview can identify whether a candidate is lying or exaggerating during the interview.
This is not all, of course. We know, for example, that past medical records of patients diagnosed with specific diseases are used so that AI can learn to recognize those diseases. This applies to traditional medical diagnostics based on x-rays, for example, but also to some more innovative diagnostics. MIT researchers, in cooperation with Google, have been able to teach artificial intelligence to recognize from a voice sample whether a person will be susceptible to dementia or Alzheimer's disease in the future. Voice samples of people who already have these diseases and of those who do not were fed into computers, and the computers found features of voice modulation that distinguish people susceptible to these diseases from people who are not.
And some unexpected methods have been discovered along the way. It seems that today a machine can recognize susceptibility to Parkinson's disease by measuring the hip movements of a person walking. If the chart of hip movement is an even sine wave, the person is unlikely to be susceptible to Parkinson's. However, if the distribution is uneven, that is, some movements show more significant deviations than others or are more spread out over time, then the probability that the person will develop Parkinson's in the future increases. This feature only revealed itself to doctors after a machine drew attention to it during artificial intelligence training. The AI trainers, together with the doctors, then decided to devote more time to the topic and use artificial intelligence to investigate this phenomenon alone. A dedicated smartphone app tracks how a person walks, and the data is later analyzed in the AI training.
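The actual medical models are of course far more sophisticated, but a toy sketch can illustrate what "even" versus "uneven" movement means in numbers. Here the irregularity score is simply the coefficient of variation of some hypothetical stride intervals: a perfectly even walk scores near zero, and the score grows as some strides take much longer than others.

```python
from statistics import mean, stdev

def gait_irregularity(stride_intervals):
    """Coefficient of variation of stride intervals: 0.0 for a
    perfectly even gait, larger when some strides deviate strongly."""
    return stdev(stride_intervals) / mean(stride_intervals)

regular = [1.00, 1.02, 0.99, 1.01, 1.00]      # seconds per stride (made up)
irregular = [0.80, 1.40, 0.95, 1.60, 0.85]    # uneven, spread out in time

print(gait_irregularity(regular) < gait_irregularity(irregular))  # True
```

A real system would work on raw sensor traces rather than clean stride times, but the principle is the same: quantify how far the movement departs from a regular rhythm.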
Automotive companies working on autonomous cars have already installed appropriate equipment and software in many conventional vehicles to collect data on various road situations. Data on how drivers reacted to those situations is collected and later fed into artificial intelligence training systems. The main aim is that in the future artificial intelligence can respond to similar situations like a well-trained, experienced driver, and perhaps even better. Until sufficient data on specific road situations is collected, it will not be possible to create a fully autonomous car. But before that happens, miles upon miles must be driven by vehicles able to collect data. Just look at Tesla's declarations: the onboard computers are designed so that future software updates can automatically unlock new autonomous functionalities. Today, the autonomous functionalities of Tesla cars are limited. The onboard devices are mainly used to collect data, and this information, summed up over all Tesla cars currently in use, will make the software work better. It is a kind of crowdsourcing among Tesla car users.
And finally, when it comes to robotics, data collected by various robots and sent to the cloud is increasingly used to teach robots how to react in specific situations. Until now, industrial robots were caged, completely separated from human workers, and worked based only on algorithms programmed into their software. Currently, the effort is focused on teaching robots to imitate human actions and to cooperate with human workers shoulder to shoulder; these are the so-called co-robots. The data collected by the sensors robots are equipped with is transferred to the cloud and used to further train the artificial intelligence controlling and steering those robots. Another option is for a person, a so-called robot pilot, to steer a robotic arm with a joystick to pick up objects; this way, the machines learn those movements. Eventually, the machines will become autonomous, and the pilots will either no longer be needed or will teach the machines to perform more complicated tasks.
HOW IT WORKS
An artificial neural network is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal then processes it and can signal neurons connected to it. The neurons are typically organized into multiple layers. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. (Wikipedia)
Incoming impulses (inputs) are transferred in a domino effect from one neuron, or rather node, to the next. It is not a one-time process: each time a new input is added, the artificial network adapts in line with the new experience. It takes thousands upon thousands of data sets (inputs) to train a machine. It is like with a human. By seeing more and more x-rays labeled as cancerous, a young doctor gathers experience. With time, and with more and more x-rays examined, the young doctor becomes more experienced and makes diagnoses with more certainty. The same happens with neural networks. However, a computer's capacity to remember every case is far greater. What is more, a computer may spot details earlier, ones untrackable by the human eye. This way, an artificial neural network may outperform even the most experienced doctors. But before the algorithms set by an artificial neural network can be trusted to recognize an unlabeled x-ray, the accuracy ratio must be checked and approved by experienced doctors.
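The domino effect described above can be sketched in a few lines of Python. This toy network, with made-up weights, passes a signal from the input layer through one hidden layer to the output layer; real networks work the same way, only with millions of neurons and with weights learned from data rather than picked by hand.

```python
import math

def sigmoid(x):
    """Activation function: squashes any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron takes a weighted sum of
    all outputs from the previous layer, adds its bias, and activates."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def forward(inputs, network):
    """Pass the signal through each layer in turn (the domino effect)."""
    signal = inputs
    for weights, biases in network:
        signal = layer(signal, weights, biases)
    return signal

# Hypothetical toy network: 2 inputs -> 3 hidden neurons -> 1 output.
network = [
    ([[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9]], [0.1, -0.1, 0.0]),  # hidden layer
    ([[0.7, -0.4, 0.5]], [0.2]),                                  # output layer
]
result = forward([1.0, 0.5], network)
print(result)
```

Training would then mean nudging every weight and bias so that the output moves closer to the correct label, repeated over thousands upon thousands of examples.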
Here we come to the idea of reinforcement learning. It is not only that the machine recognizes patterns on its own. The artificial intelligence trainer assesses the outcomes, telling the computer whether the result is correct or not. If the result is not accurate, the machine learns that it should never make the same mistake again. The neural network algorithms are, in this case too, adjusted so that the machine does not repeat the error.
This technique is also used for processes that are not simple 0-vs.-1 decisions. One example of reinforcement learning is feeding a computer with more than 1,000 movies to teach it to converse with a human. The computer scans and learns all the dialogues from all the films. Then the AI trainer strikes up a dialogue with the computer. The computer has thousands upon thousands of dialogues to choose from, but the trainer starts a conversation of his or her own, not repeating any of the dialogues from any of those films. If the computer responds well, meaning the response is logical or simply continues the conversation thread started by the trainer, the machine is rewarded with a high score; it knows the answer was all right. If the response is false or wrong, the trainer's score is low, and the computer will not repeat that kind of response anymore. The algorithms in the neural network are adjusted correspondingly.
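A heavily simplified sketch of this scoring loop might look like the following. The responses and rewards here are hypothetical, and a real conversational system adjusts millions of network weights rather than a small score table, but the principle is the same: responses the trainer rewards get picked more often, and penalized ones fade out.

```python
import random

# Hypothetical candidate responses harvested from film dialogues.
responses = ["Tell me more.", "Why do you say that?", "Goodbye."]
scores = {r: 0.0 for r in responses}   # running average reward per response
counts = {r: 0 for r in responses}

def pick_response(epsilon=0.1):
    """Mostly pick the best-scoring response; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(responses)
    return max(responses, key=lambda r: scores[r])

def give_feedback(response, reward):
    """The trainer's score (1 = good reply, 0 = bad) updates the running
    average, so penalized responses are chosen less and less often."""
    counts[response] += 1
    scores[response] += (reward - scores[response]) / counts[response]

# The trainer rewards one reply and penalizes another.
give_feedback("Tell me more.", 1.0)
give_feedback("Goodbye.", 0.0)
print(pick_response(epsilon=0.0))  # -> "Tell me more."
```

The occasional random choice (the epsilon parameter) matters: without it, the machine would never try responses it has not been rewarded for yet.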
Photos by: Leah Kelley, Pixabay, LinkedIn Sales Network