Artificial intelligence, knowledge, wisdom, and emotions. Part one


You just got an e-mail. The news is bad. You are worried. The little thing that accompanies you around your apartment sprints to your kitchen. On coming back, it serves you hot dark chocolate with just a little milk. The little thing got it right. The hot chocolate will make you feel better. The cute little thing …

What did I say? Cute? Yes, it was cute that somebody took care …


ARTIFICIAL INTELLIGENCE AND EMOTIONS

Leaving aside for a moment the sensorimotor issues in robot R&D, the scene is not at all impossible. But ‘cute’ and ‘taking care’ is an elegant description that barely reflects what really happened. We just crossed a thin line between computation and psychology.

high angle photo of robot

A cute little thing, is it not?

For the last couple of years, social media users have uploaded billions of pictures of people in different states of mind, happy, sad, or angry, carefully hashtagging them. The tech people had enough resources to teach machines to recognize a happy, sad, or angry face. It is no longer a secret that psychologists all around the world help AI trainers refine their work, for example by defining what facial expressions and other traits accompany lying and deceit, or what hides behind the so-called poker face. (More on AI training >>>).
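In essence, those hashtagged photos are labeled training examples. A minimal sketch of the idea, assuming (purely for illustration) that each face has been reduced to two hand-picked features, mouth curvature and brow angle, is a nearest-centroid classifier that learns one prototype per hashtag label:

```python
# Toy nearest-centroid "emotion classifier": learn one prototype per
# label from hashtagged examples, then label a new face by distance
# to the nearest prototype. The features (mouth_curvature, brow_angle)
# and the numbers are invented for illustration only.
from collections import defaultdict
import math

def train(examples):
    """examples: list of ((x, y) features, label) -> centroid per label."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (x, y), label in examples:
        s = sums[label]
        s[0] += x; s[1] += y; s[2] += 1
    return {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def classify(centroids, face):
    """Return the label whose centroid lies closest to the face."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], face))

labeled_photos = [
    ((0.9, 0.1), "happy"), ((0.8, 0.2), "happy"),    # upturned mouths
    ((-0.7, -0.1), "sad"), ((-0.9, 0.0), "sad"),     # downturned mouths
    ((-0.2, -0.9), "angry"), ((0.0, -0.8), "angry"), # lowered brows
]
centroids = train(labeled_photos)
print(classify(centroids, (-0.8, 0.05)))  # a downturned mouth -> "sad"
```

Real systems use deep networks over raw pixels rather than two hand-crafted features, but the supervised principle, labeled examples in, a decision rule out, is the same.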

It is, of course, separate work done by different people in different parts of the world. So technically, the different psychological abilities of machines have not yet been combined in a single computer. We can only ask: for how long? If deep learning results are available in the cloud, it is only a matter of time before machines can tap any psychological knowledge ever fed into any computer in the world.


Bumblebee, Paramount

A scene from Bumblebee by Paramount. Is Bumblebee getting emotional or just following an algorithm?

So, the ‘cute little thing’ can recognize our state of mind: whether we are sad, happy, furious, or worried. The rest is just simple algorithms for how to behave in a given situation. It is enough that some smart tech people, assisted by psychologists, feed the computers a list of all possible ways to cope with sadness. Any missing measures could be retrieved from an online science library. Serving hot chocolate or chocolate ice cream to stimulate endorphins, the happiness hormones, would rank quite high on this list. Another solution would be making (simulating) sad eyes and patting the sad person’s head … like Bumblebee does in this picture. Watching the film, I kept humanizing the robot, but closer scrutiny of how this kind of behavior would be possible for a transformer robot yields a more sober explanation.
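That "list of coping measures" really can be this simple. A sketch, with all states, responses, and rankings invented for illustration: each recognized state of mind maps to a ranked list, and the robot takes the highest-ranked action it is physically able to perform.

```python
# A minimal sketch of emotion -> response selection: a ranked lookup
# table plus a capability check. Every entry here is hypothetical.
COPING_MEASURES = {
    "sad":   ["serve hot chocolate", "make sad eyes", "pat head"],
    "happy": ["play upbeat music"],
    "angry": ["give space", "speak calmly"],
}

def respond(state_of_mind, available_actions):
    """Pick the highest-ranked coping measure the robot can perform."""
    for action in COPING_MEASURES.get(state_of_mind, []):
        if action in available_actions:
            return action
    return "do nothing"

print(respond("sad", {"serve hot chocolate", "pat head"}))
# -> "serve hot chocolate"
```

No empathy anywhere in sight: recognition feeds a table lookup, and the table was written, or retrieved, by someone else.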

Is the ‘cute little thing’ emotionally intelligent? Is this empathy? No, it surely isn’t. The cute little thing was taught to recognize different states of mind and was given algorithms for how to react in each situation. What we perceive as empathy is just computation. The learning process could be much the same as feeding a computer 1000 films, a psychochemistry library, and a library of recipes, and then using reinforcement learning to train the proper reactions and responses. We could just as well ask whether the same is not happening in the human brain of a person perceived by others as compassionate and benevolent … No, it is not philosophy. It is the basics of psychology.
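The reinforcement learning mentioned above can be sketched in a few lines. In this deliberately simplified, hypothetical setup, the machine tries responses to a "sad" person, receives a simulated reward (did they cheer up?), and learns which response works; the reward probabilities are invented for illustration.

```python
# Toy reinforcement learning (epsilon-greedy bandit): try responses,
# observe a simulated reward, keep a running value estimate per action.
# The "world" (reward chances) is entirely made up for this sketch.
import random

random.seed(0)
actions = ["serve hot chocolate", "tell a joke", "stay silent"]
values = {a: 0.0 for a in actions}   # estimated value of each action
counts = {a: 0 for a in actions}

def simulated_reward(action):
    # Pretend hot chocolate usually helps; the rest rarely do.
    chances = {"serve hot chocolate": 0.9, "tell a joke": 0.4, "stay silent": 0.1}
    return 1.0 if random.random() < chances[action] else 0.0

for step in range(2000):
    if random.random() < 0.1:            # explore occasionally
        a = random.choice(actions)
    else:                                # otherwise exploit the best so far
        a = max(values, key=values.get)
    r = simulated_reward(a)
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental average

print(max(values, key=values.get))  # the learned "proper reaction"
```

After enough trials, the machine reliably reaches for the hot chocolate, not because it cares, but because that action accumulated the highest reward estimate.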


monk surrounded by children

A monk giving alms to the poor. Does the picture say anything about his personality?

You can teach a young person, you can teach a child, how to behave compassionately. You can teach him or her how to come across as benevolent. Of course, for many people, being compassionate and benevolent may lie deep within the personality. But it is not something that cannot be learned. Look at this picture. We see a monk, probably a Buddhist monk, giving alms to the poor. Does this gesture tell us anything about his personality? No, not really. We just see a monk doing his duty. You may need certain personality traits to become a Buddhist monk, but looking at this picture, we do not really know whether he does it because he considers it his calling, or simply knows that a monk should help the poor and that this is one of his daily duties set by someone else.

Of course, if we got to know the person better, we would be able to recognize whether it is personality or merely superficial behavior. Still, it is a kind of behavior that is teachable for a human, and it will be teachable, or programmable, in the case of a machine. It will not be genuine, but would that matter in the case of the ‘cute little thing’? Do we psychologize in each and every situation to figure out our interlocutor’s motivations?


ARTIFICIAL INTELLIGENCE AND INTELLIGENCE

battle black blur board game

Does playing according to preset chess rules exhaust everything an intelligent mind can do?

Chess. A game for intelligent people who not only know the rules but, above all, have learned to think in blocks. Seeing a specific pattern on the chessboard, they either know by heart what the outcome will be after the next 10 or 20 moves, or they can quickly imagine it in various scenarios. They have either read about previous chess games and learned how the best players coped with different patterns on the board, or they are so good that they can invent their own patterns and moves. And intuition is not the best advisor here. You have to be able to quickly predict the next dozen moves and their possible results.

It is no longer a secret that machines today can beat the best chess players in the world. But it took time until machines learned to quickly analyze various patterns and possible scenarios. They needed a vast input of historical chess games, move by move. It has since been shown that the moves carried out by machines are not merely repetitions of what had happened on chessboards earlier in history. Machines are able to make their own moves, never before played by human chess players. The data on historical games already fed into computers was sufficient for machines to outperform the best human players.
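At its core, "predicting the next dozen moves" is game-tree search. Chess itself is far too big for a short sketch, so here is the same lookahead principle on a toy subtraction game (players alternately remove 1–3 stones; whoever takes the last stone wins), solved by exhaustively searching the full game tree:

```python
# Exhaustive game-tree search on a toy game: from a pile of stones,
# players alternate removing 1-3; taking the last stone wins. A chess
# engine applies the same idea with far deeper, heavily pruned search.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position."""
    # A position is winning if some move leaves the opponent losing.
    return any(m <= stones and not can_win(stones - m) for m in (1, 2, 3))

def best_move(stones):
    """Return a winning move if one exists, else None (position is lost)."""
    for m in (1, 2, 3):
        if m <= stones and not can_win(stones - m):
            return m
    return None

print(best_move(10))  # -> 2 (leave 8, a losing position for the opponent)
print(best_move(8))   # -> None (every move loses against best play)
```

The machine's "own moves, never before played" arise the same way: the search simply evaluates positions on its own, with no need for a human precedent.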

young male bookworm reading old book in library

Is reading books only about learning the rules of the game?

But let us now turn the tables and look at the other picture. Does a chess player have to read that many books on chess games and strategies? Do all books contain only well-described moves and results? Is book content only zeros versus ones? No, certainly not. Books often do contain content that we can learn straightforwardly: zeros or ones. But books are not only detailed instructions on how to do things. Cookbooks maybe are. Books are also about things we can read between the lines. What, for example, about books describing events that each of us can interpret in our own way? What about thoughts that push us to change the way we see and comprehend things happening around us?

One standard way of teaching, during psychology classes, how others think and interpret what they see is to ask the participants to tell a story about what they see in a picture. Believe me, sometimes those stories are more contradictory than you could have imagined. Things that are soothing for one person might be sad for another, and for yet another, they are about hope. And all of them are looking at precisely the same picture. Such a picture could be not only described but also interpreted by a machine that has seen 1000 movies and remembers their content. Still, interpretation is one thing, and the inspiration to tell one’s own story is another.

So, is the cognitive intelligence of a chess player enough to go intelligently through his or her life? Is it possible to predict the moves of our counterparts in daily life the way it is possible to predict moves on the chessboard?

In chess, we have the rules of the game. In real life, there are some codes of conduct, but in fact, there are no real rules to the game of life. We sometimes think we are playing by the rules while actually being in a game that plays by different rules. The catch is to pick up on what is actually happening and either adjust or take one’s own stance. And often, that goes beyond just knowing the rules of different games and merely combining them.


THE CONCEPT OF ARTIFICIAL INTELLIGENCE

I am not a philosopher. I know only a little of the theories and ideas about artificial intelligence. But there are two basic definitions that I find useful when talking about the current and possible future abilities of artificial intelligence. One is about technicalities. The other is about psychology.

We talk about narrow artificial intelligence when referring to software designed by humans that is capable of performing specific tasks within parameters set by humans. So, algorithms either defined by human hand or derived from a deep learning process are applied in processes or circumstances defined by human designers. General artificial intelligence is about machines that could perform any human task. It is a kind of self-directed ability of software, or algorithms, to apply data, information, and insights obtained in one field to other fields, without any creative human asking the machine to do so. It is, among other things, about machines being creative. About perception and changing context.

Another useful term here is the singularity. It is about machines that would be as smart as people, or even smarter: a kind of superintelligence. Superintelligence is defined as a technologically created cognitive capacity far beyond that possible for humans. Some AI training specialists claim that developing the proper software will take until 2030, and that feeding computers with the appropriate algorithms will take another ten years, until 2040.

But is the pathway from narrow artificial intelligence to fully fledged general artificial intelligence a pathway from data, through information, to knowledge and ultimately to wisdom? Do machines need to be wise to outperform a human? Do they need to be able to read between the lines? Do machines need to be equipped with the features we refer to as knowledge? Would crossing the threshold of general artificial intelligence automatically mean that machines would be like humans? Does singularity mean that machines would be self-aware? Or just that they could outperform humans in cognitive thinking while still being far from what makes a human? Does a machine have to be like a human at all?


Let us look at another approach, one that compares artificial intelligence to the kinds of intelligence specific to humans: cognitive, emotional, and social intelligence.

Analytical artificial intelligence would be consistent with cognitive intelligence, or simply analytical skills. It would be comparable to narrow artificial intelligence: the ability to analyze delivered data, including the AI making its own moves or drawing its own conclusions. We see this ability clearly as machines outperform doctors in some diagnostic fields, or as machines beat the best human chess players. In business, it can be the solutions proposed by machines when planning the daily routes of human couriers delivering our shopping orders; we already know that machine planning finds higher efficiency levels than human planners do. Another example is AI assistant lawyers that quickly find all possible alternative solutions in a lawsuit, making human assistant lawyers dispensable.
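The courier-route example is a classic optimization problem. Production planners use far more sophisticated solvers, but the flavor of "machine planning" can be sketched with a simple nearest-neighbor heuristic over a handful of invented delivery stops:

```python
# A sketch of machine route planning: the greedy nearest-neighbor
# heuristic for visiting delivery stops. Real courier planners use
# much stronger optimization; the coordinates below are invented.
import math

def plan_route(depot, stops):
    """Starting at the depot, always drive to the nearest unvisited stop."""
    route, here, todo = [depot], depot, list(stops)
    while todo:
        nxt = min(todo, key=lambda s: math.dist(here, s))
        route.append(nxt)
        todo.remove(nxt)
        here = nxt
    return route

depot = (0, 0)
stops = [(5, 5), (1, 0), (2, 1), (6, 4)]
print(plan_route(depot, stops))
# -> [(0, 0), (1, 0), (2, 1), (5, 5), (6, 4)]
```

Greedy nearest-neighbor is not optimal in general, which is exactly why route planning became a showcase for machine optimization: better algorithms squeeze out efficiency that neither this heuristic nor a human dispatcher would find.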

Yet analytics is also about defining the data sets and looking for the data and information that would allow analyzing an underlying phenomenon. In the human world, a specialist is somebody who knows how to do things, knows the analytical tools, and is fluent in applying them. An expert analyst would see the bigger picture, which ultimately ends with the ability to apply, or even create and adopt, an alternative approach, sometimes from a completely different field. The latter features would refer to the notion of general artificial intelligence rather than to narrow artificial intelligence. But an expert would not only know the tools. He or she would know the reasons for applying them, as well as the shortcomings of the methodology. A critical approach would be the outcome. Could general artificial intelligence compete with that? Is knowing why to do something within the general artificial intelligence framework, or is it still only knowing how? And here, for the first time, we touch the line between data or information and knowledge.

Human-inspired artificial intelligence would be consistent with cognitive intelligence but also, at least to some extent, with emotional intelligence. We already said that it is possible to feed a machine enough psychological data to make it copy human emotions or respond appropriately to most situations requiring what we call emotional intelligence. But emotional intelligence is by far not only about perceiving and showing emotions. Machines are smart. And machines can, in a smart way, perceive and copy human reactions.

woman wearing red hat and sunglasses

The Oracle in The Matrix was a wise one. Still, it was a semblance of wisdom created by the machines …

But is being smart what makes the world go round? What about motivation, persistence, and the other so-called drivers? Emotional intelligence is the ability to change the status quo, turn the tables, or break the wheel. You could say artificial intelligence has already changed the status quo. No, it was not the AI. It was humans, out of ambition, greed, or even laziness: the kinds of emotional drive a machine would hardly possess.

And finally, there is humanized artificial intelligence, self-aware and self-conscious, corresponding to so-called social intelligence. Machines with deep insight, able to weigh good against bad. Compassionate. Benevolent. Making the right choices …


Photos by: Alex Knight, Paramount Pictures, Suraphat Nuea-on, Pixabay, Andrea Piacquadio, Nashua Volquez