Artificial intelligence, knowledge, wisdom, and emotions


You just got an e-mail. The news is bad. You are worried. The little thing that accompanies you around your apartment sprints to your kitchen. When it comes back, it serves you hot dark chocolate with just a little milk. The little thing got it right. The hot chocolate will make you feel better. The cute little thing …

What did I say? Cute? Yes, it was cute that somebody took care …

high angle photo of robot

 


ARTIFICIAL INTELLIGENCE AND EMOTIONS

Setting aside for a moment the sensorimotor challenges in robot R&D, the scene is not impossible at all. But ‘cute’ and ‘taking care’ are kind descriptions that barely reflect what really happened. We have just crossed a thin line between computation and psychology.

To better understand what this ‘cute little thing’ just did, we must come back to the simple question of how a machine can recognize a dog in a picture. If you show a dog to a little child and say ‘this is a dog’, the child will easily recognize dogs for the rest of his or her life. That is thanks to the fantastic abilities of the human brain. How? Nobody knows. It just happens.

It gets more complicated with machines and computers. There is no straightforward algorithm you can feed into a computer so that it recognizes a dog in a picture. Computers needed a vast number of dog photos to work out a recognition pattern. It was actually social media that helped out. For years, millions of Internet users have uploaded, among other things, pictures of dogs, tagging them with a dog hashtag. Millions upon millions of those pictures were fed into computers. By comparing them with each other detail by detail, pixel by pixel, the computers learned what the distinctive features of a dog were. The IT people did not always understand what the computers did in the process, but the result was evident: computers learned to recognize dogs in pictures. We call this process ‘machine learning’ or ‘deep learning’. The analytical process the machines go through is comparable to that of the human brain; it just takes longer and consumes far more time and resources than in the case of a little child. But it happened. Machines learned to recognize a dog in a picture largely on their own. The human help consisted mainly of delivering a sufficient number of labeled pictures and feeding them into the computers.
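To make this a bit more concrete, here is a minimal sketch of what such learning from hashtag-labeled pictures can look like. It assumes the PyTorch and torchvision libraries and a hypothetical folder hashtag_photos/ with dog/ and not_dog/ subfolders; a real pipeline would differ in scale and detail, but the principle is the same: show the network labeled examples and let it adjust itself until it stops making mistakes.

```python
# A minimal sketch (not a production pipeline): fine-tune a pretrained
# image model so it learns to tell "dog" from "not_dog" pictures.
# Assumed, hypothetical folder layout:
#   hashtag_photos/dog/*.jpg, hashtag_photos/not_dog/*.jpg
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing: resize, crop, convert to tensors, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# The hashtag acts as the label: folder names become class labels.
dataset = datasets.ImageFolder("hashtag_photos", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on millions of photos and replace
# its last layer with a two-class head: "dog" vs "not_dog".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# "Comparing pictures detail by detail, pixel by pixel" happens here:
# the network adjusts millions of weights to reduce its labeling errors.
model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "dog_classifier.pt")
```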

What does recognizing a dog in a picture have to do with a ‘cute little thing’ knowing our state of mind and bringing us hot chocolate when we seem sad? The answer is plain and simple. For the last couple of years, social media users have uploaded billions of pictures of people in different states of mind, happy or sad or angry, carefully hashtagging them. The IT people had enough material to teach the machines to recognize a happy, sad, or angry face.
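The same recipe carries over almost unchanged. As a hedged illustration, the snippet below assumes a classifier trained exactly like the dog example above, only on hashtag-labeled photos of faces; the weights file emotion_classifier.pt, the label set, and the photo name are all hypothetical.

```python
# A minimal inference sketch, assuming a classifier trained like the
# dog example but on hashtag-labeled photos of faces (hypothetical names).
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

EMOTIONS = ["angry", "happy", "sad"]  # assumed alphabetical folder order

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))
model.load_state_dict(torch.load("emotion_classifier.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def read_face(path: str) -> str:
    """Return the most likely emotion label for one photo."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(image)
    return EMOTIONS[scores.argmax(dim=1).item()]

print(read_face("owner_right_now.jpg"))  # e.g. "sad"
```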

We can safely assume that other psychological knowledge has already been fed into computers. It is no longer a secret that MIT researchers have taught computers to recognize illnesses from voice samples, or that bots can conduct a job interview and tell whether the candidate is sincere or is lying or exaggerating.


It is, of course, separate work done by different people in different parts of the world. So technically, the different psychological abilities of machines are still not combined in a single computer. We can only ask: for how long? If the deep learning results are available in the cloud, it is only a matter of time before machines can tap any psychological knowledge initially fed into any computer in the world.

Already today, we have machines in the same system learning from the experience of other machines. Impossible? Just look at how Tesla teaches cars to drive themselves. The onboard device in one car shares its experience of different traffic situations with the cloud, letting other vehicles learn from that experience. OK, in practice the process is still a bit more complicated than that. Tesla people still have to help out. But for how long before their role is reduced to mere oversight?
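To illustrate the idea (and only the idea; this is not Tesla's actual pipeline), here is a toy sketch in which several simulated cars each improve a shared model on their own local 'experience', and a cloud step merges the results and hands them back to the whole fleet. All numbers and names are made up.

```python
# A toy illustration of "fleet learning" (not any real manufacturer's system):
# each car improves a shared model on its own local driving data,
# the cloud averages the results and sends the new model back to everyone.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights: np.ndarray, situations: np.ndarray,
                 outcomes: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient step of a linear model on one car's local experience."""
    predictions = situations @ weights
    gradient = situations.T @ (predictions - outcomes) / len(outcomes)
    return weights - lr * gradient

# A shared model living "in the cloud" (here: 5 made-up input features).
cloud_weights = np.zeros(5)

for round_ in range(10):
    updates = []
    for car in range(3):  # three cars, each with its own traffic situations
        situations = rng.normal(size=(20, 5))
        outcomes = situations @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
        updates.append(local_update(cloud_weights, situations, outcomes))
    # The cloud merges what every car has learned and broadcasts it back.
    cloud_weights = np.mean(updates, axis=0)

print(np.round(cloud_weights, 2))  # drifts toward the underlying pattern
```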

Bumblebee, Paramount

A scene from Bumblebee by Paramount. Is Bumblebee getting emotional, or is it just following an algorithm?

So, the ‘cute little thing’ can recognize our state of mind, whether we are sad, happy, furious, or worried. The rest is just simple algorithms for how to behave in a given situation. It is enough that some smart IT people, assisted by psychologists, feed the computers with a list of possible ways to cope with sadness; any missing measures could be retrieved from an online science library. Serving hot chocolate or chocolate ice cream to stimulate endorphins, the happiness hormones, would rank quite high on that list. Another option would be making (simulating) sad eyes and patting the sad person’s head …
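Here is a toy sketch of that last step, with an entirely hypothetical playbook of coping measures: once the state of mind has a label, choosing a reaction is little more than a ranked lookup filtered by what the robot can actually do at that moment.

```python
# A toy "what to do about it" layer (a sketch, not a real product):
# once the machine has a label for the owner's state of mind, picking a
# reaction is a ranked lookup plus a check of what is actually available.
COPING_PLAYBOOK = {
    # ranked lists, highest-priority measure first
    "sad":     ["serve_hot_chocolate", "serve_chocolate_ice_cream",
                "make_sad_eyes", "pat_head"],
    "angry":   ["give_space", "play_calm_music"],
    "worried": ["serve_hot_chocolate", "suggest_breathing_exercise"],
    "happy":   ["do_nothing"],
}

def choose_reaction(emotion: str, available_actions: set) -> str:
    """Return the highest-ranked measure the robot can actually perform."""
    for action in COPING_PLAYBOOK.get(emotion, ["do_nothing"]):
        if action in available_actions:
            return action
    return "do_nothing"

# The kitchen is stocked, so the top-ranked measure for sadness wins.
print(choose_reaction("sad", {"serve_hot_chocolate", "pat_head"}))
# -> "serve_hot_chocolate"
```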

Is the ‘cute little thing’ emotionally intelligent? Is this empathy? No, it surely isn’t. The cute little thing was taught how to recognize different states of mind and was given algorithms for how to react in each situation. What we perceive as empathy is just computation. We could, of course, ask whether the same is not happening in the brain of a person perceived by others as compassionate and benevolent … No, that is not philosophy. It is basic psychology.


THE CONCEPT OF ARTIFICIAL INTELLIGENCE

I am not a philosopher. I know only a little about the theories and ideas surrounding artificial intelligence. But there are two basic sets of definitions that I find useful when talking about the current and possible future abilities of artificial intelligence. One is about technicalities; the other is about psychology.

We talk about narrow artificial intelligence when referring to software designed by humans that is capable of performing specific tasks within parameters set by humans. Algorithms, whether defined by human hand or derived from a deep learning process, are applied in processes or circumstances defined by human designers. General artificial intelligence, by contrast, is about machines that could perform any human task. It implies some kind of self-directed ability of software to apply data, information, and insights obtained in one field to other fields without any creative human asking the machines to do so. It is, among other things, about machines being creative, about perception, and about changing context.

But is the pathway from narrow artificial intelligence to fully-fledged general artificial intelligence a pathway from data, through information, to knowledge, and ultimately to wisdom? Do machines need to be wise to outperform a human? Do machines need to be equipped with the features we refer to as knowledge?

Let us look at another approach that compares artificial intelligence to kinds of intelligence specific to humans – cognitive, emotional, and social intelligence.

Analytical artificial intelligence would correspond to cognitive intelligence (analytical skills). It is comparable to narrow artificial intelligence: the ability to analyze the data delivered. And yet analytics is also about defining the data sets and looking for the data and information that allow analyzing the underlying phenomenon. In the human world, a specialist is somebody who knows how to do the job, knows the analytical tools, and is fluent in applying them. An expert not only knows the tools but also the reasons for applying them and the shortcomings of the methodology; a critical approach is the outcome. The other feature of an expert analyst is seeing the bigger picture, which ultimately ends in the ability to apply, or even create and adopt, an alternative approach. These latter features point toward general artificial intelligence rather than narrow artificial intelligence.

woman wearing red hat and sunglasses

The ability to act wisely. Was the Oracle in The Matrix a wise one or a piece of software? For those who do not remember: she was software made to act as a wise one, and she managed to do so for quite a long time before she was exposed. Another AI specialty that might be gained in a deep learning process.

Human-inspired artificial intelligence would be consistent with cognitive intelligence but also, at least to some extent, with emotional intelligence. We have already said that it is possible to feed a machine with enough psychological data to make it mimic human emotions or react appropriately to most situations that require what we call emotional intelligence. But emotional intelligence is far from being only about perceiving and showing emotions. Machines are smart. But is being smart what makes the world go round? What about motivation, persistence, and the other so-called drivers? Emotional intelligence is the ability to change the status quo, to turn the tables, or to break the wheel. You could say artificial intelligence has already changed the status quo. No, it was not the AI. It was humans, out of ambition, greed, or even laziness: kinds of emotional intelligence a machine would hardly possess.

And finally, there is humanized artificial intelligence, self-aware and self-conscious, corresponding to so-called social intelligence: machines with deep insight, able to weigh good against bad. Compassionate. Benevolent. Able to act wisely. Data desperately soaked up the advice of Picard, whom he considered a wise man. Still, some of his decisions seemed to be a 0-versus-1 choice …

Talking about machines displacing or augmenting human performance, we need to ask ourselves where we are today, and where we will be in the foreseeable future, on the pathway between narrow and general artificial intelligence, and between cognitive and humanized intelligence. Must a machine be fully humanized to be a competitive species ready to take over? Would another set of features, focused on cognition and the ability to interact with humans on an intellectual and emotional level, not be enough?


IS CREATIVITY REPLACEABLE?

The line between narrow and general artificial intelligence can be crossed only when machines become creative, or at least something like creative. So let us think about what makes us humans creative.

It is, for sure, being fluent in generating many possible solutions to a problem. These can be solutions that have already been defined and applied in the context at hand. But what about solutions applied in other fields that could be applied where we are? Did fluency in physics alone make German physicists take over the prediction of traffic jams where traffic engineers did not manage? Surely not. At some point, somebody had to have the idea of applying molecular physics to describe traffic flows. Was it a Ph.D. student drawing similarities between a molecule and his or her car getting into, and later leaving, some spontaneous traffic jam? Or did a physicist and a traffic engineer meet to grab a bite and just talk, or rather carefully listen to each other, open to the interlocutor’s ideas? Listening carefully and being open to other people’s ideas is what makes creative people flexible. It is decisive for shifting with ease from one problem-solving strategy to another.

Generating solutions from among those already defined is not that impossible for algorithms and robots in the foreseeable future. If a machine works in a closed environment and does only what it was programmed for, the number of solutions is limited. Its ability to generate more solutions increases when we connect it to some online library. And R&D robotics engineers are already working on AI machines that learn from the experience of other AI machines by connecting and exchanging experience in the open space. Tapping the extensive resources of the open-access space already makes machines more efficient than us humans at finding solutions that have been applied somewhere. But are machines able to jump from one field of science to another? Not yet. Besides the still-missing all-around access and connectivity, the main issue is how to define a problem so that machines look elsewhere. Problem-solving is not about the solving itself. It is about asking a question that, by definition, does not limit the solutions that could be applied. It is about being open to different solutions. If we made machines define the problem at a higher level of abstraction, and then tap an extensive interdisciplinary library, shortlisting alternative solutions would not be complicated. The question is only whether we would leave the final choice to the machines or to a wiser human. A value judgment might be needed where some decisive factors cannot be captured by any parameter, and are thus beyond the capabilities of artificial intelligence.
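As a hedged illustration of that last thought, the sketch below matches an abstractly worded problem against a tiny, hand-made library of one-line solution summaries from different fields and shortlists the closest ones by simple word overlap. Every entry and the scoring itself are stand-ins for the far richer representations a real system would need.

```python
# A toy sketch of "define the problem at a higher level of abstraction,
# then shortlist solutions from an interdisciplinary library".
# The library entries and the scoring (simple word overlap) are made up.
def words(text: str) -> set:
    return set(text.lower().split())

# Hypothetical one-line summaries of solutions from different fields.
LIBRARY = {
    "traffic engineering: widen roads to increase flow capacity":
        "increase capacity of a congested network of flows",
    "molecular physics: model particles in a flow with phase transitions":
        "describe dense flows of interacting units that jam spontaneously",
    "logistics: reschedule deliveries to off-peak hours":
        "shift demand in time to avoid peaks in a shared resource",
}

def shortlist(problem: str, top_n: int = 2) -> list:
    """Rank library entries by overlap with the abstract problem statement."""
    target = words(problem)
    scored = sorted(
        LIBRARY,
        key=lambda title: len(words(LIBRARY[title]) & target),
        reverse=True,
    )
    return scored[:top_n]

# The problem is stated abstractly, not as "build a wider road".
abstract_problem = "interacting units in a dense flow jam spontaneously"
for candidate in shortlist(abstract_problem):
    print(candidate)  # the physics entry comes out on top
```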

adorable blur bookcase books

What makes a human creative? Fluency, openness, and flexibility are programmable. But is the ability to deliver original solutions replaceable? Do machines have enough personality and motivation to be original?

But creativity reaches far beyond fluency or flexibility. Fluency and flexibility are needed to refine things. What brings the world forward is originality. It is seeing unique or different solutions to a problem. It is seeing problems or needs that require action. Originality is about inventions. I can easily imagine artificial intelligence inventing a new drug to cure a disease by matching a mixture of ingredients, natural or synthetic, against the disease in a deep learning process, with results that would be difficult to obtain in a conventional laboratory. But I cannot imagine a machine inventing a new way to tap an energy source. Could you imagine a machine inventing the solar panel? Artificial intelligence could have helped to define the basic parameters and make the necessary calculations specified by the inventors, but not invent it from scratch. Inventions are about having an idea, either by sheer accident or out of some motivation, be it greed, laziness, or curiosity.

Still, an idea and motivation might not be enough. What many breakthrough inventions required was the willingness to act, to take risks, and to overcome obstacles. Could you imagine artificial intelligence with such a personality?


Photos by: Alex Knight, Paramount Pictures, Pixabay, Nashua Volquez