Artificial intelligence, knowledge, wisdom, and emotions. Part two

Artificial intelligence is smart, so smart that in some fields it already outperforms humans. But it is, and for a long time will remain, only algorithms keeping within the rules of the game. Emotions can be recognized and expressed without empathy, based on recognition alone and on tapping a psychological library on how to respond appropriately. The same goes for acting wise. The Oracle in The Matrix was the best example. Acting wise was based on the same principle as faking emotional responses. If a question was too demanding, the answer was left wide open to interpretation. Typical responses we are taught in order to cope with questions we do not want to answer straightforwardly. Teachable. Apprehendable.

Talking about machines displacing or augmenting human performance, we need to ask ourselves where we are today, and where we will be in the foreseeable future, on the pathway from narrow to general artificial intelligence, and from cognitive to humanized intelligence. Must a machine be fully humanized to be a competitive species ready to take over? Would another set of features, focused on cognition and the ability to interact with humans on an intellectual and emotional level, not be enough? What is the future of humanity in this respect?


R&D on autonomous cars is already quite advanced. The artificial intelligence is trained to respond to different traffic situations. It is still too slow, and too limited in observing what happens all around the car. One of the major problems is the unpredictable behavior of traffic participants, who often do not play by the rules set for the traffic. With increasing computation capabilities, these limitations will, however, not be an obstacle. The main problem of autonomous cars today is about decisions, the so-called right decisions while facing an impossible choice, the so-called no-win scenario. Star Trek fans call it the Kobayashi Maru.

Citing Wikipedia here, the Kobayashi Maru is a training exercise in Star Trek designed to test the character of Starfleet Academy cadets in a no-win scenario. The primary goal of the exercise is to rescue the civilian vessel Kobayashi Maru in a simulated battle with the Klingons. The disabled ship is located in the Klingon Neutral Zone, and any Starfleet ship entering the zone would cause an interstellar border incident with all the possible consequences. The approaching cadet crew must decide whether to attempt the rescue of the Kobayashi Maru crew – endangering their own ship and lives – or leave the Kobayashi Maru to certain destruction and its crew to brutal interrogations. If the cadet chooses to attempt a rescue, the simulation is designed to guarantee that the cadet's ship enters a situation in which he or she has absolutely no chance of winning, escaping, negotiating, or even surviving. And Klingons are quite brutal folks. Now, think of programming AI with a zero-one choice.

In the case of autonomous cars, the impossible situation might look like this: in a split second, the car has to decide whether to crash into a massive tree, with a high probability of the driver's death, or to crash into a group of children who are just crossing the road, condemning them to certain death or to disability for the rest of their lives. Now add a scenario in which the driver is accompanied by his wife, who is in the ninth month of pregnancy.

As far as I can recall, the Kobayashi Maru was intended not to check which choice the cadet starship captain would make, but his or her ability to make such a choice and live with its consequences until the end of his or her life. A programmed machine in an autonomous vehicle would not have a problem here, but think of the people who programmed it with a deliberate choice, or the politicians who accepted the corresponding law allowing for such choices. Think of the psychological pressure on those people, the lawsuits against them, and all the other possible consequences.
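To make the point concrete, here is a toy, entirely hypothetical sketch (names and numbers invented for illustration) of how such a zero-one choice could end up in code. Notice that the machine does not decide anything: the decision sits in the harm weights that some human wrote down long before the crash.

```python
# Hypothetical sketch: the "choice" in a no-win scenario is not made by
# the car -- it is encoded in human-authored weights, written in advance.

def choose_maneuver(options, harm_weights):
    """Pick the option with the lowest weighted harm score.

    options: list of (name, harms), where harms maps a harm type
             (e.g. "occupant_death") to an estimated probability.
    harm_weights: the human-authored value judgment -- how heavily
             each kind of harm counts.
    """
    def score(harms):
        return sum(harm_weights[h] * p for h, p in harms.items())

    return min(options, key=lambda opt: score(opt[1]))

options = [
    ("swerve_into_tree", {"occupant_death": 0.90}),
    ("continue_ahead",   {"pedestrian_death": 0.95}),
]

# Whoever sets these numbers makes the Kobayashi Maru choice
# for every car that ships with this code.
harm_weights = {"occupant_death": 1.0, "pedestrian_death": 1.0}

print(choose_maneuver(options, harm_weights)[0])  # "swerve_into_tree"
```

Change either weight and the car "chooses" differently, which is exactly why the pressure falls on the programmers and lawmakers, not on the machine.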


From the tests on how a machine could beat even the best human chess players, we know that a machine is able not only to combine the historical moves that were fed into it in the teaching process, but also to design or create its own moves. We can say that in this respect the machine shows creativity. Or at least a sort of creativity.

So, let us think about what makes us humans creative. Is it only thinking of alternative solutions to those that have already been invented, all within the defined rules of the game? Or is it something else, or something more?

It is, for sure, being fluent in generating many possible solutions to a problem. These can be solutions that have already been defined and applied in the underlying context. But what about solutions applied in some other field that could be applied where we are? Was it fluency in physics that let German physicists take over the prediction of traffic jams where traffic engineers had not managed? For sure, not. At some point, somebody had to have the idea of applying molecular physics to describe traffic flows. Was it a Ph.D. student drawing similarities between a molecule and his or her car getting into, and later leaving, a spontaneous traffic jam? Or did a physicist and a traffic engineer simply meet to grab a bite and talk, or better said, carefully listen to each other, open to the interlocutor's ideas? Listening carefully and being open to other people's ideas makes creative people flexible. It is decisive for shifting with ease from one problem-solving strategy to another.

Generating solutions from within those already defined is not that impossible for algorithms and robots in the foreseeable future. The singularity is feasible by 2040.

If a machine works in a separate environment and does only what it was programmed to do, the number of solutions is limited. The machine's ability to generate more solutions increases when we connect it to some online library. Moreover, R&D robotics engineers are currently working on AI machines learning from the experience of other AI machines by connecting and exchanging experience in the cloud. Tapping extensive resources in the open-access cloud already makes machines more efficient than us humans at finding solutions applied somewhere before. But are the machines able to jump from one field of science to another? Not yet. Besides the still-missing all-around access and connectivity, the main issue is how to define a problem so as to make machines look elsewhere. Problem-solving is not about the solving itself. It is about asking a question that, by definition, does not limit the solutions possible to apply. It is about being open to different solutions. If we make machines define the problem on a higher level of abstraction, and later tap an extensive interdisciplinary library, shortlisting alternative solutions would not be complicated. The question is only whether we would leave the final choice to the machines or to a wiser human. A value judgment might be needed if some decisive factors cannot be expressed by any parameter, thus being beyond artificial intelligence's capabilities.
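The idea of defining a problem at a higher level of abstraction and then shortlisting from an interdisciplinary library can be sketched in a few lines. This is a made-up toy, not any real system: the library entries, feature tags, and the traffic-jam example are all invented for illustration.

```python
# Toy sketch: describe a problem by abstract features, not by discipline,
# then shortlist candidate approaches from a cross-disciplinary "library"
# by feature overlap. The final choice is left to a human.

LIBRARY = {
    "fluid-dynamics shockwave model":   {"flow", "congestion", "waves"},
    "molecular-physics particle model": {"flow", "particles", "interaction"},
    "queueing theory":                  {"congestion", "waiting", "service"},
    "ant-colony optimization":          {"routing", "swarm", "paths"},
}

def shortlist(problem_features, library, top_n=2):
    """Rank library entries by how many abstract features they share
    with the problem description."""
    ranked = sorted(
        library.items(),
        key=lambda item: len(problem_features & item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_n]]

# A traffic jam described abstractly, not as a "traffic engineering" problem:
print(shortlist({"flow", "congestion", "particles"}, LIBRARY))
```

Described as "flow of interacting particles" rather than "traffic", the problem surfaces physics-based models first, which is the jump between fields the text is talking about. Everything hangs on how the features are chosen, and that framing is still human work.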

But creativity reaches far beyond fluency or flexibility. Fluency and flexibility are needed to refine things. What brings the world forward is originality. It is seeing problems or needs that require action. It is seeing unique or different solutions to a problem. It is, in particular, searching where no one else has ever searched. Originality is about inspirations and inventions.

I can easily imagine artificial intelligence inventing a new drug that cures a disease by matching a mixture of ingredients, natural or synthetic, with the disease and its specifics analyzed in a deep learning process. It will still take time and computing power to feed computers with sufficient medical records and test results before machines could outperform humans in every case, or at least find the perfect combination in a split second, which would not be possible for a human. But when it happens, the possible solutions delivered by a computer in a split second would be difficult to obtain that quickly in a conventional laboratory. They would probably be obtained by human researchers at some point; the advantage of machines over humans will be the time needed to do so. But still, there will be limitations in this process. Computers will analyze only those libraries that have been put at their disposal, or those a human has asked them to tap. Going beyond them will require the human touch.

Let me give you another example. Could you imagine a machine inventing a new way to tap an energy source? That the sun is an energy source can be retrieved from any scientific library. It is a stated fact. But could you imagine a machine inventing the solar panel to be installed on a house roof? Artificial intelligence could have helped to define the basic parameters and make the necessary calculations specified by the inventors. But not invent it from scratch as it is. Yet again, the creativity of artificial intelligence lies within frameworks, or better said, rules of the game defined by humans. The AI can see unique solutions or refine solutions within those boundaries, but not beyond. It is trained to do, but not self-aware.

Inventions are about having an idea, either by sheer accident or out of some motivation, no matter whether the reason behind it is greed, laziness, curiosity, or something else. It is often about needs that could be satisfied better than they are today. The ancient Greek philosopher Plato claimed that necessity is the mother of invention. A need or problem encourages creative efforts to meet the need or solve the problem. Today, the work on artificial intelligence is about satisfying human needs. Human AI trainers encourage AI to solve human problems and make human life easier.


Still, a need, an idea, and motivation might not be enough. What many breakthrough inventions required was the willingness to act, to take the risk, and to overcome obstacles. Could you imagine artificial intelligence being that persistent? Could you imagine a machine that repairs itself after it overheats, because a human teacher told it to find the solution at any cost, no matter what? Could you imagine a machine that, in the process, is assisted by other computers that joined it voluntarily? Could you imagine a machine with such a personality, and friends ready to help out when it needs help? What did I just say? Personality? Friends? Help?


Through science fiction films such as the already classic Terminator, we got used to the idea that machines could gain their own consciousness and treat people as some lower species. This is still science fiction. Artificial intelligence today is not about self-consciousness. It is about what was fed into computers: either an algorithm written by a human hand, or information generated by humans and fed into computers so that machines could build their own algorithms out of it.

If artificial intelligence were programmed to capture susceptibility to cancer from the human genome, and thus ensure that people identified as susceptible could be treated at an early stage of the disease, it would greatly help humanity. But what if the programming is malicious from scratch? In ancient Sparta, weak babies were thrown off a rock into the sea to certain death. What if someone came up with the idea of doing such research just to immediately get rid of children susceptible to certain diseases, or prone to certain behaviors, in the future?

Another, more tangible example. If artificial intelligence were able to catch multiple layers of money-laundering transactions, the world would undoubtedly be able to deal more quickly with drug trafficking or arms trafficking. However, if artificial intelligence were used to find the private accounts of millionaires or billionaires who do not review those accounts thoroughly, someone could program it to skim amounts small enough to go unnoticed from individual accounts, and on a global scale amass great wealth. Today, in fact, we do not know what cybercriminals are working on.


So, artificial intelligence will be what people program it to be.

Generally speaking, we have two problems here. First, artificial intelligence can be programmed in a way unacceptable to the civilized world, e.g., to act negatively towards a group of people. Many people point here to the surveillance systems already developed in China. The second type of problem is, of course, cybercrime. Artificial intelligence is clever, actually smarter than man. If it is programmed by someone humanly clever, the combination can give a staggering result. Let us not be surprised that Elon Musk claims he is not afraid of artificial intelligence, but of what man could do with it.

Photos by Miriam Espacio, Pixabay, Donald Tong, Pixabay, Guy Kawasaki, Federico Orlandi