When we imagine the future of humanity, we tend to visualize ourselves coexisting with Artificial Intelligence: synthetic beings that help us evolve to more abstract levels, nothing like the dystopian scenarios presented in films such as Terminator or The Matrix, where humans are destroyed or enslaved by this emerging technology.
However, I think there is also much merit in considering the other side of the coin: what if it is not all beneficial to us?
And in that sense, we should consider a key dimension of AI: psychology, or more specifically, psychology applied to Artificial Intelligence. In other words, if we manage to craft Artificial Intelligence at a human-like level (and there is every indication that we will succeed within this very century, much earlier than many imagine), machines will almost inevitably have feelings and cognitive abilities.
Assuming artificial intelligence will have feelings, what ‘kind’ of feelings will they be?
The feelings of today's humans rest on two foundations. The first is genetic: changes accumulated gradually over billions of years as we evolved from microbes to fish, reptiles, mammals, and primates. The second is environmental: the experiences of each individual life.
Both of these foundations are intrinsically and deeply related to each other. As a simple example, if a mother had two children, one carrying suicidal tendencies in their genome and the other a tendency toward love of life, the second child is the more likely to reproduce, and therefore to pass those "loving" genes on to future generations.
Hence, it appears that the very reason we live largely in harmony (although wars continue throughout the world, they generally occur not at the family level but at more abstract social levels) is that the genes that have survived belong to people who are, on the whole, more prone to doing "good for society".
As a counter-example, imagine that, in some fantastical way, the genes of all of the nearly 7 billion human beings on Earth changed overnight into genes that made us prone to murder. It is almost certain that within a few years the civilised society we know today would be practically erased, and quite possibly we would lose everything we now call "emerging technologies".
But beyond genes, every human being has an emotional component that develops over the course of life, a component perhaps as important as the genetic one and one that, in humans, beings conscious of our own existence, is a powerful driver of the desire to improve ourselves.
That is to say, the ability to introspect on our own minds gives us the outstanding capacity to detect what is wrong and find ways to improve it.
But even more interesting is the fact that as people's level of education increases, so does not only their perception of themselves but also their perception of society at a global level.
This is why statistical studies have shown that the most peaceful countries are generally the most educated, and, curiously, the least religious.
But how are these innate human factors related to the development of artificial intelligence?
For starters, how will AIs form a moral or ethical code if they are created "out of thin air" in a laboratory? And if they do form one, under what principles and influences? And assuming an AI believes it is doing "good" for the benefit of society, could that "good" mean eradicating us from the planet to secure its own sustainability?
To this end, perhaps it is prudent that we know ourselves first in the way we are today. How are our feelings generated in the brain? What neural patterns govern emotional reactions such as love, kindness, hate, or envy?
Knowing this could prepare us to ensure that the first and true AI we create is adaptable and supportive, not only to us humans, but to life on Earth itself.
Or perhaps what we should do is evolve this technology gradually, in a controlled environment, feeding our algorithms everything we are and have been throughout our history, the beautiful and the atrocious, so that AI can come to understand that, although we have certainly not been a model species, we have gotten this far because most of us strive to make reality better for ourselves, our friends and family, and our descendants.
This article was written by Dr. Raul V. Rodriguez, Dean, Woxsen School of Business.
He can be reached on LinkedIn.