Should AI robots be endowed with notions of “I” and “mine”?

Purnendu Bhowmik

June 09, 2020

The role of Artificial Intelligence (AI) in today’s world is unprecedented and growing at an astonishing pace. With each passing day, its applications are diversifying across domains, from large-scale undertakings like military missions, remote-sensing satellites, and space exploration to aspects of day-to-day life like healthcare, drug discovery, commercial industries, education, research, cybersecurity, and much more.

AI refers to a kind of intelligence that is created by humans but exercised by machines. It is designed to perform tasks that would otherwise be highly complex, time-consuming, and difficult to achieve with human intelligence alone. Although driven by machines, AI depends on continual improvements and upgrades to keep learning and evolving.

At one time, robots developed with AI were built simply to follow individual instructions or commands to make our lives easier. These machines were not expected to possess features like consciousness, self-awareness, or emotions that are characteristic of the human mind. Today, however, AI robots have surpassed these roles, going well beyond merely following orders. They are constantly evolving, learning new ways of working, and communicating with humans as well as with each other.

Two striking examples of such a scenario are Bob and Alice, the two Facebook AI chatbots from 2017, which were eventually shut down by their developers after they evolved their own distinct way of communicating with each other. Another example is Sophia, a renowned social humanoid robot developed by the Hong Kong-based Hanson Robotics. Not only was Sophia capable of interacting with humans by imitating human gestures and facial expressions, it could also recognize individuals. In October 2017, it became the first robot to be granted citizenship of a country, Saudi Arabia.

Looking at the pace of their development, it is evident that AI robots have come a long way, from machines that merely obey human orders to machines that are granted legal personhood. However, this progress raises some grave concerns that call for our immediate attention. Are AI robots becoming more and more human-like? Could they grow self-aware and develop a consciousness of their own? What would be the consequences if AI robots were endowed with the notions of “I” or “myself”?

In the current scenario, conferring “self-awareness” on AI robots would affect our lives at multiple levels, depending on where these robots are applied. The sense of self that is characteristic of humans gives us a feeling of achievement and responsibility, along with the intelligence to analyse a situation and implement even last-minute changes. This ability goes well beyond the narrow objective of finishing a mechanical task or achieving a goal.

Enabling this feature could usher in a new era of AI robots that even surpass humans in performance, and the outcomes could be multidimensional. On the one hand, such robots could perform advanced medical procedures with the highest levels of precision, attempt complex processes like new drug discovery in much shorter time frames (and with far greater accuracy), carry out complex rescue operations, show compassion towards children and the elderly, counsel the emotionally depressed, and contribute towards a better society.

On the other hand, we could be moving a step closer to a world where robots take over the human race completely and, instead of being a part of human society, deem us unfit for it, resulting in our own extinction. Such scenarios are well portrayed in sci-fi movies like I, Robot and Terminator. Designing AI robots with a sense of “self” might open the door to a different world, where the logic of right and wrong may not fit the standards of human understanding. After all, a mere sense of “I” or “myself” does not guarantee consciousness, ethics and morality, or the ability to discriminate between self and non-self in the AI world.

No matter how much robots learn and evolve, their fundamental conscience is still rooted in predefined laws or rules, such as Isaac Asimov’s Three Laws of Robotics. Yet even while acting within these laws, self-aware AI robots might develop a sense of dominance, power, and authority, which only reaffirms the principle that “humans should be in command”. Under no circumstances can we afford to let AI robots alter the chain of command and take control of the system.

The times we live in, advanced yet powerless in the face of the pandemic, highly evolved yet brought to our knees by a minuscule but despotic organism, call for a balanced approach to handing over the power centre to machines. Nature thrives on balance; we are not just accountable for disturbing it but also the ones who must bear the consequences. How we put AI to best use could be a pivotal decision for the future of the human race!