Robots have pervaded popular culture since the dawn of the technological revolution; for nearly a century, authors, filmmakers, artists, and conspiracy theorists have prophesied that robots will someday break free from mankind’s control and wreak apocalyptic havoc. The popular TV show Black Mirror tells stories of futuristic tech run amok, from powerful and dangerous brain implants to robotic dogs with incredible killing efficiency. Terminator portrays a post-apocalyptic world destroyed by machines, and Ex Machina tests the limits of human-robot interaction and boundaries of our definition of ‘robot’ in grotesque ways. These examples represent the near-universal fear that robots will become so advanced in the future that they will be able to turn against their creators with unstoppable force. What drives this obsession with the future destructive potential of robots? Is it truly possible for robots to become conscious? To answer these questions and more, we will investigate the possibility of robots becoming self-aware and the implications of potential robotic consciousness through the lens of the scientific and cultural history of robots, the status of AI today, and budding technologies that could change the trajectory of machine learning.
Origins of the Robot and Portrayals in the Past
The concept of a robot was conceived long before technology allowed the creation of the autonomous ‘thinking’ machines we call robots today. The combination of mythological motifs of an all-powerful divine being breathing life into inanimate materials, Frankenstein’s warning against humanity attempting to play the role of God, and the advent of industrialization spurred fascination with the animation of technology in the early 20th century (1). The term robot itself, derived from the Czech word robota meaning “forced work” or “compulsory service”, was first used in a play and short story by science fiction author Karel Capek in 1920 (2). In the century that followed, science fiction authors investigated the potential of humanoid robots, androids, and cyborgs. One such author, Isaac Asimov, rose to fame as he published a series of novellas with a common theme: all robots have three unbreakable laws programmed into their ‘brains.’ The laws, used as a basis for countless science fiction works in the decades to follow, are:
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law (2).
Because Asimov’s robots are unable to think for themselves or weigh the future consequences of their actions, strict adherence to these laws produces many moral complications. Asimov’s grim outlook on robots’ cognitive capacity, combined with the relative rigidity of AI today, still influences popular opinion on whether robots will ever truly match humans’ mental abilities (2).
Robots and AI Today
N.B. For the purposes of this article, a “robot” is defined as a programmable machine that can sense its surroundings in some way, “think” or analyze the information presented to it, decide on a solution or action, and act on the solution in a way that manipulates its physical surroundings.
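The sense-think-decide-act cycle in this definition can be sketched as a simple control loop. The example below is purely illustrative: the one-dimensional “world,” the light-seeking behavior, and all function names are invented for this sketch and do not describe any real robot.

```python
# A toy robot on a one-dimensional strip of floor, seeking a target
# brightness value. Each pass through the loop senses, thinks,
# decides, and acts -- the four capabilities in the definition above.

def sense(world, position):
    """Sense: read the value of the world at the current position."""
    return world[position]

def think(reading, target):
    """Think: analyze the reading by computing the error from the target."""
    return target - reading

def decide(error):
    """Decide: choose a movement (+1, -1, or stay) based on the error."""
    if error > 0:
        return 1
    if error < 0:
        return -1
    return 0

def act(position, step, world):
    """Act: move, manipulating the robot's place in its surroundings."""
    return max(0, min(len(world) - 1, position + step))

def run(world, target, position=0, max_steps=20):
    """Repeat the cycle until the target is sensed or steps run out."""
    for _ in range(max_steps):
        reading = sense(world, position)
        if reading == target:
            break
        position = act(position, decide(think(reading, target)), world)
    return position
```

On a world of steadily increasing brightness, such as `list(range(6))`, calling `run` with a target of 4 walks the robot to the position holding that value. Real robots differ in scale, not in kind: the same loop underlies everything from a Roomba to the da Vinci Surgical System, just with far richer sensing and acting.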
Contrary to Asimov’s and Capek’s visions but consistent with the origin of the word ‘robot’, artificial intelligence and robots today primarily serve the advancement of humanity. With superior processing capacity, physical endurance, and precision of movement, robots have infiltrated healthcare, agriculture, industry, research ventures, households, transportation, and defense. Already, Intuitive Surgical’s da Vinci Surgical System performs surgeries at hundreds of hospitals around the United States, Waymo’s autonomous cars are gaining traction in the automobile industry, and NASA’s Robonaut is being sent to the International Space Station to carry out dangerous missions (3). Robots have rescued humans from catastrophic situations, explored the deepest reaches of the ocean, and even vacuumed the floors of millions of homes. Humanoid robots and androids have also made strides in mimicking and analyzing human emotion. For example, SoftBank Robotics’ NAO has proven an effective companion for autistic children, and Honda’s 3E robot series emphasizes ‘empathetic’ design; one day, the 3E-A18 model may soothe distressed children and help regulate their emotions.
As close as we may seem to creating robots with emotions and consciousness of their own, AI’s proximity to human capabilities has landed many robots in the “uncanny valley”: the phenomenon in which a human is repulsed by the almost-but-not-quite-humanlike qualities of a machine (4). We currently sit between the age of endearing machines with distinctly mechanical features, such as R2-D2 from Star Wars, and that of terrifyingly realistic androids like Sophia (5). Even Siri could fall into the category of ‘uncanny’: her manner of speaking closely resembles a human’s, but not so closely that one would mistake her for a person. Our position within the uncanny valley reflects our failure to pass the original Turing test: “if an AI machine could fool people into believing it is human in conversation…then it would have reached an important milestone” (6). Even though supercomputers have defeated chess grandmasters and systems like IBM’s Watson assimilate data more quickly and efficiently than any human, progress in the emotional side of computing, termed “affective computing”, remains slow (5).
Even if a robot passes the Turing test with flying colors, is it really thinking like a human? What abilities would AI have to possess in order to truly resemble humans in emotional and empathetic capacity?
Defining Consciousness: Where the Lines Blur
The question of what constitutes consciousness has been asked by philosophers for millennia, and now software designers and mechanical engineers are confronting it as they develop AI. Some thinkers focus on the psychological aspect of consciousness, arguing that true consciousness results from self-awareness and the ability to reflect on past decisions (7). Others, most notably Christof Koch and David Chalmers, argue that consciousness arises from experiences and interactions with the outside world combined with an inner sense of purpose; for a robot to think and act as freely as a human, it would have to process the sensory information it obtains subjectively, outside the constraints of an algorithm (8). Still others believe that a machine cannot behave like a human unless it is treated as one and introduced to human culture, including religion; this would allow it to become more than the sum of its mechanical parts and even develop a soul (7). Of course, some pragmatic scientists consider defining consciousness as a guideline for AI irrelevant, because artificial intelligence will always be artificial. Moreover, the mental abilities of humans developed over millions of years of evolution along a natural biochemical trajectory, so pragmatists argue that it is impossible to mimic this level of complexity in the span of a few decades (1).
From all of these varying definitions and qualifications of consciousness, it is evident that our robots and AI today are nowhere near the singularity, the fabled moment at which a machine develops goals beyond what was programmed into it. Still, many tech startups and even government organizations are using new approaches to dig us out of the uncanny valley (4).
How Can We Improve AI?
Today, most AI systems rely on a set of processes generalized under the term “deep learning” to collect and analyze data (9). Deep learning involves recognizing patterns, identifying objects and symbols, and perceiving the outside world, but these processes are driven entirely by an algorithm that many engineers criticize as inflexible. Project Alexandria attempts to combat the rigidity of AI algorithms by introducing a component of human intelligence that is commonly overlooked: common sense. Drawing on facts, heuristics, and observations, the project is working toward AI machines with the fluidity of the human mind and a more flexible approach to solving real-world problems (9). Similarly, the startup Kyndi is building more adaptable AI, focusing on advanced reasoning rather than conventional data consolidation. DARPA (the Defense Advanced Research Projects Agency, a branch of the U.S. Department of Defense) is developing an initiative called Machine Common Sense with goals similar to Project Alexandria’s, recognizing the importance of more fluid AI for the future of robotics (9). Although true robotic independence and consciousness remain in the relatively distant future, rapid strides are being taken to eliminate the barriers between science fiction and reality, so it is worth considering the cultural and ethical implications of conscious AI.
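To make “algorithm-driven pattern recognition” concrete, here is a deliberately minimal sketch: a single artificial neuron learning the logical AND pattern with the classic perceptron learning rule. This toy is far simpler than the deep networks described above, and every name in it is invented for illustration, but it shows the same character of learning: the behavior is entirely determined by a fixed update rule and training data.

```python
# One artificial neuron: a weighted sum of inputs pushed through a
# hard threshold. "Learning" means repeatedly nudging the weights
# in the direction that reduces each prediction error.

def predict(weights, bias, x):
    """Weighted sum of the inputs, then a 0/1 threshold decision."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=50, lr=0.1):
    """Perceptron rule: for each error, shift weights toward the target."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# The AND pattern: output 1 only when both inputs are 1.
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

However long it trains, this neuron can only ever divide its inputs with a single straight line, a small-scale analogue of the inflexibility that critics of purely algorithmic learning point to: the rule never steps outside itself to ask whether the pattern makes sense.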
The future of our robots and AI has been the subject of speculation by philosophers, scientists, and screenwriters alike. In the Black Mirror episode “Be Right Back,” even the most advanced android, one that easily passes the Turing test, cannot truly mirror a human’s personality and mannerisms, and this failure results in emotional catastrophe. Ex Machina explores the consequences of confining highly intelligent robots and using them for research: the android Ava ultimately murders her creator and escapes captivity with a vengeful spirit. AI expert Aleksandra Przegalinska speculates that the best outcome would be the optimization of programming without the side effect of gaining consciousness, while the worst-case scenario would resemble the future depicted in The Terminator: a violent rebellion against human oppressors (4). If robots were to gain consciousness without acting out violently, a political divide could still arise over the ethical treatment of these machines. Regardless of the eventual outcome of our work with AI, politicians and civilians alike should be aware of the rate of progress being made, as well as the divide between fact and fiction.
Hannah is a first year in Holworthy planning to concentrate in MCB or Neuroscience.
1. Ambrosino, Brandon. What Would It Mean for AI to Have a Soul? BBC, 18 June 2018, www.bbc.com/future/story/20180615-can-artificial-intelligence-have-a-soul-and-religion (accessed Oct. 5, 2018).
2. Clarke, R. Asimov’s Laws of Robotics: Implications for Information Technology, Part I. Computer, vol. 26, no. 12, 1993, pp. 53–61.
3. Robonaut. NASA, 24 Sept. 2018, https://robonaut.jsc.nasa.gov/R2/ (accessed Oct. 9, 2018).
4. Bricis, Larissa. A Philosopher Predicts How and When Robots Will Destroy Humanity. Techly, 22 Sept. 2017, www.techly.com.au/2017/09/22/philosopher-predicts-robots-will-destroy-humanity/ (accessed Oct. 5, 2018).
5. Caughill, Patrick. SophiaBot Asks You to Be Nice So She Can Learn Human Compassion. Futurism, 12 June 2017, https://futurism.com/sophiabot-asks-you-to-be-nice-so-she-can-learn-human-compassion (accessed Oct. 10, 2018).
6. Ball, Philip. The Truth about the Turing Test. BBC, 24 July 2015, www.bbc.com/future/story/20150724-the-problem-with-the-turing-test (accessed Oct. 4, 2018).
7. Robitzski, Dan. Artificial Consciousness: How to Give a Robot a Soul. Futurism, 25 June 2018, https://futurism.com/artificial-consciousness (accessed Oct. 11, 2018).
8. Robitzski, Dan. The Frustrating Quest to Define Consciousness. Scienceline, 25 June 2017, https://scienceline.org/2017/06/frustrating-quest-define-consciousness/ (accessed Oct. 11, 2018).
9. Lohr, Steve. Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So. The New York Times, 20 June 2018, www.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html (accessed Oct. 10, 2018).
Image credit: Wikimedia Commons