We can see AI at work every day in the form of virtual personal assistants, which are embedded in almost every smartphone today. Whether it's Alexa, Siri or Google Now, these virtual assistants have made our everyday lives simpler and can organise and manage a big chunk of our day for us, from setting up meetings to planning travel routes and checking weather conditions.
The key breakthrough in consumer virtual assistant systems came with Apple's introduction of Siri in 2011, followed by Google Now (2012) and Microsoft Cortana (2014). (King, 2015) More recently, Amazon's Alexa has also joined the virtual personal assistant game. As AI developed further, these systems became more and more sophisticated and can now answer complex questions and carry out previously unimagined tasks. In May 2018, Google held its annual I/O developer conference, where it unveiled its latest technology – Google Duplex, a new experimental update to the Google Assistant that sounds uncannily human.
Sundar Pichai, the company's CEO, introduced the technology himself in the keynote speech and demonstrated it with two phone call recordings to show people what they could expect. (Welch, 2018) While the new Assistant can currently perform only specific tasks, such as booking an appointment or making a hotel reservation, the fact remains that it does so while sounding like a human being, which is the biggest leap any virtual personal assistant system has yet taken. What caught people most off guard was how natural the AI sounded, with the Assistant even inserting fillers such as "um" and "mm-hmm" to mimic human conversation.
Although the demonstrations were met with awe at the conference, quite a few people were quick to notice the ethical and moral problems this could pose, and what it would mean for the future of AI and social interaction; many took to social media to voice their concerns. (Smith, 2018) People believed Google had made a dangerous and amoral move, because one thing that was clear from the demonstrations was that the person on the other end of the line had no idea they were speaking to a machine rather than a human being. This raised the question of whether it is ethical not to let a human know when they are communicating with non-human technology. Wouldn't this corrode the concept of human trust?
Another issue raised was the potential misuse of the technology for telemarketing. (Haridy, 2018) There is also the question of how it might negatively affect human behaviour by encouraging laziness and dependency. People criticised Google for living in its "Silicon Valley vacuum" and not understanding the real-world implications of the technology. After the initial backlash, however, Google released further statements saying the demonstration was only experimental and that it would ensure full transparency by letting people know when they were interacting with its software.
AI as we know it
So what, really, is artificial intelligence? Although science fiction tends to portray it as robots with human-like characteristics, AI refers to intelligence demonstrated by machines, as opposed to natural human intelligence. Everything from virtual assistants like Siri and Alexa to self-driving cars falls under this term. AI as we know it is narrow (or weak) AI: software that can perform a single task better than a human. John Searle proposed the distinction between weak and strong AI; the latter aims at a general, human-like intelligence that can perform all tasks better than humans. (Searle, 1980)
For instance, a self-driving car can take you where you need to go, but it cannot solve a mathematical equation for you. This is the narrow or weak AI we are mostly familiar with, and it tends to go unnoticed because encounters with such software have become commonplace in the digitised world we live in.
AI developed in the 20th century at the intersection of fields such as cybernetics, control theory, operations research, psychology and the new-born discipline of computer science. (Natale & Ballatore, 2017) Norbert Wiener defined cybernetics in the 1940s as "the entire field of control and communication theory, whether in the machine or in the animal". (Wiener, 1948) Although the concept has gone through various phases in history, in current popular use it exists as "virtuality" and is most often associated with the development of artificial intelligence, virtual technology and cyberspace. It claims that the "flow of information in a system can be studied independently of the media in which that information exists", and that information can also flow between media. (Fulton, 2007)
Information no longer needed to be attached to a material substance; this reduces human beings to stores of information, and the materiality of our bodies can be seen as irrelevant. It also raises the question of machines becoming more like humans: if we are merely media that store information, why can we not also consider any computer or machine that stores information human?
Fears
Nick Bostrom and Eliezer Yudkowsky discuss the kinds of questions that advances in AI could pose to society: "The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves" (2011).
Ever since its inception, the concept of AI has been surrounded by fears which, as Perri 6 notes, centre on three specific themes: the drastic fear of AI becoming all-powerful and taking over our society; the opposing fear that we will become so reliant on these systems that any systemic failure would bring "society to its knees"; and the fear that we will hand over much of our decision-making to these machines, degrading our moral values and judgement in the process (Perri 6, 2001).
The reactions to Google Duplex mirror some of these fears, as people question the consequences of a virtual personal assistant so human-like that it can mimic the nuances of human conversation. Age-old debates surrounding artificial intelligence have resurfaced, and people are terrified of how this will impact our lives. Is there a not-too-distant future in which all conversations take place between our personal assistants? When does a technology become too smart for us to contain? Can AI be abused to the extent that human lives are put in danger? There is no doubt that Google has taken AI technology to new heights with its latest software, but is this something to celebrate or to fear? One thing is certain: as artificial intelligence develops and becomes more and more sophisticated, our relationship with these systems will also become more complex. Wonder or worry – only time will tell.