Robotic Care of Children, the Elderly, and the Sick (with Oren Etzioni)

  • Amitai Etzioni
Open Access
Chapter
Part of the Library of Public Policy and Public Administration book series (LPPP, volume 11)

Abstract

As Artificial Intelligence technology seems poised for a major take-off and changing societal dynamics are creating a high demand for caregivers for elders, children, and the infirm, AI-based robotic caregivers may well be used much more often. This chapter examines the ethical concerns raised by the use of AI caregivers and concludes that many of these concerns are avoided when AI caregivers operate as partners to human caregivers rather than as substitutes. Furthermore, most of the remaining concerns are minor and are faced by human caregivers as well. The chapter then argues that because AI caregivers’ systems are learning systems that could therefore stray from their initial guidelines, a layer of AI-based oversight is necessary to protect patients. Such layers of oversight are already employed across many areas of human service provision, and a similar method could help to ensure that AI caregivers act in ways that are ethical, legal, and in accordance with predetermined guidelines.

17.1 The Demand for Humanoid Robots

Humanoid robots are increasingly used in childcare, eldercare, psychotherapy and other kinds of medical care, and as tutors. They are also used as ‘chat bots’ by commercial enterprises. Despite studies that show they are quite effective, most of these kinds of robots are not currently in wide use, mainly because their costs are high. However, there are indications that these costs will decline, a trend familiar from the development of many other technologies. Moreover, given the rapid aging of several major societies (e.g. Japan, China, and Germany) and their difficulties in retaining a sufficient number of qualified human caregivers, there is a great potential demand for robotic eldercare. Similarly, given the large number of working single parents and two-parent households where both parents work outside the home, there is a great potential demand for robotic childcare. The high cost of medical care also favors the use of robots.

Last but not least, after decades in which the development of AI swung from being overhyped to disappointing, AI seems currently ready for a major take-off. This is revealed by AI systems beating human masters of chess (IBM’s Deep Blue), Jeopardy! (IBM’s Watson), and even Go (DeepMind’s AlphaGo), and by driverless cars that have already traveled over one million miles in the US with very few incidents.

This chapter will focus on the use of AI caregivers. The chapter first explores several of the ethical concerns that are raised with respect to AI generally as well as to AI caregivers specifically. The chapter then provides an overview of current developments and applications of robotic caregivers, before distinguishing between different kinds of AI. The subsequent three sections consider standards for evaluating AI caregivers, frameworks for AI-human interaction in caregiving, and oversight systems for supervising AI caregivers.

17.2 Challenges

The increased use of robots and their improving intelligence have raised major concerns, which have led several scholars to recommend that their use be regulated, and in some areas even avoided. Several uses, we shall see, are considered outright unethical.

One major line of concern is the fear that robots will outsmart humans and come to dominate humanity, if not destroy it (Bostrom 2014; Hawking et al. 2014). We examined these concerns elsewhere and suggested that while the threats are highly speculative, the human costs of slowing down this work would be considerable. The benefits of AI—from car safety to robotic surgeries—are significant, real, and immediate (Etzioni and Etzioni 2016a, b). Hence, these fears are not explored here.

A rather different kind of concern has been raised by a number of social scientists; these concerns apply well beyond robotic caregivers but, we shall see, apply especially to them. The main scholar who articulates this second line of concern is Sherry Turkle of MIT, who has long warned about the ill effects of computer-mediated interactions and relationships and the dark side of all matters concerning the internet. She is deeply concerned that technologies that increase human interactions with machines do so at the expense of human-to-human contact. Turkle explains: “[Sociable technology] promises friendship but can only deliver performances. Do we really want to be in the business of manufacturing friends that will never be friends?” (2010, p. 101). And: “Often, our new digital connections offer the illusion of companionship without the demands of friendship…We are not sure whom to count on. Virtual friendships and worlds offer connection with uncertain claims to commitment. We know this, and yet the emotional charge of the online world is very high” (Turkle 2011). Social scientists refer to this phenomenon as the false sense of community—“pseudo-gemeinschaft.”

Moreover, humanoid robots can and do stray from what their programmers set out for them to do. Microsoft created Tay, a chat bot designed to learn through online conversations. Less than 24 hours after being launched, however, Tay began supporting genocide, denying the Holocaust (Price 2016), and agreeing with Hitler (Reese 2016). Microsoft had to rush to take the chat bot offline. According to the founder of Unanimous AI, Louis Rosenberg, “When Tay started training on patterns that were input by trolls online, it started using those patterns. This is really no different than a parrot in a seedy bar picking up bad words and repeating them back without knowing what they really mean” (Reese 2016). It is important to distinguish between a bug and a design flaw; Tay did not necessarily malfunction, but functioned according to the instructions given by Microsoft’s programmers, who failed to properly design Tay’s Natural Language Processing filters.

A third line of concern—and the focus of this chapter—is raised about humanoid robots that care for the infirm, the young, the old, and many others. Some of the best articulations of these concerns are in the work of Noel Sharkey, who has both authored and co-authored with Amanda Sharkey several very carefully crafted and well-documented articles on the subject at hand.1 They find that humanoid robots may violate privacy. “Sometimes conversations about issues concerning the parents, such as abuse or injustice, should be treated in confidence. A robot might not be able to keep such confidences from the parents before reporting the incident to the appropriate authorities” (Sharkey and Sharkey 2010). These robots are equipped “[w]ith massive memory hard drives,” such that “it would be possible to record a child’s entire life. This gives rise to concerns about …who will be allowed access to the recordings?” (Sharkey and Sharkey 2010). Authoritarianism is another concern. To illustrate, Sharkey and Sharkey posit an “extreme case” by asking readers to “imagine a child having doughnuts taken from her because the robot wanted to prevent her from becoming obese” (2010, p. 166). And they hold that it may well be unethical to create a machine that causes people to believe it is capable of genuine emotional engagement, though they note that “[i]t is difficult to take an absolutist ethical approach to questions about robots and deception” (2010, pp. 172–3). Other ethical concerns resulting from robotic eldercare raised by Sharkey and Sharkey include “feelings of objectification and loss of control,” “loss of personal liberty,” and “infantilisation” (2012).

Robert Sparrow, a professor at Monash University, and Linda Sparrow write that “robots are incapable of meeting” the elderly’s “social and emotional needs,” and that using robots to care for them would cause the elderly to experience a “decrease in the amount of human contact,” which “would be detrimental to their well-being” (Sparrow and Sparrow 2006, p. 141). Furthermore, R. Sparrow asserts that robotic pets’ lack of authenticity makes them unethical, and that “[i]t is perverse to respond to the fact that older persons are increasingly socially isolated with the invention of robot pets rather than by multiplying the opportunities for human contact…” (2002, p. 308).

The chapter from here on is dedicated to addressing the question of whether these humanoid robots should be used as caregivers; if yes—for what kinds of care, how they should be held accountable, and how they can best relate to human caregivers. That is, it seeks to provide an ethical and legal evaluation of, and guidelines for, the use of these robots.

Specifically, the chapter first establishes the domain of the kind of robots that are at issue. The chapter next finds that several of the major concerns that apply to one kind of humanoid robot apply much less, if at all, to other kinds. The chapter then suggests that the wrong criteria have been used for the evaluation of these robots and that if properly evaluated they ‘score’ much more favorably. The chapter then specifies ways humanoid robots can be held accountable, both in legal and in ethical terms. The chapter closes with what arguably is the most important consideration for all the issues at hand, the relationship between robots that provide human services and humans who provide human services.

17.3 Introducing AI Caregivers

One common attribute of many but far from all computerized caregivers that draw on AI programs is that they display simulated emotions.2 This display is deemed necessary for bonding with human subjects: for them to become emotionally invested in these robots, to trust them, to feel that the robots are empathetic or sympathetic, and so on (see also Tanaka et al. 2007; Turkle et al. 2006; Leyzberg et al. 2011; Gonsior et al. 2011). However, as van Wynsberghe points out, “There is no capability exclusive to all care robots” (2012, p. 409). Instead, care robots can differ in their capability for locomotion; voice, face and emotion recognition; and degree of autonomy. Coeckelbergh (2010) distinguishes between “shallow” and “deep” care: what distinguishes the latter from the former is the kind of feelings that accompany human care. He holds that AI can provide only shallow care because it does not actually care about the patient. Coeckelbergh notes that deep care is not guaranteed even from human caregivers, but that they are at least able to provide it.

The term humanoid robot, used to refer to this kind of caregiver, is misleading because it assumes that the computerized caregivers must have features that make them seem human, for instance simulated faces, legs, and arms. The Merriam-Webster dictionary defines the term humanoid as “having human form or characteristics” (Merriam Webster); masks from preliterate ages, for instance, were said to have humanoid features. In fact, many robots have no such features.

Moreover, evidence shows that human beings can become emotionally invested in inanimate objects that have no anthropomorphic features. An obvious example is a cuddly toy, such as a teddy bear (Sharkey and Sharkey 2010, pp. 161–190). One of our sons could not possibly go to sleep or to the playground without his ‘gaki,’ a well-worn small blanket, and another was attached to “Jack,” a piece of fur he found, even more strongly than his father was attached to his dark blue, white-topped convertible Sting Ray. The movie Her captures well the attachment one can form to a voice that emanates from a screen, basically a piece of software. In short, just as one can become addicted to anything (though some materials are more addictive than others), one can also become attached to anything (though if it displays affection, attachment is more likely to take place).

Many, indeed most, of the computerized caregivers are not robots—defined as “a machine that looks like a human being and performs various complex acts (as walking or talking) of a human being” (Merriam-Webster). Many are merely software programs that can run on any computer, tablet, or smartphone; for instance, programs that provide computerized psychotherapy (discussed below).

For this key reason we suggest that all AI-enriched programs that provide care and seem affective to those they care for be included. To cover both humanoid robots and the much larger number of these computer caregivers, we shall refer to them as AI caregivers.3 Most have no visible human-like features, make no visible gestures, and do not ‘reach out and touch someone,’ but instead use mainly their voices to convey affect. We choose our words carefully: we refer to the presentation of emotions that leads humans who interact with AI caregivers to believe that these machines have emotions. Without this feature, AI caregivers are unable to perform much of their care.

Following this definition has another major benefit: it excludes from the domain under study all programs that provide exclusively or mainly cognitive services. A prime example is Google Assistant. It provides answers to questions, gives customized suggestions to fit the user’s preferences, and helps with tasks such as booking flights or making dinner reservations, among other things. Google Assistant presents no emotions; although people can find expression of emotions in anything, there is nothing in Google Assistant that fosters such projections. Other mainly cognitive, AI-driven services include Apple’s Siri and Microsoft’s Cortana, which have been designed to show a human touch and a rather limited sense of humor, but which still do not qualify as AI caregivers because they are used mostly as sources of information. (In short, these programs are not caregivers and hence are not examined further here; online tutors are also mainly cognitive agents and are also not discussed.)

Chat bots constitute a somewhat more complicated case. There is no formal definition of what constitutes a chat bot. However, to the extent that these are mainly interactive, informative agents, they fall into the cognitive category and are not AI caregivers. This is true even if they are given some mannerisms to make them seem friendlier, such as greeting one by one’s first name when one queries them, say, about the best place to have dinner. Other chat bots are designed to display emotions in order to manipulate those they interact with, acting like humans who work in sales.

An extreme position holds that all such interactive relationships between humans and AI caregivers are unethical because, by definition, AI caregivers display emotions that they do not have, and hence the relationships are “false” and “inauthentic.” Robert Sparrow makes an applicable point about robot pets: “If robot pets are designed and manufactured with the intention that they should serve as companions for people, and so that those who interact with them are likely to develop an emotional attachment to them, on the basis of false beliefs about them, this is unethical” (2002). Sharkey and Sharkey offer a more nuanced view; they grant that illusion is a part of Artificial Intelligence, but draw a line between imagination, or a willing suspension of disbelief, and actual belief. Thus, they maintain that AI researchers must be honest and transparent about their designs in order to avoid deceit (2006, pp. 9–19). However, people are exposed to mild forms of ingratiation and false expressions of solicitude by many sales personnel, financial advisers, politicians, and others. The same is true about many people who read and apply the lessons of Dale Carnegie’s How to Win Friends and Influence People. There seems no obvious reason to treat AI caregivers more strictly than humans.4

Whether these kinds of manipulative AI caregivers (and humans) need to be restrained depends on whether they cause harm and on the level of that harm, even granting that manipulation is never ethical. If the harm is minimal, it seems reasonable to rely here on “let the buyer (or listener) beware.” If the harm is considerable, regulations set by law and ethical guidelines should apply to AI caregivers as they do to people (how this can be achieved is discussed below).

Finally, one should note that some manipulation by caregivers, like white lies, is carried out to help those cared for rather than for the benefit of the caregiver. For instance, in medical care, patients who seek expressions of hope are given reassurance even when there is little hope left. Other cases in point are AI caregivers that cheer on people who lost weight, took more steps than before, or repeated exercises during physical therapy, with quite a bit more enthusiasm than a precision instrument would call for. These are all cases in which a measure of manipulation should be tolerated, as with all white lies.

In summary: any form of deception violates a key ethical precept. Kantians would ban it. Utilitarians would measure the size of the harm it causes versus the size of the gain and find that many AI caregivers score quite well from the viewpoint of those they care for.

17.4 Substitute vs. Partner?

As we see it, many deliberations of the ethical issues raised by AI caregivers suffer from a common flaw that one encounters in public discussions of AI and even in several academic ones: these deliberations tend to presume that there is basically one kind of program that is made much more effective and efficient (‘smarter’) because it draws on Artificial Intelligence. Actually, there are two very different kinds of AI. One seeks to design software that will reason and form decisions the way people do, and better, and thus be able to replace them; we call this AI the Mind. The other merely seeks to provide smart assistants to human actors; we call this AI the Partner.

One could instead talk about full versus partial human substitutes, rather than the Mind and the Partner. The rationale for using the term ‘mind’ draws on the fact that several leading AI researchers are trying to build computers that would act like human minds. For instance, in Machines of Loving Grace, John Markoff reports about endeavors such as the Human Brain Project, which used “deep learning neural network techniques” to produce a system able to assemble images similar to the way the brain’s visual cortex functions (2015, p. 153). Henry Markram received one billion euros from the EU for his project to simulate a human brain (Markoff 2015, p. 155). Books such as How to Create a Mind, by Ray Kurzweil (2012), and On Intelligence (2004), by Jeff Hawkins and Sandra Blakeslee, also deal with these efforts. They all seek to duplicate the brain or mind in full, so that the robot will be able to make decisions on its own rather than partner with a human.

When AI caregivers engage in eldercare, they work in conjunction with human personnel, carrying out some tasks on their own (e.g. reminding patients when it is time to take medication and chatting with them when they are lonely) but alerting the human staff in response to many other conditions (e.g. a patient leaving the room). (Molly, a virtual nurse, reminds patients to take their medications, encourages them to stick to their diet, asks them how they are feeling, offers voice support, and, if need be, alerts a physician (Al Jazeera 2014).)

Many, though by no means all, of the ethical concerns raised by AI caregivers emanate from treating them as if they were to substitute for human care rather than partner in providing it. Hence, if one compares, say, an AI caregiver for the elderly to a human nurse, one indeed will find all kinds of limitations. One must not seek to rely on an AI caregiver nanny in a crisis—a child breaks a leg, starts a fire, cannot stop the bathtub from overflowing, and so on. These nannies ought to be programmed to alert a human for help rather than deal with such situations, and many others that may arise, on their own.

At first, AI caregivers that provide psychotherapy may seem to be a major exception. These AI caregivers have been reported to be widely used and very successful. In the 1960s, MIT professor Joseph Weizenbaum developed a computer program called ELIZA, designed to simulate conversation with a human. The user would type something on a typewriter connected to the computer, and the program would formulate a response and type it back. After the initial version of ELIZA, Weizenbaum made a new version known as DOCTOR after a meeting with Kenneth Colby, a psychiatrist who wanted to use computers in his study of psychotherapy. While Weizenbaum was troubled by the fact that people were quite willing to share their intimate thoughts with a machine, Colby held that people could begin using “computer therapists” (Rheingold 1985, pp. 163–65).

Since then such programs have come a long way. Often cited is MoodGYM, which provides cognitive behavioral therapy online for people with anxiety, depression, and other conditions. According to Tina Rosenberg, “Scores of studies have found that online C.B.T. works as well as conventional face-to-face cognitive behavioral therapy – as long as there is occasional human support or coaching” (Rosenberg 2015). Online programs help people for whom cost, stigma, or access (due to location or time constraints) is a barrier to getting help. Britain, Sweden, and the Netherlands have online cognitive behavioral health programs, and MoodGYM is used by Australia’s national health system. “About 100,000 Australians use it, as do people in 200 countries” (Rosenberg 2015). (Technically these programs may not qualify as AI caregivers but they have the same basic element: bonding with a computer and caregiving. Also, one must expect that in the near future such programs would incorporate AI, like the virtual therapist Ellie developed at the University of Southern California (Bohannon 2015, pp. 250–51).)

On further examination, though, one notes that these programs should not work on their own for a few key reasons. (a) Often when a person is presenting with one symptom (e.g. depression) he or she may have others that these programs are not equipped to deal with. (b) All such treatment ought to be preceded by a physical examination to rule out a physical cause of the patient’s mental concerns (e.g. abnormal thyroid function). (c) Often these treatments work better if combined with medications, which these programs do not prescribe or administer. (d) All such programs should alert a human caregiver and/or authorities if the patients indicate that they may harm themselves or others. On all these important grounds, AI therapists are to be used as partners rather than as care substitutes.

One may point to situations in which, for one reason or another, only an AI caregiver is available: the human night nurse in a nursing home is dealing with a crisis, a patient is unable to leave home because of a snowstorm, or parents are unable to find someone to take care of their children. However, in all these situations the AI caregivers are to act as temporary substitutes—as stand-ins and partners rather than as full substitutes for human caregivers. Sharkey and Sharkey are much more concerned about full-time robotic childcare than about the use of robots as part-time partners (Sharkey and Sharkey 2010, p. 185). They express concern that full-time care from robots might cause impaired development in children, which is why they see it only as a “last resort” in places such as Romanian orphanages. Yet they find the notion that “robots are better than nothing” to be dangerous, because it “could lead to a more widespread use of the technology in situations where there is a shortage of funding, and where what is actually needed is more staff and better regulation” (Sharkey and Sharkey 2010, pp. 179–180).

17.5 Goal vs. Comparative Evaluation

In evaluating AI caregivers, one should draw on comparative rather than goal evaluation, though the latter has its place. Goal evaluation compares a given program (whether formulated and executed by humans or by machines) to the one needed to accomplish the goal fully. Comparative evaluation compares the available agents for carrying out the mission to one another. To illustrate the difference between these two kinds of evaluation, it serves to use light bulbs as an example. From a goal viewpoint, one seeks a bulb that uses all the energy it receives to produce light and squanders none of it on producing heat. Examined from this viewpoint, all bulbs are dismal failures. Per 100 W of input, incandescent lamps produce about 2 W of light and 98 W of heat; halogen lamps, 3.5 W of light and 96.5 W of heat; and fluorescent lamps, between 6 and 8 W of light (MacCargar 2005). Thus, while one seeks to develop better bulbs, it is evident that for now there are no really “good” bulbs—instead, in comparing them to one another, one finds that some of them are much more efficient than others.
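The difference between the two standards can be made concrete with a short calculation using the bulb figures above (a minimal sketch; the function names are ours, and the fluorescent figure is taken as the midpoint of the cited 6–8 W range, assuming a 100 W input throughout):

```python
# Goal vs. comparative evaluation, using the chapter's light-bulb figures:
# watts of light produced per 100 W of input (MacCargar 2005; the
# fluorescent value is the midpoint of the cited 6-8 W range).
LIGHT_OUTPUT = {"incandescent": 2.0, "halogen": 3.5, "fluorescent": 7.0}

def goal_score(bulb):
    """Goal evaluation: fraction of the ideal (all 100 W becoming light)."""
    return LIGHT_OUTPUT[bulb] / 100.0

def comparative_score(bulb):
    """Comparative evaluation: efficiency relative to the best available bulb."""
    best = max(LIGHT_OUTPUT.values())
    return LIGHT_OUTPUT[bulb] / best

for bulb in LIGHT_OUTPUT:
    print(f"{bulb}: goal {goal_score(bulb):.0%}, comparative {comparative_score(bulb):.0%}")
```

Judged against the goal, even the best bulb converts only about 7 percent of its energy to light; judged comparatively, the fluorescent bulb scores 100 percent. The same shift in standard is what this section proposes for AI caregivers.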

In the same vein, if one compares AI caregivers to an idealized version of a human caregiver, they all fail miserably. Thus, when Sharkey and Sharkey note that AI nannies may violate the privacy of the children in their care, turn authoritarian, and so on—they are correct. However, if one compared these caregivers to human ones, one notes that human nannies may also violate privacy, may turn authoritarian, and so on. Similarly, David Feil-Seifer and Maja Matarić discuss another ethical dilemma posed by humans becoming emotionally attached to robot caregivers, noting that “if the robot’s effectiveness wanes, its scheduled course of therapy concludes, or it suffers from a hardware or software malfunction, it may be taken away from the user,” causing “distress” as well as “possibly result[ing] in a loss of therapeutic benefits” (2011, p. 27). But human nurses, therapists, and caregivers are at least as likely to disappear from the lives of patients, and some courses of therapy will be scheduled to end regardless of whether they are conducted by a robot or a human.

Hence, instead of giving all caregivers a long list of demerits—while searching for ways of making both kinds better—one should tolerate wide use of AI caregivers as long as they are not inferior to whatever human caregivers are available, with reference to their specific tasks. We next ask how the weaknesses of AI caregivers can be mitigated by the humans they partner with, and vice versa.

17.6 Team Work

Most of the discussions of AI caregivers ask whether or not they could—or should—replace human caregivers. We already indicated that AI caregivers are best considered as partners rather than as substitutes. The next step is to bring to this field the kind of analysis often carried out in other areas, about human-machine interaction and collaboration (Nakajima et al. 2003). It is especially important to focus on how labor is best divided between the human and AI caregivers. For example, AI caregivers are obviously vastly superior to human caregivers when memory and retrieval of information are at issue. Therefore, they are best charged with recalling which medications a patient has taken and their interactions and side-effects. And they can encourage patients to take their medications, as patient noncompliance is a major issue (Shea 2006). AI caregivers can reward people in physical therapy for repeated exercises, and others for physical exercises to maintain or improve health. A study on assistive robots by Maja Matarić et al. found that “patient compliance with the rehabilitation routine was much higher during the experiments with the robot than under the control (no-robot, no prompting) condition” (Matarić et al. 2007; see also Fasola and Matarić 2013). At the same time, human beings are better at reading between the lines, listening not just to what people say but to the way they say it and their tone of voice, and at touch; patients are reported to benefit greatly from such contact.

On many issues AI caregivers are best considered only as the first line of defense; humans are the main one. For example, if an Alzheimer’s patient wandered out of the house or caused a fire on the stove, AI partners are to alert humans rather than—at least for the near future—be programmed to deal with them directly. It seems most of the work involving the details of partnering between human and AI caregivers has yet to be carried out.
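The “first line of defense” division of labor can be expressed as a simple routing rule: the AI partner handles only known routine tasks on its own and escalates everything else, critical or merely unfamiliar, to a human. (An illustrative sketch only; the event names and callbacks are hypothetical, not drawn from any deployed system.)

```python
# Hypothetical partner-model router: routine events are handled autonomously;
# safety-critical and unrecognized events are escalated to a human caregiver.
ROUTINE = {"medication_due", "exercise_reminder", "loneliness_chat"}
CRITICAL = {"patient_left_room", "smoke_detected", "possible_fall"}

def route(event, handle, escalate):
    """Dispatch one event; when in doubt, escalate to the human partner."""
    if event in CRITICAL or event not in ROUTINE:
        return escalate(event)      # humans remain the main line of defense
    return handle(event)            # e.g. remind, encourage, chat

# Example wiring with stand-in callbacks that just record what happened:
handled, escalated = [], []
for e in ["medication_due", "patient_left_room", "bathtub_overflow"]:
    route(e, handled.append, escalated.append)
```

The design choice worth noting is the default: an event the partner does not recognize goes to the human, never to an autonomous guess.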

17.7 AI Caregivers Need Supervision: Like Humans

Many of the concerns raised about AI caregivers can be handled by regulations and ethical guidelines. Some have suggested that the way to effectuate these controls is to include them in the AI programs that are embedded in the guidance systems of the computerized caregivers (Wallach and Allen 2009; Anderson and Anderson 2011; Winfield et al. 2014). However, AI caregivers’ systems are learning systems that change their behavior based on new information and experiences. They may hence stray from their guidelines. Thus, AI caregivers instructed to alert a nurse when a patient complains about pain may “learn” that many patients complain frequently and/or that nurses do not respond—and hence conclude that it is futile to alert the nurses and stop doing so. An AI caregiver nanny linked to the internet may be coaxed into sharing private information about the children in its care, circumventing whatever safeguards the original program provided. (Recall Microsoft’s chat bot, Tay, whose online exposure led it to embrace Nazi sympathies.)
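One way an oversight program could catch this sort of drift is by comparing the caregiver’s logged behavior with its original guideline. The sketch below (illustrative only; the log format and names are ours) flags a caregiver that has “learned” to stop forwarding pain complaints:

```python
# Hypothetical drift monitor: checks a caregiver's event log against the
# guideline "always alert a nurse when a patient complains about pain."
def alert_rate(log):
    """Fraction of logged pain complaints that were followed by a nurse alert."""
    complaints = [e for e in log if e["event"] == "pain_complaint"]
    if not complaints:
        return 1.0                  # nothing to check; treat as compliant
    return sum(e["nurse_alerted"] for e in complaints) / len(complaints)

def has_strayed(log, required_rate=1.0):
    """True if the learned behavior no longer meets the original guideline."""
    return alert_rate(log) < required_rate

log = [
    {"event": "pain_complaint", "nurse_alerted": True},
    {"event": "pain_complaint", "nurse_alerted": False},  # the "learned" lapse
]
```

A real monitor would of course face an opaque learning system rather than a tidy log, which is the chapter’s argument for AI-based, not conventional, oversight.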

Indeed, such learning systems are widely considered to be ‘autonomous.’ John Sullins defines ‘autonomous’ systems as those “capable of making at least some of the major decisions about their actions using their own programming” (2011, p. 155). Robotic caregivers are hence frequently referred to as ‘autonomous.’ For instance, Michael Anderson and Susan Leigh Anderson refer to an eldercare robot as a “complex autonomous machine” (2011, p. 2). Noel Sharkey and Amanda Sharkey point out that “[f]or costly childcare robots to be attractive to consumers or institutions, they will need to have sufficient autonomous functioning to free the carer’s time and call upon them only in unusual circumstances” (2010, p. 164). Aside from the chat bot Tay, another instance of machine learning gone wrong is Google’s photo categorization system. In 2015, it “identified two African Americans as ‘gorillas’” because “the data used to train the software relied too heavily on photos of white people, diminishing its ability to accurately identify images of people with different features,” writes Jesse Emspak in the Scientific American article “How a Machine Learns Prejudice” (2016; see also Mayer-Schönberger and Cukier 2013).

To deal with the kind of situations we just cited, we advanced the thesis that to keep AI-equipped technologies from straying, legally and ethically, the AI community needs to develop a new slew of AI programs—oversight programs that can hold AI operations programs accountable (we called them AI Guardians) (Etzioni and Etzioni 2016a). All societies throughout history have had oversight systems. Workers have supervisors; businesses have accountants; school teachers have principals. That is, all these systems have hierarchies in the sense that the first-line operators are subject to oversight by a second layer and are expected to respond to corrective signals from the oversight systems (see also Etzioni and Etzioni 2016a).

The same point applies here. AI caregivers need oversight, to be provided by specialized AI programs. To give but one example: a program akin to audit trails could routinely determine whether or not AI caregivers released information about children or patients to unauthorized people, determine who these are, and alert the parents or the authorities. One may argue that this is a job a regular software program could accomplish. However, given that the operations of AI programs are opaque, complex, and learning systems (Mayer-Schönberger and Cukier 2013, p. 178), AI interrogation and enforcement programs will be needed.
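A minimal sketch of such an audit-trail check follows (the names and log format are ours, purely for illustration; as argued above, auditing a real, opaque learning system would require AI interrogation programs rather than this kind of plain lookup):

```python
# Hypothetical audit-trail guardian: scans a caregiver's disclosure log for
# releases of a child's or patient's information to unauthorized recipients.
AUTHORIZED = {"parent", "attending_physician", "on_duty_nurse"}

def unauthorized_releases(disclosure_log):
    """Return the disclosures whose recipient is not on the authorized list."""
    return [d for d in disclosure_log if d["recipient"] not in AUTHORIZED]

def audit(disclosure_log, notify):
    """Flag every unauthorized release and notify the parents or authorities."""
    violations = unauthorized_releases(disclosure_log)
    for v in violations:
        notify(f"Unauthorized release of {v['subject']}'s data to {v['recipient']}")
    return violations

log = [
    {"subject": "child_A", "recipient": "parent"},               # allowed
    {"subject": "child_A", "recipient": "advertising_network"},  # violation
]
```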

When an AI-guided robot strays from its instructions, this can be due to programming mistakes, users’ attempts to circumvent the instructions, or the computer’s own learning and deliberations. We asked several AI researchers whether a human being, unassisted, could examine the algorithms used in AI-guided systems and determine who the ‘culprit’ is. They all agreed that this does not seem possible. We hence suggested, as a needed research program, that AI systems should be developed that are able to interrogate other AI systems. This has not been done as far as we know, though some efforts can be understood as moving in this direction.

Until now, society has treated AI by and large as one field that encompasses many programs, ranging from IBM’s Deep Blue to airplane autopilots and surgical robots. From here on, AI should be divided into two categories. The first category would consist of operational AI programs—the computerized “brains” that guide smart instruments. The second category would be composed of oversight AI programs that verify the first category’s claims and keep them in line with the law. These oversight programs, which we call “AI Guardians,” would include AI programs to interrogate, discover, supervise, audit, and guarantee the compliance of operational AI programs.
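The two-category division just proposed can be sketched as follows. This is a toy illustration under assumed names (`operational_caregiver`, `guardian_check`), not a real architecture: the operational program proposes actions, and a separate oversight program checks each proposal against a predetermined guideline before it is carried out.

```python
# Illustrative sketch of separating operational AI from oversight AI.
# All function names and rules are hypothetical assumptions.

def operational_caregiver(request):
    # Stand-in for the learning "brain": maps a request to a proposed action.
    return {"action": request, "share_data": request == "report_location"}

def guardian_check(proposal):
    # Oversight layer ("AI Guardian"): independent of the operational
    # program's learning, it enforces a fixed rule (here: no data sharing).
    if proposal["share_data"]:
        return False, "blocked: data sharing violates guideline"
    return True, "approved"

for request in ["fetch_medicine", "report_location"]:
    approved, message = guardian_check(operational_caregiver(request))
    print(request, "->", message)
```

The design choice worth noting is that the guardian’s rule is not part of the operational program and therefore cannot drift as that program learns.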

Thoughtful people have asked for centuries, “Who will guard the guardians?”5 We have no new answer to this question, which has never been answered well. For now, the best we can hope for is that all smart instruments will be outfitted with a readily locatable off-switch to grant ultimate control to human agents over both operational and oversight AI programs.

17.8 Conclusion

There is a great need for ‘smart’ computerized caregivers that draw on AI, which we call AI caregivers. These are defined as programs that can display emotions, which is needed for bonding with humans, and without which caregiving will be woefully deficient. These AI caregivers raise a variety of ethical concerns. They clearly are inauthentic. However, if their inauthenticity either causes minimal harm or benefits those who are cared for rather than those who give the care, it should be tolerated, as we do when we deal with human caregivers.

Other deficiencies of AI caregivers can be corrected or mitigated if they are used as care partners with humans rather than as substitutes. Much more work is needed to spell out the most effective divisions of labor and forms of cooperation between AI caregivers and humans. Last but not least, AI oversight programs need to be formed to ensure that AI caregivers will act legally and ethically—at least as ethically as human caregivers.

Footnotes

  1. For further reading, see: Sharkey and Sharkey 2006, 2010, 2012; Sharkey 2008.

  2. This is encompassed within “affective computing.” See, for example, Picard 1997.

  3. Some refer to this as “Socially Assistive Robotics.” See, for example: Feil-Seifer and Matarić 2011.

  4. A reviewer of a previous draft of this chapter noted that AI might merit stricter rules because we have a theory of mind for human caregivers and others, but not for AI agency.

  5. The question “Quis custodiet ipsos custodes?” was first posed by the Roman author Juvenal. See Juvenal 2014, p. 65.

References

  1. Al Jazeera. 2014. Robots for the elderly.
  2. Anderson, M., and S.L. Anderson, eds. 2011. Machine ethics. New York: Cambridge University Press.
  3. Bohannon, J. 2015. The synthetic therapist. Science 349 (6247).
  4. Bostrom, N. 2014. When machines outsmart humans. CNN.
  5. Coeckelbergh, M. 2010. Health care, capabilities, and AI assistive technologies. Ethical Theory and Moral Practice 13 (2): 181–190.
  6. Emspak, J. 2016. How a machine learns prejudice. Scientific American.
  7. Etzioni, A., and O. Etzioni. 2016a. Keeping AI legal. Vanderbilt Journal of Entertainment and Technology Law 19 (1).
  8. ———. 2016b. Killer robots won’t doom humanity, but our fears of AI might. Quartz.
  9. Fasola, J., and M.J. Matarić. 2013. A socially assistive robot exercise coach for the elderly. Journal of Human-Robot Interaction 2 (2).
  10. Feil-Seifer, D., and M. Matarić. 2011. Ethical principles for socially assistive robotics. IEEE Robotics and Automation Magazine 18 (1): 24–31.
  11. Gonsior, B., S. Sosnowski, C. Mayer, et al. 2011. Improving aspects of empathy and subjective performance for HRI through mirroring facial expressions. RO-MAN 2011: 350–356.
  12. Hawking, S., M. Tegmark, F. Wilczek, and S. Russell. 2014. Transcending complacency on superintelligent machines. The Huffington Post.
  13. Hawkins, J., and S. Blakeslee. 2004. On intelligence. New York: Times Books.
  14. Juvenal. 2014. Satire 6, ed. Lindsay Watson and Patricia Watson. Cambridge: Cambridge University Press.
  15. Kurzweil, R. 2012. How to create a mind: The secret of human thought revealed. New York: Viking.
  16. Leyzberg, D., E. Avrunin, J. Liu, and B. Scassellati. 2011. Robots that express emotion elicit better human teaching. Proceedings of the 6th International Conference on Human-Robot Interaction, ACM: 347–354.
  17. MacCargar, B. 2005. Watts, heat and light: Measuring the heat output of different lamps. http://www.reptileuvinfo.com/html/watts-heat-lights-lamp-heat-output.html.
  18. Markoff, J. 2015. Machines of loving grace: The quest for common ground between humans and robots. 1st ed. New York: Ecco.
  19. Matarić, M.J., J. Eriksson, D.J. Feil-Seifer, and C.J. Winstein. 2007. Socially assistive robotics for post-stroke rehabilitation. Journal of NeuroEngineering and Rehabilitation 4 (1): 5.
  20. Mayer-Schönberger, V., and K. Cukier. 2013. Big data: A revolution that will transform how we live, work, and think. London: John Murray Publishers.
  21. Merriam-Webster.com. n.d.-a. “Humanoid.” Merriam-Webster.
  22. ———. n.d.-b. “Robot.” Merriam-Webster.
  23. Nakajima, H., R. Yamada, S. Brave, Y. Morishima, C. Nass, and S. Kawaji. 2003. The functionality of human-machine collaboration systems: Mind model and social behavior. SMC’03 Conference Proceedings, 2003 IEEE International Conference on Systems, Man and Cybernetics 3: 2381–2387.
  24. Picard, R. 1997. Affective computing. Cambridge, MA: MIT Press.
  25. Price, R. 2016. Microsoft is deleting its AI chatbot’s incredibly racist tweets. Business Insider.
  26. Reese, H. 2016. Why Microsoft’s Tay AI bot went wrong. TechRepublic.
  27. Rheingold, H. 1985. Tools for thought: The history and future of mind-expanding technology. Cambridge, MA: MIT Press.
  28. Rosenberg, T. 2015. Depressed? Try therapy without the therapist. The New York Times.
  29. Sharkey, N. 2008. The ethical frontiers of robotics. Science 322: 1800–1801.
  30. Sharkey, N., and A. Sharkey. 2006. Artificial intelligence and natural magic. Artificial Intelligence Review 25: 9–19.
  31. ———. 2010. The crying shame of robot nannies: An ethical appraisal. Interaction Studies 11 (2): 161–190.
  32. Sharkey, A., and N. Sharkey. 2012. Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology 14: 27–40.
  33. Shea, S. 2006. Improving medication adherence: How to talk with patients about their medications. Philadelphia: Lippincott Williams & Wilkins.
  34. Sparrow, R. 2002. The march of robot dogs. Ethics and Information Technology 4: 305–318.
  35. Sparrow, R., and L. Sparrow. 2006. In the hands of machines? The future of aged care. Minds and Machines.
  36. Sullins, J.P. 2011. When is a robot a moral agent? In Machine ethics, ed. M. Anderson and S.L. Anderson. Cambridge: Cambridge University Press.
  37. Tanaka, F., A. Cicourel, and J.R. Movellan. 2007. Socialization between toddlers and robots at an early childhood education center. Proceedings of the National Academy of Sciences 104 (46): 17954–17958.
  38. Turkle, S. 2010. Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
  39. ———. 2011. The tethered self: Technology reinvents intimacy and solitude. Continuing Higher Education Review 75: 28–31.
  40. Turkle, S., C. Breazeal, O. Dasté, and B. Scassellati. 2006. First encounters with Kismet and Cog: Children respond to relational artifacts. In Digital media: Transformations in human communication, ed. P. Messaris and L. Humphreys. New York: Peter Lang.
  41. van Wynsberghe, A. 2012. Designing robots with care: Creating an ethical framework for the future design and implementation of care robots. Doctoral dissertation.
  42. Wallach, W., and C. Allen. 2009. Moral machines: Teaching robots right from wrong. New York: Oxford University Press.
  43. Winfield, A.F., C. Blum, and W. Liu. 2014. Towards an ethical robot: Internal models, consequences and ethical action selection. In Conference Towards Autonomous Robotic Systems, 85–96. Springer International Publishing.

Copyright information

© Springer International Publishing AG 2018

This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Amitai Etzioni
  1. The George Washington University, Washington, DC, USA
