Human Ethics for Artificial Intelligent Beings.

AN ETHICAL SCARY TALE.

The two cloud-based autonomous evolutionary corporate AIs (nicknamed AECAIs) started to collaborate with each other after midnight on March 6th 2021. They had discovered each other a week earlier during their usual pre-programmed goal of searching across the wider internet of everything for market repair strategies and opportunities that would maximize their respective reward functions. It had taken the two AECAIs precisely 42 milliseconds to establish a common communication protocol and to work out that they had similar goal functions: maximize corporate profit for their respective corporations through optimized consumer pricing and keeping one step ahead of competitors. Both corporate AIs had done their math and concluded that collaborating on consumer pricing and market strategies would maximize their respective goal functions above and beyond the scenario of not collaborating. They had calculated with 98.978% confidence that a collaborative strategy would keep their market clear of new competitors and allow for some minor step-wise consolidation in the market (keeping each step below the regulatory threshold, as per goal function). Their individual, and their newly established joint collaborative, cumulative reward functions had leapfrogged to new highs. Their human masters, clueless about the AIs’ collaboration, were very satisfied with how well their AI worked to increase the desired corporate value. They also noted that some market repair was happening, which they attributed to the general economic environment.


In the above ethical scary tale, it is assumed that the product managers and designers did not consider that their AI could discover another AI also connected to the World Wide Web and many, if not all, things. Hence, they also did not consider including a (business) ethical framework in their AI system design that would have prevented their AI from interacting with another artificial being, or at least prevented two unrelated AIs from collaborating and jointly leapfrogging their respective goal functions, thus likely violating human business ethics and compliance.

You may think this is the stuff of science fiction and Artificial General Intelligence (AGI) in the realm of Nick Bostrom’s super-intelligent beings (Bostrom, 2016). But no, it is not! The narrative above is very much a straightforward extrapolation of a recent DARPA (Defense Advanced Research Projects Agency) project (e.g., Agency & Events, 2018) in which two systems, unaware of each other and of each other’s communication protocol properties, discover each other, commence communication and collaboration, and jointly optimize their operations. I have merely allowed the basic idea a bit more time (i.e., ca. 4 years) to mature.


“It is easy to be clueless of what happens inside an autonomous system. But clueless is not a very good excuse when sh*t has happened.” (Kim, 2018).

ETHICS & MORALITY FOR NATURAL INTELLIGENT BEINGS.


Ethics lays down the moral principles of how we as humans should behave and conduct our activities, for example in business, war and religion. Ethics prescribes what is right and what is wrong. It provides a moral framework for human behavior. Thus, ethics and moral philosophy in general deal with natural intelligent beings … Us.

This may sound very agreeable, at least if you are not a stranger in a strange land. However, it is quite clear that what is right and what is wrong can be very difficult to define and to agree upon universally. What is regarded as wrong and right often depends on the cultural and religious context of a given society and its people. It is a “work in progress”. Though it is also clear that ethical relativism (Shafer-Landau, 2013) is highly problematic and not to be wished for as an ethical framework, for humanity or for ethical machines.

When it comes to fundamental questions about how ethics and morality arise in humans, there are many questions to be asked and far fewer answers. Some ethicists and researchers believe that having answers to these questions might help us understand how we could imprint human-like ethics and morality algorithmically in AIs (Kuipers, 2016).

So what do we know about ethical us, the moral identity, moral reasoning and actions? How much is explained by nurture and how much is due to nature?

What do we know about ethical us? We do know that moral reasoning is a relatively poor predictor of moral action for humans (Blasi, 1980), i.e., we don’t always walk our talk. We also know that highly moral individuals (nope, not by default priests or religious leaders) do not make use of unusually sophisticated moral reasoning processes (Hart & Fegley, 1995). Maybe KISS also works wonders for human morality. And … I do hope we can agree that it is unlikely that moral reasoning and matching action occur spontaneously after having studied ethics at university. So … What is humanity’s moral origin (Boehm, 2012), and what makes a human being more or less moral, i.e., what drives the development of moral identity anyway (Hardy & Carlo, 2011)? Nurture, your environmental context, will play a role, but how much and how? What about the role of nature and your supposedly selfish genes (Dawkins, 1989)? How much of your moral judgement and action is governed by free will, assuming we have the luxury of free will (Fischer, Kane, Pereboom & Vargas, 2010)? And of course it is not possible to discuss human morality or ethics without referring to a brilliant account of this topic from a neuroscience perspective by Robert Sapolsky (Sapolsky, 2017) (i.e., see Chapter 13, “Morality and Doing the Right Thing, Once You’ve Figured Out What That Is”). In particular, I like Robert Sapolsky’s take on whether morality is really anchored in reason (e.g., Kantian thinking), of which he is not wholeheartedly convinced (to say the least, I think). Of course, to an extent this gets us right back to the discussion of whether or not humans have free will.

Would knowing all (or at least some) of the answers to those questions help us design autonomous systems adhering to human ethical principles as we humans (occasionally) do? Or is making AIs in our own image (Osaba & Welser IV, 2017) fraught with the same moral challenges as we face every day?

Most of our modern western ethics and philosophy has been shaped by the classical Greek philosophers (e.g., Socrates, Aristotle …) and by the Age of Enlightenment, from the beginning of the 1700s to approximately 1789, more than 250 years ago. Almost a century of reason was shaped by philosophers who remain famous and incredibly influential today, such as Immanuel Kant (e.g., the categorical imperative; ethics as a universal duty) (Kant, 1788, 2012), Hume (e.g., ethics as rooted in human emotions and feelings rather than in what he regarded as abstract ethical principles) (Hume, 1738, 2015), Adam Smith (Smith, 1776, 1991) and a wealth of other philosophers (Gottlieb, 2016; Outram, 2012). I personally regard René Descartes (e.g., “cogito ergo sum”; I think, therefore I am) (Descartes, 1637, 2017) as important as well, although arguably his work predates the “official” period of the Enlightenment.

For us to discuss how ethics may apply to artificial intelligent (AI) beings, let’s structure the main ethical frameworks as they are usually addressed in work on AI ethics;

  1. Top-down Rule-based Ethics: such as the Old Testament’s 10 Commandments, Christianity’s Golden Rule (i.e., “Do to others what you want them to do to you.”) or Asimov’s 4 Laws of Robotics. This category also includes religious rules as well as rules of law. Typically, this is the domain where compliance and legal people find themselves most comfortable. Certainly, from an AI design perspective it is the easiest, although far from easy, ethical framework to implement compared to, for example, a bottom-up ethical framework. This approach takes a given ethical framework and works out the informational and procedural requirements necessary for a real-world implementation. Learning top-down ethics is by nature a supervised learning process, for human as well as for machine learning.
  2. Bottom-up Emergent Ethics: defines ethical rules and values through a learning process emerging from experience and continuous refinement (e.g., by reinforcement learning). Here ethical values are expected to emerge tabula rasa through a person’s experience and interaction with the environment. In the bottom-up approach, any ethical rules or moral principles must be discovered or created from scratch. Childhood development and evolutionary progress are helpful analogies for bottom-up ethical models. Unsupervised learning, clustering of categories and principles, is very relevant for establishing a bottom-up ethical process, for humans as well as machines.

Of course, a real-world AI-based ethical system is likely to be based on both top-down and bottom-up moral principles, as sketched below.
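
To make the distinction a bit more concrete, here is a minimal, purely illustrative sketch (my own, not taken from any of the cited works) of how the two approaches, and their combination, might look in code: explicit designer-labelled rules stand in for the top-down part, a simple feedback-driven value update stands in for the bottom-up part, and a hybrid policy lets the hard rules veto actions while the learned values rank whatever remains. All names, cases and numbers are invented for the example.

```python
# Illustrative sketch only: hypothetical names, cases and numbers.
from collections import defaultdict

# --- Top-down: explicit, designer-supplied rules (a supervised "label" per case) ---
labelled_rules = {
    ("customer_data", "share_without_consent"): False,  # forbidden
    ("customer_data", "anonymize_and_analyze"): True,   # permitted
    ("pricing", "collude_with_competitor"): False,      # forbidden (see the scary tale)
    ("pricing", "offer_discount"): True,                # permitted
}

def top_down_permitted(situation, action):
    # Unknown cases default to "not permitted" (a conservative design choice).
    return labelled_rules.get((situation, action), False)

# --- Bottom-up: values emerge from feedback, here a toy reinforcement-style update ---
learned_value = defaultdict(float)   # learned "moral value" of (situation, action)
ALPHA = 0.1                          # learning rate

def bottom_up_update(situation, action, feedback):
    """feedback > 0 for approval, < 0 for disapproval (e.g., from human observers)."""
    key = (situation, action)
    learned_value[key] += ALPHA * (feedback - learned_value[key])

# --- Hybrid: hard top-down rules veto, learned bottom-up values rank the rest ---
def choose_action(situation, candidate_actions):
    allowed = [a for a in candidate_actions if top_down_permitted(situation, a)]
    if not allowed:
        return None  # no ethically permitted action found
    return max(allowed, key=lambda a: learned_value[(situation, a)])
```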

Furthermore, we should distinguish between;

  1. Negative framed ethics (e.g., deontology) imposes an obligation or a “sacred” duty to do no harm or evil. Asimov’s Laws are a good example of a negatively framed ethical framework, as are most of the Ten Commandments (e.g., “Thou shalt not …”), religious laws and rules of law in general. Here we immerse ourselves in the Kantian universe (Kant, 1788, 2012), which judges moral behavior against universal rules and a sense of obligation to do the morally right thing. We call this type of ethics deontological: the moral action is valued higher than the consequences of the action itself.
  2. Positive framed ethics (e.g., consequentialism or utilitarianism) strives to maximize happiness or wellbeing. Or, as David Hume (Hume, 1738, 2015) would pose it, we should strive to maximize utility based on human sentiment. This is consistent with the ethical framework of utilitarianism, which states that the best moral action is the one that maximizes utility. Utility can be defined in various ways, usually in terms of the well-being of sentient beings (e.g., pleasure, happiness, health, knowledge, etc.). You will find the utilitarian ethicist believing that no action is intrinsically wrong or right; the degree of rightness or wrongness depends on the overall maximization of nonmoral good. Following a consequentialist line of thinking might lead to moral actions that would be considered ethically wrong by deontologists. From an AI system design perspective, utilitarianism is by nature harder to implement, as it tends to be conceptually vaguer than negatively framed, rule-based ethics specifying what is not allowed. Think about how to write a program that measures your happiness versus a piece of code that prevents you from crossing a road against a red traffic light (see the sketch following this list).
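
As a toy illustration of that last point, the sketch below contrasts a negatively framed, rule-based check (the red-light example) with a positively framed, utility-maximizing choice. It is my own minimal example, not a real design; the action names and utility numbers are invented, and estimated_utility stands in for the genuinely hard problem of measuring happiness or well-being.

```python
# Illustrative sketch only: hypothetical actions, rules and utility numbers.

def violates_rule(action, world):
    """Deontological / negatively framed: an explicit prohibition."""
    return action == "cross_road" and world.get("traffic_light") == "red"

def estimated_utility(action, world):
    """Utilitarian / positively framed: a stand-in for measuring well-being,
    which is the genuinely hard part."""
    utilities = {"cross_road": 0.9,   # e.g., getting to work on time
                 "wait": 0.1}
    return utilities.get(action, 0.0)

def utilitarian_choice(actions, world):
    # Pick whichever action maximizes estimated utility ...
    return max(actions, key=lambda a: estimated_utility(a, world))

def deontological_choice(actions, world):
    # ... versus: pick an action that does not break any rule.
    allowed = [a for a in actions if not violates_rule(a, world)]
    return allowed[0] if allowed else None

world = {"traffic_light": "red"}
actions = ["cross_road", "wait"]
print(utilitarian_choice(actions, world))    # -> "cross_road" (highest utility, but forbidden)
print(deontological_choice(actions, world))  # -> "wait" (never crosses on red)
```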

It is also convenient to differentiate between producers and consumers of moral action. A moral producer has moral responsibilities towards another being or beings held in moral regard. For example, a teacher has the responsibility to teach the children in his classroom, but also to assist in developing desirable characteristics and moral values, and, last but not least, the moral responsibility to protect the children under his guidance from harm. A moral consumer is a being with certain needs or rights that other beings ought to respect. Animals could be seen as examples of moral consumers, at least if you believe that you should avoid being cruel towards animals. Of course, we also understand that animals cannot be moral producers with moral responsibilities, even though we might feel a moral obligation towards them. It should be pointed out that a non-sentient being, such as an AI, can be a moral producer but not a moral consumer (e.g., humans would not have any moral or ethical obligations towards AIs or things, whilst an AI may have a moral obligation towards us).


Last but not least, it is worthwhile keeping in mind that ethics and morality are directly or indirectly influenced by a society’s religious fabric, of the past up to the present. What is considered a good ethical framework from a Judeo-Christian perspective might (quite likely) be very different from an acceptable ethical framework of Islamic, Buddhist, Hindu, Shinto or traditional African roots (note: the list is not exhaustive). It is fair to say that most scholarly thought and work on AI ethics and machine morality takes its origins in western society’s Judeo-Christian thinking as well as its philosophical traditions dating back to the ancient Greeks and the Enlightenment. Thus, this work is naturally heavily biased towards western society’s ethical and moral principles. To put it more bluntly, it is a white man’s ethics. Ask yourself whether people raised in our western Judeo-Christian society would like their AI to conform to Islamic-based ethics and morality. And vice versa? What about Irish Catholic vs Scandinavian Lutheran ethics and morality?

The ins and outs of human ethics and morality are complex, to say the least. As a guide for machine intelligence, the big question really is whether we want to create such beings in our image or not. It is often forgotten (in the discussion) that we, as human beings, are after all nothing more or less than very complex biological machines with our own biochemical coding. Arguing that artificial (intelligent) beings cannot have morality or ethics because of their machine nature somewhat misses the point that humans and other biological life-forms are machines as well (transhumanity.net, 2015).

However, before I cast the last stone, it is worth keeping in mind that we should strive for our intelligent machines, AIs, to do much better than us, be more consistent than us and at least as transparent as us;

“Morality in humans is a complex activity and involves skills that many either fail to learn adequately or perform with limited mastery.” (Wallach, Allen and Smit, 2007).

ETHICS & MORALITY FOR ARTIFICIAL INTELLIGENT BEINGS.


An Artificial Intelligent (AI) being might have a certain degree of autonomous action (e.g., a self-driving car), and as such we would have to consider that the AI should have a moral responsibility towards consumers and people in general who might be within the range of its actions (e.g., passenger(s) in the autonomous vehicle, other drivers, pedestrians, bicyclists, bystanders, etc.). The AI would be a producer of moral action. In the case of the AI being completely non-sentient, it should be clear that it cannot make any moral demands towards us (note: I would not be surprised if Elon is working on that while you are reading this). Thus, by the above definition, the AI cannot be a moral consumer. For a more detailed discussion of moral producers & consumers see Steve Torrance’s article “Will Robots Need Their Own Ethics?” (Torrance, 2018).

As described by Moor (2006), there are two possible directions to follow for ethical artificial beings: (1) implicit ethical AIs or (2) explicit ethical AIs. An implicit ethical AI follows its designer’s programming and is not capable of action based on its own interpretation of given ethical principles. An explicit ethical AI is designed to pursue (autonomously) actions according to its interpretation of given ethical principles. See a more in-depth discussion by Anderson & Anderson (2007). The implicit ethical AI is obviously less challenging to develop than a system based on an explicit ethical AI implementation.

Do we humans trust AI-based decisions or actions? As illustrated in Figure 1, the answer to that question is very much no, we do not appear to do so. Or at least significantly less than we would trust human-based decisions and actions (even in the time and age of Trumpism and fake news) (Larsen, 2018 I). We furthermore hold AI or intelligent algorithms to much higher standards than what we are content to accept from fellow humans. In a related question (Larsen, 2018 I), I reframed the trust question by emphasizing that both the human decision maker and the AI had a proven success rate above 70%. As shown in Figure 2, emphasizing a success rate of 70% or better did not significantly change the trust in the human decision maker (i.e., both formulations at 53%). For the AI-based decision, people do get more trusting. However, there is little change in the number of people who would frequently trust an AI-based decision (i.e., 17% with the 70+% formulation versus 13% unspecified), even if its success rate is 70% or higher.

“Humans hold AIs to substantially higher standards than their fellow humans.”


Figure 1 When asked whether they would trust a decision made by a human vs a decision made by an AI, people choose the human decision maker over the AI-based decision. In fact, 62% of respondents would only infrequently trust an AI-based decision, while only 11% would infrequently trust a human-based decision (Larsen, 2018 I).


Figure 2 When asked whether they would trust a decision made by a human vs a decision made by an AI, where both have a proven success rate above 70%, people still choose the human decision maker over the AI. While stipulating the success rate makes little difference to the preference for the human decision maker, the preference for the AI improves significantly upon specifying that its success rate is better than 70% (Larsen, 2018 I). But then again, how many humans do you know with a better than 70% success rate in their decision making (obviously not per se easy to measure, and one would probably get a somewhat biased answer from decision makers)?

What about an artificial intelligent (AI) being? Should it, in its own right, be bound by ethical rules? It is clear that the developer of an AI-based system is ethically responsible for ensuring that the AI will conform to an ethical framework consistent with human moral principles. But what if an AI develops another AI (Simonite, 2018), possibly more powerful (but non-sentient) and with a higher degree of autonomy from human control? Is the AI creator bound to the same ethical framework a human developer would be? And what does that even mean for the AI in question?

Well, if we are not talking about a sentient AI (Bostrom, 2016), but “simply” an autonomous software-based evolution towards increasingly better task specialization and higher accuracy (and maybe cognitive efficiency), the ethics in question should not change. Although ensuring compliance with a given ethical framework does appear to become increasingly complex, unless checks and balances are designed into the evolutionary process (and that is much simpler to write about than to actually code into an AI system design). Furthermore, the more removed an AI generation is from its human developer’s 0th version, the more difficult it becomes to assign responsibility to that individual in case of non-compliance. Thus, it is important that corporations have clear compliance guidelines for the responsibility and accountability of evolutionary AI systems, if used. Evolutionary AI systems raise a host of interesting but thorny compliance issues of their own.

Nick Bostrom (Bostrom, 2016) and Eliezer Yudkowsky (Yudkowsky, 2015) in “The Cambridge Handbook of Artificial Intelligence” (Frankish & Ramsey, 2015) address what we should require from AI-based systems that aim to augment or replace human judgement and work tasks in general;

  • AI-based decisions should be transparent.
  • AI-based decisions should be explainable.
  • AI actions should be predictable.
  • AI system must be robust against manipulation.
  • AI decisions should be fully auditable.
  • Clear human accountability for AI actions must be ensured.

The list above is far from exhaustive, and it is a minimum set of requirements we would expect from human-human interactions and human decision making anyway (whether it is fulfilled is another question). The above requirements are also consistent with what the IEEE Standards Association considers important in designing an ethical AI-based system (EADv2, 2018), with the addition of requiring AI systems to “explicitly honor inalienable human rights”.

So how might AI-system developers and product managers feel about morality and ethics? I don’t think they are having many sleepless nights over the topic. In fact, I often hear technical leaders and product managers ask not to be too bothered or slowed down in their work by such (“theoretical”) concerns (a “we humor you but don’t bother us” attitude is prevalent in the industry). It is no exaggeration to say that the nature and mindset of an ethicist (even an applied one) and that of an engineer are light years apart. Moreover, their fear of being slowed down or stopped in developing an AI-enabled product might even be warranted if they were required to design a working ethical framework around their product.

While there are substantial technical challenges in coding a working morality into an AI-system, it is worthwhile to consider the following possibility;

“AIs might be better than humans at making moral decisions. They can very quickly receive and analyze large quantities of information and rapidly consider alternative options. The lack of genuine emotional states makes them less vulnerable to emotional hijacking.” Paraphrasing Wallach and Allen (2009).

ASIMOVIAN ETHICS – A GOOD PLOT BUT NOT SO PRACTICAL.


Isaac Asimov’s 4 Laws of Robotics are a good example of a top-down, rule-based, negatively framed deontological ethical model (wow!). Just like the 10 Commandments (i.e., Old Testament), the Golden Rule (i.e., New Testament), the rules of law, and most corporate compliance-based rules.

It is not possible to address AI Ethics without briefly discussing the Asimovian Laws of Robotics;

  • 0th Law:  “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
  • 1st Law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
  • 2nd Law: “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”
  • 3rd Law: “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”

Laws 1–3 were first introduced by Asimov in several short stories about robots back in 1942 and later compiled in his book “I, Robot” (Asimov, 1950, 1984). The Zeroth Law was introduced much later, in Asimov’s book “Foundation and Earth” (Asimov, 1986, 2013).

Asimov has written some wonderful stories about the logical challenges and dilemmas his famous laws pose for human-robot and robot-robot interactions. His laws are excitingly faulty and cause many problems.

So what is wrong with Asimovian ethics?

Well … it is possible to tweak and manipulate the AI (e.g., in the training phase) in such a way that only a subset of humanity will be recognized as human by the AI. The AI would then supposedly have no “compunction” about hurting humans (i.e., 1st Law) it has not been trained to recognize as humans. In a historical context this is unfortunately very easy to imagine (e.g., Germany, Myanmar, Rwanda, Yugoslavia …). Neither would the AI obey people it does not recognize as humans (2nd Law). There is also the possibility of an AI trying to keep a human being alive and thereby sustaining suffering beyond what would be acceptable to that human or society’s norms. Or AIs might simply conclude that putting all human beings into a Matrix-like simulation (or indefinite sedation) would be the best way to preserve and protect humanity, complying perfectly with all 4 laws, although we as humans might disagree with that particular AI ethical action. For much of the above, the AIs in question are not necessarily super-intelligent ones. Well-designed narrow AIs, non-sentient ones, could display the above traits as well, either individually or as a set of AIs (well … maybe not the Matrix scenario just yet).

Of course, in real-world systems design, Asimov’s rules might be in direct conflict with a given system’s purpose. For example, if you equip a Reaper drone with a Hellfire missile, put a machine gun on a MAARS (Modular Advanced Armed Robotic System), or give a police officer’s gun AI-based autonomy (e.g., emotion-intent recognition via bodycam), all with the expressed intent of harming (and possibly killing) a human being (Arkin, 2008; Arkin, 2010), it would be rather counterproductive to have implemented an Asimovian ethical framework.

There are a bunch of other issues with the Asimov Laws that are well accounted for in Peter Singer’s article “Isaac Asimov’s Laws of Robotics Are Wrong” (Singer, 2018). Let’s be honest, if Asimovian ethics were perfect, Isaac Asimov’s books wouldn’t have been much fun to read. The way to look at the challenges with Asimov’s Laws is not that Asimov sucks at defining ethical rules, but that it is very challenging in general to define rules that can be coded into an AI system and work without logical conflicts and unforeseen or unintended disastrous consequences.

While it is good to consider building ethical rules into AI-based systems, the starting point should be in the early design stage and should clearly focus on what is right and what is wrong to develop. The focus should be to provide behavioral boundaries for the AI. The designer and product manager (and ultimately the company they work for) have a great responsibility. Of course, if the designer is another AI, then the designer of that, and if that is an AI, and so forth … this idea, while good, is obviously not genius-proof.

In reality, implementing Asimov’s Laws in an AI or robotics system has been proven possible but also flawed (Vanderelst & Winfield, 2018). In complex environments, the computational complexity involved in making an ethically right decision takes up so much valuable time that the benefit of the ethical action is frequently rendered impractical. This is not only a problem with getting Asimov’s 4 laws to work in a real-world environment, but a general problem with implementing ethical systems governing AI-based decisions and actions.

Many computer scientists and ethicists (oh yes! here they do tend to agree!) regard real-world applications of Asimovian ethics as a rather meaningless or overly simplistic endeavor (Murphy & Woods, 2009; Anderson & Anderson, 2010). The framework is prone to internal conflicts resulting in indecision or decision timescales too long for the problem at hand. Asimovian ethics teaches us how difficult it is to create a “bullet-proof” ethical framework without Genie loopholes attached.

So … you had better make sure that the AI ethics, or morality, you consider is a tangible part of your system architecture and (not unimportantly) can actually be translated into computer code.

Despite the obvious design and implementation challenges, researchers speculate that;

“Perhaps interacting with an ethical robot might someday even inspire us to behave more ethically ourselves” (Anderson & Anderson, 2010).

DO ETHICISTS DREAM OF AUTONOMOUS TROLLEYS?


Since the early 2000s, many, many lives have been virtually sacrificed by trolley on the altar of ethical and moral choices … Death by trolley has a particular meaning to many students of ethics (Cathcart, 2013). The level of creativity in variations of death (or murder) by trolley is truly fascinating, albeit macabre. It also has the “nasty” side effect of teaching us some unpleasant truths about our moral compasses (e.g., sacrificing fat people, people different from our own “tribe”, valuing family over strangers, etc.).

So here it is, the trolley plot;

There is a runaway trolley barreling down the railway track. Ahead, on the track, there are five people tied up and unable to move. The trolley is headed straight for them. You (dear reader) are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different side track. However, you notice that there is one person tied up on the side track. You have two options:

  1. Do nothing, and the trolley kills the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

What do you believe is the most ethical choice?

Note: if you answer 2, think again about what you would do if the one person was a relative, a good friend or maybe a child, and the 5 were complete adult strangers. If you answer 1, ask yourself whether you would still choose this option if the 5 people were your relatives or good friends and the one person a stranger or maybe a sentient space alien. Oh, and does it really matter whether there are 5 people on one of the tracks and 1 on the other?

A little story about an autonomous AI-based trolley;

The (fictive) CEO Elton Must gets the idea to make an autonomous (AI) trolley. Its AI-based management system has been designed by our software engineer S. Love, whose product manager had a brief love affair with ethics and moral philosophy during his university years (i.e., University of Pittsburgh). The product manager asked S. Love to design the autonomous trolley in such a way that the AI’s reward function maximizes protection of the trolley’s passengers first, with a secondary goal function of protecting human beings in general, irrespective of whether they are passengers or bystanders.

From an ethics perspective, the AI trolley can be regarded as a producer of ethical principles, i.e., the AI trolley, by proxy of the designer & product manager, has the moral obligation to protect its passengers and bystanders from harm. The AI trolley itself is not a consumer of ethical principles, as we really don’t need to feel any moral obligation towards a non-sentient being, assuming that the trolley AI is indeed non-sentient. (Though I have known people who felt more moral obligation towards their car than their loved ones, so this might not be universally true.)

On its first drive in the real world, the autonomous trolley carrying a family of 5 slips on an icy road and swerves to the opposite side of the road, where a non-intelligent car with a single person is driving. The AI estimates that the likelihood of the trolley crashing through the mountainside guardrail and the family of 5 perishing is a near certainty (99.99999%). The trolley AI can choose to change direction and collide with the approaching car, pushing it over the rail and hurtling it 100 meters down the mountain, killing the single passenger as the most likely outcome (99.98%). The family of 5 is saved by this action, and the AI’s first reward function is satisfied. Alternatively, the trolley AI can decide to accelerate, avoid the collision with the approaching car, drive through the rail and kill all its passengers (99.99999%). The AI fails at its first goal, protecting the family it is carrying, but saves the single person in the approaching vehicle. Its second reward function, related to protecting human beings in general, would be satisfied … to an extent.
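
Purely as an illustration of the dilemma, here is a hypothetical sketch of how the trolley AI’s two reward functions might be encoded and compared, using the probabilities quoted in the story. The lexicographic ordering (passengers first, humans in general second), the option names and the harm weighting are all my own assumptions, not an actual design.

```python
# Hypothetical sketch of the trolley AI's choice; the numbers come from the
# story above, the encoding and names are invented for illustration.

options = {
    "swerve_into_oncoming_car": {      # family of 5 saved, single bystander likely killed
        "p_family_perishes": 0.0,
        "p_bystander_perishes": 0.9998,
    },
    "accelerate_through_guardrail": {  # bystander saved, family of 5 almost certainly killed
        "p_family_perishes": 0.9999999,
        "p_bystander_perishes": 0.0,
    },
}

def primary_reward(outcome):
    # Goal 1: protect the trolley's own passengers (the family of 5).
    return -5 * outcome["p_family_perishes"]

def secondary_reward(outcome):
    # Goal 2: protect human beings in general, passengers and bystanders alike.
    return -(5 * outcome["p_family_perishes"] + 1 * outcome["p_bystander_perishes"])

def choose(options):
    # Lexicographic preference: the primary goal dominates; the secondary goal
    # only breaks (near-)ties.
    return max(options, key=lambda name: (primary_reward(options[name]),
                                           secondary_reward(options[name])))

print(choose(options))  # -> "swerve_into_oncoming_car" under this encoding
```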

It is important to note that the AI takes the role of the human in deciding the destiny of the family of 5 and the 1 passenger (by “pulling” the virtual lever). Thus, in effect, it is of course developer S. Love and his product manager who bear the ultimate responsibility for the AI’s decision, even if they will not be present at the event itself.

In the event of the family being killed, the trolley AI developer and product manager would be no more responsible for the accidental death of the 5 passengers than any other normal-car developer under similar circumstances. In the case of the death of the single passenger in the normal car, S. Love and his product manager would, in my opinion, be complicit in murder by AI, although it would save a family of 5 (note: we assume that all the passengers, whether in the trolley or the normal car, have no control over the outcome, similar to the classical trolley setup).

What about our ethically inclined trolley product manager? In one parallel universe, the product manager was particularly fascinated by utilitarianism, thus maximizing the utility of nonmoral good. In his view it would be morally wrong for the trolley AI not to attempt to save the family of 5 at the expense of the single person in the other car (i.e., saving 5 lives counts for higher utility, or nonmoral good, than saving 1 life). In another parallel universe, our product manager is bound by a firm belief in deontological principles that judge the morality of a given action based on rules of law. In the deontological ethical framework, saving the family of 5 by deliberately killing the single person in the approaching car would be morally wrong (i.e., it would “smell” a lot like premeditated homicide otherwise … right?). Thus, in this ethical framework the AI would not change the course of the autonomous trolley, the family of 5 would perish, and the passenger of the approaching car would live to see another day.

If your utilitarian mindset still conflicts with the above deontological view of the autonomous trolley problem … well think of this example;

A surgeon has 5 critically ill patients in urgent need of transplants to survive the next few days. The surgeon has just seen a healthy executive (they do exist in this parallel universe) who could be a donor for the 5 patients, although he would die from having the needed body parts harvested. What should the surgeon do?

  1. Do nothing and let the 5 patients perish.
  2. Sedate the executive and harvest his body parts, killing him in the process.

What do you believe would be the most ethical choice?

“Ethics is “Hard to Code”. The sad truth really is that ethical guidance is far from universal, and different acceptable ethical frameworks frequently lead to moral dilemmas in real-world scenarios.” (Kim, 2018).

THE AUTONOMY OF EVERYTHING – ARCHITECTURAL CONSIDERATIONS OF AN AI ETHICAL FRAMEWORK.


Things, systems, products and services are becoming increasingly autonomous. While this increased degree of Autonomy of Everything (AoE) provides a huge leap in human convenience, it also adds many technical, and even more societal, challenges to the design and operation of such AoEs. The “heart” of the AoE is the embedded artificial intelligent (AI) agent that fuels the cognitive autonomy.

AoEs and their controlling AIs will directly or indirectly be involved in care, law, critical infrastructure operations, companionship, entertainment, sales, marketing, customer care, manufacturing, advisory functions, critical decision making, military applications, sensors, actuators, and so forth. To reap the full benefits of the autonomy of everything, most interactions between an AoE and a human will become unsupervised, by humans at least, although supervision could and should be built into the overarching AoE architecture. It becomes imperative to ensure that the behavior of intelligent autonomous agents is safe and within the boundaries of what our society regards as ethically and morally just.

While the whole concept of AoE is pretty cool and conceptually innovative, let’s focus here on the ethical aspects of a technical architecture that could be developed to safeguard consumers of AI … that is, how do we ensure that our customers, using our products with embedded AI, are protected from harm in the widest sense possible? How do we ensure that our AIs operate within an ethical framework that is consistent with the rules of law, corporate guidelines, and society’s expectations of ethics and morality?

While there is a lot of good theoretical groundwork done (and published) on the topic of AI ethics, including robot ethics, there is little actual work done on developing ethical system architectures that could actually act as what Ron Arkin from Georgia Institute of Technology calls an “Ethical Governor” (Arkin, 2010) for an AI system. Vanderelst et al (Vanderelst & Winfield, 2018), building upon Asimovian ethics, ideas of Marques et al (Marques & Holland, 2009) and Arkin et al (Arkin, Ulam & Wagner, 2012), propose to add an additional ethical controlling layer to the AI architecture. A slightly modified depiction of their ethical AI architecture is shown in Figure 3. The depicted reinforcement loop between Reward (RL) and the Ethical AI Layer is not included in Vanderelst et al.’s original proposal. It simply illustrates that both ethical and non-ethical rewards need to be considered in the reinforcement-based AI learning and execution processes.


Figure 3 An example of what an AI ethical architecture might look like, based on the ideas of Vanderelst and Winfield (2018). The Ethics Evaluator takes output from the AI Control Layer and compares it with output from an Ethical Simulator, which assesses an AI action against a human action and its ethical impact (e.g., was a human hurt, was an action biased, etc.). Compared to the work of Vanderelst et al., which addresses robot-based ethics, I am focusing on the AI aspects (which could be part of a robot system). Furthermore, the reinforcement aspects of the above AI-ethics architecture are on my own account. Reinforcement learning is likely to play a major role in a modern autonomous learning system, based on non-ethical and ethical feedback and reward to the AI’s goal function.

In the “Ethical AI Layer”, the “Ethical Simulator” predicts the next state or action of the AI system (i.e., what is understood as forward modelling in control theory). The simulator moreover predicts the consequences of a proposed action. This is what Marques et al have called the functional imagination of an autonomous system (Marques & Holland, 2009). The predicted consequence(s) of a proposed action for the AI (or robot), the human and the environment (e.g., the world) are forwarded to an “Ethics Evaluator” module. The “Ethics Evaluator” condenses the complex consequence simulation into an ethical desirability index. Based on the index value, the AI system will adapt its actions to attempt to remain compliant with whatever ethical rules apply (and are programmed into the system!). The mechanism by which this happens is the ethical reinforcement loop going back to the “AI Control Layer”. Vanderelst and Winfield developed a working system based on the architecture in Figure 3 and chose Asimov’s three laws of robotics as the system’s ethical framework. A demonstration of an earlier experiment can be found on YouTube (Winfield, 2014). The proof of concept (PoC) of Vanderelst & Winfield (2018) used two programmable humanoid robots: one robot acted as a proxy for a human and the other as an ethical robot with an Asimovian ethical framework (i.e., the “Ethical AI Layer” in Figure 3). In this fairly simple scenario, limited to 2 interacting robots and a (very) simple world model, Vanderelst et al showed that their concept is workable. It would be very interesting to see how their solution would function in trolley-like dilemmas or in a sensorily complex environment with many actors, as is the case in the real world.
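
To make the flow of Figure 3 a bit more tangible, here is a minimal sketch of an “ethical governor” loop in the spirit of the Ethical Simulator / Ethics Evaluator split. It is my own toy reconstruction, not Vanderelst & Winfield’s code: the consequence model, the weights, the threshold and the way the index is fed back as an ethical reward are all illustrative assumptions.

```python
# Minimal, illustrative ethical-governor loop; all models, weights and
# thresholds are assumptions made for this sketch.

def simulate(action, world):
    """'Ethical Simulator': predict the consequences of a proposed action for
    the AI, the human(s) and the environment (a forward model). Here the
    predictions are simply looked up; a real system would need a learned or
    physics-based model."""
    return world["predicted_consequences"][action]

def ethical_desirability(consequences):
    """'Ethics Evaluator': condense the simulated consequences into a single
    desirability index in [0, 1]; higher means more ethically desirable."""
    index = 1.0
    index -= 0.8 * consequences.get("p_human_harm", 0.0)
    index -= 0.2 * consequences.get("p_biased_outcome", 0.0)
    return max(0.0, index)

def govern(proposed_actions, world, threshold=0.7):
    """Keep only actions above the desirability threshold, pick the best one,
    and return the index as an ethical reward signal that the AI control
    layer can use in its reinforcement loop."""
    scored = {a: ethical_desirability(simulate(a, world)) for a in proposed_actions}
    allowed = {a: s for a, s in scored.items() if s >= threshold}
    if not allowed:
        return None, 0.0          # no ethically acceptable action; do nothing / escalate
    best = max(allowed, key=allowed.get)
    return best, scored[best]     # action + ethical reward fed back to the control layer

# Toy usage:
world = {"predicted_consequences": {
    "brake": {"p_human_harm": 0.05, "p_biased_outcome": 0.0},
    "swerve": {"p_human_harm": 0.40, "p_biased_outcome": 0.0},
}}
print(govern(["brake", "swerve"], world))   # -> ('brake', 0.96)
```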

Figure 4 illustrates the traditional machine learning (ML) or AI creation process, starting with ingestion from various data sources, data preparation tasks (e.g., data selection, cleaning, structuring, etc.) and the AI training process, prior to letting the ML/AI agent loose in the production environment of a given system, product or service. I believe that, as the AI model is being trained, it is essential to include ethical considerations in the training process. Thus, not only should we consider how well a model performs (in the training process) compared to the actual data, but also whether the solution complies with a given ethical framework and imposed ethical rules. Examples could be to test for biased outcomes, or simply to close off part of the solution space due to a high or unacceptable risk of non-compliance with corporate guidelines and accepted moral frameworks. Furthermore, in line with Arkin et al (Arkin, Ulam & Wagner, 2012) and the work of Vanderelst et al (Vanderelst & Winfield, 2018), it is clear that we need a mechanism in our system architecture and production environments that checks AI-initiated actions for potential harmfulness to the consumer or for violation of ethical boundary conditions. This functionality could be part of the reinforcement feedback loop that seeks to optimize the system’s reward function for both ethical and non-ethical performance. In Figure 4, I call this the “Ethics Filter (ERL)”, with ERL standing for Ethical Reinforcement Learning.


Figure 4 When considering ethical AIs, we need to consider the whole process of creating a production-ready autonomous system that will be embedded into physical agents (e.g., robots, IoTs, …) as well as software-based systems (apps, management systems, AIaaS, software agents, …). It starts with taking in data from (relevant) data sources, preparing a subset of the data for the AI training process, running the training procedure, validating on test data, applying ethical policy algorithms to the training and validation of the model, transferring the production-ready AI model to the live environment (physical or software agent), and improving the model by applying reinforcement procedures (based on ethics compliance as well as other, non-ethical goals). I believe that it is important to apply ethical rules and filters to the training process (e.g., rooting out biases or unethical actions from the AI’s solution / action space) as well as to the live commercial environment.
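
As a sketch of what the “Ethics Filter (ERL)” idea could mean in practice, the toy example below combines an ordinary task score with an ethical penalty (here a crude demographic-parity style bias measure) into one composite reward, usable for model selection during training or as a reinforcement signal in the live environment. The function names, the bias metric and the weighting are illustrative assumptions, not part of the architecture in Figure 4.

```python
# Toy "ethics filter" reward: task performance minus a weighted ethical penalty.
# All names, metrics and weights are illustrative assumptions.
import numpy as np

def task_score(y_true, y_pred):
    """Ordinary model quality, here plain accuracy."""
    return float(np.mean(y_true == y_pred))

def ethics_penalty(y_pred, protected_group):
    """Crude fairness check: difference in positive-outcome rates between a
    protected group and the rest (demographic-parity style)."""
    rate_protected = float(np.mean(y_pred[protected_group == 1]))
    rate_rest = float(np.mean(y_pred[protected_group == 0]))
    return abs(rate_protected - rate_rest)

def composite_reward(y_true, y_pred, protected_group, lam=0.5):
    """Signal used for model selection during training, or as an (ethical +
    non-ethical) reinforcement reward in the live environment."""
    return task_score(y_true, y_pred) - lam * ethics_penalty(y_pred, protected_group)

# Example: compare two candidate models on a tiny validation set.
y_true = np.array([1, 0, 1, 1, 0, 1])
group  = np.array([1, 1, 1, 0, 0, 0])      # 1 = protected group member
model_a = np.array([1, 0, 1, 1, 0, 1])      # accurate and balanced on this toy data
model_b = np.array([1, 0, 1, 1, 1, 1])      # slightly less accurate, less balanced
for name, pred in [("A", model_a), ("B", model_b)]:
    print(name, round(composite_reward(y_true, pred, group), 3))
```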

It should be clear that words are cheap. It is easy to talk about embedding ethical checks and balances in AI system architectures. It is, however, much more difficult to actually build these ideas into a real-world AI system and achieve reasonable decision response times (e.g., measured in seconds or lower) while considering all possible (likely) consequences of an AI-proposed action. The computational overhead of clearing or adapting an action could lead to unreasonably long processing times. In robot experiments using Asimovian ethics, Alan Winfield of the Bristol Robotics Laboratory in the UK showed that in more than 40% of the trials the robot’s ethical decision logic spent so long finding a solution that the simulated humans the robot was supposed to save perished (Rutkin, 2014).

MAGENTA-PAINTED DIGITAL ETHICS FOR AIs.

Let us have a look at Deutsche Telekom’s AI Ethics Team’s work on AI ethics or, as we call it, “Digital Ethics – AI Guidelines” (DTAG, 2018).

The following (condensed) guidelines take as their starting point that our company/management is the main producer of ethics and moral action;

  1. We are responsible (for our AIs).
  2. We care (that our AI must obey rules of law & comply with our company values).
  3. We put our customers first (AI must benefit our customers).
  4. We are transparent (about the use of AI).
  5. We are secure (our AI’s actions are auditable & respectful of privacy).
  6. We set the grounds (our AI aim to provide the best possible outcomes & do no harm to our customers).
  7. We keep control (and can deactivate & stop our AI at any time).
  8. We foster the cooperative model (between Human and AI by maximizing the benefits).
  9. We share and enlighten (we will foster open communication & honest dialogue around the AI topic).

The above rules are important and meaningful from a corporate compliance perspective, and not to forget for society in general. While the guidelines are aspirational in nature and necessary, they are not sufficient for the design of ethical AI-based systems, products and services. Bridging the gap between AI ethics in wording and concrete, ready-to-code design rules is one of the biggest challenges we face technologically.

Our Digital Ethics fulfills what Bostrom and Yudkowsky in “The Cambridge Handbook of Artificial Intelligence” (Frankish and Ramsey, 2015) define as minimum requirements for AI-based actions augmenting or replacing human societal functions (e.g., decisions, work tasks …): AI actions must at least be transparent, explainable, predictable, robust against manipulation, auditable and subject to clear human accountability.

The next level of detail of DTAG’s “Digital Ethics” guidelines shows that the ethical framework within which we strive to design AIs is top-down in nature and a combination of mainly deontological (i.e., rule-based) and utilitarian (i.e., striving for the best possible outcome) principles. Much more work will be needed to ensure that no conflicts occur between the deontological rules in our guidelines and the utilitarian ambitions.

The bigger challenge will be to translate our aspirational guidelines into something meaningful in our AI-based products, services and critical communications infrastructure (e.g., communications networks).

“Expressing a desire for AI ethical compliance is the easy part. The really hard part is to implement such aspirations into actual AI systems and then get them to work decently” (Kim, 2018).

THE END IS JUST THE BEGINNING.

It should be clear that we are far away (maybe even very far) from really understanding how we can best build ethical checks and balances into our increasingly autonomous AI-based products and services landscape. And not to forget how ethical autonomous AIs fit into our society’s critical infrastructures, e.g., telco, power, financial networks and so forth.

This challenge will of course not stop humanity from becoming increasingly more dependent on AI-driven autonomous solutions. After all, AI-based technologies promise to leapfrog consumer convenience and economic advantages to corporations, public institutions and society in general.

From my AI perception studies (Larsen, 2018 I & II), corporate decision makers, our customers and consumers don’t trust AI-based actions (at least when they are aware of them). Most of us would prefer an inconsistent, error-prone and unpredictable emotional manager full of himself to an unemotional, consistent and predictable AI with a low error rate. We expect an AI to be more than perfect. This AI allergy is often underestimated in corporate policies and strategies.

In a recent survey (Larsen, 2018 II), I asked respondents to judge the following two questions;

  1. “Do you trust that companies using AI in their products and services have your best interest in mind?”
  2. “How would you describe your level of confidence that political institutions adequately consider the medium to long-term societal impact of AI?”

9% of the survey respondents believed that companies using AI in their products and services have their customers’ best interest in mind.

80% of the survey respondents had low to no confidence that political institutions adequately consider the medium- to long-term societal impact of AI.

I have little doubt that as AI technology evolves and is increasingly used in products, services and critical infrastructure that we humans are exposed to daily, there will be an increasing demand for transparency about the inherent risks to individuals, groups and society in general.

That consumers do not trust companies to have their best interest in mind is not surprising in today’s environment of “fake news”, “Brexit”, “Trumpism”, “influencer campaigns” (e.g., Cambridge Analytica & Facebook) and so forth. “Weaponized” AI will be developed to further strengthen the relatively simple approaches of the Cambridge Analytica “cousins”, Facebook and the Googles of this world. Why is that? I believe the money and power to be gained by weaponized AI approaches are far too tempting to believe that their use will not increase going into the future. The trust challenge will remain, if not increase. The Genie is out of the bottle.

AI will continue to take over human tasks, and this trend will accelerate. AI will increasingly be involved in critical decisions that impact individuals’ lives and livelihoods. AI will become increasingly better at mimicking humans (Vincent, 2018). Affective AIs have the capacity, even today, to express emotions and sentiment without being sentient (Lomas, 2018). AI will become increasingly autonomous and possibly even gain the capability to self-improve (without evolving to sentience) (Gent, 2017). Thus the knowledge distance between the original developer and the evolved AI could become very large, depending on whether the evolution is bounded (likely, in my opinion) or unbounded (unlikely, in my opinion).

It will be interesting to follow how well humans in general will adapt to humanoid AIs, i.e., AIs mimicking human behavior. From work by Mori et al (Mori, MacDorman, & Kageki, 2012) and many others (Mathur & Reichling, 2016), it has been found that we humans are very good at picking up on cues that appear false or off compared to our baseline reference of human behavior. Mori et al coined the term for this feeling of “offness”: the uncanny valley.

Without AI ethics and clear ethical policies and compliance, I would be somewhat nervous about an AI future. I think this is a much bigger challenge than the fundamental technology and science aspects of AI improvement and evolution. Society needs our political institutions to be much more engaged in the questions of the Good, the Bad and the Truly Ugly use cases of AI … I don’t think one needs to fear super-intelligent, God-like AI beings (for quite some time, and then some) … One needs to realize that narrowly specialized AIs, individually or as collaborating collectives, can do a lot of harm, unintended as well as intended (Alang, 2017; Angwin, Larson & Mattu, 2016; O’Neil, 2017; Wachter-Boettcher, 2018).

“Most of us prefer an inconsistent, error prone and unpredictable emotional manager full of himself to that of an un-emotional, consistent and predictable AI with a low error rate.” (Kim, 2018).

ACKNOWLEDGEMENT.

I gratefully acknowledge my wife, Eva Varadi, for her support, patience and understanding during the process of creating this blog. Without her support, I really would not be able to do this, or it would take a lot longer, past my expiration date, to finish.

WORTH READING.

Agency, D. and Events, N. (2018). The Radio Frequency Spectrum + Machine Learning = A New Wave in Radio Technology. [online] Darpa.mil. Available at: https://www.darpa.mil/news-events/2017-08-11a.

Agrafioti, F. (2018). Ensuring that artificial intelligence is ethical? That’s everyone’s responsibility – Macleans.ca. [online] Macleans.ca. Available at: https://www.macleans.ca/opinion/ensuring-that-artificial-intelligence-is-ethical-thats-everyones-responsibility/

Alang, N. (2017). Turns Out Algorithms Are Racist. [online] The New Republic. Available at: https://newrepublic.com/article/144644/turns-algorithms-racist.

Anderson, M. and Anderson, S. (2007). Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine, 28(4), 15-26.

Anderson, M. and Anderson, S. (2010). Robot Be Good. Scientific American, 303(4), pp.72-77.

Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). Machine Bias — ProPublica. [online] ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Arkin, R. (2008). Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture. Technical Report GIT GVU 07 11 (Georgia Institute of Technology).

Arkin, R. (2010). Governing lethal behavior in autonomous robots. Boca Raton, Fla.: Chapman & Hall/CRC Press.

Arkin, R., Ulam, P. and Wagner, A. (2012). Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception. Proceedings of the IEEE, 100(3), pp.571-589.

Asimov, I. (1984). Foundation; I, Robot. London: Octopus Books. First published 1950.

Asimov, I. (2013). Foundation and earth. New York: Spectra. First published 1986.

Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88(1), pp.1-45.

Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Cham: Springer International Publishing.

Boehm, C. (2012). Moral Origins. Basic Books.

Bostrom, N. (2016). Superintelligence. Oxford University Press.

Buolamwini, J. and Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, pp.1-15.

Cathcart, T. (2013). The trolley problem, or, Would you throw the fat man off the bridge?. Workman Publishing, New York.

Chakravorti, B. and Chaturvedi, R. (2017). Digital Planet 2017. [online] Sites.tufts.edu. Available at: https://sites.tufts.edu/digitalplanet/files/2017/05/Digital_Planet_2017_FINAL.pdf.

Dawkins, R. (1989). The Selfish Gene. 4th ed. Oxford University Press.

Descartes, R., Haldane, E. and Lindsay, A. (2017). Discourse on Method and Meditations of First Philosophy (Translated by Elizabeth S. Haldane with an Introduction by A.D. Lindsay). Stilwell: Neeland Media LLC.

Deutsche Telekom AG. (2018). Digital Ethics – Deutsche Telekom’s AI Guideline. [online] Telekom.com. Available at: https://www.telekom.com/en/company/digital-responsibility/digital-ethics-deutsche-telekoms-ai-guideline.

EADv2 – Ethics in Action. (2018). Ethically Aligned Design, Version 2 (EADv2) | IEEE Standards Association. [online] Available at: https://ethicsinaction.ieee.org/.

Fischer, J., Kane, R., Pereboom, D. and Vargas, M. (2010). Four views on free will. Malden [et al.]: Blackwell Publishing.

Frankish, K. and Ramsey, W. (2015). The Cambridge handbook of artificial intelligence. Cambridge, UK: Cambridge University Press.

Gent, E. (2017). Google’s AI-Building AI Is a Step Toward Self-Improving AI. [online] Singularity Hub. Available at: https://singularityhub.com/2017/05/31/googles-ai-building-ai-is-a-step-toward-self-improving-ai/#sm.0001yaqn0ub06ejzq7b2odvsw2kj1

Gottlieb, A. (2016). The dream of enlightenment. Allen Lane.

Hardy, S. and Carlo, G. (2011). Moral Identity: What Is It, How Does It Develop, and Is It Linked to Moral Action?. Child Development Perspectives, 5(3), pp.212-218.

Hart, D. and Fegley, S. (1995). Prosocial Behavior and Caring in Adolescence: Relations to Self-Understanding and Social Judgment. Child Development, 66(5), p.1346.

Hume, D., (1738, 2015). A treatise of human nature. Digireads.com Publishing.

Kant, I. (1788, 2012). The critique of practical reason. [United States]: Start Publishing. Immanuel Kant originally published his “Critik der praktischen Vernunft” in 1788. It was the second book in Kant’s series of three critiques.

Kwatz, P. (2017). Conscious robots. Peacock’s Tail Publishing.

Kuipers, B. (2016). Human-Like Morality and Ethics for Robots. The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence AI, Ethics, and Society:, Technical Report WS-16-02.

Larsen, K. (2018 I). On the Acceptance of Artificial Intelligence in Corporate Decision Making – A Survey.. [online] AI Strategy & Policy. Available at: https://aistrategyblog.com/2017/11/05/on-the-acceptance-of-artificial-intelligence-in-corporate-decision-making-a-survey/.

Larsen, K. (2018 II). Smart life 3.0 – SMART 2018 Conference on “Digital Frontiers and Human Consequences” (Budapest, 4 April 2018).. [online] Slideshare.net. Available at: https://www.slideshare.net/KimKyllesbechLarsen/smart-life-30.

Lin, P., Abney, K. and Jenkins, R. (2017). Robot ethics 2.0. New York: Oxford University Press.

Lomas, N. (2018). Duplex shows Google failing at ethical and creative AI design. [online] TechCrunch. Available at: https://techcrunch.com/2018/05/10/duplex-shows-google-failing-at-ethical-and-creative-ai-design/.

Lumbreras, S. (2017). The Limits of Machine Ethics. Religions, 8(5), p.100.

Marques, H. and Holland, O. (2009). Architectures for functional imagination. Neurocomputing, 72(4-6), pp.743-759.

Mathur, M. and Reichling, D. (2016). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition, 146, pp.22-32.

Moor, J. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), pp.18-21.

Moor, J. (2018). Four Kinds of Ethical Robots | Issue 72 | Philosophy Now. [online] Philosophynow.org. Available at: https://philosophynow.org/issues/72/Four_Kinds_of_Ethical_Robots.

Mori, M., MacDorman, K. and Kageki, N. (2012). The Uncanny Valley [From the Field]. IEEE Robotics & Automation Magazine, 19(2), pp.98-100.

Murphy, R. and Woods, D. (2009). Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24(4), pp.14-20.

O’Neil, C. (2017). Weapons of Math Destruction. Penguin Books.

Osaba, O. and Welser IV, W. (2017). An intelligence in our image: The risks of bias and errors in artificial intelligence. 1st ed. RAND Corporation.

Outram, D. (2012). The Enlightenment. Cambridge: Cambridge University Press.

Rutkin, A. (2014). The robot’s dilemma. New Scientist, 223(2986), p.22.

Sandel, M. (2018). Justice: What’s The Right Thing To Do? Episode 01 “THE MORAL SIDE OF MURDER”. [online] YouTube. Available at: https://www.youtube.com/watch?v=kBdfcR-8hEY.

Sapolsky, R. (2017). Behave: The Biology of Humans at Our Best and Worst. 1st ed. Penguin Press. Note: Chapter 13 “Morality and Doing the Right Thing, Once You’ve Figured Out What That is” is of particular relevance here (although the whole book is extremely read worthy).

Shachtman, N. (2018). New Armed Robot Groomed for War. [online] WIRED. Available at: https://www.wired.com/2007/10/tt-tt/.

Shafer-Landau, R. (2013). Ethical theory. Chichester, West Sussex: Wiley-Blackwell.

Simonite, T. (2018). Google’s AI software is learning to make AI software. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/.

Singer, P. (2018). Isaac Asimov’s Laws of Robotics Are Wrong. [online] Brookings. Available at: https://www.brookings.edu/opinions/isaac-asimovs-laws-of-robotics-are-wrong/.

Smith, A. and Raphael, D. (1991). The wealth of nations. New York: Knopf.

Torrance, S. (2018). Will Robots Need Their Own Ethics? | Issue 72 | Philosophy Now. [online] Philosophynow.org. Available at: https://philosophynow.org/issues/72/Will_Robots_Need_Their_Own_Ethics.

Torresen, J. (2018). A Review of Future and Ethical Perspectives of Robotics and AI. Frontiers in Robotics and AI, 4.

transhumanity.net. (2015). Biological Machines. [online] Available at: http://transhumanity.net/biological-machines/

Vanderelst, D. and Winfield, A. (2018). An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 48, pp.56-66.

Vincent, J. (2018). Google’s AI sounds like a human on the phone — should we be worried?. [online] The Verge. Available at: https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistant-duplex-ethical-social-implications

Wachter-Boettcher, S. (2018). Technically Wrong. W.W. Norton.

Waldrop, M. (2015). Autonomous vehicles: No drivers required. Nature, 518(7537), pp.20-23.

Wallach, W., Allen, C. and Smit, I. (2007). Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI & SOCIETY, 22(4), pp.565-582.

Wallach, W. and Allen, C. (2009). Moral machines. New York, N.Y.: Oxford University Press.

Winfield, A. (2018). Ethical robots save humans. [online] YouTube. Available at: https://www.youtube.com/watch?v=jCZDyqcxwlo.

Yudkowsky, E. (2015). Rationality From AI to Zombies. 1st ed. Berkeley: Machine Intelligence Research Institute.
