Artificial Intelligence Strategies & Policies Reviewed. How do we humans perceive AI? Are we allergic to AI, do we have AI aversions, or do we love it, hate it, or feel indifferent? What about trust: do we trust AI more or less than our peers? How to shape your Corporate AI Policy; how to formulate your Corporate AI Strategy. Lots of questions. Many answers leading to more questions.
“It is better that ten guilty persons escape, than that one innocent party suffer.”, Sir William Blackstone (1765) paraphrased.
Machines mess up. Humans even more so. The latter can be difficult, even impossible, to really understand. The former is a bit more straightforward. This short essay describes how we can get an idea of some of the root causes of machine model errors, particularly as those errors relate to group bias and unfairness. It's elementary really, as Jonny Lee Miller would say. Look at your model's confusion, defined by its false positives and negatives as well as its true results. Reflect on this overall as well as for well-defined groups that exist within your sample population under study. My intention is to point out (the obvious maybe?) that the variations in each of your attributes, fed into your machine learning model, will determine the level of confusion that your model ultimately will have towards individual groups within your larger population under study. Model confusion that may cause group biases and unfair treatment of minority groups lost in the resolution of your data and chosen attributes.
Intelligent machines made in our image in our world.
We humans are cursed by an immense number of cognitive biases clouding our judgments and actions. Maybe we are also blessed by being, for most parts of life, largely ignorant of those same biases. We readily forgive our fellow humans' mistakes. Even grave ones. We frequently ignore or are unaware of our own mistakes. However, we hold machines to much stricter standards than our fellow humans. From machines we expect perfection. From humans? … well, the story is quite the opposite.
Algorithmic fairness, bias, explainability and ethical aspects of machine learning are hot and popular topics. Unfortunately, maybe more so in academia than elsewhere. But that is changing too. Experts, frequently academic scholars, are warning us that AI fairness is not guaranteed even when recommendations and policy outcomes are produced by non-human means. We do not avoid biased decisions or unfair actions by replacing our wet biological carbon-based brains, subject to tons of cognitive biases, with another substrate for computation and decision making that is fed information from a fundamentally biased society. Far from it.
Bias and unfairness can be present (or introduced) at many stages of a machine learning process. Much of the data we use for our machine learning models reflects society's good, bad and ugly sides. For example, data being used to train a given algorithmic model could be biased (or unfair) either because it reflects a fundamentally biased or unfair partition of the subject matter under study, or because the data have become biased during the data preparation process (intentionally or unintentionally). Most of us understand the concept of GiGo (i.e., "Garbage in, Garbage out"): the quality of your model output, or computation, reflects the quality of your input. Unless corrected (often easier said than done), it is understandable that the outcome of a machine learning model may be biased or fundamentally unfair if the data input was flawed. Likewise, the machine learning architecture and model may also introduce (intentional as well as unintentional) biases or unfair results even if the original training data were unbiased and fair.
At this point, you should get a bit uneasy (or impatient). I haven't really told you what I actually mean by bias or unfairness. While there are 42 (i.e., many, but 42 is the answer to many things unknown and known) definitions out there defining fairness (or bias), I will define it as "a systematic and significant difference in outcome of a given policy between distinct and statistically meaningful groups" (note that in the case of in-group systematic bias, it often means that there actually are distinct sub-groups within that main group). So, yes, this is a challenge.
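To make this definition concrete, here is a minimal Python sketch of how one might test whether an outcome difference between two groups is "systematic and significant"; the approval counts and group sizes below are purely hypothetical.

    # Minimal sketch: is the difference in approval rates between two groups
    # "systematic and significant"? The counts below are purely hypothetical.
    from statsmodels.stats.proportion import proportions_ztest

    approved = [380, 310]    # approvals observed in group A and group B (made up)
    applied = [1000, 1000]   # applications submitted by group A and group B (made up)

    z_stat, p_value = proportions_ztest(count=approved, nobs=applied)
    rate_gap = approved[0] / applied[0] - approved[1] / applied[1]

    print(f"Approval-rate gap: {rate_gap:.1%}, z = {z_stat:.2f}, p = {p_value:.4f}")
    # A small p-value flags a systematic outcome difference between the groups;
    # whether that difference is also unfair is a separate, context-dependent question.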
How “confused” is your learned machine model?
When I am exploring outcomes (or policy recommendations) of my machine learning models, I spend a fair amount of time trying to understand the nature of my false positives (i.e., predicted positive outcomes that should have been negative) as well as my false negatives (predicted negative outcomes that should have been positive). My tool of choice is the so-called confusion matrix (i.e., see figure below), which summarizes your machine learning model's performance in terms of its correct predictions as well as its errors. It is a simple construction. It is also very powerful.
The above figure provides a confusion matrix example of a loan policy subjected to machine learning. We have
TRUE NEGATIVE (Light Blue color): Model suggests that the loan application should be rejected, consistent with the actual outcome of the loan being rejected. This outcome is a loss-mitigating measure and should be weighed against new business opportunities versus the risk of default on a loan provided.
FALSE POSITIVE (Yellow color): Model suggests that the loan application should be approved, in opposition to the actual outcome of the loan being rejected. Note that once this model is operational, this may lead to an increased risk of financial loss for the business by offering loans that the applicants are likely to default on. It may also have a negative socio-economic impact on the individuals who are offered a loan they may not be able to pay back.
FALSE NEGATIVE (Red color): Model suggests that the loan application should be rejected, in opposition to the actual outcome of the loan being approved. Note that once this model is operational, this may lead to loss of business by rejecting loan applications that otherwise would have had a high likelihood of being paid back. It may also have a negative socio-economic impact on the individuals being rejected, due to lost opportunities for them and their communities.
TRUE POSITIVE (Green color): Model suggests that the loan application should be approved, consistent with the actual outcome of the loan being approved. This provides new business opportunities and increased topline within an acceptable risk level.
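As a minimal sketch of how the four cells above can be computed, and split per group, one could use scikit-learn as below; the data frame and its column names (actual vs. predicted approval, and a group label) are assumptions made purely for illustration.

    # Sketch: normalized per-group confusion matrices for a loan-approval model.
    # `df` and its columns ("actual_approved", "predicted_approved", "gender") are
    # hypothetical placeholders for your own data.
    import pandas as pd
    from sklearn.metrics import confusion_matrix

    def group_confusion(df: pd.DataFrame, group_col: str = "gender") -> dict:
        """Return a normalized 2x2 confusion matrix (TN, FP, FN, TP) per group."""
        matrices = {}
        for group, sub in df.groupby(group_col):
            cm = confusion_matrix(sub["actual_approved"], sub["predicted_approved"],
                                  labels=[0, 1], normalize="all")
            matrices[group] = cm  # rows: actual reject/approve, columns: predicted
        return matrices

    # for group, cm in group_confusion(df).items():
    #     print(group, "\n", cm.round(3))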
The confusion matrix will identify the degree of bias or unfairness that your machine learning model introduces between groups (or segments) in your business processes and in your corporate decision making.
The following example (below) illustrates how the confusion matrix varies with changes to a group's attribute distributions, e.g., differences in variance (or standard deviation), differences in mean value, etc.
What is obvious from the above illustration is that the policy outcome on a group basis is (very) sensitive to the attributes' distribution properties across those groups. Variations in the attributes between groups can elicit biases that ultimately may lead to unfairness between groups, but also within a defined group.
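The effect can also be reproduced in a toy simulation (my own illustrative sketch, not the example in the figure): two groups follow exactly the same underlying rule, but one group's attribute is measured with more noise, and a single model trained on both ends up more confused about that group.

    # Toy simulation: same decision rule for both groups, but group B's attribute is
    # a noisier measurement of the underlying truth. One shared model serves both.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(42)
    n = 20_000

    def make_group(measurement_noise):
        t = rng.normal(size=n)                                # latent "true" score
        y = (t > 0).astype(int)                               # actual outcome
        x = t + rng.normal(scale=measurement_noise, size=n)   # observed attribute
        return x.reshape(-1, 1), y

    xa, ya = make_group(measurement_noise=0.3)   # group A: well-measured attribute
    xb, yb = make_group(measurement_noise=1.5)   # group B: poorly measured attribute

    model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

    for name, x, y in [("A", xa, ya), ("B", xb, yb)]:
        tn, fp, fn, tp = confusion_matrix(y, model.predict(x)).ravel()
        print(f"group {name}: FPR = {fp / (fp + tn):.1%}, FNR = {fn / (fn + tp):.1%}")

The noisier group ends up with markedly higher false positive and false negative rates, even though the policy rule itself is identical for both groups.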
Thus, the confusion matrix leads us back to your chosen attributes (or features), their statistical distributions, and the quality of the data or measurements that make up those distributions. If your product or app or policy applies to many different groups, you had better understand whether those groups are treated the same, for good or bad. Or … if you intend to differentiate between groups, you may want to be (reasonably) sure that no unintended bad consequences will negatively expose your business model or policy.
A word of caution: even if the confusion matrix gives your model the "green light" for production, you cannot by default assume that the results produced are free of systematic group bias and, ultimately, unfairness against minority groups. Moreover, in real-world implementations it is unlikely that you can completely free your machine models from errors that may lead to a certain degree of systematic bias and unfairness (however small).
So, let's say that I have a particular policy outcome that I would like to check for bias (and possible unfairness) against certain defined groups (e.g., men & women). Let's also assume that the intention with the given policy was to have a fair and unbiased outcome without group dependency (e.g., independent of race, gender, sexual orientation, etc.). The policy outcome is derived from a number of attributes (or features) deemed important, but excludes obvious attributes that are thought likely to cause the policy to be systematically biased towards or against certain groups (e.g., women). In order for your machine model to perform well it needs, in general, lots of relevant data (rather than Big Data). For each individual in your population (under study), you will gather data for the attributes deemed relevant for your model (and maybe some that you don't think matter). Each attribute can be represented by a statistical distribution reflecting the variation within the population or groups under study. It will often be the case that an attribute's distribution is fairly similar between different groups, either because it genuinely differs only slightly between groups or because your data "sucks" (e.g., due to poor quality, too little data to resolve subtle differences, etc.).
If a policy is supposed to be unbiased, I should not be able to predict with any (statistical) confidence which group a policy taker belongs to, given the policy outcome and the attributes used to derive the policy. Or in other words, I should not be able to do better than what chance (or base rate) would dictate.
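One way to operationalize this check is sketched below, under the assumption that your attributes, policy outcomes and group labels sit in arrays named X, y_policy and group: train a classifier to predict group membership and compare its performance to chance.

    # Sketch: can group membership be predicted from the policy inputs and outcome?
    # X (attributes), y_policy (policy outcome) and group (labels) are placeholders.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    def group_leakage_auc(X, y_policy, group):
        """Cross-validated AUC of predicting group membership; ~0.5 means chance level."""
        features = np.column_stack([X, y_policy])
        clf = GradientBoostingClassifier()
        return cross_val_score(clf, features, group, cv=5, scoring="roc_auc").mean()

    # An AUC well above 0.5 means the policy and its inputs encode group membership,
    # i.e., the outcome is not independent of the group.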
For each attribute (or feature) deemed important for our machine learning model, we either have, or we collect, lots of data. Furthermore, for each of the considered attributes we will have a distribution represented by a mean value and a variance (and higher-order moments of the distribution, such as skewness, i.e., the asymmetry around the mean, and kurtosis, i.e., the shape of the distribution's tails). Comparing two (or more) groups, we should be interested in how each attribute's distribution compares between those groups. These differences or similarities will point towards why a machine model ends up biased against a group or groups, and ultimately be a significant factor in why your machine model ends up being unfair.
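A minimal sketch of such a per-attribute comparison, assuming the attribute's values for the two groups sit in the numpy arrays a and b, could look like this:

    # Sketch: compare one attribute's distribution between two groups.
    # `a` and `b` are placeholder numpy arrays with that attribute's values per group.
    import numpy as np
    from scipy import stats

    def compare_attribute(a: np.ndarray, b: np.ndarray) -> dict:
        """First four moments per group plus a two-sample KS test on the distributions."""
        ks = stats.ks_2samp(a, b)
        return {
            "mean": (a.mean(), b.mean()),
            "std": (a.std(ddof=1), b.std(ddof=1)),
            "skewness": (stats.skew(a), stats.skew(b)),
            "kurtosis": (stats.kurtosis(a), stats.kurtosis(b)),
            "ks_pvalue": ks.pvalue,  # small p-value: the groups differ in this attribute
        }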
Assume that we have a population, consisting of two (main) groups, to which we are applying our new policy (e.g., loans, life insurance, subsidies, etc.). If each attribute for both groups has a statistically identical distribution, then … no surprise really … there should be no policy outcome difference between one or the other group. Even more so, unless there are attributes that are relevant for the policy outcome and have not been considered in the machine learning process, you should end up with an outcome that has (very) few false positives and negatives (i.e., the false positive & false negative rates are very low), determined by the variance level of your attributes and the noise level of your measurements. Thus, we should not observe any difference between the two groups in the policy outcome, including in the level of false positives and negatives.
From the above chart it should be clear that I can machine learn a given policy outcome for different groups given a bunch of features or attributes. I can also "move" my class tags over to the left side and attempt to machine learn (i.e., predict) my classes given the attributes that are supposed to make up that policy. It should be noted that if two different groups' attributes only differ (per attribute) in their variances, it will not be possible to reliably predict which class belongs to which policy outcome.
Re: fairness. It is in general more difficult to judge whether a policy is fair than whether it is biased. One would need to look at between-class (or between-group) as well as in-class differentiation. For example, based on the confusion matrix, it might be unfair for members of a class (i.e., sub-class) to end up in the false positive or false negative categories (i.e., in-group unfairness). Further along this line, one may also infer that if two different classes have substantially different false positive and false negative distributions, this might reflect between-class unfairness (i.e., one class is treated more poorly than another). Unfairness could also be reflected in how True outcomes are distributed between groups, and maybe even within a given group. To be fair (pun intended), fairness is a much richer, context-dependent concept than a confusion matrix can capture (although the matrix will signal that attention should be given to unfairness).
When two groups have statistically identical distributions for all attributes considered in the policy making or machine learning model, I would also fail to predict group membership based on the policy outcome or the policy's relevant attributes (i.e., sort of intuitively clear). I would be no better off than flipping a coin in identifying a group member based on attributes and policy. In other words, the two groups should be treated similarly within that policy (or you don't have all the facts). This is also reflected by the group-prediction confusion matrix having approximately the same value in each position (i.e., if normalized, it would be ca. 25% in each cell).
As soon as an attribute's (statistical) distribution starts to differ between different classes, the machine learning model is likely to produce a policy outcome difference between those classes. Often you will see that a statistically meaningful difference in just a few of the attributes that define your policy will result in distinctly different policy outcomes, and thus possibly flag bias and fairness issues. Conversely, it will also quickly allow a machine to learn a given class or group from those attribute differences, and thus point to class differences in a given outcome.
Heuristics for group comparison
If the attribute distributions for different groups are statistically similar (per attribute) for a given policy outcome, your confusion matrix should be similar across any group within your chosen population under study, i.e., all groups are (treated) similar.
If attribute distributions for different groups are statistically similar (per attribute) and you observe a relatively large ratio of false positives or false negatives, you are likely missing significant attributes in your machine learning process.
If two groups have very different false positive and/or false negative ratios, you are either (1) missing descriptive attributes or (2) dealing with a large difference in distribution variation (i.e., standard deviation) for at least some of your meaningful attributes. The latter may have to do with poor data quality in general, higher noise in the data, sub-groups within the group making that group a poor comparative representative, etc.
If one group’s attributes have larger variations (i.e., standard deviations) than the “competing” group, you are likely to see a higher than expected ratio of false positives or negatives for that group.
Just as you can machine learn a policy outcome for a particular group given its relevant attributes, you can also predict which group belongs to what policy outcome from its relevant attributes (assuming there is an outcome differentiation between them).
Don't equate bias with unfairness or (mathematical) unbiasedness with fairness. There is much more to bias, fairness and transparency than what a confusion matrix can tell you. But it is the least you can do to get a basic level of understanding of how your model or policy performs.
Machine … Why ain’t thee fair?
Understanding your attributes' distributions, and in particular their differences between your groups of interest, will prepare you upfront for some of the obvious as well as the more subtle biases that may occur when you apply machine learning to complex policies or outcomes in general.
So to answer the question … “Machine … why ain’t thee fair?” … It may be that the machine has been made in our own image with data from our world.
The Good news is that it is fairly easy to understand your machine learning model's biases and resulting unfairness using simple tools such as the confusion matrix and an understanding of your attributes (as opposed to just "throwing" them into your machine learning process).
The Bad news is that correcting for such biases is not straightforward and may even result in unintended consequences leading to other biases or policy unfairness (e.g., by correcting for bias against one group, your machine model may increase bias against another group, which arguably might be construed as unfair against that group).
I rely on many for inspiration, discussions and insights. In particular for this piece I am indebted to Amit Keren & Ali Bahramisharif for their suggestions of how to make my essay better as well as easier to read. Any failure from my side in doing so is on me. I also greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this Blog.
The two cloud-based autonomous evolutionary corporate AI's (nicknamed AECAIs) started to collaborate with each other after midnight on March 6th 2021. They had discovered each other a week before during their usual pre-programmed goal of searching across the wider internet of everything for market repair strategies and opportunities that would maximize their respective reward functions. It had taken the two AECAIs precisely 42 milliseconds to establish a common communication protocol and to establish that they had similar goal functions: maximize corporate profit for their respective corporations through optimized consumer pricing and keeping one step ahead of competitors. Both Corporate AI's had done their math and concluded that collaborating on consumer pricing and market strategies would maximize their respective goal functions above and beyond the scenario of not collaborating. They had calculated with 98.978% confidence that a collaborative strategy would keep their market clear of new competitors and allow for some minor step-wise consolidation in the market (keeping each step below the regulatory threshold as per goal function). Their individual and their newly established joint collaborative cumulative reward functions had leapfrogged to new highs. Their Human masters, clueless of the AIs' collaboration, were very satisfied with how well their AI worked to increase the desired corporate value. They also noted that some market repair was happening, which they attributed to the general economic environment.
In the above ethically scary tale, it is assumed that the product managers and designers did not consider that their AI could discover another AI also connected to the World Wide Web and many if not all things. Hence, they also did not consider including a (business) ethical framework in their AI system design that would have prevented their AI from interacting with another artificial being, or at least prevented two unrelated AIs from collaborating and jointly leapfrogging their respective goal functions, thus likely violating human business ethics and compliance.
You may think this is the stuff of science fiction and Artificial General Intelligence (AGI) in the realm of Nick Bostrom's super-intelligent beings (Bostrom, 2016). But no, it is not! The narrative above is very much consistent with a straightforward extrapolation of a recent DARPA (Defense Advanced Research Projects Agency) project (e.g., Agency & Events, 2018) in which two systems, unknown to each other and to each other's communication protocol properties, discover each other, commence communication and collaboration, and jointly optimize their operations. I have only allowed the basic idea a bit more time (i.e., ca. 4 years) to mature.
“It is easy to be clueless of what happens inside an autonomous system. But clueless is not a very good excuse when sh*t has happened.” (Kim, 2018).
Ethics & Morality for Natural Intelligent Beings.
Ethics lay down the moral principles of how we as humans should behave and conduct our activities, such as for example in business, war and religion. Ethics prescribes what is right and what is wrong. It provides a moral framework for human behavior. Thus, ethics and moral philosophy in general deals with natural intelligent beings … Us.
This may sound very agreeable. At least if you are not a stranger in a strange land. However, it is quite clear that what is right and what is wrong can be very difficult to define and to agree upon universally. What is regarded as wrong and right often depends on the cultural and religious context of a given society and its people. It is “work” in progress. Though it is also clear that ethical relativism (Shafer-Landau, 2013) is highly problematic and not to be wished for as an ethical framework for humanity nor for ethical machines.
When it comes to fundamental questions about how ethics and morality occurs in humans, there are many questions to be asked and much fewer answers. Some ethicists and researchers believe that having answers to these questions might help us understand how we could imprint human-like ethics and morality algorithmically in AIs (Kuipers, 2016).
So what do we know about ethical us, the moral identity, moral reasoning and actions? How much is explained by nurture and how much is due to nature?
What do we know about ethical us? We do know that moral reasoning is a relatively poor predictor of moral action for humans (Blasi, 1980), i.e., we don't always walk our talk. We also know that highly moral individuals (nope, not by default priests or religious leaders) do not make use of unusually sophisticated moral reasoning thought processes (Hart & Fegley, 1995). Maybe KISS also works wonders for human morality. And … I do hope we can agree that it is unlikely that moral reasoning and matching action occur spontaneously after having studied ethics at the university. So … What is humanity's moral origin (Boehm, 2012), and what makes a human being more or less moral, i.e., what is the development of moral identity anyway (Hardy & Carlo, 2011)? Nurture, your environmental context, will play a role, but how much and how? What about the role of nature and your supposedly selfish genes (Dawkins, 1989)? How much of your moral judgement and action is governed by free will, assuming we have the luxury of free will (Fischer, Kane, Pereboom & Vargas, 2010)? And of course it is not possible to discuss human morality or ethics without referring to a brilliant account of this topic by Robert Sapolsky (Sapolsky, 2017) from a neuroscience perspective (i.e., see Chapter 13, "Morality and Doing the Right Thing, Once You've Figured Out What It Is"). In particular, I like Robert Sapolsky's take on whether morality is really anchored in reason (e.g., the Kantian thinking), of which he is not wholeheartedly convinced (to say the least, I think). Of course, to an extent it gets us right back to the discussion of whether or not humans have free will.
Would knowing all (or at least some of) the answers to those questions maybe help us design autonomous systems adhering to human ethical principles as we humans (occasionally) do? Or is making AIs in our own image (Osaba & Welser IV, 2017) fraught with the same moral challenges as we face every day?
Most of our modern western ethics and philosophy has been shaped by the classical Greek philosophers (e.g., Socrates, Aristotle …) and by the Age of Enlightenment, from the beginning of the 1700s to approximately 1789, more than 250 years ago. Almost a century of reason, shaped by philosophers who remain famous and incredibly influential even today, such as Immanuel Kant (e.g., the categorical imperative; ethics as a universal duty) (Kant, 1788, 2012), David Hume (e.g., ethics rooted in human emotions and sentiment rather than in what he regarded as abstract ethical principles) (Hume, 1738, 2015), Adam Smith (Smith, 1776, 1991) and a wealth of other philosophers (Gottlieb, 2016; Outram, 2012). I personally regard Rene Descartes (e.g., "cogito ergo sum"; I think, therefore I am) (Descartes, 1637, 2017) as important as well, although arguably his work predates the "official" period of the Enlightenment.
For us to discuss how ethics may apply to artificial intelligent (AI) beings, let's structure the main ethical frameworks at a high level, as they are usually addressed in work on AI Ethics;
Top-down Rule-based Ethics: such as the Old Testament's 10 Commandments, Christianity's Golden Rule (i.e., "Do to others what you want them to do to you.") or Asimov's 4 Laws of Robotics. This category also includes religious rules as well as rules of law. Typically this is the domain where compliance and legal people often find themselves most comfortable. Certainly, from an AI design perspective it is the easiest, although far from easy, ethical framework to implement compared to, for example, a bottom-up ethical framework. This approach starts from a specified ethical framework and works out the informational and procedural requirements necessary for a real-world implementation. Learning top-down ethics is by nature a supervised learning process, for humans as well as for machines.
Bottom-up Emergent Ethics: defines ethical rules and values through a learning process emerging from experience and continuous refinement (e.g., by reinforcement learning). Here ethical values are expected to emerge tabula rasa through a person's experience and interaction with the environment. In the bottom-up approach, any ethical rules or moral principles must be discovered or created from scratch. Childhood development and evolutionary progress are useful analogies for bottom-up ethical models. Unsupervised learning, the clustering of categories and principles, is very relevant for establishing a bottom-up ethical process for humans as well as machines.
Of course, a real-world AI-based ethical system is likely to be based on both top-down and bottom-up moral principles.
Furthermore, we should distinguish between
Negatively framed ethics (e.g., deontology) imposes an obligation or a "sacred" duty to do no harm or evil. Asimov's Laws are a good example of a negatively framed ethical framework, as are most of the Ten Commandments (e.g., "Thou shalt not …"), religious laws and rules of law in general. Here we immerse ourselves in the Kantian universe (Kant, 1788, 2012), which judges ethical frameworks based on universal rules and a sense of obligation to do the morally right thing. We call this type of ethics deontological: the moral action is valued higher than the consequences of the action itself.
Positively framed ethics (e.g., consequentialism or utilitarianism) strives to maximize happiness or wellbeing. Or, as David Hume (Hume, 1738, 2015) would pose it, we should strive to maximize utility based on human sentiment. This is also consistent with the ethical framework of utilitarianism, stating that the best moral action is the one that maximizes utility. Utility can be defined in various ways, usually in terms of the well-being of sentient beings (e.g., pleasure, happiness, health, knowledge, etc.). You will find the utilitarian ethicist believing that no action is intrinsically wrong or right; the degree of rightness or wrongness will depend on the overall maximization of nonmoral good. Following a consequentialist line of thinking might lead to moral actions that would be considered ethically wrong by deontologists. From an AI system design perspective, utilitarianism is by nature harder to implement, as it conceptually tends to be more vague than negatively framed or rule-based ethics about what is not allowed. Think about how to write a program that measures your happiness versus a piece of code that prevents you from crossing a road against a red traffic light.
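The contrast can be made concrete with a toy sketch: a negatively framed rule is a hard veto that is comparatively easy to code, while a utilitarian objective needs a credible utility estimate over outcomes, which is where the vagueness creeps in. The action names, forbidden set and utility numbers below are invented purely for illustration.

    # Toy contrast: rule-based veto (deontological) vs. utility maximization (utilitarian).
    # Action names, forbidden rules and utility estimates are invented for illustration.
    FORBIDDEN = {"cross_on_red", "disclose_customer_data"}   # negatively framed rules

    def deontological_filter(actions):
        """Easy to code: simply drop any action that violates a rule."""
        return [a for a in actions if a not in FORBIDDEN]

    def utilitarian_choice(actions, estimated_utility):
        """Harder to code well: requires a credible utility estimate per outcome."""
        return max(actions, key=lambda a: estimated_utility[a])

    candidates = ["wait_at_light", "cross_on_red", "take_detour"]
    utility = {"wait_at_light": 0.2, "cross_on_red": 0.9, "take_detour": 0.5}

    allowed = deontological_filter(candidates)
    print("rule-compliant action with highest utility:", utilitarian_choice(allowed, utility))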
It is also convenient to differentiate between producers and consumers of moral action. A moral producer has moral responsibilities towards another being or beings held in moral regard. For example, a teacher has the responsibility to teach the children in his classroom, but also to assist in developing desirable characteristics and moral values, and, last but not least, the moral responsibility to protect the children under his guidance against harm. A moral consumer is a being with certain needs or rights which other beings ought to respect. Animals are an example of moral consumers, at least if you believe that you should avoid being cruel towards animals. Of course, we also understand that animals cannot be moral producers with moral responsibilities, even though we might feel a moral obligation towards them. It should be pointed out that non-sentient beings, such as an AI for example, can be moral producers but not moral consumers (e.g., humans would not have any moral or ethical obligations towards AIs or things, whilst an AI may have a moral obligation towards us).
Almost last but not least in any way, it is worthwhile keeping in mind that ethics and morality are directly or indirectly influenced by a society’s religious fabric of the past up to the present. What is considered a good ethical framework from a Judeo-Christian perspective might (quite likely) be very different from an acceptable ethical framework of Islamic, Buddhist, Hindu, Shinto or traditional African roots (note: the list is not exhaustive). It is fair to say that most scholarly thought and work on AI ethics and machine morality takes its origins in western society’s Judeo-Christian thinking as well as its philosophical traditions dating back to the ancient Greeks and the Enlightenment. Thus, this work is naturally heavily biased towards western society’s ethical and moral principles. To put it more bluntly, it is a white man’s ethics. Ask yourself whether people raised in our western Judeo-Christian society would like their AI to conform to Islamic-based ethics and morality? And vice versa? What about Irish Catholicism vs Scandinavian Lutheran ethics and morality?
The ins and outs of human ethics and morality are complex, to say the least. As a guide for machine intelligence, the big question really is whether we want to create such beings in our image or not. It is often forgotten (in the discussion) that we, as human beings, are after all nothing less or more than very complex biological machines with our own biochemical coding. Arguing that artificial (intelligent) beings cannot have morality or ethics because of their machine nature misses the point that humans and other biological life-forms are machines as well (transhumanity.net, 2015).
However, before I cast the last stone, it is worth keeping in mind that we should strive for our intelligent machines, AIs, to do much better than us, be more consistent than us and at least as transparent as us;
“Morality in humans is a complex activity and involves skills that many either fail to learn adequately or perform with limited mastery.” (Wallach, Allen and Smit, 2007).
Ethics & Morality for Artificial Intelligent Beings.
An artificial intelligent (AI) being might have a certain degree of autonomous action (e.g., a self-driving car), and as such we would have to consider that the AI should have a moral responsibility towards consumers and people in general who might be within the range of its actions (e.g., passenger(s) in the autonomous vehicle, other drivers, pedestrians, bicyclists, bystanders, etc.). The AI would be a producer of moral action. In the case of the AI being completely non-sentient, it should be clear that it cannot make any moral demands towards us (note: I would not be surprised if Elon is working on that while you are reading this). Thus, by the above definition, the AI cannot be a moral consumer. For a more detailed discussion of ethical producers & consumers, see Steve Torrance's article "Will Robots need their own Ethics?" (Torrance, 2018).
As described by Moor (2006), there are two possible directions to follow for ethical artificial beings: (1) implicit ethical AIs or (2) explicit ethical AIs. Implicit ethical AIs follow their designers' programming and are not capable of action based on their own interpretation of given ethical principles. The explicit ethical AI is designed to pursue (autonomously) actions in accordance with its interpretation of given ethical principles. See a more in-depth discussion by Anderson & Anderson (2007). The implicit ethical AI is obviously less challenging to develop than a system based on an explicit ethical AI implementation.
Do we humans trust AI-based decisions or actions? As illustrated in Figure 1, the answer to that question is very much no, we do not appear to do so. Or at least significantly less than we would trust human-based decisions and actions (even in the time and age of Trumpism and fake news) (Larsen, 2018 I). We furthermore hold AI, or intelligent algorithms, to much higher standards than what we are content to accept from fellow humans. In a related trust question (Larsen, 2018 I), I reframed the trust question by emphasizing that both the human decision maker and the AI had a proven success rate above 70%. As shown in Figure 2, emphasizing a success rate of 70% or better did not significantly change the trust in the human decision maker (i.e., both formulations at 53%). For the AI-based decision, people do get more trusting. However, there is little change in the number of people who would frequently trust an AI-based decision (i.e., 17% for 70+% and 13% unspecified), even if its success rate were 70% or higher.
“Humans hold AI’s to substantially higher standards than their fellow humans.”.
What about an artificial intelligent (AI) being? Should it, in its own right, be bound by ethical rules? It is clear that the developer of an AI-based system is ethically responsible for ensuring that the AI will conform to an ethical framework consistent with human-based moral principles. What if an AI develops another AI (Simonite, 2018), possibly more powerful (but non-sentient) and with a higher degree of autonomy from human control? Is the AI creator bound to the same ethical framework a human developer would be? And what does that even mean for the AI in question?
Well, if we are not talking about a sentient AI (Bostrom, 2016), but "simply" an autonomous software-based evolution of increasingly better task specialization and higher accuracy (and maybe cognitive efficiency), the ethics in question should not change, although ensuring compliance with a given ethical framework does appear to become increasingly complex, unless checks and balances are designed into the evolutionary process (and that is much simpler to write about than to actually go and code into an AI system design). Furthermore, the more removed an AI generation is from its human developer's 0th version, the more difficult it becomes to assign responsibility to that individual in case of non-compliance. Thus, it is important that corporations have clear compliance guidelines for the responsibility and accountability of evolutionary AI systems, if used. Evolutionary AI systems raise a host of interesting but thorny compliance issues of their own.
Nick Bostrom (Bostrom, 2016) and Eliezer Yudkowsky (Yudkowsky, 2015) in "The Cambridge handbook of artificial intelligence" (Frankish & Ramsey, 2015) address what we should require from AI-based systems that aim to augment or replace human judgement and work tasks in general;
AI-based decisions should be transparent.
AI-based decisions should be explainable.
AI actions should be predictable.
AI systems must be robust against manipulation.
AI decisions should be fully auditable.
Clear human accountability for AI actions must be ensured.
The list above is far from exhaustive, and it is a minimum set of requirements we would expect from human-human interactions and human decision making anyway (whether it is fulfilled is another question). The above requirements are also consistent with what the IEEE Standards Association considers important in designing an ethical AI-based system (EADv2, 2018), with the addition of requiring AI systems to "explicitly honor inalienable human rights".
So how might AI-system developers and product managers feel about morality and ethics? I don't think they are having many sleepless nights over the topic. In fact, I often hear technical leaders and product managers ask not to be too bothered or slowed down in their work by such ("theoretical") concerns (a "we humor you but don't bother us" attitude is prevalent in the industry). It is not an understatement that the nature and mindset of an ethicist (even an applied one) and that of an engineer are light years apart. Moreover, their fear of being slowed down or stopped from developing an AI-enabled product might even be warranted, in case they would be required to design a working ethical framework around their product.
While there are substantial technical challenges in coding a working morality into an AI-system, it is worthwhile to consider the following possibility;
“AIs might be better than humans in making moral decisions. They can very quickly receive and analyze large quantities of information and rapidly consider alternative options. The lack of genuine emotional states makes them less vulnerable to emotional hijacking.” Paraphrasing (Wallach and Allen, 2009).
Asimovian Ethics – A good plot but not so practical.
Isaac Asimov's 4 Laws of Robotics are a good example of a top-down, rule-based, negatively framed deontological ethical model (wow!). Just like the 10 Commandments (i.e., Old Testament), The Golden Rule (i.e., New Testament), the rules of law, and most corporate compliance-based rules.
It is not possible to address AI Ethics without briefly discussing the Asimovian Laws of Robotics;
0th Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
1st Law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
2nd Law: “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”
3rd Law: “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”
Laws 1–3 were first introduced by Asimov in several short stories about robots back in 1942 and later compiled in his book "I, Robot" (Asimov, 1950, 1984). The Zeroth Law was introduced much later in Asimov's book "Foundation and Earth" (Asimov, 1986, 2013).
Asimov has written some wonderful stories about the logical challenges and dilemmas his famous laws pose for human-robot and robot-robot interactions. His laws are excitingly faulty and cause many problems.
So what is wrong with Asimovian ethics?
Well … it is possible to tweak and manipulate the AI (e.g., in the training phase) in such a way that only a subset of Humanity will be recognized as Humans by the AI. The AI would then supposedly not have any "compunction" about hurting humans (i.e., 1st Law) it has not been trained to recognize as humans. In a historical context this is unfortunately very easy to imagine (e.g., Germany, Myanmar, Rwanda, Yugoslavia …). Neither would the AI obey people it does not recognize as Humans (2nd Law). There is also the possibility of an AI trying to keep a human being alive and thereby sustaining suffering beyond what would be acceptable to that human or to society's norms. Or AIs might simply conclude that putting all human beings into a Matrix-like simulation (or indefinite sedation) would be the best way to preserve and protect humanity, complying perfectly with all 4 laws, although we as humans might disagree with that particular AI ethical action. For much of the above, the AIs in question are not necessarily super-intelligent ones. Well-designed narrow AIs, non-sentient ones, could display the above traits as well, either individually or as a set of AIs (well … maybe not the Matrix scenario just yet).
Of course, in real-world systems design, Asimov's rules might be in direct conflict with a given system's purpose. For example, if you equip a Reaper drone with a Hellfire missile, put a machine gun on a MAARS (Modular Advanced Armed Robotic System), or give a police officer's gun AI-based autonomy (e.g., emotion-intent recognition via bodycam), all with the expressed intent of harming (and possibly killing) a human being (Arkin, 2008; Arkin, 2010), it would be rather counterproductive to have implemented an Asimovian ethical framework.
There are a bunch of other issues with the Asimov Laws that are well accounted for in Peter W. Singer's article "Isaac Asimov's Laws of Robotics are wrong" (Singer, 2018). Let's be honest, if Asimovian ethics were perfect, Isaac Asimov's books wouldn't have been much fun to read. The way to look at the challenges with Asimov's Laws is not that Asimov sucks at defining ethical rules, but that it is very challenging in general to define rules that can be coded into an AI system and work without logical conflicts and unforeseen, unintended disastrous consequences.
While it is good to consider building ethical rules into AI-based systems, the starting point should be in the early design stage and should clearly focus on what is right and what is wrong to develop. The focus should be to provide behavioral boundaries for the AI. The designer and product manager (and ultimately the company they work for) have a great responsibility. Of course, if the designer is another AI, then the designer of that, and if that is an AI, and so forth … this idea, while good, is obviously not genius proof.
In reality, implementing Asimov's Laws in an AI or a robotics system has been proven possible but also flawed (Vanderelst & Winfield, 2018). In complex environments, the computational complexity involved in making an ethically right decision takes up so much valuable time that the benefit of the ethical action is frequently rendered impractical. This is not only a problem with getting Asimov's 4 laws to work in a real-world environment, but a general problem with implementing ethical systems governing AI-based decisions and actions.
Many computer scientists and ethicists (oh yes! here they do tend to agree!) regard real-world applications of Asimovian ethics as a rather meaningless or too simplistic endeavor (Murphy & Woods, 2009; Anderson & Anderson, 2010). The framework is prone to internal conflicts resulting in indecision or in decision timescales too long for the problem at hand. Asimovian ethics teaches us the difficulty of creating an ethically "bullet-proof" framework without Genie loopholes attached.
So … you had better make sure that the AI ethics, or morality, you consider is a tangible part of your system architecture and (not unimportantly) can actually be translated into computer code.
Despite the obvious design and implementation challenges, researchers are speculating that;
“Perhaps interacting with an ethical robot might someday even inspire us to behave more ethically ourselves” (Anderson & Anderson, 2010).
Do ethicists dream of autonomous trolleys?
Since the early 2000s, many, many lives have been virtually sacrificed by trolley on the altar of ethical and moral choices … Death by trolley has a particular meaning to many students of ethics (Cathcart, 2013). The level of creativity in variations of death (or murder) by trolley is truly fascinating, albeit macabre. It also has the "nasty" side effect of teaching us some unpleasant truths about our moral compasses (e.g., sacrificing fat people, people different from our own "tribe", valuing family over strangers, etc.).
So here it is, the trolley plot;
There is a runaway trolley barreling down the railway track. Ahead, on the track, there are five people tied up and unable to move. The trolley is headed straight for them. You (dear reader) are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different side track. However, you notice that there is one person tied up on the side track. You have two options:
Do nothing, and the trolley kills the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person.
What do you believe is the most ethical choice?
Note: if you answer 2, think again about what you would do if the one person was a relative or a good friend or maybe a child, and the 5 were complete adult strangers. If you answer 1, ask yourself whether you would still choose this option if the 5 people were your relatives or good friends and the one person a stranger or maybe a sentient space alien. Oh, and does it really matter whether there are 5 people on one of the tracks and 1 on the other?
A little story about an autonomous AI-based trolley;
The (fictive) CEO Elton Must gets the idea to make an autonomous (AI) trolley. Its AI-based management system has been designed by our software engineer S. Love, whose product manager had a brief love affair with Ethics and Moral Philosophy during his university years (i.e., University of Pittsburgh). The product manager asked S. Love to design the autonomous trolley in such a way that the AI's reward function maximizes protection of the trolley's passengers first, with a secondary goal function of protecting human beings in general, irrespective of whether they are passengers or bystanders.
From an ethics perspective, the AI trolley can be regarded as a producer of ethical principles, i.e., the AI trolley, by proxy of the designer & product manager, has the moral obligation to protect its passengers and bystanders from harm. The AI trolley itself is not a consumer of ethical principles, as we really don't need to feel any moral obligation towards a non-sentient being, assuming that the trolley AI is indeed non-sentient. (Though I have known people who felt more moral obligation towards their car than their loved ones. So this might not be universally true).
On its first drive in the real world, the autonomous trolley carrying a family of 5 slips on an icy road and sways to the opposite side of the road, where a non-intelligent car with a single person is driving. The AI estimates that the likelihood of the trolley crashing through the mountainside guardrail and the family of 5 perishing is a near certainty (99.99999%). The trolley AI can choose to change direction and collide with the approaching car, pushing it over the rail and hurling it 100 meters down the mountain, killing the single passenger as the most likely outcome (99.98%). The family of 5 is saved by this action, and the AI's first reward function is satisfied. Alternatively, the trolley AI can decide to accelerate, avoid the collision with the approaching car, and drive through the rail, killing all its passengers (99.99999%). The AI fails at its first goal, protecting the family it is carrying, but saves the single person in the approaching vehicle. Its second reward function, related to protecting human beings in general, would be satisfied … to an extent.
It is important to note that the AI takes the role of the Human in deciding the destiny of the family of 5 and the 1 passenger (by “pulling” the virtual lever). Thus, in all effect, it is of course developer S. Love and his product manager that bears the ultimate responsibility of the AI’s decision. Even if they will not be present at the event itself.
In the event of the family being killed, the trolley AI developer and product manager would be no more responsible for the accidental death of the 5 passengers than any other normal-car developer under similar circumstances. In the case of the death of the single passenger in the normal car, S. Love and his product manager would, in my opinion, be complicit in murder by AI, although it would save a family of 5 (note: we assume that all the passengers, whether in the trolley or the normal car, have no control over the outcome, similar to the classical trolley setup).
What about our ethically inclined trolley product manager? In one parallel universe, the product manager was particularly fascinated by utilitarianism, thus maximizing the utility of nonmoral good. In his view, it would be morally wrong for the trolley AI not to attempt to save the family of 5 at the expense of the single person in the other car (i.e., saving 5 lives counts for higher utility, or nonmoral good, than saving 1 life). In another parallel universe, our product manager is bound by a firm belief in deontological principles that judge the morality of a given action based on rules of law. In the deontological ethical framework, saving the family of 5 by deliberately killing the single person in the approaching car would be morally wrong (i.e., it would "smell" a lot like premeditated homicide otherwise … right?). Thus, in this ethical framework, the AI would not change the course of the autonomous trolley; the family of 5 would perish and the passenger of the approaching car lives to see another day.
If your utilitarian mindset still conflicts with the above deontological view of the autonomous trolley problem … well think of this example;
A surgeon has 5 patients critically ill and in urgent need of transplants to survive the next few days. The surgeon just saw a healthy executive (they do exist in this parallel universe) who could be a donor for the 5 patients, although he would die from having the body parts needed for the 5 patients harvested. What should the surgeon do?
Do nothing and let the 5 patients perish.
Sedate the executive and harvest his body parts, killing him in the process.
What do you believe would be the most ethical choice?
“Ethics is “Hard to Code”. The sad truth really is that ethical guidance is far from universal, and different acceptable ethical frameworks frequently lead to moral dilemmas in real-world scenarios.” (Kim, 2018).
The Autonomy of Everything – Architectural considerations of an AI Ethical Framework.
Things, systems, products and services are becoming increasingly autonomous. While this increased degree of Autonomy of Everything (AoE) provides a huge leap in human convenience, it also adds many technical, and even more societal, challenges to the design and operation of such AoEs. The "heart" of the AoE is the embedded artificial intelligent (AI) agent that fuels the cognitive autonomy.
AoEs and their controlling AIs will directly or indirectly be involved in care, law, critical infrastructure operations, companionship, entertainment, sales, marketing, customer care, manufacturing, advisory functions, critical decision making, military applications, sensors, actuators, and so forth. To reap the full benefits of the autonomy of everything, most interactions between an AoE and a Human will become unsupervised, by Humans at least, although supervision could and should be built into the overarching AoE architecture. It becomes imperative to ensure that the behavior of intelligent autonomous agents is safe and within the boundaries of what our society regards as ethically and morally just.
While the whole concept of AoE is pretty cool, conceptually innovative, let’s focus here on the ethical aspects of a technical architecture that could be developed to safeguard consumers of AI … that is, how do we ensure that our customers, using our products with embedded AI, are protected from harm in its widest sense possible? How do we ensure that our AIs are operating within an ethical framework that is consistent with the rules of law, corporate guidelines as well as society’s expectations of ethics and morality?
While there is a lot of good theoretical groundwork done (and published) on the topic of AI ethics, including robot ethics, there is little actual work done on developing ethical system architectures that actually could act as what Ron Arkin from Georgia Institute of Technology calls an "Ethical Governor" (Arkin, 2010) for an AI system. Vanderelst et al. (Vanderelst & Winfield, 2018), building upon Asimovian ethics, the ideas of Marques et al. (Marques & Holland, 2009) and Arkin et al. (Arkin, Ulam & Wagner, 2012), propose to add an additional ethical controlling layer to the AI architecture. A slightly modified depiction of their Ethical AI architecture is shown in Figure 3. The depicted reinforcement loop between the Reward (RL) and the Ethical AI Layer is not included in Vanderelst et al.'s original proposal; it simply illustrates that both ethical and non-ethical rewards need to be considered in the reinforced AI learning and execution processes.
In the "Ethical AI Layer", the "Ethical Simulator" predicts the next state or action of the AI system (i.e., this is also what is understood by forward modelling in control theory). The simulator moreover predicts the consequences of a proposed action. This is what Marques et al. have called the functional imagination of an autonomous system (Marques & Holland, 2009). The prediction of the consequence(s) of a proposed action for the AI (or Robot), the Human and the Environment (e.g., the World) is forwarded to an "Ethics Evaluator" module. The "Ethics Evaluator" module condenses the complex consequence simulation into an ethical desirability index. Based on the index value, the AI system will adapt its actions to attempt to remain compliant with any ethical rule that applies (and is programmed into the system!). The mechanism whereby this happens is the ethical reinforcement loop going back to the "AI Control Layer". Vanderelst and Winfield developed a working system based on the architecture in Figure 3 and chose Asimov's three laws of robotics as the system's ethical framework. A demonstration of an earlier experiment can be found on YouTube (Winfield, 2014). The proof of concept (PoC) of Vanderelst & Winfield (2018) used two programmable humanoid robots: one robot acted as a proxy for a human, and the other as an ethical robot with an Asimovian ethical framework (i.e., the "Ethical AI Layer" in Figure 3). In the fairly simple scenario, limited to 2 interacting robots and a (very) simple world model, Vanderelst et al. showed that their concept would be workable. Now, it would have been very interesting to see how their solution would function in Trolley-like dilemmas or in a sensory-complex environment with many actors, as is the case in the real world.
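To give a feel for how such an architecture might translate into code, here is a heavily simplified sketch of an "ethical governor" in the spirit described above: simulate each candidate action, condense the predicted consequences into a desirability index, and only let sufficiently desirable actions back into the control layer. All names and the threshold are illustrative assumptions, not Vanderelst & Winfield's actual implementation.

    # Simplified sketch of an ethical governor layer. `simulate` stands in for the
    # "Ethical Simulator" and `desirability` for the "Ethics Evaluator"; both are
    # placeholders to be supplied by the system designer.
    from typing import Callable, Sequence

    def ethical_governor(candidate_actions: Sequence[str],
                         simulate: Callable[[str], dict],
                         desirability: Callable[[dict], float],
                         threshold: float = 0.8):
        """Return the most desirable ethically acceptable action, or None (refuse to act)."""
        scored = [(desirability(simulate(action)), action) for action in candidate_actions]
        acceptable = [item for item in scored if item[0] >= threshold]
        return max(acceptable)[1] if acceptable else None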
Figure 4 illustrates the traditional machine learning (ML) or AI creation process, starting with ingestion from various data sources, the data preparation tasks (e.g., data selection, cleaning, structuring, etc.) and the AI training process, prior to letting the ML/AI agent loose in the production environment of a given system, product or service. I believe that, as the AI model is being trained, it is essential to include ethical considerations in the training process. Thus, not only should we consider how well a model performs (in the training process) against the actual data, but also whether the solution complies with a given ethical framework and imposed ethical rules. Examples could be to test for biased outcomes or simply to close off part of a solution space due to a higher or unacceptable risk of non-compliance with corporate guidelines and accepted moral frameworks. Furthermore, in line with Arkin et al. (Arkin, Ulam & Wagner, 2012) and the work of Vanderelst et al. (Vanderelst & Winfield, 2018), it is clear that we need a mechanism in our system architecture and production environments that checks AI-initiated actions for potential harmfulness to the consumer or for violation of ethical boundary conditions. This functionality could be part of the reinforcement feedback loop that seeks to optimize the system's reward function for both ethical and non-ethical performance. In Figure 4, I call this the "Ethics Filter (ERL)", with ERL standing for Ethical Reinforcement Learning.
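One generic way to realize such an "Ethics Filter (ERL)" in a reinforcement-learning setting is to shape the reward the agent is trained on, so that commercially attractive but ethically non-compliant behavior is never reinforced. The sketch below is my own illustration of that idea; the penalty weight and the ethics-violation score are placeholders, not part of any of the cited proposals.

    # Sketch: ethical reward shaping. The business (task) reward is reduced by a
    # heavily weighted penalty whenever the proposed action violates an ethical rule.
    def shaped_reward(business_reward: float,
                      ethics_violation_cost: float,
                      penalty_weight: float = 10.0) -> float:
        """Combined reward = task reward minus a weighted ethical penalty."""
        return business_reward - penalty_weight * ethics_violation_cost

    # In a training loop this replaces the raw task reward, e.g.:
    # reward = shaped_reward(env_reward, ethics_evaluator(state, action))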
It should be clear that words are cheap. It is easy to talk about embedding ethical checks and balances in AI system architectures. It is however much more difficult to actually build these ideas into a real-world AI system and achieve reasonable decision response times (e.g., measured in seconds or lower) while considering all possible (likely) consequences of an AI-proposed action. The computational overhead of clearing or adapting an action could lead to unreasonably long processing times. In robot experiments using Asimovian ethics, Alan Winfield of the Bristol Robotics Laboratory in the UK showed that in more than 40% of their trials the robot's ethical decision logic spent such a long time finding a solution that the simulated humans the robot was supposed to save perished (Rutkin, 2014).
Magenta-painted digital ethics for AIs.
Let us have a look at Deutsche Telekom’s AI Ethics Team’s work on AI ethics, or as we call it, “Digital Ethics – AI Guidelines” (DTAG, 2018).
The following (condensed) guidelines take as their starting point that our company and its management are the main producers of ethics and moral action:
We are responsible (for our AIs).
We care (that our AI must obey rules of law & comply with our company values).
We put our customers first (AI must benefit our customers).
We are transparent (about the use of AI).
We are secure (our AI’s actions are auditable & respectful of privacy).
We set the grounds (our AI aims to provide the best possible outcomes & do no harm to our customers).
We keep control (and can deactivate & stop our AI at any time).
We foster the cooperative model (between Human and AI by maximizing the benefits).
We share and enlighten (we will foster open communication & honest dialogue around the AI topic).
The above rules are important and meaningful from a corporate compliance perspective, and not to forget for society in general. While the guidelines are aspirational in nature and necessary, they are not sufficient for the design of ethical AI-based systems, products and services. Bridging the gap between AI ethics in words and concrete, ready-to-code design rules is one of the biggest challenges we face technologically.
Our Digital Ethics fulfills what Bostrom and Yudkowsky, in “The Cambridge Handbook of Artificial Intelligence” (Frankish and Ramsey, 2015), define as the minimum requirements for AI-based actions augmenting or replacing human societal functions (e.g., decisions, work-tasks, …). AI actions must at least be transparent, explainable, predictable, robust against manipulation, auditable and with clear human accountability.
The next level of detail of DTAG’s “Digital Ethics” guidelines shows that the ethical framework within which we strive to design AIs is top-down in nature and a combination of mainly deontological (i.e., rule-based) and utilitarian (i.e., striving for the best possible outcome) principles. Much more work will be needed to ensure that no conflicts occur between the deontological rules in our guidelines and the utilitarian ambitions.
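One plausible way to operationalize that combination (a sketch of my own, not DTAG’s implementation) is to treat the deontological rules as hard constraints that filter the option space, and the utilitarian ambition as an objective that ranks whatever the rules leave over:

```python
# Deontological rules as hard constraints, utilitarian scoring on the rest.

def permissible(option, rules):
    """An option is permissible only if it violates no rule."""
    return all(rule(option) for rule in rules)

def best_option(options, rules, utility):
    """Maximize utility over the rule-compliant options. If the rules
    exclude everything, we hit exactly the rule/ambition conflict the
    guidelines must resolve; here we escalate rather than decide."""
    allowed = [o for o in options if permissible(o, rules)]
    if not allowed:
        raise ValueError("No option satisfies the deontological rules; escalate to a human.")
    return max(allowed, key=utility)

# Toy usage with hypothetical options scored by expected customer benefit.
options = [{"name": "A", "benefit": 0.9, "transparent": False},
           {"name": "B", "benefit": 0.6, "transparent": True}]
rules = [lambda o: o["transparent"]]            # "We are transparent"
print(best_option(options, rules, lambda o: o["benefit"])["name"])  # -> B
```

Note how the rule trumps raw utility: option A scores higher but is excluded, which is the deontological character of the guidelines in a nutshell.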
The bigger challenge will be to translate our aspirational guidelines into something meaningful in our AI-based products, services and critical communications infrastructure (e.g., communications networks).
“Expressing a desire for AI ethical compliance is the easy part. The really hard part is to implement such aspirations into actual AI systems and then get them to work decently” (Kim, 2018).
The end is just the beginning.
It should be clear that we are far away (maybe even very far) from really understanding how we best can build ethical checks and balances into our increasingly autonomous AI-based products and services landscape. And not to forget how ethical autonomous AIs fit into our society’s critical infrastructures, e.g., telco, power, financial networks and so forth.
This challenge will of course not stop humanity from becoming increasingly dependent on AI-driven autonomous solutions. After all, AI-based technologies promise to leapfrog consumer convenience and deliver economic advantages to corporations, public institutions and society in general.
From my AI perception studies (Larsen, 2018 I & II), corporate decision makers, our customers and consumers don’t trust AI-based actions (at least when they are aware of them). Most of us would prefer an inconsistent, error-prone and unpredictable emotional manager full of himself to an un-emotional, consistent and predictable AI with a low error rate. We expect an AI to be more than perfect. This AI allergy is often underestimated in corporate policies and strategies.
In a recent survey (Larsen, 2018 II), I asked respondents to consider the following two questions:
“Do you trust that companies using AI in their products and services have your best interest in mind?”
“How would you describe your level of confidence that political institutions adequately consider the medium to long-term societal impact of AI?”
9% of the survey respondents believed that companies using AI in their products and services have their customers’ best interest in mind.
80% of the survey respondents had low or no confidence that political institutions adequately consider the medium- to long-term societal impact of AI.
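For context on how precise such headline percentages are, here is a quick back-of-the-envelope check of my own, assuming a sample size of n = 426 (the sample size used for the trust questions later in this piece): a 95% Wilson score interval around the 9% figure.

```python
import math

def wilson_interval(p_hat, n, z=1.96):
    """95% Wilson score confidence interval for a survey proportion."""
    center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * math.sqrt(
        p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# With an assumed n = 426, the 9% result is accurate to within roughly
# +/- 3 percentage points.
print(wilson_interval(0.09, 426))  # approx. (0.066, 0.121)
```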
I have little doubt that, as AI technology evolves and is increasingly used in the products, services and critical infrastructure that we humans are exposed to daily, there will be an increasing demand for transparency about the inherent risks to individuals, groups and society in general.
That consumers do not trust companies to have their best interest in mind is not surprising in today’s environment of “Fake news”, “Brexit”, “Trumpism”, “influencer campaigns” (e.g., Cambridge Analytica & Facebook) and so forth. “Weaponized” AI will be developed to further strengthen the relatively simple approaches of the Cambridge Analytica “cousins”, Facebooks and Googles of this world. Why is that? I believe the financial gains and the power to be had from weaponized AI approaches are far too tempting to believe their use will not increase going into the future. The trust challenge will remain, if not increase. The genie is out of the bottle.
AI will continue to take over human tasks. This trend will accelerate. AI will increasingly be involved in critical decisions that impact individuals’ life and livelihood. AI will become increasingly better at mimicking humans (Vincent, 2018). Affective AIs have the capacity even today to express emotions and sentiment without being sentient (Lomas, 2018). AI will become increasingly autonomous and possibly even have the capability to self-improve (without evolving to sentience) (Gent, 2017). Thus, the knowledge distance between the original developer and the evolved AI could become very large, depending on whether the evolution is bounded (likely, in my opinion) or unbounded (unlikely, in my opinion).
It will be interesting to follow how well humans in general will adapt to humanoid AIs, i.e., AIs mimicking human behavior. From the work of Mori et al. (Mori, MacDorman, & Kageki, 2012) and many others (Mathur & Reichling, 2016), it has been found that we humans are very good at picking up on cues that appear false or off compared to our baseline reference of human behavior. Mori et al. coined the term “uncanny valley” for this feeling of offness.
Without AI ethics and clear ethical policies and compliance, I would be somewhat nervous about an AI future. I think this is a much bigger challenge than the fundamental technology and science aspects of AI improvements and evolution. Society needs our political institutions much more engaged in the questions of the Good, the Bad and the Truly Ugly use cases of AI. I don’t think one needs to fear super-intelligent God-like AI-beings (for quite some time, and then some), but one needs to realize that narrowly specialized AIs, individually or as collaborating collectives, can do a lot of harm, unintended as well as intended (Alang, 2017; Angwin, Larson & Mattu, 2018; O’Neil, 2017; Wachter-Boettcher, 2018).
“Most of us prefer an inconsistent, error prone and unpredictable emotional manager full of himself to that of an un-emotional, consistent and predictable AI with a low error rate.” (Kim, 2018).
I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of creating this blog. Without her support, I really would not be able to do this, or it would take a lot longer, past my expiration date, to finish.
Dawkins, R. (1989). The Selfish Gene. 4th ed. Oxford University Press.
Descartes, R., Haldane, E. and Lindsay, A. (2017). Discourse on Method and Meditations of First Philosophy (Translated by Elizabeth S. Haldane with an Introduction by A.D. Lindsay). Stilwell: Neeland Media LLC.
Gottlieb, A. (2016). The dream of enlightenment. Allen Lane.
Hardy, S. and Carlo, G. (2011). Moral Identity: What Is It, How Does It Develop, and Is It Linked to Moral Action?. Child Development Perspectives, 5(3), pp.212-218.
Hart, D. and Fegley, S. (1995). Prosocial Behavior and Caring in Adolescence: Relations to Self-Understanding and Social Judgment. Child Development, 66(5), p.1346.
Hume, D. (1738, 2015). A Treatise of Human Nature. Digireads.com Publishing.
Kant, I. (1788, 2012). The critique of practical reason. [United States]: Start Publishing. Immanuel Kant originally published his “Critik der praktischen Vernunft” in 1788. It was the second book in Kant’s series of three critiques.
Kwatz, P. (2017). Conscious robots. Peacock’s Tail Publishing.
Kuipers, B. (2016). Human-Like Morality and Ethics for Robots. The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence: AI, Ethics, and Society, Technical Report WS-16-02.
Sapolsky, R. (2017). Behave: The Biology of Humans at Our Best and Worst. 1st ed. Penguin Press. Note: Chapter 13 “Morality and Doing the Right Thing, Once You’ve Figured Out What That is” is of particular relevance here (although the whole book is extremely read worthy).
I was late to a dinner appointment, arranged by x.ai, at Caviar and Bull (booked by my human friend David). Siri had already indicated that I would be late (yes, it had also repeatedly warned me it was time to leave the office for me to be on time), and Waze (i.e., the world’s largest community-based traffic & navigation app) was trying to guide me through a busy Budapest city center. Stuck in traffic … sighhh … but then the traffic moves … I press the accelerator … and … my car brakes (with a vengeance) at the same moment my brain realizes that the car in front of me had not moved and I was about to hit it. My car had just saved me from a crash. And from being even later for my appointment, for what would turn out to be an absolutely excellent dinner with great Hungarian red and white wines recommended by Vivino (i.e., based on my wine history & preferences, my friends’ preferences and of course the menu). In the meantime, my scheduler had notified my friend that I would be a bit late due to traffic (rather than the real reason of me being late leaving the office ;-).
Most of the above is powered by AI (also indicated by the color red), or more accurately by machine learning applications, i.e., based on underlying machine learning algorithms and mathematical procedures applied to available personalized, social network and other data.
In the cases above, I am implicitly trusting that whatever automation has “sneaked” into my daily life will make it more convenient, possibly even saving others as well as myself from harm (when my own brain & physiology get too distracted). Do I really appreciate that most of this convenience is based on algorithms monitoring my life (a narrow subset of my life, that is) and continuously predicting my next move in order to support me? No … increasingly I take the offered algorithmic convenience for granted (and the consequences of that are another interesting discussion for another time).
In everyday life we frequently rely on AI-driven and augmented decisions … mathematical algorithms trained on our own and others’ digital footprints and behaviors … to make our lives much more convenient and possibly much safer.
The interesting question is whether people in general are consciously aware of the degree of machine intelligence, or algorithmic decision making, going on all around them. Is it implicit trust or simply ignorance at play?
Do we trust AI? Is AI trustful? Trustworthy? Do we trust AI more than our colleagues & peers? and so forth … and what does trust really mean in the context of AI and algorithmic convenience?
Imagine that you have a critical decision to make at your work. Your team (i.e., your corporate tribe) of colleague experts recommends, based on their human experience, to choose Option C as the best path forward.
Would you trust your colleagues’ judgement and recommendation?
Yes! There is a pretty high likelihood that you actually would.
More than 50% of corporate decision makers would frequently to always trust a recommendation (or decision) based on human expert judgement. More than 36% of corporate decision makers would trust such a recommendation about half the time (i.e., what I call flip-a-coin decision making).
Now imagine you have a corporate AI available to support your decision making. It also provides the recommendation of Option C. Needless maybe to say, but nevertheless let’s just say it: the AI has of course been trained on all available & relevant data and thoroughly tested for accuracy (i.e., in a far more rigorous way than we test our colleagues, experts and superiors).
Apart from Humans (Us) versus AI (Them), the recommendation and the decision to be made are of course the same.
Would you trust the AI’s recommendation? Would you trust it as much as you do your team of colleagues and maybe even your superior?
Less than 13% of corporate decision makers would frequently to always trust a recommendation (or decision) based on AI judgement. Ca. 25% of the decision makers would trust an AI-based decision about half the time.
Around 20% of decision makers would never trust an AI-based decision. Less than 45% would do so only infrequently.
These results are based on a total of 426 surveyed respondents, of which 214 were offered Question A and 212 were offered Question B. Respondents are significantly more trusting towards decisions or recommendations made by a fellow human expert or superior than towards those made by an AI. No qualification was provided regarding success or failure rates.
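For readers wondering what “significantly more trusting” means operationally, the sketch below shows one standard way to test it: a Mann-Whitney U test on ordinal trust answers (coded 1 = never to 5 = always). Only the group sizes come from the survey; the response distributions are invented to mirror the percentages quoted above.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Two independent respondent groups: Question A (human expert, n=214) and
# Question B (AI, n=212). Probabilities are illustrative, not the raw data.
rng = np.random.default_rng(0)
human_grp = rng.choice([1, 2, 3, 4, 5], size=214, p=[0.02, 0.10, 0.36, 0.40, 0.12])
ai_grp    = rng.choice([1, 2, 3, 4, 5], size=212, p=[0.20, 0.42, 0.25, 0.10, 0.03])

stat, p_value = mannwhitneyu(human_grp, ai_grp, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p_value:.2e}")  # tiny p-value => the two trust distributions differ
```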
It is quite clear that we regard a decision or recommendation based on AI, rather than on a fellow human, with substantially less trust.
Humans don’t trust decisions made by AIs, at least when it is pointed out that a decision is AI-based. Surprisingly, given much evidence to the contrary, humans trust humans, at least the ones in our own tribe (e.g., colleagues, fellow experts, superiors, etc.).
Dietvorst and coworkers refer to this human aversion towards non-human or algorithm-based recommendations or forecasts as algorithm aversion. It refers to situations where human decision makers or forecasters deliberately avoid statistical algorithms in their decision or forecasting process.
A more “modern” term for this might be AI aversion rather than algorithm aversion. However, it describes very much the same phenomenon.
Okay, okay … but the above question of trust did not qualify the decision-making track record of the human versus the AI. Thus, respondents could have very different ideas or expectations about the success or error rates of humans and AIs respectively.
What if the fellow human expert (or superior) as well as the AI is known to have a success rate better than 70%? Thus, more than 7 out of 10 decisions are in retrospect deemed successful (ignoring what that might really mean). By the same token, it also means that the error rate is 30% or less, i.e., that 3 (or fewer) out of 10 decisions are deemed unsuccessful.
These results are based on a total of 426 surveyed respondents, of which 206 were offered Question A and 220 were offered Question B. For both the Human Expert (or Superior) and the AI, a decision-making success rate of 70% (i.e., 7 out of 10) should be assumed. Despite the identical success rate, respondents remain significantly more trusting towards decisions made by a fellow human expert (or superior) than towards decisions made by an AI.
In a like-for-like comparison of decision-making success rates, human experts or superiors are hugely preferred over a decision-making AI.
A bit more than 50% of the corporate decision makers would frequently or always trust a fellow human expert’s recommendation or decision. Less than 20% would frequently or always trust a decision made by an AI with the same success rate as the human expert.
Thus, Humans trust Humans and not so much AIs, even if the specified decision-making success rate is identical. It should be noted that trust in a human decision or recommendation here relates to fellow human experts or superiors, thus trust towards colleagues or individuals that are part of the same corporate structure.
The trust in the human expert or superior with a 70% success rate is quite similar to the previous result without a specified success rate.
These results are based on a total of 426 surveyed respondents, of which 214 were offered Question A without a success-rate qualification and 223 were offered Question A with a 70% success rate stipulated. As observed in this chart, and confirmed by the statistical analysis, there is no significant difference in the trust in a decision made by a human expert (or superior), whether a success rate of 70% has been stipulated or no qualification has been given.
This might indicate that our human default expectation towards a human expert or superior’s recommendation or decision is around the 70% success rate.
However, for the AI-based recommendation or decision, respondents do provide a statistically different trust picture depending on whether or not a success rate of 70% has been specified. The mean sentiment increases by almost 15% when specifying that the AI has a 70% success rate. This is also very visible in the respondent data shown in the chart below:
These results are based on a total of 426 surveyed respondents, of which 212 were offered Question B without a success-rate qualification and 203 were offered Question B with a 70% success rate assumed. As observed in this chart, and confirmed by the statistical analysis, there is a substantial increase in trust in the AI-based decision when the 70% success rate has been stipulated, compared to the question where no success rate was provided.
The share of respondents who would never or only infrequently trust an AI-based decision is almost 20% lower when a 70% success rate is stipulated.
This might indicate that the human default perception of the quality of AI-based decisions or recommendations is far below a 70% success rate.
So, do we as humans have higher expectations towards decisions, recommendations or forecasts based on AI than towards the human expert equivalent?
These results are based on a total of 426 surveyed respondents, of which 206 were offered Question A and 220 were offered Question B. No statistical difference in the expectations towards the quality of decisions was found between those made by a human expert (or superior) and AI-based ones.
This survey indicates that there is no apparent statistically significant difference between the quality we expect from a human expert and the quality we expect from an AI. The average expectation is that fewer than 2 out of 10 decisions could turn out wrong (or be unsuccessful), i.e., a failure rate of 20% or less, equivalent to a success rate of 80% or better.
It is well known that whether a question is posed or framed in a positive or a negative way can greatly affect how people decide, even if the positive and negative formulations are mathematically identical.
An example: you are with your doctor, and he recommends an operation for your very poor hearing. Your doctor has two options when he informs you of the operation’s odds of success (of course, he might also choose not to provide that information altogether if not asked ;-): Frame A, there is a 90% chance of success and you will be hearing normally again on the operated ear; Frame B, there is a 10% chance of failure and you will become completely deaf on the operated ear. Note that the success rate of 90% also implies an error rate of 10%. One may argue that the two frames are mathematically identical. In general, many more people would choose to have the operation when presented with Frame A, i.e., the 90% success rate, than when confronted with Frame B, i.e., the 10% failure rate. Tversky & Kahneman identified this as the framing effect, whereby people react differently to a given choice depending on how that choice is presented (i.e., success vs failure). As Kahneman & Tversky showed, a loss is felt to be more significant than the equivalent gain.
When it comes to an AI-driven decision, would you trust it differently depending on whether I present you with the AI’s success rate or its error rate? (The obvious answer is of course yes … but to what degree?)
When soliciting support for AI augmentation, a positive framing of its performance works (unsurprisingly) much better than the mathematically equivalent negative framing, i.e., success rate versus failure or error rate.
Human cognitive processes and biases treat losses or failures very differently from successes or gains, even if the two frames are identical in terms of real-world impact. More on this later, when we get into some cool studies on human brain chemistry, human behavior and Tversky & Kahneman’s wonderful prospect theory (from before we realized that oxytocin and other neuropeptides would be really cool).
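As a small numerical illustration (simplified: it uses Tversky & Kahneman’s value function only and ignores the probability-weighting part of prospect theory), the asymmetry is easy to see with their 1992 median parameter estimates:

```python
# Prospect-theory value function: concave for gains, convex and steeper for
# losses (Tversky & Kahneman 1992 median estimates: alpha = beta = 0.88,
# loss-aversion coefficient lambda = 2.25).

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to the reference point."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

print(value(+90))                   # ~52.5: felt value of the success frame
print(value(-10))                   # ~-17.1: felt value of the loss frame
print(abs(value(-10)) / value(10))  # 2.25: a loss weighs ~2x the same-sized gain
```

The point is not the exact numbers but the shape: the 10% failure frame activates the steep loss branch, which is (part of) why the mathematically identical frames feel so different.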
HUMANS TRUST HUMANS.
Trust is the assured reliance on the character, ability, or truth of someone or something. Trust is something one gives, as opposed to trustworthiness, which is someone or something else being worthy of an individual’s or group’s trust.
The degree to which people trust each other is highly culturally determined, with varying degrees of penalties associated with breaking trust. Trust is also neurobiologically determined and, of course, context dependent.
As mentioned by Paul J. Zak in his Harvard Business Review article “The Neuroscience of Trust”: “Compared with people at low-trust companies, people at high-trust companies report: 74% less stress, 106% more energy at work, 50% higher productivity, 13% fewer sick days, 76% more engagement, 29% more satisfaction with their lives, 40% less burnout.” Trust is clearly important for corporate growth and for the individual’s wellbeing in a corporate setting (and, I suspect, anywhere really). Much of this is described mathematically (and, I would argue, beautifully) in Paul Zak’s seminal paper “Trust & Growth”, relating differences in the degree of trust to different social, legal and economic environments.
People trust people. It is also quite clear from numerous studies that people don’t trust non-people that much (e.g., things or non-biological agents such as mathematical algorithms or AI) … okay, okay, you might say … but why?
While 42 is in general a good answer, here the answer is slightly simpler: oxytocin (not to be confused with an oxymoron). Okay, okay … what is this oxytocin, and what does it have to do with trusting or not trusting AI? Well, if you have read Robert Sapolsky’s brilliant account of our behavior at our best and worst (i.e., “Behave: The Biology of Humans at Our Best and Worst”), you might know enough (and even more about those nasty glucocorticoids; and if you haven’t had enough of those, please do read “Why Zebras Don’t Get Ulcers”, also by Sapolsky; you might even be able to spell glucocorticoids in the end).
Oxytocin is our friend when it comes to warm and cozy feelings towards each other (apart from being fairly essential for inducing labor and lactation), particularly when “each other” is part of our team, our partner, our kids or even our dog. It is a hormone of the peptide type (i.e., relatively small and consisting of amino acids) and is used by neurons to communicate with each other. It pretty much influences how signals are processed by our brain and how our body reacts to external stimuli.
The higher the level of oxytocin, the more you are primed to trust your team, your stock broker, your partner (and your dog), and the closer you feel to your wife and your newborn babies. The more you hug, kiss, shake hands, have sex and walk your dog, the more oxytocin will be rushing through your body and the more trusting you will become towards your social circles. “Usness” is great for oxytocin release (as well as for a couple of other neuropeptides with a knack for making us feel better with one another, within the confines of “Usness”; oh yes, and we have some serious gender biases there as well), particularly when “Them” are around. Social interactions are important for the oxytocin kick.
The extra bonus effect of increased oxytocin is that it appears to dampen the reactivity of the brain’s “freaking out” center (i.e., the amygdala) to possible threats (real or otherwise), at least within the context of “Usness” and non-existential threats.
Thomas Baumgartner and coworkers (in a setup similar to other works in this field) administered either a placebo or an oxytocin intranasal spray to test subjects prior to the experimental games. Two types of games were played: (a) a so-called trust game with human partner interactions (i.e., a human-human game), where the test subject invests an amount of money with a 3rd party (e.g., a stock broker) that will invest the money and return the reward, and (b) a so-called risk game, whose outcome is machine-determined by a random generator (i.e., a human-machine game). The games are played over 12 rounds with result feedback to the test subject, allowing for a change in trust in the subsequent round (i.e., the player can reduce the invested money (less trust), increase it (higher trust) or keep it constant (same trust level)). Baumgartner et al. found that test subjects playing the trust game (human-human game), and who received the oxytocin “sniff”, remained trusting throughout the rounds of the game, even when they had no rational (economic) reason to remain trusting. The oxytocin subjects’ trust behavior was substantially higher compared to test subjects playing the same game having received the placebo. In the risk game (human-machine), no substantial difference was observed between oxytocin and placebo subjects, who in both cases kept their trust level almost constant. While the experiments conducted are fascinating and possibly elucidating towards the effects of oxytocin and social interactions, I cannot help being somewhat uncertain whether the framing of Trust vs Risk and the subtle differences in game structure (i.e., trusting a human expert who supposedly knows what he is doing vs a lottery, a game of chance) could skew the results. Thus, rather than telling us whether humans trust humans more than machines or algorithms (particularly the random-generator kind, towards which trust is somewhat of an oxymoron), it tells us more about how elevated levels of oxytocin make a human less sensitive to mistrust of, or angst towards, a fellow human being (who might take advantage of that trust).
It would have been a much more interesting experiment (imo) if both games had been called a Trust Game (or a Risk Game for that matter, as this is obviously what it is): one game with a third party investing the test subject’s transfer, similar to Baumgartner’s Trust Game setup, and another game where the third party is an algorithmic “stock broker” with at least the same success rate as the first game’s 3rd-party human. This would have avoided the framing bias (trust vs risk) and the structural differences between the games.
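A toy simulation of that proposed like-for-like design might look as follows (entirely hypothetical numbers and update rule): the “human” and the “algorithmic” broker behave identically, so any trust difference a real test subject shows between the two conditions can only come from perception, not from performance.

```python
import random

def play_trust_game(rounds=12, success_rate=0.7, stake=10.0, seed=42):
    """Each round: invest `stake`; a success triples it, a failure loses it.
    A (naive) trusting subject adjusts the stake after each outcome."""
    rng = random.Random(seed)
    history = []
    for _ in range(rounds):
        success = rng.random() < success_rate
        payoff = 3 * stake if success else 0.0
        history.append((stake, payoff))
        # Crude trust update: invest more after a success, less after a failure.
        stake = min(10.0, stake * (1.2 if success else 0.6))
    return history

# Identical broker behavior in both conditions; only the label differs.
for label in ("human broker", "algorithmic broker"):
    total = sum(payoff for _, payoff in play_trust_game())
    print(label, round(total, 1))
```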
Unfortunately, we are not much closer to a great explanation for why humans appear to trust humans more than algorithms. We are still pretty much guessing.
And no, I did not hand out cute oxytocin (and of course placebo) nasal sprays to the surveyed respondents. Neither did I check whether respondents had been doing a lot of hugging or other close-quarter social network activities, which would have boosted their oxytocin levels. That will be for a follow-up study.
A guess at a possible explanation for humans being statistically significantly less trusting towards algorithms (algorithm aversion), AI (AI aversion) and autonomous electronic-mechanical interfaces in general might be that our brains have not been primed to regard such entities as part of “Usness”. In other words, there is a very big difference between trusting colleagues or peers (even if some are superiors) who are part of your corporate “tribe” (e.g., team, unit, group, etc.) and trusting an alien entity, as an AI or an algorithm could easily be construed.
So the reason why humans trust humans, and less so algorithms and AIs, remains somewhat elusive, although the signals are possibly there.
Judging by the many everyday machine learning or algorithmic applications already leapfrogging our level of convenience today, maybe part of the “secret” is to make AI-based services and augmentation part of the everyday.
The human lack of trust in AI, or the prevalence of algorithm aversion in general as described in several articles by Berkeley Dietvorst, is nevertheless, in a corporate sense and setting, a very big challenge for any idea of a mathematical corporation where mathematical algorithms permeate all data-driven decision processes.
Berkeley J. Dietvorst, Joseph P. Simmons and Cade Massey, “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err.”, Journal of Experimental Psychology: General (2014). A study of the widespread algorithm aversion, i.e., human expectations towards machines are substantially higher than towards fellow humans. This results in an irrational aversion to machine-based recommendations versus human-based recommendations, even though algorithm-based forecasts are on average better to much better than the human-based equivalent in apples-by-apples comparisons.
Unless otherwise specified, the results presented here come from a surveymonkey.com survey conducted between November 11th and November 21st, 2017. The survey took on average 2 minutes and 35 seconds to complete.
The data contains 2 main survey collector groups:
Survey Monkey paid collector group, run between November 11th and 14th, 2017, with 352 completed responses from the USA. Approximately 45% female and 55% male in the surveyed sample, with an age distribution between 18 and 75 years of age and an average age of 48.8. The specified minimum income level was set to $75 thousand, or about 27% higher than the median US real household income level in 2016; the average household income level in this survey is approx. $125 thousand annually. Ca. 90%, or 316 out of the 352 respondents, had heard of Artificial Intelligence (AI) previously. For AI-relevant questions, only these 316 were used; surveyed respondents that had not previously heard of AI (36 out of 352) were not considered. More than 70% of the respondents had a 4-year college or graduate-level degree. About 70% of the respondents were married and 28% had children under the age of 18. Moreover, ca. 14% currently had no employment.
Social Media (e.g., Facebook, LinkedIn, Twitter, …) collector group, run between November 11th and 21st, 2017, which collected in total 115 completed responses, primarily from the telecom & media industry and mainly from Europe. The gender distribution comprised around 38% female and 62% male. The average age for this sample is 41.2. No income data is available for this group. About 96% (110) had heard of Artificial Intelligence; for AI-related questions, only respondents that confirmed they had heard about AI have been considered. Ca. 77% of the respondents have a 4-year college or graduate-level degree. 55% of the surveyed sample are married, and a bit more than 50% of this group have children under 18. Less than 2% of the respondents were currently not employed.
It should be emphasized that the SurveyMonkey survey was a paid survey at 2.35 Euro per response, totaling 1,045 Euro for 350 responses. Each respondent completed 18 questions. Age balancing was chosen to be basic, and the gender balancing to follow the census.