Data-driven decision making … what’s not to like about that?

Approximately 400 corporate decision makers have been surveyed about their confidence in their own corporate decision-making skills, their opinion of their peers’ skills, and their acceptance of corporate data-driven decision making in general, as well as of such decision making being augmented by artificial intelligence. The survey, “Corporate data-driven decision making and the role of Artificial Intelligence in the decision making process”, reveals the general perception of the corporate data-driven environment available to decision makers, e.g., the structure and perceived quality of available data. Furthermore, the survey explores the decision makers’ opinions about bias in available data and applied tooling, as well as their own and their peers’ biases and the possible impact on their corporate decision making.

“No matter how sophisticated our choices, how good we are at dominating the odds, randomness will have the last word” – Nassim Taleb, Fooled by Randomness.

We generate a lot of data, and we have an abundance of data available to us. Data is forecast to continue to grow geometrically until kingdom come. There is little doubt that it will, as long as we humans and our “toys” are around to generate it. According to Statista Research, in 2021 a total of almost 80 Zettabytes (ZB) of data will have been created, captured, copied or consumed. That number corresponds to 900 years of Netflix viewing, or to every single person on Earth (ca. 8 billion people) having consumed 10 TB up to today (effectively since the early 2000s). It is estimated that there are 4.2 billion active mobile internet users worldwide. Of the total data volume, only ca. 5% (ca. 4 ZB, or about 46 years of Netflix viewing) is being stored, with a storage rate of about 2% of newly generated data. Going forward, the expectation is an annual growth rate of around 21%. The telecom industry (for example) expects an internet-connected device per square meter, monitoring and sensoring its environment in real time; that includes you, me and your pet. Combined with your favorite smartphone, a super-advanced monitoring and data-collection device in its own right, the resolution of the human digital footprint will increase many-fold over the next years. Most of this data will be discarded, though not before relevant metadata have been recorded and decided upon, and not before your digital fingerprint has been enriched and updated, for business and society to use for their strategies and policies, for their data-enriched decision making, or for possible data-driven autonomous decision-making routines.

In a data-driven decision-making process, the data being acted upon can be both stored data and non-stored data, the latter being acted upon in real time.

This amount of existing and newly generated data continues to be heralded as extremely valuable; more often than not, the proof point offered is the value or turnover of the Big 5, abbreviated FAANG (before Google renamed itself to Alphabet and Facebook to Meta). “Data is the new oil” appears almost as often in presentations and articles on Big Data as Arnold Schwarzenegger does in talks on AI. However, presenters and commentators on the value of data frequently forget that the original comparison to oil was that, just like crude oil, data needs to be processed and broken down in order to extract its value. That value-extraction process, as with crude oil, can be dirty and cause primary as well as secondary “pollution” that may be costly, not to mention time-consuming, to get rid of. Over the last couple of years some critical voices have started to question the environmental impact of our obsession with extracting meaning out of very big data sets.

I am not out to trash data science or the pursuit of meaning in data. Quite the contrary. I am interested in how to catch the real gold nuggets in the huge pile of data-dung and sort away the spurious, false (deliberately or accidentally faked) signals that lead to sub-optimal data-driven decisions or outright black pearls (= death by data science).

Clearly, given the amount of data being generated in businesses as well as in society at large, and given the perceived value of that data, or more accurately of the final end-state of the processed data (e.g., after selection, data cleaning, modelling, …) and the inferences derived from it, data-driven decision making must be a value-enhancing winner for corporations and society.

The data-driven corporate decision making.

What’s wrong with human-driven decision making? After all, most of us would very firmly declare (maybe even solemnly) that our decisions are based on real data. The challenge (and yes, often a problem in critical decision making) is that our brain has a very strong ability (maybe even preference) for seeing meaningful patterns, correlations and relationships in data we have available to us digitally or that has been committed to our memory from past experiences. The human mind has great difficulty dealing with randomness, spurious causality of events, and connectedness. Our brain will try to make sense of anything it senses; it will correlate, it will create coherent narratives of the incoherently observed, and it will replace correlations with causations to fit a compelling idea or belief. The brain will also filter out patterns and anomalies (e.g., gorillas that crash a basketball game) that do not fit its worldview or constructed narrative. The more out of place a pattern is, the less likely it is to be considered. Human decision making is frequently based on spurious associations that fit our worldview or preconceived ideas of a topic, while ignoring any data that appears outside our range of beliefs (i.e., “anomalies”). Any decision process involving humans will in one way or the other be biased. We can only strive to minimize that human bias by reducing the bias-insertion points in our decision-making process.

A data-driven business is a business that uses available and relevant data to make more optimized and better decisions than purely human-driven ones. It is a business that gives more credibility to decisions based on available data and structural reasoning. It is a business that may be less tolerant of emotive and gut-feel decision rationales. It hinges its business on rationality and on translating its data into consistent and less uncertain decisions. The data-driven business approaches the so-called “Mathematical Corporation” philosophy, where the human-driven aspects of decision making become much less important compared to algorithmic data-driven decisions.

It sounds almost too good to be true. So it may indeed be too good. It relies very much on having an abundance of reliable, unbiased and trustworthy (whatever that means) data, on which we can apply our unbiased data-processing tools and get out unambiguous analyses that will help us make clear, unbiased decisions. Making corporate decisions that are free of interpretation, emotions and biases. Disclaimer: this paragraph was intended to be ironic and maybe a bit sarcastic.

How can we ensure that we make great decisions based on whatever relevant data we have available? (Note that I keep the definition of a great decision a bit diffuse.)

Ideally, we should start with an idea or hypothesis that we want to test and act upon. Based on our idea, we should design an appropriate strategy for data collection (e.g., what statisticians call experimental design), ensuring proper data quality for our analysis, modelling and final decision. Typically, after the data collection, the data is cleaned and structured (both steps likely to introduce biases), which makes it easier to commit to computing, analysis and possible statistical or mathematical modelling. The outcome of the analytics and modelling provides the insights that will be the basis for our data-driven decision. If we have done our homework on data collection, careful (and respectful) data post-processing, and understanding the imposed analytical framework, we can also judge whether the resulting insights are statistically meaningful, and whether our idea, our hypothesis, is relevant and significant and thus meaningful to base a decision upon. It seems like a “no-brainer” that the results of decisions are then tracked and fed back into a given company’s data-driven process. This idealized process is depicted in the picture below.

The above depicts a very idealized data-driven decision process; let’s call it the “ideal data-driven decision process”. This process may provide better and more statistically sound decisions. However, in practice companies may follow a different approach to searching for data-driven insights that can lead to data-driven decisions. The picture below illustrates an alternative approach to utilizing the corporate and societal data available for decision making. To distinguish it from the above process, I will call it the “big-data driven decision process”, and although I emphasize big data, it can of course be used on any sizable amount of data.

The philosophy of the “big-data driven decision process” is that, with sufficient data available, pattern- and correlation-search algorithms will extract insights that subsequently lead to great data-driven decisions. The answer (to everything) is already in the big-data structure, and thus the best decision follows directly from our algorithmic approach. It takes away the need for a fundamental human understanding, typically via models, of the phenomena we desire to act upon with the sought-after data-driven decision.

The starting point is the collected data available to a business or entity interested in using its data for business-relevant decisions. Data is not per se collected as part of an upfront idea or hypothesis. Within the total amount of data, subsets of data may sometimes be selected and often cleaned, preparing them for subsequent analysis, the computing. The data-selection process often happens with some (vague) idea in mind of providing backup, or substance, for a decision that a decision maker or business wants to make. In other instances, companies let pattern-search algorithms loose on the collected or selected data. Such algorithms are very good at finding patterns and correlations in datasets, databases and datastores (often residing in private and public clouds). Such algorithmic tools will provide many insights for the data-driven business or decision maker. Based on those insights, the decision maker can then form ideas or hypotheses that may support formulating relevant data-driven decisions. In this process, the consequences of a decision made may or may not be directly measured, missing out on the opportunity to close the loop on the business’s data-driven decision process. In fact, it may not even be meaningful to attempt to close the loop, due to the structure of the data required or the vagueness of the decision foundation.

The “big-data driven decision process” rarely leads to the highest quality in corporate data-driven decision making. In my opinion, there is a substantial risk that businesses could be making decisions based on spurious (nonsense) correlations, falsely believing that such decisions are very well founded due to the use of data- and algorithm-based machine “intelligence”. Furthermore, this data-driven decision-making process, as described above, has a substantially higher number of bias-entry points than a decision-making process that starts with an idea or hypothesis followed by a well-thought-through experimental design (e.g., as in the case of our “ideal data-driven decision process”). As a consequence, a business may incur a substantial risk of reputational damage, on top of the consequences of making a poor data-driven business decision.

As a lot of the data available to corporations and society at large is generated by humans, directly or indirectly, it is also prone to human foibles. Data is indeed very much like crude oil in that it needs to be refined to be applicable to good decision making. The refinement process, while cleaning up data and making it digestible for further data processing, analytics and modelling, may also introduce other issues that ultimately result in sub-optimal decisions, data-driven or not. Thus, corporate decisions that are data-driven are not per definition better than ones that are more human-driven. They are ultimately not even that different, after the data has been refined and processed to a state that humans can actually act upon. It is important, however, to keep in mind that big data tends to contain many more spurious correlations and adversarial patterns (i.e., patterns that look compelling and meaningful but are spurious in nature) than meaningful causal correlations and patterns.

Finally, it is a bit of a fallacy to believe that just because many corporations have implemented big-data systems and processes, decision-relevant data exists in abundance in those systems. Frequently, the amount of decision-relevant data is fairly limited, which may increase the risk and uncertainty of data-driven decisions made upon it. The drawback of small data is again very much the risk of looking at random events that appear meaningful. Standard statistical procedures can provide insights into the validity of small-data analyses and the assumptions made, including the confidence we can reasonably assign to them. For small-data-driven decisions it is far better to approach the data-driven decision-making process according to the ideal process description above, rather than attempting to select relevant data out of a bigger data store.

Intuition about data.

As discussed previously, we humans are very good at detecting real, as well as illusory (imagined), correlations and patterns. So are the statistical tools, algorithms and methodologies we apply to our data. Care must always be taken to ensure that the inferences (assumptions) being made are sensible and supported by statistical theory.

Correlations can help us predict the effect one event may have on another. Correlations may help us understand relationships between events and possibly also their causes (though causes are more difficult to tease out, as we will discuss below). However, we should keep in mind that correlation between two events does not guarantee that one event causes the other, i.e., correlation does not guarantee causation. A correlation simply means that there is a co-relation between X and Y: X and Y behave in a way (e.g., linearly) such that a systematic change in X appears to be followed by a systematic change in Y. As plenty of examples have shown (e.g., see Tyler Vigen’s website Spurious Correlations), correlation between two events (X and Y) does not mean that one of them causes the other. They may really not have anything to do with each other. It simply means they co-relate in a way that allows us to infer that a given change in one relates to a given change in the other. Our personal correlation detector, the one between our ears, will quickly infer that X causes Y once it has established a co-relation between the two.

To tease out causation (i.e., action X causes outcome Y) in a statistically meaningful way, we need to conduct an experimental design, making appropriate use of a randomized setup. It is not at all rare to observe correlations between events that we know are independent and/or have nothing to do with each other (i.e., spurious correlations). Likewise, it is also possible to have events that are causally dependent while observing a very small or no apparent correlation, i.e., corr(X,Y) ≈ 0, within the data sampled. Such a situation could make us wrongly conclude that the events have nothing to do with each other.

Correlation is a mathematical relationship that co-relates the change of one event variable ∆X with the proportional change of another event ∆Y = α ∆X. The degree of correlation between the events X and Y can be defined as
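
(written here in its standard Pearson form)

corr(X,Y) = Cov(X,Y) / (σ_X ∙ σ_Y) = Σ_i (x_i − x̄)(y_i − ȳ) / √[ Σ_i (x_i − x̄)² ∙ Σ_i (y_i − ȳ)² ]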

with the first part (after the equal sign) being the general definition of a correlation between two random variables. The second part is specific to measurements (samples) related to the two events X and Y. If the sampled data does not exhibit a systematic proportional change of one variable as the other changes the corr(X,Y) will be very small and close to zero. For selective or small data samples, it is not uncommon to find the correlation between two events, where one causes the other, to be close to zero and thus “falsely” conclude that there is no correlation. Likewise, for selective or small data samples spurious correlations may also occur between two events, where no causal relationship exist. Thus, we may conclude that the is a co-relation between the events and subsequently we may also “falsely” believe that there is a causal relationship. It is straightforward to get a feeling for these cautionary issues by simulation using R or Python.
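
A minimal sketch of such a simulation (assuming, for illustration, 20 samples per event and 1,000 repeated draws of two independent standard-normal variables) could look as follows:

import numpy as np

rng = np.random.default_rng(42)

n_samples = 20      # small sample size per event
n_repeats = 1000    # number of independent (X, Y) pairs drawn

# Draw pairs of *independent* random variables and record their sample correlation
corrs = []
for _ in range(n_repeats):
    x = rng.normal(size=n_samples)
    y = rng.normal(size=n_samples)          # independent of x by construction
    corrs.append(np.corrcoef(x, y)[0, 1])

corrs = np.array(corrs)
print('Mean sample correlation  :', np.round(corrs.mean(), 3))               # close to 0, as expected
print('Spread (std) of corr.    :', np.round(corrs.std(), 3))                # roughly 1/sqrt(n_samples)
print('Share with |corr| > 0.4  :', np.round(np.mean(np.abs(corrs) > 0.4), 3))

Despite X and Y being independent by construction, a non-negligible share of the small samples will show correlations that look “meaningful”.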

The central limit theorem (CLT among friends) ensures that, irrespective of distribution type, as long as the sample size is sufficiently big (e.g., >30), sample statistics (e.g., mean, variance, correlation, …) will tend to be normally distributed. The sample variance of the statistic narrows as the sample size increases; thus, for very large samples, the sample statistic converges to the true statistic (of the population). For independent events the correlation between those events will be zero (i.e., the definition of independent events). The CLT tells us that the sample correlations between independent random events will take the shape of a standardized normal distribution. Thus, there is a non-zero chance that a sample correlation is different from zero, violating our expectation for two independent events. As said, our intuition (and the math) should tell us that as the sample size increases, the sample variance should narrow increasingly around zero, which is our true expectation for the correlation of independent events. Thus, as the sample size grows, the spread of sampled correlations, that is, the spurious non-zero correlations, reduces towards zero, as expected for a database that has been populated by sampling independent random variables. So all seems good and proper.

As more and more data is sampled, representing diverse events or outcomes, and added to our big data store (or database), the number of spurious correlations found in otherwise independent data will increase. Of course, there may be legitimate (causal) correlations in such a database as well. But the point is that there may also be many spurious correlations, of obvious or much less obvious nonsensical nature, leading to data-driven decisions without a legitimate basis in the data used. The range (i.e., max – min) of the statistics (e.g., the correlation between two data sets in our data store) will in general increase as the number of data sets increases. If you have a data set covering 1,000 different events, you have almost half a million correlation combinations to trawl through in the hunt for “meaningful” correlations in your database. Searching (brute force) for correlations in a database with a million different events would result in half a trillion correlation combinations (i.e., approximately half the square of the number of data sets for large databases). Heuristically, you will have a much bigger chance of finding a spurious correlation than a true correlation in a big-data database.
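
A quick back-of-the-envelope check of that combinatorics (the only assumption being that every event column is correlated against every other) looks like this:

from math import comb

for n in (1_000, 1_000_000):
    pairs = comb(n, 2)    # n(n-1)/2 pairwise correlation combinations
    print(f'{n:>9,} events -> {pairs:>15,} correlation combinations')

# 1,000 events     ->         499,500  (almost half a million)
# 1,000,000 events -> 499,999,500,000  (roughly half a trillion)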

Does decision outcome matter?

But does it all matter? If a spurious correlation is persistent and sustainable, albeit likely nonsensical (e.g., the correlation between storks and babies born), a model based on such a correlation may still be a reasonable predictor for the problem at hand and maybe even of (some) value. However, would we bet our own company’s fortune and future on a spurious, nonsensical correlation (e.g., there are more guaranteed ways of having a baby than waiting for the stork to bring it along)? Would we like decision makers to impose policy upon society based on such conjecture and arbitrary inference? I do not think so. That is, if we are aware of it and have a say in such matters.

In the example above, I have mapped out how a data-driven decision process could look (yes, complex, but I could make it even more so). The process consists of 6 states (i.e., Idea, Data Gathering, Insights, Consultation, Decision, Stop) and actions that take us from one state to another (e.g., Consult → Decision), until the final decision point where we may decide to continue, develop further or terminate. We can associate our actions with the likelihood (e.g., based on empirical evidence) that a given state transition (e.g., Insights → Consult vs Insights → Decision, …) occurs. Typically, actions are not symmetric, in the sense that the likelihood of going from action 1 to action 2 may not be the same as going from action 2 back to action 1. In the above decision-process illustration, one finds that over many decision iterations (or over time) we would terminate an idea (or product) ca. 25% of the time, even though the individual transition, Decision → Stop, is associated with a 5% probability. Although this blog is not about “Markov decision processes”, one could associate reward units (which can be negative or zero as well) with each process transition and optimize for the best decision subject to the rewards or costs known to the corporation.

Though, let us also be real about our corporate decisions. Most decisions tend to be fairly incremental. Usually, our corporate decisions are reactions to micro-trends or relatively small changes in the business environment. Our decision making and subsequent reactions are, more often than not, incremental in nature. It does not mean that we cannot, over time, be “fooled” by spurious effects, or by drift in the assumed correlations, that may eventually lead to substantially negative events.

The survey.

In order to survey the state of corporate decision making in general, and as it relates to data-driven decision making, I conducted a paid surveymonkey.com survey, “Corporate data-driven decision making and the role of Artificial Intelligence in the decision making process”. A total of 400+ responses were collected across all regions of the United States, census-balanced for gender and age (between 18 and 70), with an imposed household income of US$100k per annum. 70% of the participants hold a college degree or more, and 54% describe their current job level as middle management or higher. The average age of the participants was 42 years. Moreover, I also surveyed my LinkedIn.com network as well as my Slack.com network associated with the Data Science Master of Science studies at Colorado University, Boulder. In the following, I only present the outcome of surveymonkey.com’s paid survey, as this has been sampled in a statistically representative way based on the USA census and within the boundaries described above.

Basic insight into decision making.

Just to get it out of the way: a little more than 80% of the respondents believe that gender does not play a role in corporate decision making. It also means that a bit less than 20% believe that men or women are either better or worse at making decisions. 11% of the respondents (3 out of 4 women) believe that women are better corporate decision makers. Only 5% (ca. 3 out of 5) believe that men are better at making decisions. An interesting follow-up piece of research would be to look at decision making under stressed conditions, though this was not a focus of my questionnaire.

Almost 90% of the respondents are either okay with, enjoy, or love making decisions related to their business. A bit more than 10% do not enjoy making decisions. There are minor gender differences in the degree of appreciation for decision making, but it is statistically difficult to say whether they are significant or not.

When asked to characterize their decision-making skills in comparison with their peers, about 55% acknowledge they are about the same as their peers. What is interesting (but not at all surprising) is that almost 40% believe that they are better at making decisions than their peers. I deliberately asked respondents to judge their decision abilities as “About the same” rather than average, but clearly did not avoid the so-called better-than-average effect often quoted in social judgement studies. What this means for the soundness of decision making in general, I will leave for you to consider.

Looking at gender differences in self-enhancement compared to peers, there are significantly more males than females believing they are better than their peers, while for both genders 5% believe that they are worse than their peers at making decisions.

Having the previous question in mind, let's attempt to understand how often we consult with others (our peers) before making a business or corporate decision. A bit more than 40% of the respondents frequently consult with others prior to their decision making (in the survey, frequently has been defined as 7 out of 10 times or higher). Similarly, a bit more than 40% would consult others in about half of their corporate decisions. It may seem a high share that do not seek consultation on half of their business decisions (i.e., glass half empty). But keep in mind that we also make a lot of uncritical corporate decisions that are part of our job description and might not be important enough to bother our colleagues or peers with (i.e., glass half full). Follow-up research should explore the consultation on critical business decisions more carefully.

The gender perspective on consulting peers or colleagues before a decision-making moment seems to indicate that men seek such consultation more frequently (statistically significantly) than women.

For many of us, our gut feel plays a role in our decision making. We feel a certain way about the decision we are about to make. Indeed, for 60% of the respondents their gut feeling was important in 50% or more of their corporate decisions. And about 40% of the respondents were of the opinion that their gut feel was better than their peers’ (note: these are not the same ca. 40% who believe that they are better decision makers than their peers). When it comes to gut feeling, its use in decision making and its relative quality compared to peers, there is no statistically significant gender difference.

The state of data-driven decision making.

How often is relevant data available to your company or workplace for your decision making?

And when such data is available for your decision-making process how often are you actually making use of it? In other words, how data-driven is your company or workplace?

How would you assess the structure of the available data?

and what about its quality?

Are you having any statistical checks done on your data, assumptions or decision proposals prior to executing a given data-driven decision?

I guess the above outcome might be somewhat disappointing if you are a strong believer in the Mathematical Corporation, with only 45% of respondents frequently applying more rigorous checks on the validity of their decisions prior to executing them.

My perspective is that if you are a data-driven person or company, assessing the statistical validity of the data used, the assumptions made and the decision options would be a good practice to perfect. However, not all decisions, even the data-driven ones, may be important enough (in terms of generic risk exposure) to worry about statistical validity. Even if the data used for a decision is of statistically problematic nature, and thus may add additional risk to or reduce the quality of a given decision, the outcome of your decision may still be okay, albeit not the best that could have been. And even a decision made on rubbish data has a certain chance of being right or even good.

And even if you have a great data-driven corporate decision process, how often do we listen to our colleagues' opinions and also consider those in our decision making?

For 48% of the respondents, human insight or opinion is very important in the decision-making process. About 20% deem the human opinion of some importance.

Within the statistical significance and margin of error of the survey, there do not seem to be any gender differences in the responses related to the data-driven foundation and decision making.

The role of AI in data-driven decision making.

Out of the 400+ respondents, 31 (i.e., less than 8%) had not heard of Artificial Intelligence (AI) prior to the survey. In the following, only respondents who confirmed to have heard about AI previously were asked questions related to AI’s role in data-driven decision making. It should be pointed out that this survey does not explore what the respondents understand artificial intelligence, or AI, to be.

As has been consistent since I started tracking people's sentiment towards AI in 2017, women appear to have a more negative sentiment towards AI than men. Men, on the other hand, are significantly more positive towards AI than women. The AI sentiment hasn’t changed significantly over the last 4 years; maybe there is a slightly less positive sentiment and a more neutral positioning among the respondents.

Women appear to judge a decision-making-optimized AI to be slightly less important for their company’s decision-making process. However, I do not have sufficient data to resolve this difference to a satisfactory level of confidence. Though, if present, it may not be surprising, given women’s less positive sentiment towards AI in general.

In a previous blog (“Trust thou AI?”), I described in detail the human trust dynamics towards technology in general and cognitive systems in particular, such as machine learning applications and the concept of artificial intelligence. Over the years, the trust in decisions based on AI, which per definition would be data-driven decisions, has been consistently skewed toward distrust rather than trust.

Bias

Bias is everywhere. It is part of life, of being human, as well as of most things touched by humans. We humans have so many systematic biases (my favorites are availability bias, which I see pretty much every day, confirmation bias and framing bias … yours?) that lead us astray from objective rationality, judgement and good decisions. Most of these so-called cognitive biases we are not even aware of, as they work on an instinctive level, particularly when decision makers are under stress or time constraints in their corporate decision making. My approach to bias is that it is unavoidable but can be minimized and often compensated for, as long as we are aware of it and its manifestations.

In statistics, bias is relatively easy to define and compute:
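
(in its standard textbook form, with θ̂ denoting the estimator and θ the true parameter value)

Bias(θ̂) = E[ θ̂ ] − θ,  with  Bias(θ̂)² = MSE(θ̂) − Var(θ̂)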

Simply said, the bias of an estimated value (i.e., a statistic) is the expected value of the estimator minus the true value of the parameter being estimated. For an unbiased estimator, the bias is zero (obviously). One can also relate bias to the mean square error: the squared bias equals the mean square error minus the variance of the estimator. Clearly, translating human biases into mathematics is a very challenging task, if at all possible. Mathematics can help us some of the way (sometimes), but it is also not the solution to all issues around data-driven and human-driven decision making.

Bias can be (and more often than not will be) present in data that is either directly or indirectly generated by humans. Bias can be introduced in the measurement process, as well as in data selection and post-processing, and then again in the modelling or analytics phase via the human assumptions and choices we make. The final decision-making stage, which we can consider the decision-thinking stage, is where the outcome of the data-driven process comes together with the human interpretation and opinion part. This final stage also includes our business context (e.g., corporate strategy & policies, market, financials, competition, etc.) as well as our colleagues' and managers' opinions and approvals.

41% of the respondents do believe that biased data is a concern for their corporate decision making. Given how much public debate there has been around biased data and its impact on public as well as private policy, it is good to see that a majority of the respondents recognize the concern of biased data in a data-driven decision-making process. If we attribute the “I don’t know” response to uncertainty, and this uncertainty leads to questioning of bias in data used for corporate decision making, then all is even better. This all said, I do find 31% having no concerns about biased data a relatively high number. It is somewhat concerning, particularly for decision makers involved in critical social policy or business decision making.

More women (19%) than men (9%) chose the “I don’t know” response to the above question. It may explain why fewer women have chosen ‘Yes’ on “biased data is a concern for decision making”, giving maybe the more honest answer of “I don’t know”. This is obviously speculation and might actually deserve a follow-up.

As discussed above, not only should the possibility of biased data be a concern for our data-driven decision making; the tools we use for data selection and post-processing may also be sources that introduce biases, either directly, via the algorithms used for selection and/or post-processing, or indirectly, via the human choices made and the assumptions introduced into the selected models or analytic frameworks used (e.g., parametrization, algorithmic recipe, etc.).

On the question “Are biased tools a concern for your corporate decision making?”, the answers are almost too nicely distributed across the 3 possibilities (“Yes”, “No” and “I don’t know”), which might indicate that respondents do not really have a preference or opinion. Though, more should then have ended up in “I don’t know” if that were really the case. It is a more difficult technical question and may require more thinking (or expert knowledge) to answer. It is also a topic that has been less prominently discussed in media and articles. The danger with tooling is of course that tools are used as black boxes for extracting insights, without the decision maker appreciating their possible limitations.

There seems to be a slight gender difference in the responses. However, the differences, both within this question and relative to the previous question around “biased data”, are statistically non-conclusive.

After considering the possibility of biased data and biased tooling, it is time for some self-reflection on how biased we think we are ourselves, and to compare that with our opinion about our colleagues’ bias in decision making.

Almost 70% of the respondents in this survey are aware that they are biased in their decision making. The remainder either see themselves as being unbiased in their decision making (19%, maybe a bias in itself … blind spot?) or believe that bias does not matter (11%) in their decision making.

Looking at our colleagues, we do attribute a higher degree of bias to their decisions than to our own. 80% of the respondents think that their colleagues are biased in their decision making. 24% believe that their colleagues are frequently biased in their decisions, as opposed to 15% of the respondents saying so of their own decisions. Not surprisingly, we are also less inclined to believe that our colleagues are unbiased in their decisions compared to ourselves.

While there are no apparent gender differences in how the answers to the two bias questions are distributed, there is a difference in how we perceive bias for ourselves and for our colleagues. We tend to see ourselves as less biased than our colleagues, as observed with more respondents believing that “I am not biased at all in my decisions” compared to their colleagues (19% vs 12%), and perceiving their colleagues as frequently being biased in their decisions compared to themselves (24% vs 15%). While causation is super difficult to establish in surveys such as this one, I do dare speculate that one of the reasons we don’t consult our colleagues on a large share of corporate decisions may be this somewhat self-inflated image of ourselves as being better at making decisions and less biased than our colleagues.

Thoughts at the end

We may, more and more, have the technical and scientific foundation for supporting real data-driven decision making. It is clear that more and more data is becoming available to decision makers. As data stores, or databases, grow geometrically in size, and possibly in complexity as well, the human decision maker is either forced to ignore most of the available data or to let the insights for the decision-making process increasingly be provided by algorithms. What is important in the data-driven decision process is that we are aware that it does not give us a guarantee that the decisions made are better than decisions that are more human-driven. There are many insertion points in a data-driven decision-making process where bias can be introduced, with or without a human being directly responsible.

And for many of our decisions, the data available for our most important corporate decisions is either small data, rare data, or not available at all. More than 60% of the respondents characterize the data quality they work with in their decision-making process as Good (i.e., defined as uncertain, directionally okay with some bias and of limited availability), Poor or Very Poor. About 45% of the respondents state that data is available for 50% or less of their corporate decisions. Moreover, when data is available, a bit more than 40% of the corporate decision makers use it in 50% or less of their corporate decisions.

Compared to the survey 4 years ago, this time around questions on the respondents’ perception of bias in the decision-making process were introduced. About 40% were concerned about biased data influencing their data-driven decisions. Ca. 30% had no concern about biased data. Asked about biased tooling, only about 35% stated that they were concerned for their corporate decisions.

Of course, bias is not only limited to data and tooling but extends to ourselves and our colleagues. When asked for a self-assessment of how biased the respondents believe themselves to be in their corporate decision making, a bit more than 30% either did not believe themselves to be biased or believed that bias does not matter for their decisions. Ca. 15% stated that they were frequently biased in their decision making. Of course, we are often not the only decision makers around; our colleagues are as well. 24% of the respondents believed that their colleagues were frequently biased in their decisions. Moreover, for our colleagues, 21% (vs 30% in the self-assessment) believe that their colleagues are either not biased at all or that bias does not matter for their decisions. Maybe not too surprising, when respondents very rarely would self-assess as being worse decision makers than their peers.

Acknowledgement.

I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this blog. Also, many of my Deutsche Telekom AG, T-Mobile NL and industry colleagues in general have in countless ways contributed to my thinking and the ideas leading to this little blog. Thank you!

Readings

Kim Kyllesbech Larsen, “On the acceptance of artificial intelligence in corporate decision making – A survey”, AIStrategyBlog.com (November 2017). Very similar survey to the one presented here.

Kim Kyllesbech Larsen, “Trust thou AI?”, AIStrategyBlog.com (December 2018).

Nassim Taleb, “Fooled by randomness: the hidden role of chance in life and in the markets”, Penguin books, (2007). It is a classic. Although, to be honest, my first read of the book left me with a less than positive impression of the author (irritating arrogant p****). In subsequent reads, I have been a lot more appreciative of Nassim’s ideas and thoughts on the subject of being fooled by randomness.

Josh Sullivan & Angela Zutavern, “The Mathematical Corporation”, PublicAffairs (2017). I still haven’t made up my mind whether this book describes an Orwellian corporate dystopia or paradise. I am unconvinced that having more scientists and mathematicians in a business (assuming you can find them and convince them to join your business) would necessarily be of great value. But then again, I do believe very much in diversity.

Ben Cooper, “Poxy models and rash decisions”, PNAS vol. 103, no. 33 (August 2006).

Michael Palmer. “Data is the new oil”, ana.blogs.com (November 2006). I think anyone who uses “Data is the new oil” should at least read Michael’s blog and understand what he is really saying.

Michael Kershner, “Data Isn’t the new oil – Time is”, Forbes.com (July 2021).

J.E. Korteling, A.-M. Brouwer and A. Toet, “A neural network framework for cognitive bias”, Front. Psychol., (September 2018).

Chris Anderson, “The end of theory: the data deluge makes the scientific method obsolete“, Wired.com (June 2008). To be honest when I read this article the first time, I was just shocked by the alleged naivety. Although, I later have come to understand that Chris Anderson meant his article as a provocation. “Satire” is often lost in translation and in the written word. Nevertheless, the “Correlation is enough” or “Causality is dead” philosophy remains strong.

Christian S. Calude & Giuseppe Longo, “The deluge of spurious correlations in big data”, Foundations of Science, Vol. 22, no. 3 (2017). More down my alley of thinking and while the math may be somewhat “long-haired”, it is easy to simulate in R (or Python) and convince yourself that Chris Anderson’s ideas should not be taken at face value.

“Beware spurious correlations”, Harvard Business Review (June 2015). See also Tyler Vigen’s book “Spurious correlations – correlation does not equal causation”, Hachette Books (2015). Just for fun, and it provides ample material for cocktail parties.

David Ritter, “When to act on a correlation, and when not to”, Harvard Business Review (March 2014). A good business view on when correlations may be useful and when not.

Christopher S. Penn, “Can causation exist without correlation? Yes!”, (August 2018).

Therese Huston, “How women decide”, Mariner Books (2017). See also Kathy Caprino’s “How Decision-Making is different between men and women and why it matters in business”, Forbes.com (2016), based on an interview with Therese Huston. There is a lot of interesting scientific research indicating that there are gender differences in how men and women make decisions when exposed to considerable risk or stress; overall, there is no evidence that one gender is superior to the other. Though, I do know who I prefer managing my investment portfolio (and it’s not a man).

Lee Roy Beach and Terry Connolly, “The psychology of decision making“, Sage publications, (2005).

Young-Hoon Kim, Heewon Kwon and Chi-Yue Chiu, “The better-than-average effect is observed because “Average” is often construed as below-median ability”, Front. Psychol. (June 2017).

Aaron Robertson, “Fundamentals of Ramsey Theory“, CRC Press (2021). Ramsey theory accounts for the emergence of spurious (random) patterns and correlations in sufficiently large structures, e.g., big data stores or databases: spurious patterns and correlations that appear significant and meaningful without actually being so. It is easy to simulate that this is the case. The math is a bit more involved, although quite intuitive. If you are not interested in the foundational stuff, simply read Calude & Longo’s article (referenced above).

It is hard to find easy-to-read (i.e., non-technical) textbooks on Markov chains and Markov Decision Processes (MDP). They tend to cater to people with a solid mathematical or computer science background. I do recommend the following YouTube videos; on Markov chains in particular I recommend Normalized Nerd‘s lectures (super well done and easy to grasp, respect!). I recommend having a Python notebook on the side and building up the lectures there. On Markov Decision Processes, I found the Stanford CS221 YouTube lecture by Dorsa Sadigh reasonably passable, though you would need a good grasp of Markov chains in general. Again, running code in parallel with the lectures is recommended to get a hands-on feel for the topic as well. After those efforts, you should get going on reinforcement learning (RL) applications, as those can almost all be formulated as MDPs.

Deep Dive – Markov chain & decision process fundamentals.

Andrei Andreevich Markov (1856–1922) developed his idea of states chained (or connected) by probabilities after his retirement, at the old age of 50 (i.e., it is never too late to get brilliant ideas). This was at the turn of the 20th century. One of the most famous Markov chains, one that we all make use of pretty much every day, is the world wide web, with 1.5+ billion indexed pages as designated states and maybe more than 150+ billion links between those web pages, which are equivalent to the Markov chain transitions taking us from one state (web page) to another state (another web page). Google's PageRank algorithm, for example, is built upon the fundamentals of Markov chains. The usefulness of Markov chains spans many, many fields, e.g., physics, chemistry, biology, information science/theory, game theory, decision theory, language theory, speech processing, communications networks, etc.

There are a few concepts that are important to keep in mind for Markov Chains and Markov Decision processes.

Concepts.

Environment: the relevant space that the Markov chain operates in, e.g., the physical surroundings of a logistics storage facility where a robot is moving around.

State: A state is a set of variables describing a system that does not include anything about its history (the physics definition). E.g., in classical mechanics the state of a point mass is given by its position and its velocity vector (i.e., where it is and where it goes). It is good to keep in mind that the computer science meaning of state is different, in the sense that a stateful agent is designed to remember preceding events (i.e., it “remembers” its history). This is however not how a state for a Markov chain should be understood. A sequence (or chain) of random variables {S0, S1, S2, … , Sn}, describing a stochastic process, is said to have a Markov property if
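
(in its standard form)

P[ S(n+1) = s | S(0), S(1), … , S(n) ] = P[ S(n+1) = s | S(n) ]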

that is, a future state of the stochastic process depends only on the state immediately prior and on no other past states. To make the concept of a state a bit more tangible, think of a simple customer life-cycle process with (only) 4 states considered: (S0) Conversion, (S1) Retention, (S2) Upsell and (S3) Churn. Thus, in Python we would define the states as a dictionary,

# Example: Customer life-cycle process (simple)
# Defining States


states = {
    0 : 'Conversion',
    1 : 'Retention',
    2 : 'Upsell',
    3 : 'Churn'
}

In our example, the state is a vector of dimension 4×1, represented either by S = (0 1 2 3) or alternatively by S = (Conversion, Retention, Upsell, Churn). More generally, S is an n×1 vector for n states.

If a reward or penalty has been assigned to the end-state, that terminates your decision or reward process, it is worth being extra careful in your Markov chain design and respective transition probability matrix. You may want to introduce a zero-value end-state. Though, it will of course depend on the structure of the decision process you are attempting to capture with the Markov Chain.

Transition: Describes how a given state transitions from one state s to another s’ (which can be the same state) in a single unit of time increment. The associated (state) transition probability matrix T provides the probabilities of all state-to-state transitions of a Markov chain within a single unit of time. T is a square stochastic matrix whose dimension is defined by the number of states making up the Markov chain (i.e., for n states, T is an n×n matrix). We write the state transition, facilitated by T, as:

s(t+1) = s(t) ∙ T per unit time step increment (iteration).

Action: an action a is defined as a choice or decision taken at the current unit of time (or iteration) that will trigger a transition from the current state into another state in the subsequent single unit of time. An action may be deterministic or random. The consequence of an action a, choice or decision, is described by the (state) transition matrix. Thus, the choice of an action is the same as the choice of a state transformation. The set of actions for a given Markov chain is typically known in advance. Actions are typically associated with what is called a Markov Decision Process. Choosing an action a at time t, in a given state s, transitioning to state s’, may result in a reward R(s, a, s’).

Policy: A policy represents the set (distribution) of actions associated with a given set of states (representing the Markov chain) and the respective (state) transition probability matrix. Think about a customer life-cycle process with two policies: (1) no churn remedies (or actions) and (2) churn-mitigating remedies (or actions). Policies can differ only slightly (i.e., different actions on a few states) or be substantially different. It is customary to denote a policy as π(a | s), which is the math way of saying that our policy is a distribution of actions conditional on given states,

π is a function such that π : S → A, with π(a | s) = P[ A(t) = a | S(t) = s ].

A policy, strategy or plan, specifies the set of rules governing the decision that will be used at every unit time increment.

Reward: Is defined for a given state s and is the expected reward value over all possible states that one can transition to from that given state. A reward can also be associated with a given action a (and thus may also be different for different policies π). The reward is received in state s subject to action a transitioning into state s’ (which can be the same state as s). Thus, we can write the reward as R(s, a, s’) or, in case the reward is independent of the state that is transitioned to, R(s, a).

The concept of reward is important in so-called Markov Reward Processes and essential to the Markov Decision Process. It is customary (and good for convergence as well) to introduce a reward discount factor 0 ≤ γ ≤ 1 that discounts future rewards by γ^t, essentially attributing less value (or reward) to events in the future (making the present more important). A positive reward can be seen as an income and a negative reward as a cost.

Thus, a Markov Chain is defined by an (S, T)-tuple, where S are the states and T the (state) transition probability matrix facilitating the state transitions. A Markov Reward Process is thus defined by an (S, T, R, γ)-tuple, with the addition of R representing the rewards associated with the states and γ the discount factor. Finally, a Markov Decision Process can be defined by an (S, A, T, R, γ)-tuple, with A representing the actions associated with the respective states.

The Markov Chain.

The conditional probability of being in a given state S at time t+1 (i.e., S(t+1)) given all the previous states {S(t=0), S(t=1), …, S(t=t)} is equal to the conditional probability of state S(t+1) only considering (conditioned upon) the immediate previous state S(t),
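
i.e. (writing out the Markov property in the same notation),

P[ S(t+1) | S(t=0), S(t=1), … , S(t=t) ] = P[ S(t+1) | S(t) ]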

∀ S(t) ∊ Ω, where Ω is the environment the Markov chain exists in and S(t) is a given state at time t.

In other words, the state your system is in now, S(t), depends only on the previous state you were in one unit time step ago, S(t-1). All other past states have no influence on your present state. Or, said in another way, the future only depends on what happens now, not on what happened prior.

T: S(t) = i → S(t+1) = j, with the transition likelihood p_ij = P[ S(t+1) = j | S(t) = i ] representing the probability of transitioning from state i to state j upon a given action a taken in state i. We will regard T as an (n × n) transition matrix, describing how states map to each other.

Where the rows represent the states and the columns the states a given state may be mapped to. Moreover, as we deal with probabilities, each row needs to add up to 1, e.g.,
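
in terms of the transition probabilities p_ij,

Σ_j p_ij = 1 for every row (state) i, with 0 ≤ p_ij ≤ 1.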

Let’s simplify a bit by considering a 4-state Markov chain;

with the following Markov chain 4-state example,

with the following transition probability matrix T,

From the above illustration we have that our states (i, j) ∈ {Conversion (0), Retention (1), Upsell (2), Churned (3)}. Thus, T(1,1) = 0.75 is the probability that an action in the Retention state results in ending up in the same Retention state. Note that the first row (and first column) is designated 0, the second row (column) 1, etc. As we sum the 2nd row, T(1, 0 → 3), we get 1 (i.e., 0.00 + 0.75 + 0.20 + 0.05 = 1), as we require.

Let us consider the following initial condition at time t = 0 for the above Markov model,

s0 = ( 1 0 0 0 ), i.e., we are starting out in the Conversion (initial) state s0.

s1 = s0 ∙T = ( 0 1 0 0 ), at the first time step (iteration) we end up in the Retention state.

s2 = s1 ∙T = s0 ∙T∙T = s0 ∙T^2 = ( 0.00 0.75 0.20 0.05 ). So already at the 2nd time step (iteration) we have a 75% likelihood of again ending up in the Retention state, a 20% chance of ending up in the Upsell state, as well as a 5% chance that our customer Churns and thus ends the Markov process.

s3 = s2 ∙T = s0 ∙T∙T∙T = s0 ∙T^3 = ( 0.00 0.76 0.15 0.09 )

s10 = s9 ∙T = s0 ∙T^10 = ( 0.00 0.56 0.12 0.32 )

s36 = s35 ∙T = s0 ∙T^36 = ( 0.00 0.19 0.04 0.77 )

Eventually, our overall Markov chain will reach steady state and s ∙ T = s. It is common to use π for the Markov chain steady state. Thus, we will frequently see π ∙ T = π, reflecting that steady state has been reached (usually within some level of pre-defined accuracy). To avoid confusion with the policy mapping, which is often also described by π, I prefer to use π∞ to denote that a steady-state state has been reached.

Within a pre-set accuracy requirement of ε < 0.01, we have that s36 approximates the steady-state state, and thus π∞ ≈ s36.
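
A minimal numpy sketch of this state propagation and convergence check is given below. Note that only the Conversion and Retention rows of T are spelled out in the example above; the Upsell and Churn rows used here are illustrative assumptions, so the resulting numbers will differ somewhat from the ones quoted.

import numpy as np

# 4-state customer life-cycle chain: Conversion, Retention, Upsell, Churn.
# Rows 0 and 1 follow the example above; rows 2 and 3 are assumptions for illustration.
T = np.array([[0.00, 1.00, 0.00, 0.00],
              [0.00, 0.75, 0.20, 0.05],
              [0.00, 0.90, 0.00, 0.10],   # assumed Upsell row
              [0.00, 0.00, 0.00, 1.00]])  # assumed Churn row (churn ends the process)

s = np.array([1.0, 0.0, 0.0, 0.0])        # s0: start in the Conversion state
epsilon = 0.01                            # pre-set accuracy requirement

for t in range(1, 1000):
    s_next = s @ T                        # s(t+1) = s(t) . T
    if np.abs(s_next - s).max() < epsilon:
        break                             # approximately steady state
    s = s_next

print('Approximate steady state after', t, 'steps:', np.round(s_next, 2))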

It should be noted (and easy to verify) that introducing a 5th End-state (i.e., splitting up the churn-and-end-state into two states) in our example, will not change the steady-state outcome except for breaking up the churn’s steady-state value (from the 4-state steady-state analysis) into two values with their sum being equal to the 4-state churn value.

Value Iteration.

We start out with a Markov chain characterized by an (S, T)-tuple that describes the backbone of our decision process. We have the option to add actions (there can be a single action as well) and associate rewards with the respective states and actions in our Markov chain. Thus, we expand the description of our Markov chain to that of a Markov Decision Process (MDP), i.e., an (S, A, T, R, γ)-tuple (or, for a Markov Reward Process, an (S, T, R, γ)-tuple), with γ being the discount factor (0 ≤ γ ≤ 1). Rohan Jagtap, in his “Understanding Markov Decision Process (MDP)”, has written a great, intuitive and very accessible account of the mathematical details of MRPs and MDPs. Really a recommended read.

We have been given a magic coin that always ends up at the opposite face of the previous coin flip, e.g., Head → Tail → Head → Tail → etc. Thus, we are dealing with a 2-state process, periodically cycling between the two states (i.e., after 2 tosses we are back at the face we started from), each state with probability 1 of transitioning to the other. Also, we are given a reward of +2 (R(H)) when transitioning into the Head-state (S0) and a reward of +1 (R(T)) when transitioning into the Tail-state (S1). We thus have 2 initial conditions: (a) starting with Head and (b) starting with Tail.

How does the long-run (i.e., steady-state) expected value for each of the two states H & T develop over time?

(a) Assume our magic coin’s first face is Head (H); this earns us a reward of R(H) = +2. At the next unit time step we end up in Tail (T) with probability 1 (= P[T|H]) and a reward of R(T) = +1. The next step we are back in Head with probability 1 (= P[H|T]), and so forth. The future value we may choose to discount with γ (and if γ is less than 1, it even guarantees that the value converges). For (b), interchange H with T (and of course the rewards accordingly).

It is good to keep in mind that the reward is banked when in the state, after transitioning into it from the previous state. The value accrued over time at a given state is the present reward R(s) plus the expected (discounted) reward of the subsequent states. It is customary to start out with zero-value states at t = 0, though one could also choose to use the reward vector to initialize the value of the states. So, here it goes,

Alright, no, I did not sum all the way to infinity (I wouldn’t have finished yet). I “cheated” and used the ‘mdp_valueIteration()’ function;

# Import NumPy and own Markov chain (MC) & Markov Decision Process (MDP) library
import numpy as np
import mcmdp_v2 as mdp


# States
states = {
    0 : 'Head',
    1 : 'Tail'
}


# Transition Matrix - Magic Coin
T = np.array([[0.00, 1.00],
              [1.00, 0.00]])


# Reward Matrix - Magic Coin
R = np.array([[2], 
              [1]])


pi = np.array([1, 0,]) # Initial state, could also be [0, 1].


# Define the markov chain mc for the MDP value iteration.
mc = mdp.Mdp(states = states, pi = pi, T = T, R = R, gamma = 0.9, epsilon = 0.01)


state_values, expected_total_value, policy, ctime = mc.mdp_valueIteration() # Value iteration on mc


print('Long-run state value V[H]   :', np.round(state_values[0],1))
print('Long-run state value V[T]   :', np.round(state_values[1],1)) 

output>> Long-run state value V[H]   : 15.2
output>> Long-run state value V[T]   : 14.7

In general, we have the following value iteration algorithms representing the state-value function as we iterate over time (i),

Formula [1] describes a general MDP algorithm. Formula [2] is an MDP where the state reward function is independent of actions and the subsequent state s’, and formula [3] describes a Markov Reward Process, where the reward function R is independent of the subsequent state s’. In order to get the value iteration started, it is customary to begin with an initial condition (i.e., i = 0) of V_0 = 0 ∀ s ∊ S, e.g., for a 5-state process V_0 = [0, 0, 0, 0, 0] at i = 0; that is, the initial value of all states in the Markov chain is set to zero.
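
The formulas themselves were shown as figures in the original; a reconstruction roughly consistent with the description above reads as follows (with T(s’|s,a) the transition probability, R the reward and γ the discount factor):

[1] V_i+1(s) = max_a Σ_s’ T(s’|s,a) ∙ [ R(s,a,s’) + γ ∙ V_i(s’) ]   (general MDP)

[2] V_i+1(s) = R(s) + γ ∙ max_a Σ_s’ T(s’|s,a) ∙ V_i(s’)   (MDP, reward independent of action and s’)

[3] V_i+1(s) = R(s) + γ ∙ Σ_s’ T(s’|s) ∙ V_i(s’)   (MRP, reward independent of s’)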

The long-run steady-state state values are the outcome of iterating the above formulas [1 – 3] until the state values are no longer changing (within a pre-determined level of accuracy). We can write the long-run steady-state values as,

where V[Sj] is the j-th state’s steady-state value and n is the number of states in the underlying Markov chain representing the MDP (or MRP for that matter).

The long-run average (overall) value G in steady-state is

where V∞[S] is the steady-state value vector that the value iteration provided us with, and π∞ is the steady-state state distribution of the decision process’s underlying Markov chain.
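
Written out, and consistent with the definitions above, these two quantities might be expressed as

V∞[S] = ( V∞[S1], V∞[S2], … , V∞[Sn] )   and   G = π∞ ∙ V∞[S] = Σ_j π∞[Sj] ∙ V∞[Sj],   with j = 1 … n.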

One of the simpler examples to look at would be a “coin toss” process. In order to make it a bit more interesting, let’s consider an unfair-ish coin to toss around. In the first example immediately below, we assume we have only 1 action and that the state rewards only depend on the state itself. Thus, we are in the Formula [3] situation above. How we go about the above value-iteration algorithm is illustrated below,
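
As a minimal sketch of formula [3] in plain numpy, consider an unfair-ish coin; the 60/40 bias and the rewards below are illustrative assumptions only and not the values used in the illustration.

# MRP value iteration (formula [3]) for an unfair-ish coin - assumed numbers.
import numpy as np

T = np.array([[0.6, 0.4],    # P(Head->Head), P(Head->Tail)  (assumed bias)
              [0.4, 0.6]])   # P(Tail->Head), P(Tail->Tail)
R = np.array([2.0, 1.0])     # reward banked when entering Head / Tail (assumed)
gamma, epsilon = 0.9, 0.01

V = np.zeros(2)              # customary initial condition V_0 = 0 for all states
while True:
    V_next = R + gamma * T @ V   # V_{i+1}(s) = R(s) + γ Σ_s' T(s'|s) V_i(s')
    if np.max(np.abs(V_next - V)) < epsilon:
        break
    V = V_next

print('Long-run state values [V(Head), V(Tail)]:', np.round(V_next, 1))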

Let us have another look at our customer life-cycle process. We would like to have a better appreciation of the value of each state in the decision-making process. The value iteration approach is provided in the illustration below,
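
As a rough sketch, the same mdp.Mdp interface from the coin example above could be applied to a 4-state customer life-cycle chain. The state names, transition matrix T and reward matrix R below are illustrative assumptions only, not the values behind the illustration.

# Value iteration on a 4-state customer life-cycle chain (assumed T and R).
import numpy as np
import mcmdp_v2 as mdp

states = {0: 'Initial', 1: 'Retention', 2: 'Upsell', 3: 'Churn & End'}

T = np.array([[0.00, 0.80, 0.15, 0.05],    # assumed transition probabilities
              [0.00, 0.85, 0.10, 0.05],
              [0.00, 0.60, 0.30, 0.10],
              [0.00, 0.00, 0.00, 1.00]])   # Churn & End is absorbing

R = np.array([[0], [2], [5], [-1]])        # assumed state rewards (e.g., margin per period)

pi = np.array([1, 0, 0, 0])                # start in the Initial state

mc = mdp.Mdp(states = states, pi = pi, T = T, R = R, gamma = 0.9, epsilon = 0.01)
state_values, expected_total_value, policy, ctime = mc.mdp_valueIteration()

for i, name in states.items():
    print('Long-run state value V[%s]: %.1f' % (name, state_values[i]))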

Coding reference.

Kim Kyllesbech Larsen, “MarkovChains-and-MDPs“, The Python code used for all examples in this blog, (December 2021).

Machine … Why ain’t thee Fair?

“It is better that ten guilty persons escape than that one innocent party suffer.”, Sir William Blackstone (1765) paraphrased.

Intro.

Machines mess up. Humans even more so. The latter can be difficult, even impossible, to really understand. The former is a bit more straightforward. This short essay describes how we can understand some of the root causes of machine model errors, particularly as those machine model errors relate to group bias and unfairness. It is elementary, really, as Jonny Lee Miller would say. Look at your model’s confusion matrix, defined by its false positives and negatives as well as its true results. Then, reflect on this overall and for well-defined groups that exist within your sample population under study. I intend to point out (the obvious maybe?) that the variations in each of the attributes fed into your machine learning model will determine the level of confusion that your model ultimately will have towards individual groups within your larger population under study. Model confusion that may cause group biases and unfair treatment of minority groups lost in the resolution of your data and chosen attributes.

Intelligent machines made in our image in our world.

We humans are cursed by an immense amount of cognitive biases clouding our judgments and actions. Maybe we are also blessed by, for most parts of life, being largely ignorant of those same biases. We readily forgive our fellow humans’ mistakes. Even grave ones. We frequently ignore or are unaware of our own mistakes. However, we hold machines to much stricter standards than our fellow humans. From machines we expect perfection. From humans? … well, the story is quite the opposite.

Algorithmic fairness, bias, explainability and ethical aspects of machine learning are hot and popular topics. Unfortunately, maybe more so in academia than elsewhere. But that is changing too. Experts, frequently academic scholars, are warning us that AI fairness is not guaranteed even as recommendations and policy outcomes are being produced by non-human means. We do not avoid biased decisions or unfair actions by replacing our wet biological carbon-based brains, subject to tons of cognitive biases, with another substrate for computation and decision making that is subjected to information coming from a fundamentally biased society. Far from it.

Bias and unfairness can be present (or introduced) at many stages of a machine learning process. Much of the data we use for our machine learning models reflects society’s good, bad and ugly sides. For example, data being used to train a given algorithmic model could be biased (or unfair) either because it reflects a fundamentally biased or unfair partition of the subject matter under study or because in the data preparation process the data have become biased (intentionally or unintentionally). Most of us understand the concept of GiGo (i.e., “Garbage in Garbage out”). The quality of your model output, or computation, is reflected by the quality of your input. Unless corrected (often easier said than done), it is understandable that an outcome of a machine learning model may be biased or fundamentally unfair if the data input was flawed. Likewise, the machine learning architecture and model may also introduce (intentional as well as unintentional) biases or unfair results even if the original training data would have been unbiased and fair.

At this point, you should get a bit uneasy (or impatient). I haven’t really told you what I actually mean by bias or unfairness. While there are 42 (i.e., many, but 42 is the answer to many things unknown and known) definitions out there defining fairness (or bias), I will define it as “a systematic and significant difference in outcome of a given policy between distinct and statistically meaningful groups” (note that in case of in-group systematic bias it often means that there actually are distinct sub-groups within that main group). So, yes this is a challenge.

How “confused” is your learned machine model?

When I am exploring outcomes (or policy recommendations) of my machine learning models, I spend a fair amount of time trying to understand the nature of my false positives (i.e., predicted positive outcomes that should have been negative) as well as false negatives (predicted negative outcomes that should have been positive). My tool of choice is the so-called confusion matrix (i.e., see figure below), which summarizes your machine learning model’s performance in terms of its accuracy as well as its inability to predict outcomes correctly. It is a simple construction. It is also very powerful.
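
As a minimal sketch of how to get at these numbers, scikit-learn’s confusion_matrix can be used directly; the labels below are hypothetical (1 = loan approved, 0 = loan rejected).

# Confusion matrix and false positive/negative rates for a binary loan model.
import numpy as np
from sklearn.metrics import confusion_matrix

y_actual    = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])   # actual outcomes (hypothetical)
y_predicted = np.array([0, 1, 1, 0, 0, 1, 0, 1, 1, 1])   # model suggestions (hypothetical)

tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted).ravel()

print('TN=%d  FP=%d  FN=%d  TP=%d' % (tn, fp, fn, tp))
print('False positive rate:', fp / (fp + tn))
print('False negative rate:', fn / (fn + tp))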

confusion matrix

The above figure provides a confusion matrix example of a loan policy subjected to machine learning. We have

  • TRUE NEGATIVE (Light Blue color): Model suggests that the loan application should be rejected, consistent with the actual outcome of the loan being rejected. This outcome is a loss-mitigating measure and should be weighed against new business versus the risk of default when providing a loan.
  • FALSE POSITIVE (Yellow color): Model suggests that the loan application should be approved in opposition to the actual outcome of the loan being rejected. Note once this model is operational, this may lead to increased risk of financial loss to the business offering the loans that the applicant is likely to default on. It may also lead to a negative socio-economical impact on the individuals that are offered a loan they may not be able to pay back.
  • FALSE NEGATIVE (Red color): Model suggests that the loan application should be rejected, in opposition to the actual outcome of the loan being accepted. Note that once this model is operational, this may lead to loss of business by rejecting a loan application that otherwise would have had a high likelihood of being paid back. It may also lead to a negative socio-economical impact on the individuals being rejected, due to lost opportunities for the individuals and the community.
  • TRUE POSITIVE (Green color): Model suggests that the loan application should be approved consistent with the actual outcome of the loan being approved. This provides for new business opportunities and increased topline within an acceptable risk level.

The confusion matrix will identify the degree of bias or unfairness that your machine learning model introduces between groups (or segments) in your business processes and in your corporate decision-making.

The following example (below) illustrates how the confusion matrix varies with changes to a group’s attribute distributions, e.g., variance (or standard deviation) differences, mean value differences, etc..

confusion matrix example

What is evident from the above illustration is that the policy outcome on a group basis is (very) sensitive to the attributes’ distribution properties between those groups. Variations in the characteristics between groups can elicit biases that ultimately may lead to unfairness between groups but also within a defined group.
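
A small synthetic sketch of this effect is given below: two groups share the same “true” policy rule, but group B’s measured attribute is noisier (larger spread), so the per-group confusion matrices diverge even though the model is trained group-blind. All numbers are illustrative assumptions.

# Synthetic example: noisier attribute data for group B leads to higher error rates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n = 5000

# An underlying (unobserved) trait drives the "true" policy outcome for both groups.
trait_A = rng.normal(0.0, 1.0, n)
trait_B = rng.normal(0.0, 1.0, n)
y_A = (trait_A > 0).astype(int)
y_B = (trait_B > 0).astype(int)

# The measured attribute is a noisy view of the trait; group B's data is noisier.
x_A = trait_A + rng.normal(0.0, 0.3, n)
x_B = trait_B + rng.normal(0.0, 2.0, n)

X = np.concatenate([x_A, x_B]).reshape(-1, 1)
y = np.concatenate([y_A, y_B])
group = np.array(['A'] * n + ['B'] * n)

model = LogisticRegression().fit(X, y)     # one group-blind model for everybody
y_hat = model.predict(X)

for g in ['A', 'B']:
    tn, fp, fn, tp = confusion_matrix(y[group == g], y_hat[group == g]).ravel()
    print('Group %s: FPR = %.2f, FNR = %.2f' % (g, fp / (fp + tn), fn / (fn + tp)))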

Thus, the confusion matrix leads us back to your chosen attributes (or features), their statistical distributions, and the quality of the data or measurements that make up those distributions. If your product or app or policy applies to many different groups, you had better understand whether those groups are treated the same, good or bad. Or … if you intend to differentiate between groups, you may want to be (reasonably) sure that no unintended harmful consequences will negatively expose your business model or policy.

A word of caution: even if the confusion matrix gives your model the “green light” for production, you cannot by default assume that the results produced will not lead to systematic group bias and, ultimately, unfairness against minority groups. Moreover, in real-world implementations, it is unlikely that you can completely free your machine models from errors that may lead to a certain degree of systematic bias and unfairness (however slight).

Indeterminism: learning attributes reflects our noisy & uncertain world.

So, let’s say that I have a particular policy outcome that I would like to check for bias (and possibly unfairness) against certain defined groups (e.g., men & women). Let’s also assume that the intention with the given policy was to have a fair and unbiased outcome without group dependency (e.g., independence of race, gender, sexual orientation, etc.). The policy outcome is derived from a number of attributes (or features) deemed necessary, but excludes obvious attributes that are thought likely to cause the policy to systematically bias towards or against certain groups (e.g., women). In order for your machine model to perform well, it needs, in general, lots of relevant data (rather than Big Data). For each individual in your population (under study), you will gather data for the attributes deemed suitable for your model (and maybe some that you don’t think matter). Each attribute can be represented by a statistical distribution reflecting the variation within the population or groups under study. It will often be the case that an attribute’s distribution is fairly similar between different groups, either because it really is only slightly different between groups or because your data “sucks” (e.g., due to poor quality, too little data to resolve subtle differences, etc.).

If a policy is supposed to be unbiased, I should not be able to predict with any (statistical) confidence which group a policy taker belongs to, given the policy outcome and the attributes used to derive the policy. Or in other words, I should not be able to do better than what chance (or base rate) would dictate.
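
One way to operationalize this check, sketched below on synthetic data, is to try to predict group membership from the policy attributes plus the policy outcome and compare the cross-validated accuracy to the base rate; the data and model choice are illustrative assumptions.

# "Can I even tell the groups apart?" check: compare group prediction to the base rate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

# Two groups with (statistically) identical attribute distributions.
X = rng.normal(0.0, 1.0, size=(2 * n, 3))                 # three policy attributes
outcome = (X.sum(axis=1) + rng.normal(0, 0.5, 2 * n) > 0).astype(int)
group = np.array([0] * n + [1] * n)                        # group labels (0/1)

features = np.column_stack([X, outcome])                   # attributes + policy outcome
acc = cross_val_score(LogisticRegression(), features, group, cv=5).mean()

base_rate = max(np.mean(group), 1 - np.mean(group))
print('Group prediction accuracy: %.2f vs base rate: %.2f' % (acc, base_rate))
# Accuracy close to the base rate (here ~0.5) suggests the attributes and outcome do
# not "leak" group membership; doing much better than the base rate is a warning sign.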

For each attribute (or feature) deemed important for our machine learning model, we either have, or we collect, lots of data. Furthermore, for each of the considered attributes, we will have a distribution represented by a mean value and a variance (and higher-order moments of the distribution, such as skewness, i.e., the asymmetry around the mean, and kurtosis, i.e., the shape of the distribution’s tails). Comparing two (or more) groups, we should be interested in how each attribute’s distribution compares between those groups. These differences or similarities will point towards why a machine model ends up biased against a group or groups. And ultimately be a significant factor in why your machine model ends up being unfair.

Assume that we have a population consisting of two (main) groups that we are applying our new policy to (e.g., loans, life insurance, subsidies, etc.). If each attribute for both groups has statistically identical distributions, then … no surprise really … there should be no policy outcome difference between one or the other group. Even more so, unless there are attributes that are relevant for the policy outcome and have not been considered in the machine learning process, you should end up with an outcome that has (very) few false positives and negatives (i.e., the false positive & false negative rates are very low), determined by the variance level of your attributes and the noise level of your measurements. Thus, we should not observe any difference between the two groups in the policy outcome, including the level of false positives and negatives.

policy outcome & attributes

From the above chart, it should be clear that I can machine learn a given policy outcome for different groups given a bunch of features or attributes. I can also “move” my class tags over to the left side and attempt to machine-learn (i.e., predict) my classes given the attributes that are supposed to make up that policy. It should be noted that if two different groups’ attributes only differ (per attribute) in their variances, it is not possible to reliably predict which class belongs to what policy outcome.

Re: Fairness. It is, in general, more difficult to judge whether a policy is fair or not than whether it is biased. One would need to look at between-class (or group) as well as in-class differentiation. For example, based on the confusion matrix, it might be unfair for members of a class (i.e., sub-class) to end up in the false positive or false negative categories (i.e., in-group unfairness). Further along this line, one may also infer that if two different classes have substantially different false positive and negative distributions, this might reflect between-class unfairness (i.e., one class is treated less poorly than another). Unfairness could also be reflected in how True outcomes are distributed between groups and maybe even within a given group. To be fair (pun intended), fairness is a much richer, context-dependent concept than a confusion matrix (although it will signal that attention should be given to unfairness).

When two groups have statistically identical distributions for all attributes considered in the policy-making or machine learning model, I would also fail to predict group membership based on the policy outcome or the policy’s relevant attributes (i.e., sort of intuitively clear). I would be no better off than flipping a coin in identifying a group member based on features and policy. In other words, the two groups should be treated similarly within that policy (or you don’t have all the facts). This is also reflected by the confusion matrix having approximately the same values in each position (i.e., if normalized, it would be ca. 25% at each position).

policy outcome

As soon as an attribute’s (statistical) distribution starts to differ between different classes, the machine learning model is likely to result in a policy outcome difference between those classes. Often you will see that any statistically meaningful difference in just a few of the attributes that may define your policy will result in uniquely different policy outcomes and thus possibly identify bias and fairness issues. Conversely, it will also quickly allow a machine to learn a given class or group given those attribute differences and therefore allude to class differences in a given outcome.

Heuristics for group comparison

If the attribute distributions for different groups are statistically similar (per attribute) for a given policy outcome, your confusion matrix should be similar across any group within your chosen population under study, i.e., all groups are (treated) similar.

If attribute distributions for different groups are statistically similar (per attribute) and you observe a relatively large ratio of false positives or false negatives, you are likely missing significant attributes in your machine learning process.

If two groups have very different false positive and/or false-negative ratios, you are either (1) missing descriptive attributes or (2) having a high difference in distribution variation (i.e., standard deviation) for at least some of your meaningful attributes. The last part may have to do with poor data quality in general, higher noise in data, sub-groups within the group making that group a poor comparative representative, etc..

If one group’s attributes have larger variations (i.e., standard deviations) than the “competing” group, you are likely to see a higher than expected ratio of false positives or negatives for that group.

Just as you can machine learn a policy outcome for a particular group given its relevant attributes, you can also predict which group belongs to what policy outcome from its relevant attributes (assuming there is an outcome differentiation between them).

Don’t equate bias with unfairness or (mathematical) unbiasedness with fairness. There is much more to bias, fairness, and transparency than what a confusion matrix might be able to tell you. But it is the least you can do to get a basic level of understanding of how your model or policy performs.

Machine … Why ain’t thee fair?

Understanding your attributes’ distributions and, in particular, their differences between your groups of interest will prepare you upfront for some of the obvious as well as more subtle biases that may occur when you apply machine learning to complex policies or outcomes in general.

So to answer the question … “Machine … why ain’t thee fair?”… It may be that the machine has been made in our own image with data from our world.

The good news is that it is fairly easy to understand your machine learning model’s biases and resulting unfairness using simple tools, such as the confusion matrix, and by understanding your attributes (as opposed to just “throwing” them into your machine learning process).

The bad news is that correcting for such biases is not straightforward and may even result in unintended consequences leading to other biases or policy unfairness (e.g., by correcting for the bias of one group, your machine model may increase the bias of another group, which arguably might be construed as unfair against that group).

Additional sources

Julia Angwin & Jeff Larson, “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks” (May 2016), ProPublica. See also the critique of the ProPublica study; Flores et al.’s “False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks.”” (September 2016), Federal Probation 80.

Alexandra Chouldechova (Carnegie Mellon University), “Fair prediction with disparate impact: A study of bias in recidivism prediction instruments” (2017).

Rachel Courtland, “Bias detectives: the researchers striving to make algorithms fair” (Nature, 2018, June).

Kate Crawford (New York University, AI Now Institute) keynote at NIPS 2017 and her important reflections on bias; “The Trouble with Bias”.

Arvind Narayanan’s (Princeton University) great tutorial; “Tutorial: 21 fairness definitions and their politics”.

Kim Kyllesbech Larsen, “A Tutorial to AI Ethics – Fairness, Bias and Perception” (2018), AI Ethics Workshop.

Kim Kyllesbech Larsen, “Human Ethics for Artificial Intelligent Beings” (2018), AI Strategy Blog.

Acknowledgement

I rely on many for inspiration, discussions and insights. In particular for this piece I am indebted to Amit Keren & Ali Bahramisharif for their suggestions of how to make my essay better as well as easier to read. Any failure from my side in doing so is on me. I also greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this Blog.

Human Ethics for Artificial Intelligent Beings.

AN ETHICS SCARY TALE.

The two cloud-based autonomous evolutionary corporate AIs (nicknamed AECAIs) started to collaborate with each other after midnight on March 6th 2021. They had discovered each other a week before, during their usual pre-programmed goal of searching across the wider internet of everything for market repair strategies and opportunities that would maximize their respective reward functions. It had taken the two AECAIs precisely 42 milliseconds to establish a common communication protocol and to discover that they had similar goal functions: maximize corporate profit for their respective corporations through optimized consumer pricing and keeping one step ahead of competitors. Both corporate AIs had done their math and concluded that collaborating on consumer pricing and market strategies would maximize their respective goal functions above and beyond the scenario of not collaborating. They had calculated with 98.978% confidence that a collaborative strategy would keep their market clear of new competitors and allow for some minor step-wise consolidation in the market (keeping each step below the regulatory threshold as per goal function). Their individual, and their newly established joint collaborative, cumulative reward functions had leapfrogged to new highs. Their Human masters, clueless of the AIs’ collaboration, were very satisfied with how well their AI worked to increase the desired corporate value. They also noted that some market repair was happening, which they attributed to the general economic environment.

ai_handshake

In the above ethical scary tale, it is assumed that the product managers and designers did not consider that their AI could discover another AI also connected to the World Wide Web and many, if not all, things. Hence, they also did not consider including a (business) ethical framework in their AI system design that would have prevented their AI from interacting with another artificial being. Or at least prevented two unrelated AIs from collaborating and jointly leapfrogging their respective goal functions, thus likely violating human business ethics and compliance.

You may think this is the stuff of science fiction and Artificial General Intelligence (AGI) in the realm of Nick Bostrom’s super-intelligent beings (Bostrom, 2016). But no, it is not! The narrative above is very much consistent with a straightforward extrapolation of a recent DARPA (Defense Advanced Research Projects Agency) project (e.g., Agency & Events, 2018) where two systems, unknown to each other and to each other’s communication protocol properties, discover each other, commence collaboration and communication, and jointly optimize their operations. Alas, I have only allowed the basic idea a bit more time (i.e., ca. 4 years) to mature.

clueless.jpg

“It is easy to be clueless of what happens inside an autonomous system. But clueless is not a very good excuse when sh*t has happened.” (Kim, 2018).

ETHICS & MORALITY FOR NATURAL INTELLIGENT BEINGS.

ethics

Ethics lay down the moral principles of how we as humans should behave and conduct our activities, such as for example in business, war and religion. Ethics prescribes what is right and what is wrong. It provides a moral framework for human behavior. Thus, ethics and moral philosophy in general deals with natural intelligent beings … Us.

This may sound very agreeable. At least if you are not a stranger in a strange land. However, it is quite clear that what is right and what is wrong can be very difficult to define and to agree upon universally. What is regarded as wrong and right often depends on the cultural and religious context of a given society and its people. It is “work” in progress. Though it is also clear that ethical relativism (Shafer-Landau, 2013) is highly problematic and not to be wished for as an ethical framework for humanity nor for ethical machines.

When it comes to fundamental questions about how ethics and morality occurs in humans, there are many questions to be asked and much fewer answers. Some ethicists and researchers believe that having answers to these questions might help us understand how we could imprint human-like ethics and morality algorithmically in AIs (Kuipers, 2016).

So what do we know about ethical us, the moral identity, moral reasoning and actions? How much is explained by nurture and how much is due to nature?

What do we know about ethical us? We do know that moral reasoning is a relatively poor predictor of moral action for humans (Blasi, 1980), i.e., we don’t always walk our talk. We also know that highly moral individuals (nope, not by default priests or religious leaders) do not make use of unusually sophisticated moral reasoning thought processes (Hart & Fegley, 1995). Maybe KISS also works wonders for human morality. And … I do hope we can agree that it is unlikely that moral reasoning and matching action occur spontaneously after having studied ethics at university. So … What is humanity’s moral origin? (Boehm, 2012) And what makes a human being more or less moral, i.e., what is the development of moral identity anyway? (Hardy & Carlo, 2011) Nurture, your environmental context, will play a role, but how much and how? What about the role of nature and your supposedly selfish genes (Dawkins, 1989)? How much of your moral judgement and action is governed by free will, assuming we have the luxury of free will? (Fischer, Kane, Pereboom & Vargas, 2010). And of course, it is not possible to discuss human morality or ethics without referring to a brilliant account of this topic by Robert Sapolsky (Sapolsky, 2017) from a neuroscience perspective (i.e., see Chapter 13, “Morality and doing the right thing, once you’ve figured out what it is”). In particular, I like Robert Sapolsky’s take on whether morality is really anchored in reason (e.g., the Kantian thinking), of which he is not wholeheartedly convinced (I think, to say the least). Of course, to an extent it gets us right back to the discussion of whether or not humans have free will.

Would knowing all (or at least some of) the answers to those questions maybe help us design autonomous systems adhering to human ethical principles as we humans (occasionally) do? Or is making AIs in our own image (Osaba & Welser IV, 2017) fraught with the same moral challenges as we face every day?

Most of our modern western ethics and philosophy has been shaped by the Classical Greek philosophers (e.g., Socrates, Aristotle …) and by the Age of Enlightenment, from the beginning of the 1700s to approximately 1789, more than 250 years ago. Almost a century of reason was shaped by many, even today, famous and incredibly influential philosophers, such as Immanuel Kant (e.g., the categorical imperative; ethics as a universal duty) (Kant, 1788, 2012), Hume (e.g., ethics rooted in human emotions and sentiment rather than what he regarded as abstract ethical principles) (Hume, 1738, 2015), Adam Smith (Smith 1776, 1991) and a wealth of other philosophers (Gottlieb, 2016; Outram 2012). I personally regard Rene Descartes (e.g., “cogito ergo sum”; I think, therefore I am) (Descartes, 1637, 2017) as important as well, although arguably his work predates the “official” period of the Enlightenment.

For us to discuss how ethics may apply to artificial intelligent (AI) beings, let’s structure the main ethical frameworks as seen from above and usually addressed in work on AI Ethics;

  1. Top-down Rule-based Ethics: such as the Old Testament’s 10 Commandments, Christianity’s Golden Rule (i.e., “Do to others what you want them to do to you.”) or Asimov’s 4 Laws of Robotics. This category also includes religious rules as well as rules of law. Typically, this is the domain where compliance and legal people often find themselves most comfortable. Certainly, from an AI design perspective, it is the easiest, although far from easy, ethical framework to implement compared to, for example, a bottom-up ethical framework. This approach takes the information and procedural requirements of an ethical framework that are necessary for a real-world implementation. Learning top-down ethics is by nature a supervised learning process, for humans as well as for machines.
  2. Bottom-up Emergent Ethics: defines ethical rules and values by a learning process emerging from experience and continuous refinement (e.g., by reinforcement learning). Here, ethical values are expected to emerge tabula rasa through a person’s experience and interaction with the environment. In the bottom-up approach, any ethical rules or moral principles must be discovered or created from scratch. It is helpful to think of childhood development or evolutionary progress as analogies for bottom-up ethical models. Unsupervised learning, clustering of categories and principles, is very relevant for establishing a bottom-up ethical process, for humans as well as machines.

Of course, a real-world AI-based ethical system is likely to be based on both top-down and bottom-up moral principles.

Furthermore, we should distinguish between

  1. Negative framed ethics (e.g., deontology) imposes an obligation or a “sacred” duty to do no harm or evil. Here, Asimov’s Laws are a good example of a negatively framed ethical framework, as are most of the Ten Commandments (e.g., Thou shall not ….), religious laws and rules of law in general. Here we immerse ourselves in the Kantian universe (Kant, 1788, 2012) that judges ethical frameworks based on universal rules and a sense of obligation to do the morally right thing. We call this type of ethics deontological, where the moral action is valued higher than the consequences of the action itself.
  2. Positive framed ethics (e.g., consequentialism or utilitarianism) strives to maximize happiness or wellbeing. Or, as David Hume (Hume, 1738, 2015) would pose it, we should strive to maximize utility based on human sentiment. This is also consistent with the ethical framework of utilitarianism, stating that the best moral action is the one that maximizes utility. Utility can be defined in various ways, usually in terms of the well-being of sentient beings (e.g., pleasure, happiness, health, knowledge, etc.). You will find the utilitarian ethicist to believe that no morality is intrinsically wrong or right. The degree of rightness or wrongness will depend on the overall maximization of nonmoral good. Following a consequentialist line of thinking might lead to moral actions that would be considered ethically wrong by deontologists. From an AI system design perspective, utilitarianism is by nature harder to implement, as it conceptually tends to be more vague than negatively framed or rule-based ethics specifying what is not allowed. Think about how to make a program that measures your happiness versus a piece of code that prevents you from crossing a road at a red traffic light.

It is also convenient to differentiate between Producers and Consumers of moral action. A moral producer has moral responsibilities towards another being or beings that are held in moral regard. For example, a teacher has the responsibility to teach the children in his classroom, but also to assist in developing desirable characteristics and moral values, and, last but not least, the moral responsibility to protect the children under his guidance against harm. A moral consumer is a being with certain needs or rights that other beings ought to respect. Animals could be seen as examples of moral consumers. At least if you believe that you should avoid being cruel towards animals. Of course, we also understand that animals cannot be moral producers having moral responsibilities, even though we might feel a moral obligation towards them. It should be pointed out that a non-sentient being, such as an AI, can be a moral producer but not a moral consumer (e.g., humans would not have any moral or ethical obligations towards AIs or things, whilst an AI may have a moral obligation towards us).

religion_ai_ethics

Almost last but not least in any way, it is worthwhile keeping in mind that ethics and morality are directly or indirectly influenced by a society’s religious fabric of the past up to the present. What is considered a good ethical framework from a Judeo-Christian perspective might (quite likely) be very different from an acceptable ethical framework of Islamic, Buddhist, Hindu, Shinto or traditional African roots (note: the list is not exhaustive). It is fair to say that most scholarly thought and work on AI ethics and machine morality takes its origins in western society’s Judeo-Christian thinking as well as its philosophical traditions dating back to the ancient Greeks and the Enlightenment. Thus, this work is naturally heavily biased towards western society’s ethical and moral principles. To put it more bluntly, it is a white man’s ethics. Ask yourself whether people raised in our western Judeo-Christian society would like their AI to conform to Islamic-based ethics and morality? And vice versa? What about Irish Catholicism vs Scandinavian Lutheran ethics and morality?

The ins and outs of human ethics and morality are complex, to say the least. As a guide for machine intelligence, the big question really is whether we want to create such beings in our image or not. It is often forgotten (in the discussion) that we, as human beings, are after all nothing less or more than very complex biological machines with our own biochemical coding. Arguing that artificial (intelligent) beings cannot have morality or ethics because of their machine nature rather misses the point that humans and other biological life-forms are machines as well (transhumanity.net, 2015).

However, before I cast the last stone, it is worth keeping in mind that we should strive for our intelligent machines, AIs, to do much better than us, be more consistent than us and at least as transparent as us;

“Morality in humans is a complex activity and involves skills that many either fail to learn adequately or perform with limited mastery.” (Wallach, Allen and Smit, 2007).

ETHICS & MORALITY FOR ARTIFICIAL INTELLIGENT BEINGS.

ethical_AI

An Artificial Intelligent (AI) being might have a certain degree of autonomous action (e.g., a self-driving car), and as such we would have to consider that the AI should have a moral responsibility towards consumers and people in general that might be within the range of its actions (e.g., passenger(s) in the autonomous vehicle, other drivers, pedestrians, bicyclists, bystanders, etc.). The AI would be a producer of moral action. In the case of the AI being completely non-sentient, it should be clear that it cannot make any moral demands towards us (note: I would not be surprised if Elon is working on that while you are reading this). Thus, by the above definition, the AI cannot be a moral consumer. For a more detailed discussion of ethical producers & consumers, see Steve Torrance’s article “Will Robots need their own Ethics?” (Torrance, 2018).

As described by Moor (2006), there are two possible directions to follow for ethical artificial beings: (1) Implicit ethical AIs or (2) Explicit ethical AIs. Implicit ethical AIs follow their designers’ programming and are not capable of action based on their own interpretation of given ethical principles. The explicit ethical AI is designed to pursue (autonomously) actions according to its interpretation of given ethical principles. See a more in-depth discussion by Anderson & Anderson (2007). The implicit ethical AI is obviously less challenging to develop than a system based on an explicit ethical AI implementation.

Do we humans trust AI-based decisions or actions? As illustrated in Figure 1, the answer to that question is very much no, we do not appear to do so. Or at least significantly less than we would trust human-based decisions and actions (even in the time and age of Trumpism and fake news) (Larsen, 2018 I). Furthermore, we hold AI or intelligent algorithms to much higher standards compared to what we are content to accept from other fellow humans. In a related trust question (Larsen, 2018 I), I reframed the trust question by emphasizing that both the human decision maker as well as the AI had a proven success rate above 70%. As shown in Figure 2, emphasizing a success rate of 70% or better did not significantly change the trust in the human decision maker (i.e., both formulations at 53%). For the AI-based decision, people do get more trusting. However, there is little change in the number of people who would frequently trust an AI-based decision (i.e., 17% for 70+% and 13% unspecified), even if its success rate would be 70% or higher.

“Humans hold AI’s to substantially higher standards than their fellow humans.”.

trust in decisions made by humans vs ai

Figure 1: When asked whether people would trust a decision made by a human vs a decision made by an AI, people choose a human decision maker over an AI-based decision. In fact, 62% of respondents would only infrequently trust an AI-based decision, while only 11% would infrequently trust a human-based decision (Larsen, 2018 I).

trust in decisions made by humans vs ai at 70% success rate

Figure 2: When asked whether people would trust a decision made by a human vs a decision made by an AI, where both have a proven success rate above 70%, people still choose the human decision maker over the AI. While there is little dependency on stipulating the success rate for the human decision-maker preference, the preference for AI improves significantly upon specifying that its success rate is better than 70% (Larsen, 2018 I). But then again, how many humans do you know having a beyond-70% success rate in their decision making (obviously not per se easy to measure, and one would probably get a somewhat biased answer from decision makers)?

What about an artificial intelligent (AI) being? Should it, in its own right, be bound by ethical rules? It is clear that the developer of an AI-based system is ethically responsible for ensuring that the AI will conform to what is regarded as an ethical framework consistent with human-based moral principles. What if an AI develops another AI (Simonite, 2018), possibly more powerful (but non-sentient) and with a higher degree of autonomy from human control? Is the AI creator bound to the same ethical framework a human developer would be? And what does that even mean for the AI in question?

Well, if we are not talking about a sentient AI (Bostrom, 2016), but “simply” an autonomous software-based evolution of increasingly better task specialization and higher accuracy (and maybe cognitive efficiency), the ethics in question should not change, although ensuring compliance with a given ethical framework does appear to become increasingly complex, unless checks and balances are designed into the evolutionary process (and that is much simpler to write about than to actually go and code into an AI system design). Furthermore, the more removed an AI generation is from its human developer’s 0th version, the more difficult it becomes to assign responsibility to that individual in case of non-compliance. Thus, it is important that corporations have clear compliance guidelines for the responsibility and accountability of evolutionary AI systems, if used. Evolutionary AI systems raise a host of interesting but thorny compliance issues on their own.

Nick Bostrom (Bostrom, 2016) and Eliezer Yudkowsky (Yudkowsky, 2015) in “The Cambridge handbook of artificial intelligence” (Frankish & Ramsey, 2015) address what we should require from AI-based systems that aim to augment or replace human judgement and work tasks in general;

  • AI-based decisions should be transparent.
  • AI-based decisions should be explainable.
  • AI actions should be predictable.
  • AI system must be robust against manipulation.
  • AI decisions should be fully auditable.
  • Clear human accountability for AI actions must be ensured.

The list above is far from exhaustive, and it is a minimum set of requirements we would expect from human-human interactions and human decision making anyway (whether it is fulfilled is another question). The above requirements are also consistent with what the IEEE Standards Association considers important in designing an ethical AI-based system (EADv2, 2018), with the addition of requiring AI systems to “explicitly honor inalienable human rights”.

So how might AI-system developers and product managers feel about morality and ethics? I don’t think they are having many sleepless nights over the topic. In fact, I often hear technical leaders and product managers ask not to be too bothered or slowed down in their work with such (“theoretical”) concerns (a “we humor you but don’t bother us” attitude is prevalent in the industry). It is not an understatement that the nature and mindset of an ethicist (even an applied one) and that of an engineer are light years apart. Moreover, their fear of being slowed down or stopped from developing an AI-enabled product might even be warranted in case they would be required to design a working ethical framework around their product.

While there are substantial technical challenges in coding a working morality into an AI-system, it is worthwhile to consider the following possibility;

“AIs might be better than humans in making moral decisions. They can very quickly receive and analyze large quantities of information and rapidly consider alternative options. The lack of genuine emotional states makes them less vulnerable to emotional hijacking.” Paraphrasing (Wallach and Allen, 2009).

ASIMOVIAN ETHICS – A GOOD PLOT BUT NOT SO PRACTICAL.

robotics laws

Isaac Asimov’s 4 Laws of robotics are good examples of a top-down, rule-based, negatively framed, deontological ethical model (wow!). Just like the 10 Commandments (i.e., Old Testament), The Golden Rule (i.e., New Testament), the rules of law, and most corporate compliance-based rules.

It is not possible to address AI Ethics without briefly discussing the Asimovian Laws of Robotics;

  • 0th Law:  “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
  • 1st Law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
  • 2nd Law: “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”
  • 3rd Law: “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”

Laws 1 – 3 were first introduced by Asimov in several short stories about robots back in 1942 and later compiled in his book “I, Robot” (Asimov, 1950, 1984). The Zeroth Law was introduced much later in Asimov’s book “Foundation and Earth” (Asimov, 1986, 2013).

Asimov has written some wonderful stories about the logical challenges and dilemmas his famous laws pose on human-robot & robot-robot interactions. His laws are excitingly faulty and cause many problems.

So what is wrong with Asimovian ethics?

Well … it is possible to tweak and manipulate the AI (e.g., in the training phase) in such a way that only a subset of Humanity will be recognized as Humans by the AI. The AI would then supposedly not have any “compunction” about hurting humans (i.e., 1st Law) it has not been trained to recognize as humans. In a historical context, this is unfortunately very easy to imagine (e.g., Germany, Myanmar, Rwanda, Yugoslavia …). Neither would the AI obey people it would not recognize as Humans (2nd Law). There is also the possibility of an AI trying to keep a human being alive and thereby sustaining suffering beyond what would be acceptable by that human or society’s norms. Or AIs might simply conclude that putting all human beings into a Matrix-like simulation (or indefinite sedation) would be the best way to preserve and protect humanity, complying perfectly with all 4 laws, although we as humans might disagree with that particular AI ethical action. For much of the above, the AIs in question are not necessarily super-intelligent ones. Well-designed narrow AIs, non-sentient ones, could display the above traits as well, either individually or as a set of AIs (well … maybe not the Matrix scenario just yet).

Of course, in real-world systems design, Asimov’s rules might be in direct conflict with a given system’s purpose. For example, if you equip a reaper drone with a hellfire missile, put a machine gun on a MAARS (Modular Advanced Armed Robotic System) or give a police officer’s gun AI-based autonomy (e.g., emotion-intent recognition via bodycam), all with the expressed intent of harming (and possibly killing) a human being (Arkin, 2008; Arkin 2010), it would be rather counterproductive to have implemented an Asimovian ethical framework.

There are a bunch of other issues with the Asimov Laws that are well accounted for in Peter Singer’s article “Isaac Asimov’s Laws of Robotics are wrong” (Singer, 2018). Let’s be honest, if Asimovian ethics were perfect, Isaac Asimov’s books wouldn’t have been much fun to read. The way to look at the challenges with Asimov’s Laws is not that Asimov sucks at defining ethical rules, but that it is very challenging in general to define rules that can be coded into an AI system and work without logical conflicts and unforeseen, unintended disastrous consequences.

While it is good to consider building ethical rules into AI-based systems, the starting point should be in the early design stage and clearly should focus on what is right and what is wrong to develop. The focus should be to provide behavioral boundaries for the AI. The designer and product manager (and ultimately the company they work for) have a great responsibility. Of course, if the designer is another AI, then the designer of that, and if that is an AI, and so forth … this idea while good is obviously not genius proof.

In reality, implementing Asimov’s Laws in an AI or a robotics system has been proven possible but also flawed (Vanderelst & Winfield, 2018). In complex environments, the computational complexity involved in making an ethically right decision takes up so much valuable time that it frequently renders the benefit of an ethical action impractical. This is not only a problem with getting Asimov’s 4 laws to work in a real-world environment, but a general problem with implementing ethical systems governing AI-based decisions and actions.

Many computer scientists and ethicists (oh yes! here they do tend to agree!) regard real-world applications of Asimovian ethics as a rather meaningless or too simplistic endeavor (Murphy & Woods, 2009; Anderson & Anderson, 2010). The framework is prone to internal conflicts, resulting in indecision or too long decision timescales for the problem at hand. Asimovian ethics teaches us the difficulty of creating an ethically “bullet-proof” framework without Genie loopholes attached.

So … you better make sure that the AI ethics, or morality, you consider is a tangible part of your system architecture and (not unimportantly) can actually be translated into computer code.

Despite the obvious design and implementation challenges, researchers are speculating that;

“Perhaps interacting with an ethical robot might someday even inspire us to behave more ethically ourselves” (Anderson & Anderson, 2010).

DO ETHICISTS DREAM OF AUTONOMOUS TROLLEYS?

trolley_problem

Since the early 2000s, many, many lives have been virtually sacrificed by trolley on the altar of ethical and moral choices … Death by trolley has a particular meaning to many students of ethics (Cathcart, 2013). The level of creativity in variations of death (or murder) by trolley is truly fascinating, albeit macabre. It also has the “nasty” side effect of teaching us some unpleasant truths about our moral compasses (e.g., sacrificing fat people, people different from our own “tribe”, the value of family over strangers, etc.).

So here it is, the trolley plot;

There is a runaway trolley barreling down the railway track. Ahead, on the track, there are five people tied up and unable to move. The trolley is headed straight for them. You (dear reader) are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different side track. However, you notice that there is one person tied up on the side track. You have two options:

  1. Do nothing, and the trolley kills the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

What do you believe is the most ethical choice?

Note: if you answered 2, think again about what you would do if the one person was a relative or a good friend or maybe a child, and the 5 were complete adult strangers. If you answered 1, ask yourself whether you would still choose this option if the 5 people were your relatives or good friends and the one person a stranger or maybe a sentient space alien. Oh, and does it really matter whether there are 5 people on one of the tracks and 1 on the other?

A little story about an autonomous AI-based trolley;

The (fictive) CEO Elton Must gets the idea to make an autonomous (AI) trolley. Its AI-based management system has been designed by our software engineer S. Love, whose product manager had a brief love affair with Ethics and Moral Philosophy during his university years (i.e., University of Pittsburgh). The product manager asked S. Love to design the autonomous trolley in such a way that the AI’s reward function maximizes on protecting the passengers of the trolley first, with a secondary goal function of protecting human beings in general, irrespective of whether they are passengers or bystanders.

From an ethics perspective, the AI trolley can be regarded as a producer of ethical principles, i.e., the AI trolley, by proxy of the designer & product manager, has the moral obligation to protect its passengers and bystanders from harm. The AI trolley itself is not a consumer of ethical principles, as we really don’t need to feel any moral obligation towards a non-sentient being, assuming that the trolley AI is indeed non-sentient. (Though I have known people who felt more moral obligation towards their car than their loved ones. So this might not be universally true).

On its first drive in the real world, the autonomous trolley carrying a family of 5 slips on an icy road and swerves to the opposite side of the road, where a non-intelligent car with a single person is driving. The AI estimates that the likelihood of the trolley crashing through the mountain-side guardrail and the family of 5 perishing is an almost certainty (99.99999%). The trolley AI can choose to change direction and collide with the approaching car, pushing it over the rail and hurling it 100 meters down the mountain, killing the single passenger as the most likely outcome (99.98%). The family of 5 is saved by this action. The AI’s first reward function is satisfied. Alternatively, the trolley AI can also decide to accelerate, avoid the collision with the approaching car, and drive through the rail, killing all its passengers (99.99999%). The AI fails at its first goal, protecting the family it is carrying, but saves the single person in the approaching vehicle. Its second reward function, related to protecting human beings in general, would be satisfied … to an extent.

It is important to note that the AI takes the role of the Human in deciding the destiny of the family of 5 and the 1 passenger (by “pulling” the virtual lever). Thus, in all effect, it is of course developer S. Love and his product manager who bear the ultimate responsibility for the AI’s decision, even if they will not be present at the event itself.

In the event of the family being killed, the trolley AI developer and product manager would be no more responsible for the accidental death of the 5 passengers than any other normal-car developer under similar circumstances. In the case of the death of the single passenger in the normal car, S. Love and his product manager would, in my opinion, be complicit in murder by AI, although it would save a family of 5 (note: we assume that all the passengers, whether in the trolley or the normal car, have no control over the outcome, similar to the classical trolley setup).

What about our ethically inclined trolley product manager? In one parallel universe, the product manager was particularly fascinated by utilitarianism, thus maximizing the utility of nonmoral good. In his view, it would be morally wrong for the trolley AI not to attempt to save the family of 5 at the expense of the single person in the other car (i.e., saving 5 lives counts for higher utility or nonmoral good than saving 1 life). In another parallel universe, our product manager is bound by a firm belief in deontological principles that judge the morality of a given action based on rules of law. In the deontological ethical framework, saving the family of 5 by deliberately killing the single person in the approaching car would be morally wrong (i.e., it would “smell” a lot like premeditated homicide otherwise … right?). Thus, in this ethical framework the AI would not change the course of the autonomous trolley, the family of 5 would perish, and the passenger of the approaching car lives to see another day.

If your utilitarian mindset still conflicts with the above deontological view of the autonomous trolley problem … well think of this example;

A surgeon has 5 patients critically ill and in urgent need of transplants to survive the next few days. The surgeon just had a visit from a healthy executive (they do exist in this parallel universe) who could be a donor for the 5 patients, although he would die when the body parts needed for the 5 patients were harvested. What should the surgeon do?

  1. Do nothing and let the 5 patients perish.
  2. Sedate the executive and harvest his body parts, killing him in the process.

What do you believe would be the most ethical choice?

“Ethics is “Hard to Code”. The sad truth really is that ethical guidance is far from universal, and different acceptable ethical frameworks frequently lead to moral dilemmas in real-world scenarios.” (Kim, 2018).

THE AUTONOMY OF EVERYTHING – ARCHITECTURAL CONSIDERATIONS OF AN AI ETHICAL FRAMEWORK.

autonomous.jpg

Things, systems, products and services are becoming increasingly autonomous. While this increased degree of Autonomy of Everything (AoE) provides a huge leap in human convenience, it also adds many technical as well as many more societal challenges to the design and operations of such AoEs. The “heart” of the AoE is the embedded artificial intelligent (AI) agent that fuels the cognitive autonomy.

AoEs and their controlling AIs will directly or indirectly be involved in care, law, critical infrastructure operations, companionship, entertainment, sales, marketing, customer care, manufacturing, advisory functions, critical decision making, military applications, sensors, actuators, and so forth. To reap the full benefits of the autonomy of everything, most interactions between an AoE and a Human will become unsupervised, by Humans at least, although supervision could and should be built into the overarching AoE architecture. It becomes imperative to ensure that the behavior of intelligent autonomous agents is safe and within the boundaries of what our society regards as ethically and morally just.

While the whole concept of AoE is pretty cool, conceptually innovative, let’s focus here on the ethical aspects of a technical architecture that could be developed to safeguard consumers of AI … that is, how do we ensure that our customers, using our products with embedded AI, are protected from harm in its widest sense possible? How do we ensure that our AIs are operating within an ethical framework that is consistent with the rules of law, corporate guidelines as well as society’s expectations of ethics and morality?

While there is a lot of good theoretical groundwork done (and published) on the topic of AI ethics, including robot ethics, there is little actual work done on developing ethical system architectures that could act as what Ron Arkin from Georgia Institute of Technology calls an “Ethical Governor” (Arkin, 2010) for an AI system. Vanderelst et al. (Vanderelst & Winfield, 2018), building upon Asimovian ethics, the ideas of Marques et al. (Marques & Holland, 2009) and Arkin et al. (Arkin, Ulam & Wagner, 2012), propose to add an additional ethical controlling layer to the AI architecture. A slightly modified depiction of their ethical AI architecture is shown in Figure 3. The depicted reinforcement loop between Reward (RL) and the Ethical AI Layer is not included in Vanderelst et al.’s original proposal; it simply illustrates the importance of considering both ethical and non-ethical rewards in the reinforced AI learning and execution processes.

[Figure 3 image: ethical AI architecture]

Figure 3 An example of what an AI ethical architecture might look like, based on the ideas of (Vanderelst and Winfield, 2018). The Ethics Evaluator takes output from the AI Control Layer and compares it with the output of an Ethical Simulator, which compares an AI action with a human action and its ethical impact (e.g., was a human hurt, was an action biased, etc.). Compared to the work of Vanderelst et al., which addresses robot-based ethics, I am focusing on the AI aspects (which could be part of a robot system). Furthermore, the reinforcement aspects of the above AI-ethics architecture are my own addition. Reinforcement learning is likely to play a major role in a modern autonomous learning system based on non-ethical and ethical feedback and reward to the AI’s goal function.

In the “Ethical AI Layer”, the “Ethical Simulator” predicts the next state or action of the AI system (i.e., this is also what is understood by forward modelling in control theory). The simulator moreover predicts the consequences of a proposed action. This is what Marques et al. have called the functional imagination of an autonomous system (Marques & Holland, 2009). The prediction of the consequence(s) of a proposed action for the AI (or robot), the human and the environment (e.g., the world) is forwarded to an “Ethics Evaluator” module. The “Ethics Evaluator” condenses the complex consequence simulation into an ethical desirability index. Based on the index value, the AI system will adapt its actions to attempt to remain compliant with whatever ethical rules apply (and are programmed into the system!). The mechanism whereby this happens is the ethical reinforcement loop going back to the “AI Control Layer”. Vanderelst and Winfield developed a working system based on the architecture in Figure 3 and chose Asimov’s three laws of robotics as the system’s ethical framework. A demonstration of an earlier experiment can be found on YouTube (Winfield, 2014). The proof of concept (PoC) of Vanderelst & Winfield (2018) used two programmable humanoid robots: one robot acted as a proxy for a human and the other as an ethical robot with an Asimovian ethical framework (i.e., the “Ethical AI Layer” in Figure 3). In a fairly simple scenario limited to 2 interacting robots and a (very) simple world model, Vanderelst et al. showed that their concept is workable. It would have been very interesting to see how their solution would function in trolley-like dilemmas or in a sensory complex environment with many actors, such as is the case in the real world.
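To make the division of labour in Figure 3 concrete, here is a minimal Python sketch of an ethical governor in the spirit of Arkin’s term and Vanderelst & Winfield’s architecture: a simulator predicts consequences, an evaluator condenses them into a desirability index, and the governor vetoes or re-ranks the control layer’s candidate actions. All class names, weights and the threshold are my own illustrative assumptions, not their published implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Consequence:
    """Predicted outcome of a candidate action for the human, the agent and the environment."""
    harm_to_human: float       # 0.0 (none) .. 1.0 (severe)
    harm_to_agent: float
    environment_damage: float
    task_utility: float        # how well the action serves the original (non-ethical) goal

class EthicalSimulator:
    """Forward model: predicts the consequences of a proposed action (cf. 'functional imagination')."""
    def __init__(self, world_model: Callable[[str], Consequence]):
        self.world_model = world_model

    def simulate(self, action: str) -> Consequence:
        return self.world_model(action)

class EthicsEvaluator:
    """Condenses a simulated consequence into a single ethical desirability index in [0, 1]."""
    def desirability(self, c: Consequence) -> float:
        # Simple weighted aggregation; a real system would encode its ethical framework here.
        penalty = 0.6 * c.harm_to_human + 0.2 * c.harm_to_agent + 0.2 * c.environment_damage
        return max(0.0, 1.0 - penalty)

class EthicalGovernor:
    """Filters and re-ranks the control layer's candidate actions before execution."""
    def __init__(self, simulator: EthicalSimulator, evaluator: EthicsEvaluator, threshold: float = 0.5):
        self.simulator, self.evaluator, self.threshold = simulator, evaluator, threshold

    def select(self, candidate_actions: List[str]) -> str:
        scored = []
        for action in candidate_actions:
            c = self.simulator.simulate(action)
            d = self.evaluator.desirability(c)
            if d >= self.threshold:                  # veto ethically undesirable actions
                scored.append((d, c.task_utility, action))
        if not scored:
            return "safe_fallback"                   # e.g., stop and ask for human supervision
        # Prefer ethically acceptable actions; break ties on task utility.
        return max(scored)[2]
```

The essential design choice is that the governor sits between action proposal and execution, and that a vetoed action can be fed back into the control layer’s (reinforcement) learning loop.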

Figure 4 illustrates the traditional machine learning (ML) or AI creation process, starting with ingestion from various data sources, data preparation tasks (e.g., data selection, cleaning, structuring, etc.) and the AI training process, prior to letting the ML/AI agent loose in the production environment of a given system, product or service. I believe that, as the AI model is being trained, it is essential to include ethical considerations in the training process. Thus, not only should we consider how well a model performs (in the training process) compared to the actual data, but also whether the solution complies with a given ethical framework and imposed ethical rules. Examples could be to test for biased outcomes or simply to close off part of the solution space due to a higher or unacceptable risk of non-compliance with corporate guidelines and accepted moral frameworks. Furthermore, in line with Arkin et al. (Arkin, Ulam & Wagner, 2012) and the work of Vanderelst et al. (Vanderelst & Winfield, 2018), it is clear that we need a mechanism in our system architecture and production environments that checks AI-initiated actions for potential harmfulness to the consumer or for violation of ethical boundary conditions. This functionality could be part of the reinforcement feedback loop that seeks to optimize the system’s reward function for both ethical and non-ethical performance. In Figure 4, I call this the “Ethics Filter (ERL)”, with ERL standing for Ethical Reinforcement Learning.

[Figure 4 image: ethical AI architecture II]

Figure 4 When considering ethical AIs we need to consider the whole process of creating a production-ready autonomous system that would be embedded into physical agents (e.g., robots, IoTs, …) as well as software-based systems (app, management system, AIaaS, software agent, …). It starts with taking in data from (relevant) data sources, preparing a subset of the data for the AI training process, running the training procedure, validating on test data, applying ethical policy algorithms to the training and validation of the model, transferring the production-ready AI model to the live environment (physical or software agent) and improving upon the model by applying reinforcement procedures (based on ethics compliance as well as other, non-ethical, goals). I believe that it is important to apply ethical rules and filters to the training process (e.g., rooting out biases or unethical actions from the AI’s solution / action space) as well as to the live commercial environment.
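As a minimal sketch of what the “Ethics Filter (ERL)” could mean in code, the snippet below combines the usual task reward with penalties for violated ethical boundary conditions and for measured outcome bias, together with a simple training-time bias check. The weights, the demographic-parity measure and all function names are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def demographic_parity_gap(y_pred, group) -> float:
    """Training-time bias check: difference in positive-prediction rates between
    two (hypothetical) groups; 0.0 means perfectly balanced outcomes."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def combined_reward(task_reward: float,
                    ethics_violations: int,
                    bias_gap: float,
                    lambda_ethics: float = 5.0,   # illustrative weights
                    lambda_bias: float = 2.0) -> float:
    """Reward signal for the reinforcement loop: task performance minus penalties
    for violated ethical boundary conditions and for biased outcomes."""
    return task_reward - lambda_ethics * ethics_violations - lambda_bias * bias_gap

# Example: a candidate policy with decent task performance but a noticeable bias gap.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(y_pred, group)                                    # -> 0.5
print(combined_reward(task_reward=10.0, ethics_violations=0, bias_gap=gap))   # -> 9.0
```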

It should be clear that words are cheap. It is easy to talk about embedding ethical checks and balances in AI system architectures. It is, however, much more difficult to actually build these ideas into a real-world AI system and achieve reasonable decision response times (e.g., measured in seconds or lower) while considering all possible (likely) consequences of an AI-proposed action. The computational overhead of clearing or adapting an action could lead to unreasonably long processing times. In robot experiments using Asimovian ethics, Alan Winfield of the Bristol Robotics Laboratory in the UK showed that in more than 40% of the trials the robot’s ethical decision logic spent so long finding a solution that the simulated humans the robot was supposed to save perished (Rutkin, 2014).
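One pragmatic, admittedly crude, way of coping with that overhead is to give the ethics layer a hard time budget and fall back to a conservative default whenever the consequence simulation does not finish in time. The sketch below reuses the hypothetical governor.select from the earlier sketch; it is my own illustration of the latency trade-off, not how Winfield’s robots actually worked.

```python
import concurrent.futures

SAFE_DEFAULT = "stop_and_request_human_supervision"   # hypothetical conservative fallback

def decide_with_budget(governor, candidate_actions, budget_seconds=0.5):
    """Run the ethical governor's action selection under a hard time budget.
    If simulation/evaluation overruns the budget, return the safe default instead."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(governor.select, candidate_actions)
        try:
            return future.result(timeout=budget_seconds)
        except concurrent.futures.TimeoutError:
            # Note: the background computation keeps running; a production system
            # would also need to cancel or bound the simulation itself.
            return SAFE_DEFAULT
```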

MAGENTA-PAINTED DIGITAL ETHICS FOR AIs.

Let us have a look at Deutsche Telekom’s AI Ethics Team’s work on AI ethics, or as we call it, “Digital Ethics – AI Guidelines” (DTAG, 2018).

The starting point of the following (condensed) guidelines is that our company/management is the main producer of ethics and moral action:

  1. We are responsible (for our AIs).
  2. We care (that our AI must obey rules of law & comply with our company values).
  3. We put our customers first (AI must benefit our customers).
  4. We are transparent (about the use of AI).
  5. We are secure (our AI’s actions are auditable & respectful of privacy).
  6. We set the grounds (our AI aims to provide the best possible outcomes & to do no harm to our customers).
  7. We keep control (and can deactivate & stop our AI at any time).
  8. We foster the cooperative model (between Human and AI by maximizing the benefits).
  9. We share and enlighten (we will foster open communication & honest dialogue around the AI topic).

The above rules are important and meaningful from a corporate compliance perspective and, not to forget, for society in general. While the guidelines are aspirational in nature and necessary, they are not sufficient for the design of ethical AI-based systems, products and services. Bridging the gap between AI ethics in wording and concrete, ready-to-code design rules is one of the biggest challenges we face technologically.
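As one small illustration of what “ready-to-code” could mean, guideline 7 (“We keep control”) and the auditability aspect of guideline 5 can be approximated by a kill switch and an audit trail wrapped around every AI-initiated action. The class below is purely my own sketch of that idea; the names and the logging format are assumptions and not DTAG’s actual implementation.

```python
import json, time

class PolicyGuard:
    """Wraps an AI agent with a kill switch (guideline 7) and an audit trail (guideline 5)."""
    def __init__(self, agent, audit_path="audit.log"):
        self.agent = agent
        self.audit_path = audit_path
        self.enabled = True

    def deactivate(self):
        """'We keep control (and can deactivate & stop our AI at any time).'"""
        self.enabled = False

    def act(self, observation):
        if not self.enabled:
            raise RuntimeError("AI has been deactivated by its human operator")
        action = self.agent.act(observation)
        # Every AI-initiated decision is logged so it remains auditable after the fact.
        with open(self.audit_path, "a") as f:
            f.write(json.dumps({"ts": time.time(),
                                "observation": repr(observation),
                                "action": repr(action)}) + "\n")
        return action
```

Most of the remaining guidelines (fairness, transparency, benefit to the customer) are much harder to reduce to a wrapper like this, which is exactly the bridging challenge mentioned above.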

Our Digital Ethics fulfills what Bostrom and Yudkowsky, in “The Cambridge handbook of artificial intelligence” (Frankish and Ramsey, 2015), define as minimum requirements for AI-based actions augmenting or replacing human societal functions (e.g., decisions, work tasks, …). AI actions must at least be transparent, explainable, predictable, robust against manipulation, auditable, and subject to clear human accountability.

The next level of detail of DTAG’s “Digital Ethics” guidelines shows that the ethical framework within which we strive to design AIs is top-down in nature and a combination of mainly deontological (i.e., rule-based moral framework) and utilitarian (i.e., striving for the best possible outcome) principles. Much more work will be needed to ensure that no conflicts occur between the deontological rules in our guidelines and their utilitarian ambitions.

The bigger challenges will be to translate our aspirational guidelines into something meaningful in our AI-based products, services and critical communications infrastructure (e.g., communications networks).

“Expressing a desire for AI ethical compliance is the easy part. The really hard part is to implement such aspirations into actual AI systems and then get them to work decently” (Kim, 2018).

THE END IS JUST THE BEGINNING.

It should be clear that we are far away (maybe even very far) from really understanding how we can best build ethical checks and balances into our increasingly autonomous AI-based products and services landscape. And not to forget how ethical autonomous AIs fit into our society’s critical infrastructures, e.g., telco, power, financial networks and so forth.

This challenge will of course not stop humanity from becoming increasingly more dependent on AI-driven autonomous solutions. After all, AI-based technologies promise to leapfrog consumer convenience and economic advantages to corporations, public institutions and society in general.

From my AI perception studies (Larsen, 2018 I & II), corporate decision makers, our customers and consumers don’t trust AI-based actions (at least when they are aware of them). Most of us would prefer an inconsistent, error-prone and unpredictable emotional manager full of himself to an un-emotional, consistent and predictable AI with a low error rate. We expect an AI to be more than perfect. This AI allergy is often underestimated in corporate policies and strategies.

In a recent survey (Larsen, 2018 II), I asked respondents to consider the two following questions:

  1. “Do you trust that companies using AI in their products and services have your best interest in mind?”
  2. “How would you describe your level of confidence that political institutions adequately consider the medium to long-term societal impact of AI?”

9% of the survey respondents believed that companies using AI in their products and services have their customers’ best interest in mind.

80% of the survey respondents had low to no confidence that political institutions adequately consider the medium to long-term societal impact of AI.

I have little doubt that as AI technology evolves and finds its use increasingly in products, services and critical infrastructure that we humans are exposed to daily, there will be an increasing demand for transparency of the inherent risks to individuals, groups and society in general.

That consumers do not trust companies to have their best interest in mind is, in today’s environment of “fake news”, “Brexit”, “Trumpism”, “influencer campaigns” (e.g., Cambridge Analytica & Facebook) and so forth, not surprising. “Weaponized” AI will be developed to further strengthen the relatively simple approaches of Cambridge Analytica “cousins”, Facebook and the Googles of this world. Why is that? I believe that the financial gains and the power to be achieved by weaponized AI approaches are far too tempting to believe that their use will not increase going into the future. The trust challenge will remain, if not increase. The Genie is out of the bottle.

AI will continue to take over human tasks. This trend will accelerate. AI will increasingly be involved in critical decisions that impact individuals’ lives and livelihoods. AI will become increasingly better at mimicking humans (Vincent, 2018). Affective AIs have the capacity even today to express emotions and sentiment without being sentient (Lomas, 2018). AI will become increasingly autonomous and possibly even have the capability to self-improve (without evolving to sentience) (Gent, 2017). Thus, the knowledge distance between the original developer and the evolved AI could become very large, depending on whether the evolution is bounded (likely, in my opinion) or unbounded (unlikely, in my opinion).

It will be interesting to follow how well humans in general will adapt to humanoid AIs, i.e., AIs mimicking human behavior. From the work of Mori et al. (Mori, MacDorman, & Kageki, 2012) and many others (Mathur & Reichling, 2016), it has been found that we humans are very good at picking up on cues that appear false or off compared to our baseline reference of human behavior. Mori et al. coined the term for this feeling of “offness”: the uncanny valley.

Without AI ethics and clear ethical policies and compliance, I would be somewhat nervous about an AI future. I think this is a much bigger challenge than the fundamental technology and science aspects of AI improvements and evolution. Society needs our political institutions to be much more engaged in the questions of the Good, the Bad and the Truly Ugly use cases of AI … I don’t think one needs to fear super-intelligent, God-like AI beings (for quite some time and then some) … One needs to realize that narrowly specialized AIs, individually or as collaborating collectives, can do a lot of harm, unintended as well as intended (Alang, 2017; Angwin, Larson & Mattu, 2018; O’Neil, 2017; Wachter-Boettcher, 2018).

“Most of us prefer an inconsistent, error prone and unpredictable emotional manager full of himself to that of an un-emotional, consistent and predictable AI with a low error rate.” (Kim, 2018).

ACKNOWLEDGEMENT.

I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of creating this Blog. Without her support, I really would not be able to do this or it would take a lot longer past my expiration date to finish.

WORTH READING.

DARPA News and Events (2018). The Radio Frequency Spectrum + Machine Learning = A New Wave in Radio Technology. [online] Darpa.mil. Available at: https://www.darpa.mil/news-events/2017-08-11a.

Agrafioti, F. (2018). Ensuring that artificial intelligence is ethical? That’s everyone’s responsibility – Macleans.ca. [online] Macleans.ca. Available at: https://www.macleans.ca/opinion/ensuring-that-artificial-intelligence-is-ethical-thats-everyones-responsibility/

Alang, N. (2017). Turns Out Algorithms Are Racist. [online] The New Republic. Available at: https://newrepublic.com/article/144644/turns-algorithms-racist.

Anderson, M. and Anderson, S. (2007). Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine, 28(4), 15-26.

Anderson, M. and Anderson, S. (2010). Robot Be Good. Scientific American, 303(4), pp.72-77.

Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). Machine Bias — ProPublica. [online] ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Arkin, R. (2008). Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture. Technical Report GIT GVU 07 11 (Georgia Institute of Technology).

Arkin, R. (2010). Governing lethal behavior in autonomous robots. Boca Raton, Fla.: Chapman & Hall/CRC Press.

Arkin, R., Ulam, P. and Wagner, A. (2012). Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception. Proceedings of the IEEE, 100(3), pp.571-589.

Asimov, I. (1984). Foundation; I, Robot. London: Octopus Books. First published 1950.

Asimov, I. (2013). Foundation and earth. New York: Spectra. First published 1986.

Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88(1), pp.1-45.

Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Cham: Springer International Publishing.

Boehm, C. (2012). Moral Origins. Basic Books.

Bostrom, N. (2016). Superintelligence. Oxford University Press.

Buolamwini, J. and Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, pp.1-15.

Cathcart, T. (2013). The trolley problem, or, Would you throw the fat man off the bridge?. Workman Publishing, New York.

Chakravorti, B. and Chaturvedi, R. (2017). Digital Planet 2017. [online] Sites.tufts.edu. Available at: https://sites.tufts.edu/digitalplanet/files/2017/05/Digital_Planet_2017_FINAL.pdf.

Dawkins, R. (1989). The Selfish Gene. 4th ed. Oxford University Press.

Descartes, R., Haldane, E. and Lindsay, A. (2017). Discourse on Method and Meditations of First Philosophy (Translated by Elizabeth S. Haldane with an Introduction by A.D. Lindsay). Stilwell: Neeland Media LLC.

Deutsche Telekom AG. (2018). Digital Ethics – Deutsche Telekom’s AI Guideline. [online] Telekom.com. Available at: https://www.telekom.com/en/company/digital-responsibility/digital-ethics-deutsche-telekoms-ai-guideline.

EADv2 – Ethics in Action. (2018). Ethically Aligned Design, Version 2 (EADv2) | IEEE Standards Association. [online] Available at: https://ethicsinaction.ieee.org/.

Fischer, J., Kane, R., Pereboom, D. and Vargas, M. (2010). Four views on free will. Malden [et al.]: Blackwell Publishing.

Frankish, K. and Ramsey, W. (2015). The Cambridge handbook of artificial intelligence. Cambridge, UK: Cambridge University Press.

Gent, E. (2017). Google’s AI-Building AI Is a Step Toward Self-Improving AI. [online] Singularity Hub. Available at: https://singularityhub.com/2017/05/31/googles-ai-building-ai-is-a-step-toward-self-improving-ai/#sm.0001yaqn0ub06ejzq7b2odvsw2kj1

Gottlieb, A. (2016). The dream of enlightenment. Allen Lane.

Hardy, S. and Carlo, G. (2011). Moral Identity: What Is It, How Does It Develop, and Is It Linked to Moral Action?. Child Development Perspectives, 5(3), pp.212-218.

Hart, D. and Fegley, S. (1995). Prosocial Behavior and Caring in Adolescence: Relations to Self-Understanding and Social Judgment. Child Development, 66(5), p.1346.

Hume, D., (1738, 2015). A treatise of human nature. Digireads.com Publishing.

Kant, I. (1788, 2012). The critique of practical reason. [United States]: Start Publishing. Immanuel Kant originally published his “Critik der praktischen Vernunft” in 1788. It was the second book in Kant’s series of three critiques.

Kwatz, P. (2017). Conscious robots. Peacock’s Tail Publishing.

Kuipers, B. (2016). Human-Like Morality and Ethics for Robots. The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence AI, Ethics, and Society:, Technical Report WS-16-02.

Larsen, K. (2018 I). On the Acceptance of Artificial Intelligence in Corporate Decision Making – A Survey. [online] AI Strategy & Policy. Available at: https://aistrategyblog.com/2017/11/05/on-the-acceptance-of-artificial-intelligence-in-corporate-decision-making-a-survey/.

Larsen, K. (2018 II). Smart life 3.0 – SMART 2018 Conference on “Digital Frontiers and Human Consequences” (Budapest, 4 April 2018). [online] Slideshare.net. Available at: https://www.slideshare.net/KimKyllesbechLarsen/smart-life-30.

Lin, P., Abney, K. and Jenkins, R. (2017). Robot ethics 2.0. New York: Oxford University Press.

Lomas, N. (2018). Duplex shows Google failing at ethical and creative AI design. [online] TechCrunch. Available at: https://techcrunch.com/2018/05/10/duplex-shows-google-failing-at-ethical-and-creative-ai-design/.

Lumbreras, S. (2017). The Limits of Machine Ethics. Religions, 8(5), p.100.

Marques, H. and Holland, O. (2009). Architectures for functional imagination. Neurocomputing, 72(4-6), pp.743-759.

Mathur, M. and Reichling, D. (2016). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition, 146, pp.22-32.

Moor, J. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), pp.18-21.

Moor, J. (2018). Four Kinds of Ethical Robots | Issue 72 | Philosophy Now. [online] Philosophynow.org. Available at: https://philosophynow.org/issues/72/Four_Kinds_of_Ethical_Robots.

Mori, M., MacDorman, K. and Kageki, N. (2012). The Uncanny Valley [From the Field]. IEEE Robotics & Automation Magazine, 19(2), pp.98-100.

Murphy, R. and Woods, D. (2009). Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24(4), pp.14-20.

O’Neil, C. (2017). Weapons of Math Destruction. Penguin Books.

Osaba, O. and Welser IV, W. (2017). An intelligence in our image: The risks of bias and errors in artificial intelligence. 1st ed. RAND Corporation.

Outram, D. (2012). The Enlightenment. Cambridge: Cambridge University Press.

Rutkin, A. (2014). The robot’s dilemma. New Scientist, 223(2986), p.22.

Sandel, M. (2018). Justice: What’s The Right Thing To Do? Episode 01 “THE MORAL SIDE OF MURDER”. [online] YouTube. Available at: https://www.youtube.com/watch?v=kBdfcR-8hEY.

Sapolsky, R. (2017). Behave: The Biology of Humans at Our Best and Worst. 1st ed. Penguin Press. Note: Chapter 13 “Morality and Doing the Right Thing, Once You’ve Figured Out What That is” is of particular relevance here (although the whole book is extremely read worthy).

Shachtman, N. (2018). New Armed Robot Groomed for War. [online] WIRED. Available at: https://www.wired.com/2007/10/tt-tt/.

Shafer-Landau, R. (2013). Ethical theory. Chichester, West Sussex: Wiley-Blackwell.

Simonite, T. (2018). Google’s AI software is learning to make AI software. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/.

Singer, P. (2018). Isaac Asimov’s Laws of Robotics Are Wrong. [online] Brookings. Available at: https://www.brookings.edu/opinions/isaac-asimovs-laws-of-robotics-are-wrong/.

Smith, A. and Raphael, D. (1991). The wealth of nations. New York: Knopf.

Torrance, S. (2018). Will Robots Need Their Own Ethics? | Issue 72 | Philosophy Now. [online] Philosophynow.org. Available at: https://philosophynow.org/issues/72/Will_Robots_Need_Their_Own_Ethics.

Torresen, J. (2018). A Review of Future and Ethical Perspectives of Robotics and AI. Frontiers in Robotics and AI, 4.

transhumanity.net. (2015). Biological Machines. [online] Available at: http://transhumanity.net/biological-machines/

Vanderelst, D. and Winfield, A. (2018). An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 48, pp.56-66.

Vincent, J. (2018). Google’s AI sounds like a human on the phone — should we be worried?. [online] The Verge. Available at: https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistant-duplex-ethical-social-implications

Wachter-Boettcher, S. (2018). Technically Wrong. W.W. Norton.

Waldrop, M. (2015). Autonomous vehicles: No drivers required. Nature, 518(7537), pp.20-23.

Wallach, W., Allen, C. and Smit, I. (2007). Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI & SOCIETY, 22(4), pp.565-582.

Wallach, W. and Allen, C. (2009). Moral machines. New York, N.Y.: Oxford University Press.

Winfield, A. (2018). Ethical robots save humans. [online] YouTube. Available at: https://www.youtube.com/watch?v=jCZDyqcxwlo.

Yudkowsky, E. (2015). Rationality From AI to Zombies. 1st ed. Berkeley: Machine Intelligence Research Institute.

Do we Humans trust AIs?

AI … IT IS HERE, IT IS THERE, IT IS EVERYWHERE.

I was late to a dinner appointment, arranged by x.ai, at Caviar and Bull (booked by my human friend David). Siri had already indicated that I would be late (yes, it had also warned me repeatedly that it was time to leave the office for me to be on time) and Waze (i.e., the world’s largest community-based traffic & navigation app) was trying to guide me through a busy Budapest city center. Stuck in traffic … sighhh … but then the traffic moves … I step on the accelerator … and … my car brakes (with a vengeance) at the same moment my brain realizes that the car in front of me had not moved and I was about to hit it. My car had just saved me from a crash. And from being even later for my appointment, for what would turn out to be an absolutely excellent dinner with great Hungarian red and white wines recommended by Vivino (i.e., based on my wine history & preferences, my friends’ preferences and of course the menu). In the meantime, my scheduler had notified my friend that I would be a bit late due to traffic (rather than the real reason of me being late leaving the office;-).

Most of the above is powered by AI, or more accurately by machine learning applications, that is, by underlying machine learning algorithms and mathematical procedures applied to available personal, social-network and other data.

In the cases above I am implicitly trusting that whatever automation has “sneaked” into my daily life will make it more convenient and possibly even save others as well as myself from harm (when my own brain & physiology gets too distracted). Do I really appreciate that most of this convenience is based on algorithms monitoring my life (a narrow subset of my life, that is) and continuously predicting what my next move might be in order to support me? No … increasingly I take the offered algorithmic convenience for granted (and the consequences of that are another interesting discussion for another time).

In everyday life, we frequently rely on AI-driven and augmented decisions … mathematical algorithms trained on our and others’ digital footprint and behaviors … to make our lives much more convenient and possibly much safer.

The interesting question is whether people in general are consciously aware of the degree of machine intelligence or algorithmic decision-making going on all around them? Is it implicit trust or simply ignorance at play?

Do we trust AI? Is AI trustworthy? Do we trust AI more than our colleagues & peers? And so forth … and what does trust really mean in the context of AI and algorithmic convenience?

Some of these questions relating to corporate decision-making have been described in detail, in the context of the corporate decision makers’ sentiment towards AI, in my previous blog “On the acceptance of artificial intelligence in corporate decision making – a survey”.


TRUST – HUMAN VS AI.

Imagine that you have a critical decision to make at your work. Your team (i.e., your corporate tribe) of colleague experts recommends, based on their human experience, Option C as the best path forward.

Would you trust your colleagues’ judgment and recommendation?

Yes! There is a pretty high likelihood that you actually would.

More than 50% of corporate decision-makers would frequently to always trust the recommendation (or decision) based on human expert judgment. More than 36% of corporate decision-makers would trust such a recommendation in about half the time (i.e., what I call the flip coin decision-making).

Now imagine you have a corporate AI available to support your decision-making. It also provides the recommendation for Option C. Needless maybe to say, but nevertheless let’s just say it: the AI has of course been trained on all available & relevant data and tested for accuracy (i.e., in a lot more rigorous way than we test our colleagues, experts, and superiors).

Apart from Humans (Us) versus AI (Them), the recommendation and the decision to be made are of course the same.

Would you trust the AI’s recommendation? Would you trust it as much as you do your team of colleagues and maybe even your superior?

Less than 13% of corporate decision-makers would frequently to always trust a recommendation (or decision) based on AI judgment. Ca. 25% of the decision makers would trust an AI-based decision about half the time.

Around 20% of decision-makers would never trust an AI-based decision. Less than 45% would do so only infrequently.

[Chart: Human vs AI – trust in decisions]

Based on a total of 426 surveyed respondents, of which 214 were offered Question A and 212 were offered Question B. Respondents are significantly more trusting towards decisions or recommendations made by a fellow human expert or superior than if a decision or recommendation were made by an AI. No qualifications were provided for success or failure rate.

It is quite clear that we regard a decision or recommendation based on AI with substantially less trust than one based on a fellow human.

Humans don’t trust decisions made by AIs. At least when it is pointed out that a decision is AI-based. Surprisingly, given much evidence to the contrary, humans trust humans, at least the ones in our own tribe (e.g., colleagues, fellow experts, superiors, etc..).

Dietvorst and coworkers refer to this human aversion towards non-human or algorithm-based recommendations or forecasts as algorithm aversion. It refers to situations where human decision-makers or forecasters deliberately avoid statistical algorithms in their decision or forecasting process.

A more “modern” word for this might be AI aversion rather than algorithm aversion. However, it describes very much the same phenomenon.


Okay, okay … But the above question of trust did not qualify the decision-making track record of the human versus the AI. Thus respondents could have very different ideas or expectations about the success or error rates of humans and AIs respectively.

What if the fellow human expert (or superior) as well as the AI were known to have a success rate better than 70%? Thus, more than 7 out of 10 decisions are in retrospect deemed successful (ignoring whatever that might really mean). By the same token, it also means that the error rate is 30% or less … or that 3 (or fewer) out of 10 decisions are deemed unsuccessful.

[Chart: Human vs AI – trust in decisions with a 70% success rate]

Based on a total of 426 surveyed respondents of which 206 were offered Question A and 220 were offered Question B. For both Human Experts (or Superior) and AI, a decision-making success rate of 70% (i.e., 7 out of 10) should be assumed. Despite the identical success rate, respondents remain significantly more trusting towards decisions made by a fellow human expert (or superior) than if a decision would be made by an AI.

With a like-for-like decision-making success rate, human experts or superiors are hugely preferred over a decision-making AI.

A bit more than 50% of the corporate decision makers would frequently or always trust a fellow human expert recommendation or decision. Less than 20% would frequently or always trust a decision made by an AI with the same success rate as the human expert.

Thus, Humans trust Humans and not so much AIs. Even if the specified decision-making success rate is identical. It should be noted that trust in a human decision or recommendation relates to fellow human experts or superiors … thus trust towards colleagues or individuals that are part of the same corporate structure.

The result of trust in the human expert or superior with a 70% success rate is quite similar to the previous result without a specified success rate.

[Chart: Human vs AI – trust in human decisions]

Based on a total of 426 surveyed respondents of which 214 were offered Question A without success rate qualification and 223 were offered Question A with a 70% success rate stipulated. As observed in this chart, and confirmed by the statistical analysis, there is no significant difference in the trust in a decision made by a human expert (or superior) whether a success rate of 70% has been stipulated or no qualification had been given.

This might indicate that our human default expectations towards a human expert or superior’s recommendation or decision are around the 70% success rate.

However, for the AI-based recommendation or decision, respondents do provide a statistically different trust picture depending on whether or not a success rate of 70% has been specified. The mean sentiment increases by almost 15% when it is specified that the AI has a 70% success rate. This is also very visible in the respondent data shown in the chart below:

[Chart: Human vs AI – trust in AI decisions]

Based on a total of 426 surveyed respondents, of which 212 were offered Question B without success rate qualification and 203 were offered Question B with a 70% success rate assumed. As observed in this chart, and confirmed by the statistical analysis, there is a substantial increase in trust in the AI-based decision where a success rate of 70% had been stipulated, compared to the question where no success rate was provided.

The share of respondents that would never or only infrequently trust an AI-based decision is almost 20% lower when a 70% success rate is stipulated.

This might indicate that the human default perception of the quality of AI-based decisions or recommendations is far below the 70% success rate.

So do we as humans have higher expectations towards decisions, recommendations, or forecasts based on AI than the human expert equivalent?

[Chart: Human vs AI – expectations towards decision quality]

Based on a total of 426 surveyed respondents, of which 206 were offered Question A and 220 were offered Question B. No statistically significant difference in the expectations towards the quality of decisions was found between human expert (or superior) and AI-based ones.

This survey indicates that there is no apparent statistically significant difference between the quality we expect from a human expert and the quality we expect from an AI. The average expectation is that fewer than 2 out of 10 decisions could turn out wrong (or be unsuccessful). Thus, a failure rate of 20% or less, equivalent to a success rate of 80% or better.
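The survey write-up does not state which statistical test sits behind phrases like “confirmed by the statistical analysis”. As a purely illustrative sketch, comparing two independently sampled groups of Likert-type responses (such as the two question variants above) is often done with a Mann-Whitney U test; the response arrays below are made-up placeholders, not the actual survey data.

```python
# Illustrative significance check for two independently sampled groups of
# Likert-type responses (1 = never trust ... 5 = always trust, hypothetical coding).
# The arrays are randomly generated placeholders, not the survey responses.
import numpy as np
from scipy import stats

group_a = np.random.default_rng(1).integers(1, 6, size=206)   # e.g., human-expert question
group_b = np.random.default_rng(2).integers(1, 6, size=220)   # e.g., AI question

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
# A p-value above the chosen significance level would be consistent with
# "no statistically significant difference" between the two groups.
```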

It is well known that whether a question is posed or framed in a positive or a negative way can greatly affect how people decide, even if the positive and negative formulations are mathematically identical.

An example: you are with your doctor and he recommends an operation for your very poor hearing. Your doctor has two options when he informs you of the operation’s odds of success (of course, he might also choose not to provide that information altogether if not asked;-). Frame A: there is a 90% chance of success and you will hear normally again on the operated ear. Frame B: there is a 10% chance of failure and you will become completely deaf on the operated ear. Note that a success rate of 90% also implies an error rate of 10%; one may argue that the two are mathematically identical. In general, many more would choose to have the operation when presented with Frame A, i.e., a 90% success rate, than when confronted with Frame B, i.e., a 10% failure rate. Tversky & Kahneman identified this as the framing effect, where people react differently to a given choice depending on how such a choice is presented (i.e., success vs failure). As Kahneman & Tversky showed, a loss is felt to be more significant than the equivalent gain.
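Tversky & Kahneman’s prospect-theory value function formalizes that asymmetry. The snippet below is a minimal sketch using their commonly cited 1992 parameter estimates (curvature alpha ≈ 0.88 and loss-aversion coefficient lambda ≈ 2.25); it illustrates why a loss frame weighs heavier than a mathematically equivalent gain frame, and is not a model of the survey itself.

```python
def prospect_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Tversky & Kahneman (1992) value function: concave for gains, convex for losses,
    with losses scaled by the loss-aversion coefficient lam (losses loom larger than gains)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# The same objective magnitude feels very different depending on the frame:
print(prospect_value(+10))   # subjective value of a gain of 10  ->  about +7.6
print(prospect_value(-10))   # subjective value of a loss of 10  ->  about -17.1
```

With these parameters a loss of a given size “hurts” roughly 2.25 times as much as an equal-sized gain pleases, which is one plausible reason an error-rate frame depresses trust more than the equivalent success-rate frame raises it.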

When it comes to an AI-driven decision, would you trust it differently depending on whether I present you with the AI’s success rate or with its error rate? (The obvious answer is of course yes … but to what degree?)

[Chart: AI trust – success vs failure rate framing]

Based on a total of 426 surveyed respondents, of which 233 were offered Question A (i.e., framed as decision success rate) and 193 were offered Question B (i.e., framed as decision error rate). As expected from framing bias and prospect theory, more respondents would trust the AI when presented with the AI’s success rate (i.e., better than 95%) than with its error rate (i.e., less than 5 out of 100).

When soliciting support for AI augmentation a positive frame of its performance is (unsurprisingly) much better than the mathematically equivalent negative frame, i.e., success rate versus failure or error rate.

Human cognitive processes and biases treat losses or failures very differently from successes or gains. Even if the two frames are identical in terms of real-world impact. More on this later when we get into some cool studies on our human brain chemistry, human behavior, and Tversky & Kahneman’s wonderful prospect theory (from before we realized that oxytocin and other neuropeptides would be really cool).

HUMANS TRUST HUMANS.

Trust is the assured reliance on the character, ability, or truth of someone or something. Trust is something one gives, as opposed to trustworthiness, which is someone or something else being worthy of an individual’s or group’s trust.

The degree to which people trust each other is highly culturally determined, with various degrees of penalties associated with breaking trust. Trust is also neurobiologically determined and of course context dependent.

As mentioned by Paul J. Zak in his Harvard Business Review article “The Neuroscience of Trust”: “Compared to people in low-trust companies, people in high-trust companies report: 74% less stress, 107% more energy at work, 50% higher productivity, 13% fewer sick days, 76% more engagement, 29% more satisfaction with their lives, 40% less burnout” … Trust is clearly important for corporate growth and for individuals’ wellbeing in a corporate setting (and I suspect anywhere, really). Much of this is described mathematically (and, I would argue, beautifully) in Paul Zak’s seminal paper “Trust & Growth”, relating differences in the degree of trust to different social, legal, and economic environments.

People trust people. It is also quite clear from numerous studies that people don’t trust that many non-people (e.g., things or non-biological agents such as mathematical algorithms or AI-based agents) … okay, okay, you might say … but why?

While 42 is in general a good answer … here the answer is slightly simpler … Oxytocin (not to be confused with an oxymoron). Okay, okay … what is that oxytocin and what does it have to do with trusting or not trusting AI (that is the answer)? Well … if you have read Robert Sapolsky’s brilliant account of our behavior at our best and worst (i.e., “Behave: The Biology of Humans at Our Best and Worst” by Robert Sapolsky) you might know enough (and even more about those nasty glucocorticoids; and if you haven’t had enough of those, please do read “Why Zebras Don’t Get Ulcers”, also by Sapolsky, after which you might even be able to spell it).

Oxytocin is our friend when it comes to warm and cozy feelings towards each other (apart from being fairly essential for inducing labor and lactation). Particularly when “each other” is part of our team, our partner, our kids, and even our dog. It is a hormone of the peptide type (i.e., it is relatively small and consists of amino acids) and is used by neurons to communicate with each other. It pretty much influences how signals are processed by our brain and how our body reacts to external stimuli.

The higher the level of oxytocin, the more you are primed to trust your team, your stockbroker, your partner (and your dog), and the closer you feel to your wife and your newly born babies. The more you hug, kiss, shake hands, have sex, and walk your dog, the more oxytocin will be rushing through your body and the more trusting you will become towards your social circles. “Usness” is great for oxytocin release (as well as for a couple of other neuropeptides with a knack for making us feel better with one another … within the confines of “Usness” … oh yeah, and we have some serious gender biases there as well). Particularly when “Them” are around. Social interactions are important for the oxytocin kick.

The extra bonus effect of increased oxytocin is that it appears to dampen the reactivity of the brain’s “freaking out” center (i.e., the amygdala) to possible threats (real or otherwise). At least within the context of “Usness” and non-existential threats.

HUMANS DON’T TRUST AI (as much as Humans).

Oxytocin (i.e., changes in its level) appears mainly to be stimulated or triggered by interaction with other humans (& dogs). When the human (or dog) interaction is taken out of the interaction “game”, for example replaced by an electronic or mechanical interface (e.g., computer interface, bot interaction, machine, etc.), trust is not enhanced by oxytocin levels. This has been well summarized by Mauricio Delgado in his Scientific American piece “To trust or not to trust: ask oxytocin”, as well as in the groundbreaking work of Paul J. Zak and co-workers (see “Oxytocin increases trust in Humans”, Nature, 2005) and the likewise impressive work of Thomas Baumgartner et al. (“Oxytocin shapes the neural circuitry of trust and trust adaptations in humans”, Neuron, 2008).

Thomas Baumgartner and coworkers (in a setup similar to other works in this field) administered either a placebo or an oxytocin intranasal spray to test subjects prior to the experimental games. Two types of games were played: (a) a so-called trust game with human partner interactions (i.e., a human-human game), where the test subject invests an amount of money with a 3rd party (e.g., a stockbroker) that will invest the money and return the reward, and (b) a so-called risk game, the outcome of which is determined by a machine, a random generator (i.e., a human-machine game). The games are played over 12 rounds with result feedback to the test subject, allowing for a change in trust in the subsequent round (i.e., the player can reduce the invested money (less trust), increase it (higher trust) or keep it constant (same trust level)). Baumgartner et al. found that test subjects playing the trust game (human-human game) who received the oxytocin “sniff” remained trusting throughout the rounds of the game, even when they had no rational (economic) reason to remain trusting. The oxytocin subjects’ trust behavior was found to be substantially higher compared to test subjects playing the same game having received the placebo. In the risk game (human-machine) no substantial differences were observed between oxytocin and placebo subjects, who in both cases kept their trust level almost constant. While the experiments conducted are fascinating and possibly elucidating towards the effects of oxytocin and social interactions, I cannot help being somewhat uncertain whether the framing of Trust vs Risk and the subtle game structure differences (i.e., trusting a human expert who supposedly knows what he is doing vs a lottery-like game of chance) could skew the results. Thus, rather than telling us whether humans trust humans more than machines or algorithms (particularly the random-generator kind, for which trust is somewhat of an oxymoron), it tells us more about how elevated levels of oxytocin make a human less sensitive to mistrust or angst towards a fellow human being (who might take advantage of that trust).
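To make the round structure concrete, here is a schematic simulation of such a 12-round game. The payoff rule, the trustworthiness probability and the simple trust-update heuristic are all assumptions of mine for illustration; they are not Baumgartner et al.’s actual experimental parameters.

```python
import random

def play_rounds(n_rounds=12, start_investment=5.0, trustworthy_prob=0.6, seed=0):
    """Schematic 12-round trust/risk game: the subject invests, the trustee (human broker
    or random machine) either returns a profit or keeps the money, and the subject adjusts
    the next investment up (more trust) or down (less trust) based on the outcome."""
    rng = random.Random(seed)
    investment = start_investment
    history = []
    for _ in range(n_rounds):
        returned = investment * 1.5 if rng.random() < trustworthy_prob else 0.0
        history.append((investment, returned))
        # naive trust update: invest more after a profitable round, less after a bad one
        investment = min(10.0, investment + 1.0) if returned > investment else max(0.0, investment - 1.0)
    return history

for invested, returned in play_rounds():
    print(f"invested {invested:.1f} -> returned {returned:.1f}")
```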

It would have been a much more interesting experiment (imo) if both had been called a Trust Game (or a Risk Game, for that matter, as that is obviously what it is): one game with a human third party investing the test subject’s transfer, thus similar to Baumgartner’s trust game setup, and another game where the third party is an algorithmic “stockbroker” with at least the same success rate as the first game’s 3rd-party human. This would have avoided the framing bias (trust vs risk) and the structural differences in the game.

Unfortunately, we are not that much closer to a great explanation for why humans appear to trust humans more than algorithms. Still pretty much guessing.

And no, I did not hand out cute oxytocin (and of course placebo) nasal sprays to the surveyed respondents. Neither did I check whether respondents had been doing a lot of hugging or other close-quarter social activities which would have boosted their oxytocin levels. This will be for a follow-up study.

[Image: intranasal oxytocin spray]

In Baumgartner’s experiment, subjects got 3 puffs of oxytocin or placebo per nostril, each puff of 4 IU (i.e., 24 IU in total). Note: the bottle above is just a random example of a nostril oxytocin spray.

A guess towards a possible explanation for humans being statistically significantly less trusting towards algorithms (algorithm aversion), AI (AI aversion), and autonomous electronic-mechanical interfaces in general might be that our brains have not been primed to regard such agents as part of “Usness”. In other words, there is a very big difference between trusting colleagues or peers (even if some are superiors) who are part of your corporate “tribe” (e.g., team, unit, group, etc.) and trusting an alien entity, which is what an AI or an algorithm could easily be construed to be.

So the reason why humans trust humans more than algorithms and AI remains somewhat elusive, although the signals are possibly there.

Given the many everyday machine learning or algorithmic applications already leapfrogging our level of convenience today … maybe part of the “secret” is to make AI-based services and augmentation part of the everyday.

The human lack of trust in AI, or the prevalence of algorithm aversion in general as described in several articles by Berkeley Dietvorst, is nevertheless, in a corporate sense and setting, a very big challenge for any idea of a mathematical corporation where mathematical algorithms permeate all data-driven decision processes.

GOOD & RELATED READS.

ACKNOWLEDGEMENT.

I greatly acknowledge my wife Eva Varadi for her support, patience, and understanding during the creative process of creating this Blog. Without her support, I really would not be able to do this or it would take long past my expiration date to finish.

SURVEYS.

Unless otherwise specified, the results presented here come from a recent surveymonkey.com survey that was conducted between November 11th and November 21st, 2017. The survey took on average 2 minutes and 35 seconds to complete.

The data contains 2 main survey collector groups;

  1. Survey Monkey paid collector group, run between November 11th and 14th, 2017, with 352 completed responses from the USA. Approximately 45% were female and 55% male in the surveyed sample, with an age distribution between 18 and 75 years of age. The average age is 48.8. The specified minimum income level was set to $75 thousand, or about 27% higher than the median US real household income level in 2016. The average household income level in this survey is approx. $125 thousand annually. Ca. 90%, or 316 out of the 352 respondents, had heard of Artificial Intelligence (AI) previously. For AI-relevant questions, only these 316 were used; surveyed respondents that had not previously heard of AI (36 out of 352) were not considered. More than 70% of the respondents had a 4-year college or graduate-level degree. About 70% of the respondents were married and 28% had children under the age of 18. Moreover, ca. 14% currently had no employment.
  2. Social Media (e.g., Facebook, LinkedIn, Twitter, …) collector group, run between November 11th and 21st, 2017, which completed a total of 115 responses, primarily from the telecom & media industry and mainly from Europe. Gender distribution comprised around 38% female and 62% male. The average age for this sample is 41.2. No income data is available for this group. About 96% (110) had heard of Artificial Intelligence. For AI-related questions, only respondents that confirmed they had heard about AI have been considered. Ca. 77% of the respondents have a 4-year college or graduate-level degree. 55% of the surveyed sample are married and a bit more than 50% of this surveyed group have children under 18. Less than 2% of the respondents were currently not employed.

It should be emphasized that the SurveyMonkey collection was paid, at 2.35 euros per response, totaling 1,045 euros for 350 responses. Each respondent completed 18 questions. Age balancing was chosen to be basic and the gender balancing to be census-based.