AI … IT IS HERE, IT IS THERE, IT IS EVERYWHERE.
I was late to a dinner appointment, arranged by x.ai, at Caviar and Bull (booked by my human friend David). Siri had already indicated that I would be late (yes, it had also warned me repeatedly that it was time to leave the office for me to be on time) and Waze (i.e., the world’s largest community-based traffic & navigation app) was trying to guide me through a busy Budapest city center. Stuck in traffic … sighhh … but then the traffic moves … I press the accelerator … and … my car brakes (with a vengeance) at the same moment my brain realizes that the car in front of me had not moved and I was about to hit it. My car had just saved me from a crash. And from being even later for my appointment at what would turn out to be an absolutely excellent dinner, with great Hungarian red and white wines recommended by Vivino (i.e., based on my wine history & preferences, my friends’ preferences and of course the menu). In the meantime, my scheduler had notified my friend that I would be a bit late due to traffic (rather than the real reason of me leaving the office late ;-).
Most of the above is powered by AI, or more accurately, by machine learning applications: underlying machine learning algorithms and mathematical procedures applied to available personalized, social-network and other data.
In the cases above, I am implicitly trusting that whatever automation has “sneaked” into my daily life will make it more convenient and possibly even save others, as well as myself, from harm (when my own brain & physiology get too distracted). Do I really appreciate that most of this convenience is based on algorithms monitoring my life (a narrow subset of my life, that is) and continuously predicting what my next move might be in order to support me? No … increasingly I take the offered algorithmic convenience for granted (and the consequences of that are another interesting discussion for another time).
In everyday life, we frequently rely on AI-driven and augmented decisions … mathematical algorithms trained on our and others’ digital footprint and behaviors … to make our lives much more convenient and possibly much safer.
The interesting question is whether people in general are consciously aware of the degree of machine intelligence, or algorithmic decision-making, going on all around them. Is it implicit trust or simply ignorance at play?
Do we trust AI? Is AI trustworthy? Do we trust AI more than our colleagues & peers? And so forth … and what does trust really mean in the context of AI and algorithmic convenience?
Some of these questions, as they relate to corporate decision-making, have been described in detail in the context of corporate decision makers’ sentiment towards AI in my previous blog “On the acceptance of artificial intelligence in corporate decision making – a survey”.
TRUST – HUMAN VS AI.
Imagine that you have a critical decision to make at work. Your team (i.e., your corporate tribe) of colleague experts recommends, based on their human experience, choosing Option C as the best path forward.
Would you trust your colleagues’ judgment and recommendation?
Yes! There is a pretty high likelihood that you actually would.
More than 50% of corporate decision-makers would frequently to always trust a recommendation (or decision) based on human expert judgment. More than 36% of corporate decision-makers would trust such a recommendation about half the time (i.e., what I call coin-flip decision-making).
Now imagine you have a corporate AI available to support your decision-making. It also recommends Option C. Needless perhaps to say, but nevertheless let’s just say it: the AI has of course been trained on all available & relevant data and thoroughly tested for accuracy (i.e., in a far more rigorous way than we test our colleagues, experts, and superiors).
Apart from Humans (Us) versus AI (Them), the recommendation and the decision to be made are of course the same.
Would you trust the AI’s recommendation? Would you trust it as much as you do your team of colleagues and maybe even your superior?
Less than 13% of corporate decision-makers would frequently to always trust a recommendation (or decision) based on AI judgment. Around 25% of the decision-makers would trust an AI-based decision about half the time.
Around 20% of decision-makers would never trust an AI-based decision. Less than 45% would do so only infrequently.
It is quite clear that we regard a decision or recommendation based on AI, rather than on a fellow human, with substantially less trust.
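To get a feel for how large that trust gap is, here is a minimal sketch of a two-proportion z-test applied to the shares quoted above (≈50% of decision-makers frequently-to-always trusting a human expert vs. ≈13% for an AI). The sample sizes of 316 per question are taken from the survey description at the end of this post; treating both shares as coming from independent samples of that size is my simplifying assumption, not part of the survey analysis itself.

```python
from math import sqrt

def two_proportion_z(p1, p2, n1, n2):
    """z-statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical illustration using the shares quoted in the text:
# 50% frequently/always trust a human expert vs. 13% for an AI.
z = two_proportion_z(0.50, 0.13, 316, 316)
print(round(z, 1))  # z is around 10, far beyond the 1.96 threshold for p < 0.05
```

Even with generous allowance for sampling noise, a gap of this size is not a statistical fluke; the aversion is real in this sample.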
Humans don’t trust decisions made by AIs. At least when it is pointed out that a decision is AI-based. Surprisingly, given much evidence to the contrary, humans trust humans; at least the ones in our own tribe (e.g., colleagues, fellow experts, superiors, etc.).
Dietvorst and coworkers refer to this human aversion towards non-human or algorithm-based recommendations or forecasts as algorithm aversion. It refers to situations where human decision-makers or forecasters deliberately avoid statistical algorithms in their decision or forecasting process.
A more “modern” term for this might be AI aversion rather than algorithm aversion. However, it describes very much the same phenomenon.
Okay, okay … But the above question of trust did not qualify the decision-making track record of the human versus the AI. Thus respondents could have very different ideas or expectations about the success or error rates of humans and AIs respectively.
What if the fellow human expert (or superior), as well as the AI, is known to have a success rate better than 70%? Thus, more than 7 out of 10 decisions are in retrospect deemed successful (ignoring whatever that might really mean). By the same token, it also means that the error rate is 30% or less … or that 3 (or fewer) out of 10 decisions are deemed unsuccessful.
In a like-for-like comparison of decision-making success rates, human experts or superiors are hugely preferred over a decision-making AI.
A bit more than 50% of the corporate decision makers would frequently or always trust a fellow human expert recommendation or decision. Less than 20% would frequently or always trust a decision made by an AI with the same success rate as the human expert.
Thus, Humans trust Humans and not so much AIs. Even if the specified decision-making success rate is identical. It should be noted that trust in a human decision or recommendation relates to fellow human experts or superiors … thus trust towards colleagues or individuals that are part of the same corporate structure.
The trust in a human expert or superior with a 70% success rate is quite similar to the previous result without a specified success rate.
This might indicate that our human default expectations towards a human expert or superior’s recommendation or decision are around the 70% success rate.
However, for the AI-based recommendation or decision, respondents do provide a statistically different trust picture depending on whether or not a success rate of 70% has been specified. The mean sentiment increases by almost 15% when it is specified that the AI has a 70% success rate. This is also very visible from the respondent data shown in the chart below:
The share of respondents who would never or only infrequently trust an AI-based decision is almost 20% lower when a 70% success rate is specified.
This might indicate that the human default perception of the quality of AI-based decisions or recommendations is far below the 70% success rate.
So do we as humans have higher expectations towards decisions, recommendations, or forecasts based on AI than the human expert equivalent?
This survey indicates that there is no apparent statistically significant difference between the quality we expect from a human expert and that which we expect from an AI. The average expectation is that fewer than 2 out of 10 decisions could turn out wrong (or be unsuccessful). Thus, a failure rate of 20% or less, equivalent to a success rate of 80% or better.
It is well known that whether a question is posed or framed positively or negatively can greatly affect how people decide. Even if the positive and negative formulations are mathematically identical.
An example: you are with your doctor, who recommends an operation for your very poor hearing. Your doctor has two options when informing you of the operation’s odds of success (of course, he might also choose not to provide that information altogether if not asked ;-). Frame A: there is a 90% chance of success, and you will hear normally again on the operated ear. Frame B: there is a 10% chance of failure, and you will become completely deaf on the operated ear. Note that a success rate of 90% also implies an error rate of 10%; one may argue that the two are mathematically identical. In general, many more would choose to have the operation when presented with Frame A, i.e., a 90% success rate, than when confronted with Frame B, i.e., a 10% failure rate. Tversky & Kahneman identified this as the framing effect, where people react differently to a given choice depending on how it is presented (i.e., success vs failure). As Kahneman & Tversky showed, a loss is felt to be more significant than the equivalent gain.
When it comes to an AI-driven decision, would you trust it differently depending on whether I present you with the AI’s success rate or its error rate? (The obvious answer is of course yes … but to what degree?)
When soliciting support for AI augmentation, a positive frame of its performance works (unsurprisingly) much better than the mathematically equivalent negative frame, i.e., success rate versus failure or error rate.
Human cognitive processes and biases treat losses or failures very differently from successes or gains, even when the two frames are identical in terms of real-world impact. More on this later, when we get into some cool studies on human brain chemistry, human behavior, and Tversky & Kahneman’s wonderful prospect theory (from before we realized that oxytocin and other neuropeptides would be really cool).
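The asymmetry between losses and gains can be made concrete with a minimal sketch of the prospect-theory value function. The functional form and the parameter values below (curvature α ≈ 0.88 and loss-aversion coefficient λ ≈ 2.25) are the commonly cited estimates from Tversky & Kahneman’s later cumulative prospect theory work, not numbers from this survey.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory subjective value of a gain (x >= 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A loss of 100 "hurts" roughly twice as much as an equal gain pleases:
print(value(100), value(-100))  # about 57.5 vs. about -129.4
```

With these parameters, a 10% failure frame weighs on us far more heavily than the mathematically identical 90% success frame feels reassuring, which is exactly what the doctor example above plays on.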
HUMANS TRUST HUMANS.
Trust is the assured reliance on the character, ability, or truth of someone or something. Trust is something one gives, as opposed to trustworthiness, which is someone or something else being worthy of an individual’s or group’s trust.
The degree to which people trust each other is highly culturally determined, with various degrees of penalties associated with breaking trust. Trust is also neurobiologically determined and, of course, context-dependent.
As mentioned by Paul J. Zak in his Harvard Business Review article “The Neuroscience of Trust”: “Compared to people in low-trust companies, people in high-trust companies report: 74% less stress, 107% more energy at work, 50% higher productivity, 13% fewer sick days, 76% more engagement, 29% more satisfaction with their lives, 40% less burnout” … Trust is clearly important for corporate growth and for individuals’ wellbeing in a corporate setting (and, I suspect, anywhere really). Much of this is described mathematically (and, I would argue, beautifully) in Paul Zak’s seminal paper “Trust & Growth”, relating differences in the degree of trust to different social, legal, and economic environments.
People trust people. It is also quite clear from numerous studies that people don’t trust that many non-people (e.g., things or non-biological agents such as mathematical algorithms or AI-based systems) … okay, okay, you might say … but why?
While 42 is in general a good answer … here the answer is slightly simpler … oxytocin (not to be confused with an oxymoron). Okay, okay … what is oxytocin and what does it have to do with trusting or not trusting AI? Well … if you have read Robert Sapolsky’s brilliant account of our behavior at our best and worst (i.e., “Behave: the biology of humans at our best and worst”) you might know enough (and even more about those nasty glucocorticoids; and if you haven’t had enough of those, please do read “Why Zebras don’t get ulcers”, also by Sapolsky, after which you might even be able to spell glucocorticoids).
Oxytocin is our friend when it comes to warm and cozy feelings towards each other (apart from being fairly essential for inducing labor and lactation). Particularly when “each other” is part of our team, our partner, our kids, and even our dog. It is a peptide hormone (i.e., it is relatively small and consists of amino acids) used by neurons to communicate with each other. It pretty much influences how signals are processed by our brain and how our body reacts to external stimuli.
The higher the level of oxytocin, the more you are primed to trust your team, your stockbroker, your partner (and your dog), and the closer you feel to your wife and your newborn babies. The more you hug, kiss, shake hands, have sex, and walk your dog, the more oxytocin will be rushing through your body and the more trusting you will become towards your social circles. “Usness” is great for oxytocin release (as are a couple of other neuropeptides with a knack for making us feel better with one another … within the confines of “Usness” … oh yeah, and we have some serious gender biases there as well). Particularly when “Them” are around. Social interactions are important for the oxytocin kick.
The extra bonus effect of increased oxytocin is that it appears to dampen the reactivity of the brain’s “freaking out” center (i.e., the amygdala) to possible threats (real or otherwise). At least within the context of “Usness” and non-existential threats.
HUMANS DON’T TRUST AI (as much as Humans).
Oxytocin (i.e., changes in its level) appears mainly to be stimulated or triggered by interaction with other humans (& dogs). When the human (or dog) element is taken out of the interaction “game”, for example, replaced by an electronic or mechanical interface (e.g., computer interface, bot interaction, machine, etc.), trust is not enhanced by oxytocin levels. This has been well summarized by Mauricio Delgado in his “To trust or not to trust: ask oxytocin” in Scientific American, as well as in the groundbreaking work of Paul J. Zak and co-workers (see “Oxytocin increases trust in Humans”, Nature, 2005) and the likewise impressive work of Thomas Baumgartner et al. (“Oxytocin shapes the neural circuitry of trust and trust adaptation in humans”, Neuron, 2008).
Thomas Baumgartner and coworkers (in a setup similar to other works in this field) administered either a placebo or an oxytocin nasal spray to test subjects prior to the experimental games. Two types of games were played: (a) a so-called trust game with human partner interactions (i.e., a human-human game), where the test subject invests an amount of money with a third party (e.g., a stockbroker) who will invest the money and return the reward, and (b) a so-called risk game, whose outcome is machine-determined by a random generator (i.e., a human-machine game). The games are played over 12 rounds with result feedback to the test subject, allowing for a change in trust in the subsequent round (i.e., the player can reduce the invested money (less trust), increase it (more trust), or keep it constant (same trust level)). Baumgartner et al. found that test subjects playing the trust game (human-human), who had received the oxytocin “sniff”, remained trusting throughout the rounds of the game, even when they had no rational (economic) reason to remain trusting. The oxytocin subjects’ trusting behavior was substantially higher than that of test subjects playing the same game having received the placebo. In the risk game (human-machine), no substantial differences were observed between oxytocin and placebo subjects, who in both cases kept their trust level almost constant. While the experiments conducted are fascinating and possibly elucidating with respect to the effects of oxytocin and social interactions, I cannot help being somewhat uncertain whether the framing of Trust vs Risk and the subtle structural differences between the games (i.e., trusting a human expert who supposedly knows what he is doing vs. a lottery-like game of chance) could skew the results.
Thus, rather than telling us whether humans trust humans more than machines or algorithms (particularly the random-generator kind, for which trust is somewhat of an oxymoron), it tells us more about how elevated levels of oxytocin make a human less sensitive to mistrust or angst towards a fellow human being (who might take advantage of that trust).
It would (imo) have been a much more interesting experiment if both had been called a Trust Game (or a Risk Game, for that matter, as this is obviously what it is). One game with a human third party investing the test subject’s transfer, similar to Baumgartner’s trust-game setup; and another game where the third party is an algorithmic “stockbroker” with at least the same success rate as the first game’s human third party. This would have avoided the framing bias (trust vs risk) and the structural differences between the games.
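The round structure of the repeated game described above can be sketched as a toy simulation. Everything here is a simplifying assumption for illustration (the starting stake, the ±2 adjustment rule, the 70% success rate); the point is that the investor’s round-by-round stake is a behavioral readout of trust, and the structure is identical whether the broker is human or algorithmic.

```python
import random

def play_trust_game(rounds=12, success_rate=0.7, seed=7):
    """Toy version of a repeated trust/investment game.

    Each round the investor stakes `invest` units with a broker
    (human or algorithmic; the mechanics are the same). With
    probability `success_rate` the investment pays off; otherwise
    it is lost. The investor raises the stake after a payoff
    (more trust) and lowers it after a loss (less trust).
    Returns the stake chosen after each round.
    """
    rng = random.Random(seed)  # seeded for a reproducible run
    invest, history = 10, []
    for _ in range(rounds):
        success = rng.random() < success_rate
        invest = min(20, invest + 2) if success else max(0, invest - 2)
        history.append(invest)
    return history

print(play_trust_game())
```

In a design like the one I suggest, one would compare the stake trajectories of a human-broker arm against an algorithmic-broker arm with identical payoff statistics; any systematic gap between the two curves would then be attributable to the broker’s identity rather than to framing or game structure.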
Unfortunately, we are not that much closer to a great explanation for why humans appear to trust humans more than algorithms. Still pretty much guessing.
And no, I did not hand out cute oxytocin (and of course placebo) nasal sprays to the surveyed respondents. Neither did I check whether respondents had been doing a lot of hugging or other close-quarter social activities that would have boosted their oxytocin levels. That will be for a follow-up study.
A guess at a possible explanation for humans being statistically significantly less trusting towards algorithms (algorithm aversion), AI (AI aversion), and autonomous electronic-mechanical interfaces in general might be that our brains have not been primed to regard such entities as part of “Usness”. In other words, there is a very big difference between trusting colleagues or peers (even if some are superiors) who are part of your corporate “tribe” (e.g., team, unit, group, etc.) and trusting what could easily be construed as an alien entity, such as an AI or an algorithm.
So the reason why humans trust humans more than algorithms and AI remains somewhat elusive, although the signals are possibly there.
With many everyday machine learning or algorithmic applications already leapfrogging our level of convenience today … maybe part of the “secret” is to make AI-based services and augmentation part of the everyday.
The human lack of trust in AI, or the prevalence of algorithm aversion in general, as described in several articles by Berkeley Dietvorst, is nevertheless, in a corporate sense and setting, a very big challenge for any idea of a mathematical corporation where mathematical algorithms permeate all data-driven decision processes.
GOOD & RELATED READS.
- Kim Kyllesbech Larsen, “On the acceptance of artificial intelligence in corporate decision making – a survey.”, AIStrategyBlog (November 2017).
- Berkeley J. Dietvorst, Joseph P. Simmons and Cade Massey, “Algorithm Aversion: people erroneously avoid algorithms after seeing them err.”, Journal of Experimental Psychology: General (2014). Study on widespread algorithm aversion, i.e., human expectations towards machines are substantially higher than those towards fellow humans. This results in an irrational aversion to machine-based recommendations versus human-based recommendations, even though algorithmic forecasts are on average better to much better than their human-based equivalents in apples-to-apples comparisons.
- Michael Kosfeld, Marcus Heinrichs, Paul J. Zak, Urs Fischbacher & Ernst Fehr, “Oxytocin increases trust in Humans”, Nature (June 2005) 673-676.
- Paul J. Zak, “The Neuroscience of Trust”, Harvard Business Review, (January – February 2017),
- Paul J. Zak & Stephen Knack, “Trust & Growth”, The Economic Journal, 111 (April 2001), 295 – 321.
- Mauricio Delgado, “To trust or not to Trust: Ask Oxytocin“, Scientific American (July 2008).
- Thomas Baumgartner, Markus Heinrichs, Alline Vonlanthen, Urs Fischbacher and Ernst Fehr, “Oxytocin shapes the neural circuitry of trust and trust adaptation in humans”, Neuron 58 (May 2008), 639 – 650.
- Thomas Baumgartner, Urs Fischbacher, Anja Feierabend, Kai Lutz, and Ernst Fehr, “The neural circuitry of a broken promise”, Neuron 64 (December 2009) 756 – 770.
- Michelle M. Wirth, “Hormones, stress and cognition: The effects of glucocorticoids and oxytocin on memory”, Adaptive Human Behavior and Physiology (June 2015), 177 – 201.
- Daniel Kahneman and Amos Tversky, “Prospect theory: an analysis of decision under risk”, Econometrica 47 (March 1979), 263.
- Dan-Mikael Ellingsen, Johan Wessberg, Olga Chelnokova, Hakan Olausson, Bruno Laeng, and Siri Leknes, “In touch with your emotions: oxytocin and touch change social impressions while others’ facial expressions can alter touch”, Psychoneuroendocrinology 39 (2014), 11 – 20.
- Robert Sapolsky’s “Behave: the biology of Humans at our best and worst” , Penguin Random House UK (2017).
- Josh Sullivan and Angela Zutavern, “The Mathematical Corporation: Where Machine Intelligence and Human Ingenuity Achieve the Impossible.” (Public Affairs, 2017).
I greatly acknowledge my wife Eva Varadi for her support, patience, and understanding during the creative process of creating this blog. Without her support, I really would not be able to do this, or it would take me long past my expiration date to finish.
Unless otherwise specified, the results presented here come from a recent surveymonkey.com survey conducted between November 11th and November 21st, 2017. The survey took on average 2 minutes and 35 seconds to complete.
The data contains two main survey collector groups:
- SurveyMonkey paid collector group, run between November 11th and 14th, 2017, with 352 completed responses from the USA. Approximately 45% of the surveyed sample were female and 55% male, with an age distribution between 18 and 75 years and an average age of 48.8. The specified minimum income level was set to $75 thousand, or about 27% higher than the median US real household income level in 2016. The average household income level in this survey is approx. $125 thousand annually. Ca. 90%, or 316 out of the 352 respondents, had heard of Artificial Intelligence (AI) previously. For AI-relevant questions, only these 316 were used; respondents who had not previously heard of AI (36 out of 352) were not considered. More than 70% of the respondents had a 4-year college or graduate-level degree. About 70% of the respondents were married, and 28% had children under the age of 18. Moreover, ca. 14% currently had no employment.
- Social media (e.g., Facebook, LinkedIn, Twitter, …) collector group, run between November 11th and 21st, 2017, with in total 115 completed responses, primarily from the telecom & media industry and mainly from Europe. The gender distribution comprised around 38% female and 62% male. The average age for this sample is 41.2. No income data is available for this group. About 96% (110) had heard of Artificial Intelligence; for AI-related questions, only respondents who confirmed they had heard about AI were considered. Ca. 77% of the respondents have a 4-year college or graduate-level degree. 55% of the surveyed sample are married, and a bit more than 50% of this group have children under 18. Less than 2% of the respondents were currently not employed.
It should be emphasized that SurveyMonkey was a paid survey, at 2.35 euros per response, totaling 1,045 euros. Each respondent completed 18 questions. Age balancing was chosen to be basic, and gender balancing was set to follow the census.