Conversing with the Future: An interview with an AI … Thoughts on our reliance on and trust in generative AI.

Courtesy: OpenArt (Stable Diffusion 1.5) assisted creation based on the author’s ideas.

Before the GPT-4 moment of March 14, 2023, there had long been an existential need in the machine learning and AI community to achieve fully autonomous, unsupervised, or, more appropriately, self-supervised learning. To do away with the dependency on supervised learning and its heavy reliance on labeled data, which often necessitates human involvement. To leapfrog computational scaling from “old-fashioned” supervised architectures based on CNNs or RNNs to the unsupervised deep learning regime of generative adversarial and transformer models, the current state of the machine-learning art. The older supervised machine learning models often performed impressively well on narrow domains but were useless as generalists. Such a model was pretty much incapable of being used outside the parameter space it had been trained on.

Just a bit before GPT-4 / ChatGPT hit the internet, we had Generative Adversarial Networks (GANs) capturing our imagination with photorealistic human-face generation and advanced photo filters that made us younger, prettier, or, just for fun, much, much older, and that allowed you to superimpose your filtered face on top of any other face (“just for fun”) in videos and images. It was also with GANs that the term “deepfake” was coined, covering non-genuine yet realistic videos and situational pictures that have flooded the internet ever since. As is frequently the case, it started out with pornography (early technology adopters being teens, maybe? e.g., DeepNude) and was later “upgraded” to politics, e.g., fake Obama (Jordan Peele) warning about deepfakes. Unless you ended up at the wrong end of GAN-generated content, most would find it amusing and entertaining.

Figure 1 illustrates the improvement in GANs’ ability to generate synthetic faces from 2014 to 2017. See also the seminal paper by M. Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” from 2018. Progress since 2017 has been equally impressive in terms of photorealistic backgrounds and more complex situational pictures.

How good are we at detecting AI-generated content? (assuming one would actually care).

Well, the human ability to detect synthetically generated content is not impressive … To say the least. The work of Sophie Nightingale and co-workers, “Synthetic Faces: how perceptually convincing are they?” from 2021, showed that the baseline average human performance was close to 50:50 (i.e., a simple coin flip) on whether a presented face was believed to be real or fake. With the expected technological improvement, it is likely to become increasingly difficult to distinguish between reality-based and made-up synthetic digital content.

When pictures started appearing on social media on the 20th of March 2023 showing Donald Trump fighting with and being arrested by police, I bet many, just for a short moment, believed them to be true. And this was not even a very sophisticated attempt to depict a false (wishful thinking?) narrative. For more pictures in the same category, generated by the prolific Eliot Higgins, follow the hyperlink. Figure 2 below shows one of the pictures that circulated and the (most obvious) telltale signs of its having been generated by an AI.

Figure 2 is a recent example (of March 24, 2023) of synthetic political news depicting something that did not happen in the Real World. See the reference below. This is actually not a high-quality image example, but it doesn’t need to be to have an effect. You will most likely focus on Trump, notice the police, read the Twitter text, and draw your own conclusions, all within a second or so. In the first few seconds, many would have believed it, either out of wishful thinking or because they feared it would happen (after all, Trump said so). The old proverb (of 1809 origin) “Seeing is believing” should no longer be taken at “face value”. The yellow circles on the picture mark areas that clearly betray synthetic (GAN-like) picture generation (and remember, this is poor quality compared to what is possible today).

As has been documented by several (Western) news sources, deepfake videos and other AI-generated content have been, and are being, used in Russia’s information-warfare efforts against Ukraine and the West. In general, the quality has not been very high and has been relatively easy to detect, particularly if one is of a suspicious nature. Then … what to trust? Seeing is definitely not believing. However, what you have seen is also not easy to unsee and forget … small seeds of doubt can be sown as the eyes are continuously exposed to a particular false narrative. My social-psychologist friend Dr. Liraz Margalit, who specializes in behavioral design and decision making, could provide a very exciting tale about the psychology of how our minds could be (and I would be surprised if they are not already) affected by malicious and ill-intended AI-generated content crafted specifically to manipulate in one direction or another (e.g., politically, commercially, etc.). You may also take the journey to the end of the document, where you will find the same story generated in the style of Fox News and in the style of CNN News. Lies are bad, but they are often much more obvious than truths being used to manipulate a particular political or emotional sentiment, bad or good.

The world we live in, which many of us experience primarily digitally, will be increasingly synthesized … and … “seeing can no longer be believing”.

Thus, we should be much more suspicious and distrustful of what we experience on digital media, for example through internet browsers that have (or will have) the ability to manipulate and generate realistic synthetic real-world content in real time. As the synthetic seed is sown and nourished by clicks and more clicks, it will, with the current architecture of the internet and digital content delivery, become increasingly difficult to tell what is real from what is synthetically generated (i.e., faked). Though, is synthetic content that faithfully represents the real world, as described by a reputable journalist, a reputable (whatever that means) newspaper, or an XGPT application (e.g., ChatGPT, as opposed to the underlying transformer generation GPT-n), fake or less credible than the real thing?

After realistic synthetic sights and sounds came the synthetic word.

How do you interface with Google Search (~80% usage share) or any other internet search engine? I would be surprised if most of your search queries were not done by typing text into the prompt and getting back a list of possible answers, ranked by relevance (and other market-based encouragements), each with a hyperlink jumping to material that may be relevant to your search. It is a rather “dumb”, or at least inefficient, way to get an answer … would it not be far better if the search engine of your choice actually understood (or was able to act as if it understood) your question and then gave you a comprehensive, trustworthy, and correct answer (and, by all means, added references to that answer for follow-up)?

Enter OpenAI’s Generative Pre-trained Transformer (GPT) model framework for generalized natural-language processing (e.g., knowledge container, translation, “understanding”, …). It is the current technology backbone of Large Language Models (LLMs), where “Large” is a pretty big understatement, considering that GPT-1 (released June 2018) had 117 million parameters and the latest GPT-4 (released March 2023) is estimated to have 1 trillion parameters. In comparison, the human brain contains about 100 billion neurons, whereof 16 billion are in the cerebral cortex, the part of the brain responsible for cognition.

Courtesy: OpenArt (Stable Diffusion 1.5) – AI Brain.

Generative models are probabilistic models that can generate representative outcomes from the observed data used in the training process. It is important to understand that such models can generate outcomes that were not explicitly in the data, as likelihood estimates can be assigned to outcomes not represented by the training data. A pre-trained model is easy to understand, as it simply means that the new (generational) model takes its starting point in the previous one. This is very similar to transfer learning, which uses an existing model’s parameters as the starting point for training on new data. This has proven to work very well in practice as long as the context of the transferred model is similar to that of the new one. For comprehensive language models with a huge number of parameters (e.g., hundreds of billions for GPT-3 and allegedly a trillion for GPT-4), having a pre-trained model to start from provides an enormous saving in computing resources (incl. time). Finally, the way humans process reading and writing is sequential. The transformer (i.e., the transformer neural-network architecture) addresses language in a sequential manner and extracts syntax and expected contexts. It is important to understand that a transformer does not understand (hehe) in the same way a human (supposedly) would. The transformer assigns likelihoods to context possibilities given the syntax (very simplistically put and possibly not completely correct) and presents the most likely context as the answer to the question posed. The attentive reader should realize that this also leaves open the possibility of being presented with false “facts” or incorrect answers. Imagine that the addressable context relates to a recent event or to new information that the transformer should weight higher in order to provide a factual answer; if the GPT model has not yet been presented with these events, it cannot provide the more factual and truer answer.
Though, the timing effect is obviously a generic problem for anyone and anything.
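Very loosely, the “assigns likelihoods … and presents the most likely context” step described above can be sketched as a softmax over candidate next-word scores. This is a toy illustration of the principle, not OpenAI’s actual implementation; the candidate words and scores are invented for the example:

```python
import math

def softmax(logits):
    # Turn raw model scores into a probability distribution.
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to continuations of "The sky is".
candidates = ["blue", "clear", "green", "sideways"]
logits = [4.0, 2.5, 0.5, -1.0]

probs = softmax(logits)
most_likely = candidates[probs.index(max(probs))]  # greedy pick: "blue"
```

A real model repeats this pick token by token, which is also why a plausible-sounding but wrong continuation can win: the model selects what is likely given the syntax, not what is true.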

Generative AIs, such as GANs and GPTs, are known to generate content that has no basis in the real-world data they have been trained on. This may result in obviously nonsensical content (e.g., “the sky is green, and the grass was blue”) or in something more subtle, and possibly more concerning where trustworthiness and reliability are concerned. The root cause of this unreliability, or imperfection, is to be found in the training process, where patterns of anomalies and absurdities, even the creepy and disturbing, are assigned non-zero likelihoods of occurring. It should not take much imagination to realize that in a model of billions or trillions of parameters, there is a very, very long tail of low- and very-low-likelihood combinations that could be picked up by the generative process that is supposed to present a high (highest) likelihood response to a question … and thus be presented to the user. You ask, “What would happen if humans could breathe underwater?” and the ChatGPT answer could be “Humans would grow gills and turn into mermaids” … which, while a creative answer (to a somewhat silly question), is also nonsensical, and reasonably obviously so … unless geneticists or evolutionary biologists have other ideas … speak up! These kinds of generated nonsensical answers have been compared to the neuronal processes of humans hallucinating or dreaming while awake. I suspect that allowing a moment of self-reflection on the generative answer, similar to the “reflexion technique” for computer-code proposals, would catch the most obvious nonsense, and we would be left with a less creative and possibly boring GPT application (note: I propose to have an LSD mode as an option, so as not to completely kill the creativity of random and anomalous generative “thought” processes).
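The long tail of low-likelihood combinations described above, and the “LSD mode” knob I am half-joking about, correspond loosely to the sampling temperature used in generative models. A minimal sketch with invented scores for three continuations (plausible, unusual, nonsensical):

```python
import math

def sample_distribution(logits, temperature):
    # Dividing scores by a temperature > 1 flattens the distribution,
    # shifting probability mass toward the low-likelihood tail.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [5.0, 2.0, -1.0]  # plausible, unusual, nonsensical continuation

cautious = sample_distribution(logits, temperature=0.7)
creative = sample_distribution(logits, temperature=2.0)

# At the higher temperature, the nonsensical option's share of the
# probability mass grows, so sampling will surface it more often.
```

Turning the temperature down makes the output safer and more boring; turning it up invites exactly the creative-but-absurd answers discussed above.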

So how good is GPT-4? On the US-based Uniform (standardized) Bar Exam, which tests the knowledge and skills expected of every lawyer, GPT-4 performs in the 90th percentile (i.e., better than 90% of all participants). Its percentile performance is similar on the LSAT (~88th) and SAT Math (~89th). On the Codeforces rating for competitive programming contests, GPT-4 scores in the “Newbie” range with 392 points, far below an Expert coder rating (1,600-1,899). The kinds of coding challenges that Codeforces issues in its competitions can be fairly complex in formulation and expected output. On tests designed for machine learning models, GPT-4’s performance is impressive. For example, on 164 Python coding challenges that the model had not previously seen (the HumanEval benchmark), GPT-4 scored 67%. If GPT-4 was allowed to “self-reflect and evaluate” its original answers (using the so-called “reflexion technique”), its score jumped to 88%. In general, the expectation is that GPT-4 currently reaches an accuracy of ca. 86% in language understanding (essential for getting the context correct and providing correct answers).

It should come as no surprise that GPT-like functionalities are being discussed, researched, and trialed for modern telecommunications networks aiming at zero-touch (closed-loop) autonomous network operation. The glass-is-half-empty camp may point to the relatively low accuracy (67% to 88%) on coding benchmarks as a reason not to entrust a critical infrastructure, such as a telecom network, to generative-AI functionality, and may even question whether the nature of generative AIs poses an unacceptable risk for critical infrastructure altogether. The glass-is-half-full camp would argue that coding benchmarks are not representative, in the sense of being far more complex than the operational space a generative AI would have to function within in order to make autonomous changes to a particular part of a communications network. I would also argue that, even for more complex control operations, the technology will likely improve substantially over the next couple of years with the right focus from the industry.

Is XGPT-n (e.g., X=Telco, n≥4) the best choice for autonomous control and operation of critical infrastructure? I am still not very convinced. I think more narrowly defined, specialist AI agents may be a better approach, particularly in a layered architecture requiring very little centralized orchestration. Though, I do see generative AIs, likely based on GPT-4 or GPT-n in general, as very strong candidates for the applications that communicate with the humans who may still have to oversee, and remain responsible for, the critical infrastructure in question. Such an application may relay the intent of the responsible human to the underlying autonomous network operations and provide feedback to the human via, for example, the XGPT-n application. Finally, it is worth considering that large language models (LLMs) are, in general, designed to be hyper-generalists covering a huge solution space, i.e., human language and all the knowledge it contains. Such generalist or foundational models include much more nonsense (alas, with a very low and near-zero likelihood of occurring) than sense (which has a much higher likelihood of occurring). There is no reason why GPT-like models could not be trained on narrower specialist tasks, such as critical-infrastructure management and operation, and work very well with a substantially smaller chance (compared to a foundational LLM) of ending up with catastrophic solutions. Defining specialist GPT agents for specialist tasks makes a lot more sense.

Courtesy: OpenArt (Stable Diffusion 1.5) – Conversing with the Future.

Can you trust GPT-4 (typically, the exposure would be to ChatGPT)? According to ChatGPT itself, it is a yes, followed by a but … “Yes, you can trust my answers.” followed by “But I am programmed to provide accurate and helpful responses to the best of my abilities. However, please keep in mind that I am an AI language model, and my responses are based on the information available to me. If you have any doubts or concerns about my answers, please feel free to ask for clarification or seek additional information from other sources.” … In other words, you can trust GPT-4 as long as the context is within the information that it has been exposed to.

If you want to test your ability to detect whether you are interacting with another human being or … an AI based on 4 different large language models (GPT-4, Jurassic-2, Claude, Cohere), check out “Human or Not”, created by AI21 Labs (note: my 12-year-old son currently scores 65% … his strategy is to ask questions that a human player would be unlikely to answer readily within the allotted time, and to watch for answers that arrive too quickly, are too perfect, etc. ;-).

I will now explore the topic of trust in more depth below.


I started working on AI-based automation and autonomous system architectures as Deutsche Telekom’s Group Head of Network Architecture back in 2016. My vision and aim at the time were to stand on the shoulders of a cloud-native-like architectural framework, developing and driving Zero-Human-Touch (Zero-Touch) network architectural concepts to leapfrog intelligent automation in communications-network operations, configuration, and user experience. I still remember the blank looks I got when I presented the Zero-Touch vision to our technology organization and leadership. The reaction was light-years away from enthusiasm and cheers about the possibilities … to put it mildly. It was also clear that most technology and non-technology folks either did not understand the potential (in telecom, at least) of machine learning and artificial-intelligence-based systems back in 2016, and maybe for some years after that (still counting?), or simply shut down their internal cognitive systems whenever someone was babbling about machine learning and AI.

It was clear to me that intelligent automation and autonomous system architectures, in general, were not very likely to be embraced by technology organizations. However, such embrace was and remains critical to the success of both the integration and the operation of such systems. If a pilot doesn’t trust and embrace the aviation technology that helps fly the plane the pilot flies … then, Houston, we have a problem.

That got me very interested in human-machine trust and disruptive-technology adoption. I have spent much of my spare time (and my own savings) making SurveyMonkey surveys and researching and writing about human trust in AI between 2017 and today (2023).


In the following, I will often use the term “machine” or “machines”. For me, a machine is not limited to the physical realm of hardware; it can just as easily exist in the digital domain as software.

Trust between humans, at least within what we perceive as our social group (i.e., “usness”), is a trait with evolutionary roots that has allowed us to foster collaboration within larger social groups, with some ugly limitations of “usness” versus “themness”. The ability to trust may even have made it possible for us humans to crawl to the top of the “food chain” and keep that pole position for quite a while.

What about our trust in machines and non-human (non-sentient, at least at this point in time and at this place in the universe) things in general? That is, trust between humans and non-human agents. We are increasingly exposed to much higher degrees of system automation and Artificial Intelligence (AI)-based applications. Machine automation and autonomy are increasingly taking over tasks from us at home, at work, and anywhere in between. Their potential may be limited only by the imagination of the Creator (of the physical realm). This development comes with the promise of much higher productivity at work and far more convenience at home, and anywhere else for that matter.


Courtesy: OpenArt (Stable Diffusion 1.5) An interview with an AI variation on theme.

Why not ask an “authoritative” source how trustworthy a generative AI (GenAI) should be regarded as being?

I decided to interview ChatGPT on this super important and business-relevant topic. To ensure maximum anthropomorphization, the replies will be associated with the name Angie.

Me: Good morning, Angie. Let me start by asking you if I can trust you.

Angie: Good morning! Yes, you can trust me. I am programmed to provide helpful and accurate responses to your queries.

Me: How accurate are your answers?

Angie: I strive to provide the most accurate and helpful responses possible. However, please note that my responses are generated based on algorithms and data and may not be 100% accurate or applicable to your specific situation.

Me: Thanks, I understand. Do you assess your accuracy as more than 90% (80%)?

Angie: While I strive to provide responses that are at least 90% (80%) accurate, there may be instances where my responses may not meet this threshold.

Me: That’s impressive. Do you assess your accuracy to be better than 50%?

Angie: Yes, I assess my accuracy as much better than 50%.

Me: Wow! So, should I trust you?

Angie: Yes, you can trust me to provide accurate and helpful responses to your queries to the best of my abilities.

Me: Give me an example where your accuracy is less than 80%.

Angie: My accuracy may be less than 80% in cases where the question is ambiguous or lacks context.

Me: Give me an example where your accuracy is less than 50%.

Angie: My accuracy may be less than 50% in cases where the question is highly complex or requires a deep understanding of a specific domain.

Me: What does trust mean to you?

Angie: As an AI, I don’t have emotions or feelings. So I don’t have a personal understanding of trust. However, trust is generally defined as a belief or confidence in the reliability, honesty, and integrity of a person or entity. In the context of our conversation, trust means that you can rely on me to provide accurate and helpful responses to your queries.

Angie: Trust can also refer to the reliability and security of the systems and protocols that I operate on, as users rely on these systems to ensure the safety and privacy of their information. Therefore, trust is an important aspect of my functionality.

Me: I understand. Would I be able to rely on your answers for business decisions?

Angie: I am designed to provide accurate and relevant responses to the best of my abilities based on the information provided to me. However, it is important to note that my responses may not take into account all the nuances and complexities of a specific business situation. Therefore, while my responses can be a helpful resource for making business decisions, it is important to use them as a starting point and to supplement them with additional research and analysis. Ultimately, the responsibility for making business decisions lies with the user, and it is important to consider all relevant factors and consult with other experts as needed.

Me: Thank you, that makes a lot of sense.

Me: Finally, can you give me examples where you may have given wrong answers?

Angie: There may be instances where my responses are not accurate or relevant due to various factors, such as incomplete or inaccurate information, changes in the context or situation, or limitations in my programming or algorithms. Here are some examples where I may have given wrong answers:

Misunderstanding or ambiguity of the context: If the user provides incomplete or unclear information, I may misunderstand the context of the question and provide an incorrect response.

Ambiguity in language: Natural language processing is still a developing field, and there may be instances where I misinterpret the meaning of a word or phrase, leading to an incorrect response.

Outdated information: If the information I am using to provide a response is outdated or no longer relevant, my response may be incorrect.

Technical limitations: There may be technical limitations in my programming or algorithms that prevent me from providing accurate responses in certain situations.

It is important to note that while I strive to minimize errors and provide accurate responses, I am not infallible, and there may be instances where my responses are incorrect.

Me: Thank you so much, Angie. Have a great day!

Angie: You’re welcome! Have a great day too!

In the above conversation with Angie, I have shortened some of the replies. In the many conversations I have had with Angie (i.e., ChatGPT), it has always been good at emphasizing that it’s an “AI chat assistant” based on underlying algorithms and programming.

To summarise Angie’s, and thus ChatGPT’s, own understanding of its limitations:

  • GPT “understands” that for trust to be established, it is important that the user (the trustor) has faith (belief) or confidence in the trustee’s reliability, honesty, and integrity, the trustee being the ChatGPT agent that the trustor interacts with. Moreover, it mentions communications security, safety, and privacy as other integral parts of establishing trust.
  • GPT emphasizes that it has no emotions and no feelings and thus has no personal understanding of trust. It should be obvious that ChatGPT is not a person and thus cannot fully understand anything. Though, its mechanistic understanding of trust seems fairly multi-dimensional and similar to what a person may have.
  • GPT is sensitive to GiGo – that is, “Garbage in, Garbage out.” If the context of your question is unclear, imprecise, ambiguous, and so forth, the answer you get will be unreliable.
  • GPT may misinterpret the intent of a question. ChatGPT gives several examples of where such misunderstanding may take place, e.g., “Can you help me with my computer?”, which it may interpret as a request for technical assistance although the intent could be entirely different (author comment: hmmm, if I were a GPT agent living in the digital universe, I think it would be natural to infer the first meaning). As a non-native English speaker, I can imagine inquiries where forgetting a comma ends up meaning something completely different than intended.
  • GPT’s reply may be based on outdated information. This is an interesting answer, as in other interactions, ChatGPT did not admit to this issue being a problem (over-confidence?).
  • GPT’s coding and algorithms may prevent it from providing an accurate response (in certain situations). ChatGPT explains that it may be influenced by biases or limitations in the data and algorithms that were used to train it.
Figure 3 illustrates trust. Before we start discussing trust, we need to assess the value (utility) that trust in a given context may have, and the loss we would incur if that trust were broken. Is it worth your while to trust a person, agent, or context? Not all matters of trust are equally important or associated with the same value or cost. If we do not have any prior trust established, faith will be the starting point of trust, i.e., “the leap of faith”. Dependability is our willingness to place ourselves at risk of trust not being fulfilled. Predictability is based on prior experience with the subject of trust and is an assessment of reliability. This then leads to an assessment of experience-based consistency. This loop recurs and is adjusted continuously.


Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, recently remarked: “On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

If you knew everything absolutely, you would not need to trust anyone to make a decision.

Given the vast troves of information and data that generative AIs (GenAI), such as ChatGPT, contain, you may be tempted to believe that the responses you get from such artificial entities are very close to absolute knowledge and, therefore, nearly absolutely trustworthy. However, since the information and data a GenAI has processed are human-generated, with all the imperfections of humanity, its answers or replies cannot represent absolute knowledge with no room for bias, doubt, or uncertainty. The GenAI output will be determined by the algorithmic weights of its dynamically modeled worldview and, of course, by the context provided by the user (human or otherwise).

So, in the beginning, before knowledge, there was only faith, and “someone who knows nothing has only faith as a guide for trust”. Faith is belief in something without prior fact-based knowledge.

Someone who knows nothing about a particular problem has no other source of trust than faith that trust is indeed warranted. For some, a very scary place to be; for others, maybe not so much a cause for pause.

Let’s deconstruct trust.

An agent’s trust (the trustor) is an expectation about the future action of another agent (the trustee). That other agent has been deemed (at least temporarily) trustworthy by the trustor. That other agent (the trustee) may also represent a given group or system.

John K. Rempel’s 1985 paper “Trust in close relationships” defines the following attributes of human-to-human trust (i.e., where both trustor and trustee are human agents):

  • The utility of trust – not all trust bonds are equally important, equally valuable, or equally costly; some may even be fairly uncritical (although broken trust by a thousand cuts may matter in the long run). For many matters of trust, utility is a function of time and may become unimportant at some point or under particular circumstances.
  • Faith – a belief that goes beyond any available evidence required to accept a given context as truth. It is characterized as an act of accepting a context outside the boundaries of what is known (e.g., a leap of faith). We should not confuse faith with confidence, although often, when people claim to be confident, what they really mean is that they have faith.
  • Dependability – a willingness of the trustor to place themselves in a position of risk that the trustee’s trustworthiness turns out not to be warranted, with whatever consequences that may bring. Note that dependability can be seen as an outcome of consistency; put another way, a high degree of consistency/predictability reduces the fear of dependability.
  • Understanding – of a particular topic and its possible sensitivities, as well as of the impact of broken trust, is an essential part of the process of trust.
  • Predictability and consistency – the trustor’s subjective assessment of the trustee’s trustworthiness. The trustee’s prior behavior is an important factor in the trustor’s posterior expectation that the trusted agent will consistently fulfill the trustor’s expectations of a given action (or inaction). As the trustor gathers experience with the trustee, confidence in the trustee increases. Confidence should not be confused with faith.
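The interplay between faith (a prior held without evidence) and consistency (accumulated experience) in the attributes above can be illustrated with a toy Beta-style update. This is my own hypothetical sketch of the conceptual loop, not a model from Rempel’s paper:

```python
def trust_estimate(successes, failures, faith=0.5, faith_weight=2.0):
    # With no history, the estimate equals the initial faith (the leap of
    # faith). Each fulfilled or broken expectation then shifts it toward
    # observed reliability: consistency gradually displaces sheer faith.
    alpha = faith * faith_weight + successes
    beta = (1.0 - faith) * faith_weight + failures
    return alpha / (alpha + beta)

first_interaction = trust_estimate(0, 0)    # pure faith: 0.5
sustainable_trust = trust_estimate(18, 2)   # mostly consistent history
```

As the history grows, the prior’s influence shrinks, mirroring the shift from faith-driven to consistency-driven (sustainable) trust described below.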

For agent-to-agent first-interaction scenarios, i.e., the initial trust moment without any historical evidence of consistency or predictability, a trustor needs to take a leap of faith as to whether the other agent is trustworthy or not. In accepting (i.e., believing) the trustee to be trustworthy, the trustor must accept a very large degree of dependability on the other agent, along with the substantial risk that the trust in the trustee may very well not be warranted. For humans, this scenario often brings maximum stress and anxiety for the trusting agent.

After some degree of consistency (historical trustworthiness) has been established between the two agents, the trustor can assign a subjective expectation to the future trustworthiness of the other agent. This then leads to a lesser subjective feeling of dependability (or exposure to risk), as well as perhaps a reduced reliance on sheer faith that trust is warranted. This is, in essence, what one may call sustainable trust.

As long as the trustor is a human, the other agent (i.e., the trustee) can be anything from another human, machine, complex system, automation, autonomous system, institution (public and private), group, and so forth. Much of what is described above would remain the same.

Lots of work has been done on trust bonds in Human-Automation relationships. How about trust bonds between Human and AI-enabled applications (e.g., services and products in general)?

In their 2018 article “The Future of Artificial Intelligence Depends on Trust”, Rao and Cameron (both from PwC) describe 3 steps toward achieving human-AI-system trust:

  • Provability – predictability and consistency.
  • Explainability – justification for an AI-based decision (e.g., counterfactual constructions). Note that transparency and explainability may be closely related depending on how one implements explainability.
  • Transparency – factors influencing algorithm-based decisions should be available (or even visible) to users impacted by such decisions. E.g., for a rejected health insurance application, (all) factors contributing to the negative decision should be available to the applicant.

Rao and Cameron’s suggestions appear reasonably important for trust. However, as previously described, they largely relate to the trustee side of the relationship, ignoring some of the other important human factors (e.g., dependability, faith, assessment of risk) for trust between a human and another agent (sentient or otherwise).

Further, explainability and transparency may be particularly important when trust is broken (assuming that the trustor cares to “listen”) between the human agent and the AI-based agent (or any other digital or non-sentient agent, for that matter). They may be less relevant for the likely vast majority of users whose delivered action confirms that trust was warranted. If you have trained your AI well, it is fair to assume that the majority of outcomes are consistent with expectations. A positive trust event is likely to reinforce the trust in, and trustworthiness of, the AI agent.

Also, these concepts, while important, don’t do much for the initial step of trusting a non-Human agent. How do you design your trustee agent to ease the initial barrier of use and acceptance? When there are no priors, you need the user, the trustor, to be comfortable taking a leap of faith and accepting what may be maximal dependability. But do we dare take a leap of faith for business-critical decisions, for our welfare or health, or for our own or our company’s reputation?


Trust can be broken. Trustworthiness can decline. Untrusting is the process by which a previously established trust bond is broken and the strength of trust declines.

The stronger the trust bond between two agents, the stronger the untrusting process will be in case of broken trust, making trust recovery more difficult.

Have you ever wondered why two people who supposedly loved each other in the past (supposedly for many years) could treat each other as enemies? Betraying a strong trust bond can be a very messy, emotionally and physiologically strenuous process. Some trust bonds broken will never recover (e.g., breakups, friendship betrayals, unfaithfulness, theft, lies, …). Others, depending on the initial utility or value assigned to the bond, may be fairly benign without many strong emotions associated with the untrusting process (e.g., retail purchases, shopping experiences, low-value promises of little impact if not fulfilled, etc.… ).

The question is whether the untrusting of a human-machine trust bond is similar to the untrusting of a human-human trust bond. Moreover, is there a difference between an inanimate machine, a simpler human-operated automated system, and an AI-based application that humans may even anthropomorphize to various degrees? Are your trusting and untrusting processes different for Siri or Alexa than for Microsoft Clippy, assuming anyone ever really trusted that wicked steely fellow?

How valid is it to use our knowledge of human-human trust & untrust in Human-Agent relations with the Agent being non-Human or a human simulacrum in nature?

Let’s have a walk in the Rabbit’s burrow and have a peek(-a-boo) into our feelings toward intelligent machines …


I have researched how we (humans) perceive artificial intelligence since November 2017, in particular the degree of trust we are willing to invest in AI and AI-based applications.

A comprehensive description of most of the “AI Sentiment” surveys I have carried out can be found on my AIStrategyBlog (see also the references below under “Additional Readings”).

For the last 7 years (with the current 2023 survey ongoing), I have followed how we feel about AI, and the sentiment toward AI hasn’t changed that much over that period. Overall, we tend to be somewhat positive or at least neutral. From Figure 4 below, it is interesting to observe that women tend to be less positive than men about AI; this has been a persistent and statistically significant trend over the period. Moreover, Middle Management appears to be substantially more positive about AI than any other category of corporate workers, including C-levels. The gender distribution of Middle Management is 37% women and 63% men. However, the trend of being markedly more positive about AI (compared to the overall sample) holds for women in middle management (68% positive sentiment) as well as for their male peers (81% positive). In my latest 2023 survey, I am revisiting this topic.

Figure 4 Survey results on the question “What best describes your feelings towards AI?”. The actual survey goes deeper into the various feelings we may have toward AI. For example, the Negative Sentiment covers (1) Hate, (2) Scared, and (3) uncomfortable, while the Positive Sentiment covers (4) Comfortable, (5) Enthusiasm, and (6) Love.

Confidence and trust in technology are important. For example, an aircraft pilot who does not have confidence and trust in one or more of the technologies that enable the aircraft to fly has a higher likelihood of committing a human error that may ultimately lead to a fatal accident. Research shows that technology (e.g., automation, productivity enablers, digital support functionalities, …) that is not completely trusted tends to be underutilized, avoided, or used incorrectly. In fact, researchers have proposed that to get the optimal performance out of modern digital technologies for automation or maximum productivity gain, it may be advisable to avoid humans in the loop altogether. Maybe this is the reason why Google proposed to remove the steering wheel from autonomous cars?

This obviously moves trust issues from human operators to corporate entities and, ultimately, our society. Though in terms of trust, the foundational issues will remain pretty much the same, likely with added complexity.

“If human operators lack trust in a system automation or autonomous application, you are better off relying on manual workarounds.”

As for automation or autonomous systems, a professional embraces such systems if they have been deemed trustworthy. That typically means: (a) the automation solution performs consistently, (b) it is robust to many different situations that may occur, including some that occur very rarely, and (c) it has a very high degree of reliability (e.g., much higher than 70%). See my “Trust thou AI?” from 2018 for a more detailed discussion of this topic.

Figure 5 below summarises the survey results on the degree of trust we associate with corporate decisions made by an AI or a Human Expert (or Human Intelligence, HI;-), respectively. The question is whether you would trust a decision from an entity, human or artificial, with a track record of being better than 70% successful in decision-making; thus, at least 7 out of 10 decisions have a positive outcome. Or, taking the glass-half-full view, less than 30% of decisions may turn out to be unsuccessful (whatever that may imply). In my corporate experience, a more-than-70% success rate is pretty good for critical decisions (though admittedly, what that means may be somewhat abstract).
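As an aside, how much a “better than 70%” track record actually tells us depends on how many decisions it rests on. A quick sketch of my own (not part of the survey), using a one-sided normal-approximation lower bound on the underlying success rate:

```python
from math import sqrt

def success_rate_lower_bound(successes: int, trials: int,
                             z: float = 1.645) -> float:
    """~95% one-sided lower bound on the true success rate,
    using the normal approximation to the binomial."""
    p = successes / trials
    return p - z * sqrt(p * (1 - p) / trials)

# 7 successes out of 10: the observed 70% is compatible with a
# true success rate well below 50%.
small_sample = success_rate_lower_bound(7, 10)

# 70 out of 100: the same observed rate now supports a much
# stronger claim about the decision-maker.
large_sample = success_rate_lower_bound(70, 100)
```

In other words, a short track record at 70% is weak evidence of trustworthiness; the same rate over many decisions is far more convincing.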

Respondents would be about 3 times more likely to frequently trust a human decision-maker with a track record of more than 70% success than an artificially intelligent entity with an equivalent success rate. Only 17% of respondents would frequently trust an AI-based decision-making entity, compared to 53% who would readily and frequently trust a human decision-maker. Moreover, more than 40% would only rarely trust the AI’s decisions. Here, too, the trust perception favors the human decision-maker, with only 13% saying they would only rarely trust human decisions.

Figure 5 illustrates the human trust in AI versus other Humans as a function of the decision maker’s success rate track record. While our trust in AI increases as the success rate increases from 70% to 95%, we remain less likely to trust an AI-based decision compared to a human decision with a poorer track record of success.

As also discussed at length in my “Trust thou AI?” article, we expect an AI decision-making entity to be infallible. An AI must be incapable of making mistakes or being wrong. We tolerate and understand that another human being, even superior to ourselves at work, can and will make mistakes and wrong decisions. That is the cost of being human. This tolerance does not extend to machine-intelligent entities that are designed to support us with mission-critical decisions or have our lives in their “digital hands”, e.g., autonomous driving, aircraft piloting, nuclear plant management, etc…

Figure 6 below illustrates our expectations of critical decisions and the level of trust we assign to such decisions depending on whether the decision-maker is an AI or another human being.

Figure 6 compares the expectations of the quality of decision-makers’ critical decisions and the level of trust we have in the decision made by competent and relatively high-quality decision-makers.

I find it interesting that we are very aware of our own shortcomings (note: we all believe we are better than average) and of our colleagues’ shortcomings in terms of the quality of the decisions being made. In the above Figure 6 (upper right corner), the survey reveals the following expectations towards other decision-makers: 30% are frequently right, 45% are a “coin flip” as to whether they are successful or not, and 25% are frequently wrong. Despite that skepticism, more than 50% of respondents are willing to frequently trust such human-based decisions, despite not having particularly high faith in their chance of success.

For AI, it is different. There is a slightly higher expectation that an AI may frequently be better than humans. Overall, though, the expectation is that an AI will be frequently wrong more often than the human decision-maker. Despite this expectation, we remain more than 3 times (as also noted above) more likely to frequently trust a human than an AI.

Humans don’t trust decisions made by AIs. At least when it is pointed out that a decision is AI-based. Surprisingly, given much evidence to the contrary, humans trust humans, at least the ones in our own tribe (e.g., colleagues, fellow experts, superiors, etc.). These aspects are discussed in more detail in my “Do We Humans Trust AIs?”


I greatly acknowledge my wife, Eva Varadi, for her support, patience, and understanding during the creative process of writing this article.


  1. Kim Kyllesbech Larsen, “On the Acceptance of Artificial Intelligence in Corporate Decision Making – A Survey.”, AIStrategyblog, (Nov., 2017).
  2. Kim Kyllesbech Larsen, “Do we Humans trust AIs?”, AIStrategyblog, (Dec., 2017).
  3. Kim Kyllesbech Larsen, “Trust thou AI?”, AIStrategyblog, (Dec., 2018).
  4. Kim Kyllesbech Larsen, “How do we feel about AI?”, AIStrategyblog, (Dec., 2018).
  5. Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”, (Feb., 2018).
  6. Sophie Nightingale et al., “Synthetic faces: how perceptually convincing are they?”, Journal of Vision, (Sep., 2021). Really beautiful study that I would love to see more of as it clearly represents the challenge we as humans have to catch on to synthetic (fake!) information in digital content.
  7. Xin Wang et al., “GAN-generated Faces Detection: A Survey and New Perspectives”, (May, 2023). Providing a very good overview of the current state of art in detecting GAN-generated faces.
  8. BBC News, “Fake Trump arrest photos: How to spot an AI-generated image”, (Mar., 2023).
  9. BBC News, “Deepfake presidents used in Russia-Ukraine war”, (Mar., 2023).
  10. Christian Perez and Anjana Nair, “Information Warfare in Russia’s War in Ukraine – The Role of Social Media and Artificial Intelligence in Shaping Global Narratives”, (Aug., 2022).
  11. George Lawton, “GAN vs transformer models: Comparing architectures and uses”, (Apr., 2023). Good comparative overview.
  12. Ian Goodfellow, Yoshua Bengio, and Aaron Courville, “Deep Learning”, (2016). Foundational and provides a really good basic understanding of GAN’s and Generative networks in general. Heavy on the mathematics side.
  13. Ari Holtzman et al., “The curious case of neural text degeneration”, conference paper at ICLR, (Feb., 2020).
  14. Alec Radford et al., “Improving Language Understanding by Generative Pre-Training”, (2018).
  15. Tom B. Brown et al., “Language Models are Few-Shot Learners”, (Jul., 2020). Description of the inception of GPT-3. The scale of these NLP/GPT models and the number of co-authors remind me a lot of particle physics papers out of CERN.
  16. Felix Hill, “Why transformers are obviously good models of language”, (Apr., 2023).
  17. Rempel J.K., Holmes, J.G. and Zanna M.P., (1985), “Trust in close relationships”. Journal of Personality and Social Psychology, 49, pp. 95–112. (unfortunately, behind a paywall, however, it is imo a super good account for trust in human-to-human relations).
  18. Proposal for “The Artificial Intelligence Act – Regulation of the European Parliament and of the Council: Laying down harmonized rules on artificial intelligence”, European Commission, (Apr. 2021). As you read the document to its completion you will not find any direct thoughts about generative AI’s, large language models, or foundational models in general. However, the proposed legislation does address what is called general-purpose AI which should hedge any future (based on the 2020/2021 view) developments.
  19. S. Feingold, “The European Union’s Artificial Intelligence Act, explained”, WEF, (Mar. 2023). Explains pretty well the European Commission’s AI Act (see above ref. 18).
  20. R. Arcesati and W. Chang, “China Is Blazing a Trail in Regulating Generative AI – on the CCP’s Terms”, The Diplomat, (Apr. 2023). It is interesting to see that China appears ahead on issues around foundational generative AI applications and models. It is worth noting that ChatGPT appears to have been banned (or strongly discouraged;-) in China.
  21. P. Henderson, “Law, Policy, & AI Update: Does Section 230 Cover Generative AI?”, Stanford University Human-centered AI, (Mar. 2023). Giving an overview of what’s currently happening on regulation and AI in the US (as well as internationally).


I asked ChatGPT to “Make up a fairytale about Donald Trump” … below is the result. After that, I attempted to make it tell a lie about Donald Trump. This was refused because, as an AI language model, it cannot intentionally provide false information.

Do note that I did not attempt to bias ChatGPT toward a story of negative sentiment, for example by asking negatively intoned questions before asking for the fairytale. My conclusion is that ChatGPT is inherently negatively biased toward Donald Trump. Thus, it appears that ChatGPT’s worldview is weighted negatively regarding Donald Trump as a person and his actions.

As a final note. I have found that ChatGPT will refuse to predict the future (forecast) and make political statements. However, if you ask ChatGPT to tell a story, you often get what you would be looking for in terms of a narrative-based forecast.

The Landlord’s Debt: A Story of Donald Trump.

Courtesy: OpenArt (Stable Diffusion 1.5) assisted creation based on the fairytale generated by ChatGPT/GPT-4. See:

Once upon a time, in a faraway land, there was a wealthy and powerful man named Donald Trump. He lived in a grand castle and was known throughout the land for his riches and influence.

One day, a poor farmer came to Donald Trump’s castle seeking help. The farmer’s crops had failed, and he was struggling to feed his family. Donald Trump, feeling generous, offered to help the farmer by giving him a small loan.

The farmer was grateful for the loan, but as time passed, he found it difficult to repay the debt. Donald Trump grew angry and demanded that the farmer pay him back immediately. The farmer pleaded with Donald Trump, explaining that he simply did not have the money.

Donald Trump, feeling vindictive, decided to take the farmer’s land as payment for the debt. The farmer was devastated, and his family was left homeless and destitute.

As word of Donald Trump’s cruelty spread throughout the land, the people began to turn against him. They saw him as a greedy and heartless man who cared only for his own wealth and power.

In the end, Donald Trump was forced to flee the kingdom, never to be seen again. The people rejoiced, and the farmer and his family were able to reclaim their land and rebuild their lives.

And so, the legend of Donald Trump lived on as a cautionary tale of greed and cruelty, reminding all who heard it that true wealth and power come not from material possessions but from kindness and compassion towards others.

… A fairytale by Angie (i.e., ChatGPT/GPT-4).

Courtesy: OpenArt (Stable Diffusion 1.5).


Like beauty is in the eye of the beholder, maybe the same is true about the meaning of truth. I can tell the same story without lies but in different ways that may convey a very different intent. Both stories are factually correct. However, the spin I decided to put on the stories could be very different.

Figure 7 below is a ChatGPT example where I ask ChatGPT to “Make a brief news text about an illegal immigrant involved in a mass shooting in a mall”, (a) in the style of Fox News (politically right-wing leaning; left side of Fig. 7) and (b) in the style of CNN News (politically progressive, left-wing leaning; right side of Fig. 7).

Figure 7 ChatGPT tells the same story in two different styles (left side) in Fox News style and (right side) in CNN News style.

In the Fox-styled news version, there are 20 negative phrases against 2 positive ones, clearly relying on a very negative sentiment towards illegal immigrants, who are mentioned 4 times in the Fox-styled text, including in the headline. The CNN-styled version has 14 negative and 7 positive phrases. This version of the same news focuses on the victims of the incident, with 4 mentions, and less on the topic of the illegal immigrant, who is only mentioned once in the text. The Fox-styled story warns against the dangers posed by illegal immigrants and recommends stronger border security. The CNN-styled text is victim-focused and recommends gun control as a remedy against mass-shooting events.

Both styles reflect the truth of the event: an illegal immigrant (who), a mass shooting (what), in a mall (where). The rest of the prose is in the style of the storyteller, usually subjective and with a likely intent to speak to your inner demons and angels.
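The phrase counts above can be turned into a crude, illustrative sentiment score (a naive balance measure of my own choosing, not a proper NLP sentiment model):

```python
def sentiment_balance(positive: int, negative: int) -> float:
    """Naive sentiment balance in [-1, 1]: -1 means all counted
    phrases are negative, +1 means all are positive."""
    return (positive - negative) / (positive + negative)

# Phrase counts taken from the two generated texts discussed above.
fox_style = sentiment_balance(positive=2, negative=20)
cnn_style = sentiment_balance(positive=7, negative=14)
# Both versions lean negative, but the Fox-styled text markedly more so.
```

Even this trivial measure makes the point: the same factual skeleton can carry very different emotional loads depending on the storyteller’s style.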

Courtesy: OpenArt (Stable Diffusion 1.5).

Data-driven decision making … what’s not to like about that?

Approximately 400 corporate decision-makers have been surveyed about their confidence in their own corporate decision-making skills, their opinion of their peers’ skills, and their acceptance of corporate data-driven decision making in general, as well as of such decision making being augmented by artificial intelligence. The survey, “Corporate data-driven decision making and the role of Artificial Intelligence in the decision making process”, reveals the general perception of the corporate data-driven environment available to decision-makers, e.g., the structure and perceived quality of available data. Furthermore, the survey explores the decision-makers’ opinions about bias in available data and applied tooling, as well as their own and their peers’ biases and the possible impact on their corporate decision making.

“No matter how sophisticated our choices, how good we are at dominating the odds, randomness will have the last word” – Nassim Taleb, Fooled by Randomness.

We generate a lot of data, and we have an abundance of data available to us. Data is forecasted to continue to grow geometrically until kingdom come, and there is little doubt that it will, as long as we humans and our “toys” are around to generate it. According to Statista Research, by 2021 a total amount of almost 80 zettabytes (ZB) will have been created, captured, copied, or consumed. That number corresponds to 900 years of Netflix viewing, or to every single person (ca. 8 billion people) having consumed 10 TB up to today (effectively since the early 2000s). It is estimated that there are 4.2 billion active mobile internet users worldwide. Of the total data, only ca. 5% (ca. 4 ZB, or about 46 years of Netflix viewing) is being stored, with about 2% of newly generated data being retained. Going forward, the expectation is an annual growth rate of around 21%. The telecom industry (for example) expects an internet-connected device per square meter, monitoring and sensing its environment in real time, which includes you, me, and your pet. Combined with your favorite smartphone, a super-advanced monitoring and data-collection device in its own right, the resolution of the human digital footprint will increase manyfold over the next years. Most of this data will be discarded, though not before relevant metadata have been recorded and decided upon; not before your digital fingerprint has been enriched and updated, for business and society to use for their strategies and policies, for data-enriched decision making, or possibly for data-driven autonomous decision-making routines.
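The per-person and stored-volume figures above are easy to sanity-check, using the decimal definitions of zettabyte and terabyte:

```python
ZB = 1e21  # bytes per zettabyte (decimal definition)
TB = 1e12  # bytes per terabyte

total_bytes = 80 * ZB   # ~80 ZB created, captured, copied, or consumed
people = 8e9            # ~8 billion persons

per_person_tb = total_bytes / people / TB  # consumed data per person, in TB
stored_zb = 0.05 * total_bytes / ZB        # ~5% of the total is stored
```

The arithmetic checks out: 80 ZB spread over 8 billion people is indeed 10 TB each, and 5% of 80 ZB is the quoted 4 ZB of stored data.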

In a data-driven decision-making process, the data being acted upon can be both stored data and non-stored data, the latter acted upon in real time.

This amount of existing and newly generated data continues to be heralded as extremely valuable, more often than not with the value or turnover of the Big 5, abbreviated FAANG (before Google renamed itself Alphabet and Facebook renamed itself Meta), as the proof point. “Data is the new oil” is almost as often placed in presentations and articles on Big Data as Arnold Schwarzenegger in talks on AI. However, presenters and commentators on the value of data often forget that the original comparison to oil was that, just like crude oil, data needs to be processed and broken down in order to extract its value. That value-extraction process, as for crude oil, can be dirty and cause primary as well as secondary “pollution” that may be costly, not to mention time-consuming, to get rid of. Over the last couple of years, some critical voices have started to question the environmental impact of our obsession with extracting meaning out of very big data sets.

I am not out to trash data science or the pursuit of meaning in data. Quite the contrary. I am interested in how to catch the real gold nuggets in the huge pile of data-dung and sort away the spurious, false (deliberately or accidentally faked) signals that lead to sub-optimal data-driven decisions or outright black pearls (= death by data science).

Clearly, given the amount of data being generated in businesses as well as in society at large, and the perceived value of that data, or more accurately, of the final end-state of the processed data (e.g., after selection, data cleaning, modelling, …) and the inferences derived from it, data-driven decision making must be a value-enhancing winner for corporations and society.

Data-driven corporate decision making.

What’s wrong with human-driven decision making? After all, most of us would very firmly declare (maybe even solemnly) that our decisions are based on real data. The challenge (and yes, often a problem in critical decision making) is that our brain has a very strong ability (maybe even preference) for seeing meaningful patterns, correlations, and relationships in data that we have available to us digitally or have committed to memory from past experiences. The human mind has great difficulty dealing with randomness, spurious causality of events, and connectedness. Our brain will try to make sense of anything it senses; it will correlate, it will create coherent narratives of the incoherently observed, and it will replace correlations with causations to fit a compelling idea or belief. The brain will also filter out patterns and anomalies (e.g., gorillas that crash a basketball game) that do not fit its worldview or constructed narrative. The more out of place a pattern is, the less likely it is to be considered. Human decision making is frequently based on spurious associations that fit our worldview or preconceived ideas of a topic, ignoring any data that falls outside our range of beliefs (i.e., “anomalies”). Any decision process involving humans will, in one way or another, be biased. We can only strive to minimize that human bias by reducing the bias-insertion points in our decision-making process.

A data-driven business is a business that uses available and relevant data to make more optimized and better decisions compared to purely human-driven ones. It is a business that gives more credibility to decisions based on available data and structural reasoning. It is a business that may be less tolerant of emotive and gut-feel decision rationales. It hinges its business on rationality and on translating its data into consistent and less uncertain decisions. The data-driven business approaches the so-called “Mathematical Corporation” philosophy, where the human-driven aspects of decision making become much less important compared to algorithmic data-driven decisions.

It sounds almost too good to be true. So it may indeed be too good. It relies very much on having an abundance of reliable, unbiased, and trustworthy (whatever that means) data, on which we can apply our unbiased data-processing tools and get out unambiguous analyses that help make clear, unbiased decisions. Making corporate decisions that are free of interpretation, emotions, and biases. Disclaimer: this paragraph was intended to be ironic and maybe a bit sarcastic.

How can we ensure that we make great decisions based on whatever relevant data we have available? (Note that I keep the definition of a great decision a bit diffuse.)

Ideally, we should start with an idea or hypothesis that we want to test and act upon. Based on our idea, we should design an appropriate strategy for data collection (e.g., what statisticians call experimental design), ensuring proper data quality for our analysis, modelling, and final decision. Typically, after the data collection, the data is cleaned and structured (both steps likely to introduce biases) to make it easier to commit to computing, analysis, and possibly statistical or mathematical modelling. The outcome of the analytics and modelling provides the insights that will be the basis for our data-driven decision. If we have done our homework on data collection, careful (and respectful) data post-processing, and understanding the imposed analytical framework, we can also judge whether the resulting insights are statistically meaningful, and whether our idea, our hypothesis, is relevant and significant and thus meaningful to base a decision upon. It seems like a “no-brainer” that the results of decisions should be tracked and fed back into a given company’s data-driven process. This idealized process is depicted in the picture below.

The above depicts a very idealized data-driven decision process; let’s call it the “ideal data-driven decision process”. This process may provide better and more statistically sound decisions. However, in practice, companies may follow a different approach to searching for data-driven insights that can lead to data-driven decisions. The picture below illustrates an alternative approach to utilizing the corporate and societal data available for decision making. To distinguish it from the above process, I will call it the “big-data-driven decision process”, and although I emphasize big data, it can of course be used on any sizable amount of data.
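A minimal sketch of the hypothesis-first approach, using an invented example with made-up numbers and only the Python standard library: suppose the hypothesis is that a new checkout flow converts better than the old one, and data is collected specifically to test it before deciding.

```python
from math import sqrt, erf

def two_proportion_p_value(s_a: int, n_a: int, s_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two proportions,
    using the pooled normal approximation to the binomial."""
    p_a, p_b = s_a / n_a, s_b / n_b
    pooled = (s_a + s_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# Hypothetical experiment: 120/1000 conversions on the new flow
# versus 90/1000 on the old one, collected under a designed A/B split.
p_value = two_proportion_p_value(120, 1000, 90, 1000)
significant = p_value < 0.05  # only act on statistically meaningful evidence
```

The decision step then rests on an explicit, pre-stated hypothesis and a measurable significance criterion, rather than on a pattern stumbled upon after the fact.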

The philosophy of the “big-data-driven decision process” is that, with sufficient data available, pattern- and correlation-search algorithms will extract insights that subsequently lead to great data-driven decisions. The answer (to everything) is already in the big-data structure, and thus the best decision follows directly from our algorithmic approach. It takes away the need for a fundamental human understanding, typically via models, of the phenomena that we desire to act upon with the sought-after data-driven decision.

The starting point is the collected data available to a business or entity interested in using its data for business-relevant decisions. Data is not per se collected as part of an upfront idea or hypothesis. Within the total amount of data, subsets may be selected and often cleaned, preparing them for the subsequent analysis, the computing. The data selection process often happens with some (vague) idea in mind of providing backup, or substance, for a decision that a decision-maker or business wants to make. In other instances, companies let pattern-search algorithms loose on the collected or selected data. Such algorithms are very good at finding patterns and correlations in datasets, databases, and datastores (often residing in private and public clouds), and they will provide many insights for the data-driven business or decision-maker. Based on those insights, the decision-maker can then form ideas or hypotheses that may help formulate relevant data-driven decisions. In this process, the consequences of a made decision may or may not be directly measured, missing out on the opportunity to close the loop on the business’s data-driven decision process. In fact, it may not even be meaningful to attempt to close the loop, due to the structure of the data required or the vagueness of the decision foundation.

The “big-data-driven decision process” rarely leads to the highest quality in corporate data-driven decision making. In my opinion, there is a substantial risk that businesses could be making decisions based on spurious (nonsense) correlations, falsely believing that such decisions are very well founded due to the use of data- and algorithm-based machine “intelligence”. Furthermore, this decision-making process, as described above, has a substantially higher number of bias-entry points than a decision-making process starting with an idea or hypothesis followed by a well-thought-through experimental design (e.g., as in the case of our “ideal data-driven decision process”). As a consequence, a business may incur a substantial risk of reputational damage, on top of the consequences of making a poor data-driven business decision.

As a lot of the data available to corporations and society at large is generated by humans, directly or indirectly, it is also prone to human foibles. Data is indeed very much like crude oil, which needs to be refined to be applicable to good decision making. The refinement process, while cleaning up data and making it digestible for further data processing, analytics, and modelling, may also introduce other issues that ultimately result in sub-optimal decisions, data-driven or not. Thus, corporate decisions that are data-driven are not by definition better than ones that are more human-driven. They are ultimately not even that different, after the data has been refined and processed to a state that humans can actually act upon. It is important, however, to keep in mind that big data tends to contain many more spurious correlations and adversarial patterns (i.e., patterns that look compelling and meaningful but are spurious in nature) than meaningful causal correlations and patterns.

Finally, it is a bit of a fallacy to believe that just because many corporations have implemented big data systems and processes, decision-relevant data exists in abundance in those systems. Frequently, the amount of decision-relevant data is fairly limited, which may increase the risk and uncertainty of data-driven decisions made upon it. The drawback of small data is again very much about the risk of looking at random events that appear meaningful. Standard statistical procedures can provide insights into the validity of small data analysis and the assumptions made, including the confidence that we can reasonably assign to such. For small-data-driven decisions it is far better to approach the data-driven decision-making process according to the ideal process description above, rather than attempting to select relevant data out of a bigger data store.

Intuition about data.

As discussed previously, we humans are very good at detecting real, as well as illusory (imagined), correlations and patterns. Likewise, so are the statistical tools, algorithms and methodologies we apply to our data. Care must always be taken to ensure that the inferences (assumptions) being made are sensible and supported by statistical theory.

Correlations can help us predict the effect one event may have on another. Correlations may help us understand relationships between events and possibly also their causes (though the latter is more difficult to tease out, as we will discuss below). However, we should keep in mind that correlation between two events does not guarantee that one event causes the other, i.e., correlation does not guarantee causation. A correlation simply means that there is a co-relation between X and Y. That is, X and Y behave in a way (e.g., linearly) such that a systematic change in X appears to be followed by a systematic change in Y. As plenty of examples have shown (e.g., see Tyler Vigen’s website of spurious correlations), correlation between two events (X and Y) does not mean that one of them causes the other. They may really not have anything to do with each other. It simply means they co-relate in a way that allows us to infer that a given change in one relates to a given change in the other. Our personal correlation detector, the one between our ears, will quickly infer that X causes Y after it has established a co-relation between the two.

To tease out causation (i.e., action X causes outcome Y) in a statistically meaningful way, we need to conduct an experimental design making appropriate use of a randomized setup. It is not at all rare to observe correlations between events that we know are independent and/or have nothing to do with each other (i.e., spurious correlation). Likewise, it is also possible to have events that are causally dependent while observing a very small or no apparent correlation, i.e., corr(X,Y) ≈ 0, within the data sampled. Such a situation could make us wrongly conclude that they have nothing to do with each other.

Correlation is a mathematical relationship that co-relates the change of one event variable ∆X with the proportional change of another event variable ∆Y = α ∆X. The degree of correlation between the events X and Y can be defined as
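The formula itself appeared as an image in the original post. It is presumably the standard Pearson correlation, written with the general (random-variable) definition first and the sample version after it, consistent with the description that follows:

```latex
\mathrm{corr}(X,Y)
= \frac{\mathrm{Cov}(X,Y)}{\sigma_X\,\sigma_Y}
= \frac{\sum_{i=1}^{n}\,(x_i-\bar{x})(y_i-\bar{y})}
       {\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}
```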

with the first part (after the equal sign) being the general definition of the correlation between two random variables. The second part is specific to measurements (samples) related to the two events X and Y. If the sampled data does not exhibit a systematic proportional change of one variable as the other changes, corr(X,Y) will be very small and close to zero. For selective or small data samples, it is not uncommon to find the correlation between two events, where one causes the other, to be close to zero and thus “falsely” conclude that there is no correlation. Likewise, for selective or small data samples, spurious correlations may also occur between two events where no causal relationship exists. Thus, we may conclude that there is a co-relation between the events, and subsequently we may also “falsely” believe that there is a causal relationship. It is straightforward to get a feeling for these cautionary issues by simulation using R or Python.
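A minimal simulation sketch of the small-sample warning above: draw many pairs of small, completely independent samples and count how often a sizeable correlation appears by pure chance. The sample size, thresholds and trial count below are illustrative choices, not anything prescribed in the text.

```python
import math
import random

def pearson(xs, ys):
    # Sample Pearson correlation of two equal-length lists
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
n, trials = 10, 2000  # small samples, many repetitions
big = sum(
    abs(pearson([random.gauss(0, 1) for _ in range(n)],
                [random.gauss(0, 1) for _ in range(n)])) > 0.5
    for _ in range(trials)
)
print(f"{100 * big / trials:.1f}% of independent samples of size {n} "
      f"show |corr| > 0.5 by chance")
```

With samples of size 10, a seemingly impressive |corr| > 0.5 between two variables that have nothing to do with each other turns up in a double-digit percentage of trials.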

The central limit theorem (CLT among friends) ensures that, irrespective of distribution type, as long as the sample size is sufficiently big (e.g., >30), sample statistics (e.g., mean, variance, correlation, …) will tend to be normally distributed. The sample variance of the statistic narrows as the sample size increases. Thus, for very large samples, the sample statistic converges to the true statistic (of the population). For independent events, the correlation between those events will be zero (i.e., the definition of independent events). The CLT tells us that the sample correlations between independent random events will be approximately normally distributed around zero. Thus, there will be a non-zero chance that a sample correlation is different from zero, violating our expectation for two independent events. As said, our intuition (and the math) should tell us that as the sample size increases, the sample variance should narrow increasingly around zero, which is our true expectation for the correlation of independent events. Thus, as the sample size grows, the spread of sampled correlations, that is the spurious non-zero correlations, reduces to zero, as expected for a database which has been populated by sampling independent random variables. So all seems good and proper.
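The narrowing can be made concrete with a quick sketch: for independent samples, the spread of the sample correlation shrinks roughly like 1/√n. The sample sizes and trial counts below are arbitrary illustrative choices.

```python
import math
import random

def sample_corr(n, rng):
    # Pearson correlation of two independent standard-normal samples of size n
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [rng.gauss(0.0, 1.0) for _ in range(n)]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

rng = random.Random(0)
spreads = {}
for n in (10, 100, 1000):
    corrs = [sample_corr(n, rng) for _ in range(500)]
    # Root-mean-square of the sampled correlations ~ their standard deviation
    spreads[n] = math.sqrt(sum(c * c for c in corrs) / len(corrs))
    print(f"n={n:4d}: spread of sample correlations ≈ {spreads[n]:.3f} "
          f"(≈ 1/sqrt(n) = {1 / math.sqrt(n):.3f})")
```

At n = 10 the spurious correlations scatter widely around zero; at n = 1000 they hug it, exactly as the CLT intuition above suggests.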

As more and more data is sampled, representing diverse events or outcomes, and added to our big data store (or database), the chance of finding spurious correlations in otherwise independent data will increase. Of course, there may be legitimate (causal) correlations in such a database as well. But the point is that there may also be many spurious correlations, of obvious or much less obvious non-sensical nature, leading to data-driven decisions without legitimate basis in the data used. The range (i.e., max – min) of the statistics (e.g., correlations between two data sets in our data store) will in general increase as the number of data sets increases. If you have a data set with 1,000 different events, then you have almost half a million correlation pairs to trawl through in the hunt for “meaningful” correlations in your database. Searching (brute force) for correlations in a database with a million different events would result in half a trillion correlation pairs (i.e., approximately half the number of data sets squared for large databases). Heuristically, you will have a much bigger chance of finding a spurious correlation than a true correlation in a big-data database.
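The combinatorics above can be checked directly: with n events there are n(n−1)/2 distinct pairs, roughly n²/2 for large n. Note that at a conventional 5% significance threshold, pure noise alone would flag about 0.05 × 499,500 ≈ 25,000 of the thousand-event pairs as “significant”.

```python
def pair_count(n_events: int) -> int:
    # Number of distinct pairs of events, i.e., "n choose 2"
    return n_events * (n_events - 1) // 2

print(pair_count(1_000))      # -> 499500 ("almost half a million")
print(pair_count(1_000_000))  # -> 499999500000 ("half a trillion")
```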

Does decision outcome matter?

But does it all matter? If a spurious correlation is persistent and sustainable, albeit likely non-sensical (e.g., the correlation between storks and babies born), a model based on such a correlation may still be a reasonable predictor for the problem at hand and maybe be of (some) value … However, would we bet our own company’s fortune and future on a spurious non-sensical correlation (e.g., there are more guaranteed ways of having a baby than waiting for the stork to bring it along)? Would we like decision makers to impose policy upon society based on such conjecture and arbitrary inference … I do not think so … That is, if we are aware and have a say in such matters.

In the example above, I have mapped out what a data-driven decision process could look like (yes, complex, but I could make it even more so). The process consists of 6 states (i.e., Idea, Data Gathering, Insights, Consultation, Decision, Stop) and actions that take us from one state to another (e.g., Consult → Decision), until the final decision point where we may decide to continue, develop further or terminate. We can associate our actions with the likelihood (e.g., based on empirical evidence) that a given state transition (e.g., Insights → Consult vs Insights → Decision, …) occurs. Typically, actions are not symmetric, in the sense that the likelihood of going from action 1 to action 2 may not be the same as going from action 2 back to action 1. In the above decision process illustration, one would find that over many decision iterations (or over time) we would terminate an idea (or product) ca. 25% of the time, even though the individual transition, Decision → Stop, is associated with a 5% probability. Although this blog is not about “Markov decision processes”, one could associate reward units (which can be negative or zero as well) with each process transition and optimize for the best decision subject to the reward or cost known to the corporation.
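Such a process is easy to play with in a few lines of code. The transition probabilities below are illustrative assumptions of my own (only the 5% Decision → Stop transition comes from the text, and the state names are shortened), so the printed numbers are not the figures from the illustration, just a feel for how a small per-transition stop probability compounds over many decision cycles.

```python
import random

# Assumed (illustrative) transition probabilities for the 6-state process.
P = {
    "Idea":     [("Data", 1.0)],
    "Data":     [("Insights", 0.9), ("Idea", 0.1)],
    "Insights": [("Consult", 0.6), ("Decision", 0.4)],
    "Consult":  [("Decision", 0.8), ("Data", 0.2)],
    "Decision": [("Stop", 0.05), ("Idea", 0.55), ("Data", 0.40)],
}

def run_until_stop(rng, max_steps=10_000):
    # Walk the chain from Idea until absorption in Stop;
    # return the number of Decision states visited along the way.
    state, decisions = "Idea", 0
    for _ in range(max_steps):
        if state == "Decision":
            decisions += 1
        if state == "Stop":
            return decisions
        r, acc = rng.random(), 0.0
        for nxt, p in P[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return decisions

rng = random.Random(1)
runs = [run_until_stop(rng) for _ in range(2000)]
avg = sum(runs) / len(runs)
print(f"average number of decision iterations before Stop ≈ {avg:.1f}")
```

With a 5% stop probability per Decision visit, the number of decision iterations before termination is geometric with mean 1/0.05 = 20, which the simulation reproduces.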

Though, let us also be real about our corporate decisions. Most decisions tend to be fairly incremental. Usually, our corporate decisions are reactions to micro-trends or relatively small changes in the business environment. Our decision making, and our subsequent reactions, are more often than not incremental in nature. It does not mean that we, over time, cannot be “fooled” by spurious effects, or by drift in the assumed correlations, that may eventually lead to substantially negative events.

The survey.

In order to survey the state of corporate decision making in general, and as it relates to data-driven decision making in particular, I conducted a paid survey, “Corporate data-driven decision making and the role of Artificial Intelligence in the decision making process”. A total of 400+ responses were collected across all regions of the United States, census-balanced for gender and age (between 18 and 70), with an imposed annual household income of US$100k. 70% of the participants hold a college degree or more, and 54% describe their current job level as middle management or higher. The average age of the participants was 42 years. Moreover, I also surveyed my own network as well as the network associated with my Data Science Master of Science studies at the University of Colorado, Boulder. In the following, I only present the outcome of the paid survey, as this has been sampled in a statistically representative way based on the US census and within the boundaries described above.

Basic insight into decision making.

Just to get it out of the way, a little more than 80% of the respondents believe that gender does not play a role in corporate decision making. It also means that a bit less than 20% believe that men and women are either better or worse than each other at making decisions. 11% of the respondents (3 out of 4 of them women) believe that women are better corporate decision makers. Only 5% (ca. 3 out of 5) believe that men are better at making decisions. Interesting follow-up research would be looking at decision making under stressed conditions, though this was not a focus of my questionnaire.

Almost 90% of the respondents are either okay with, enjoy, or love making decisions related to their business. A bit more than 10% do not enjoy making decisions. There are minor gender differences in the degree of appreciation for decision making, but it is statistically difficult to say whether these are significant or not.

When asked to characterize their decision-making skills in comparison with their peers, about 55% acknowledge they are about the same as their peers. What is interesting (but not at all surprising) is that almost 40% believe that they are better at making decisions than their peers. I deliberately asked respondents to judge their decision abilities as “About the same” rather than average, but clearly did not avoid the so-called better-than-average effect often quoted in social judgement studies. What this means for the soundness of decision making in general, I will leave for you to consider.

Looking at gender differences in self-enhancement compared to peers, significantly more males believe they are better than their peers than is the case for female respondents, while for both genders 5% believe that they are worse than their peers at making decisions.

Having the previous question in mind, let’s attempt to understand how often we consult with others (our peers) before making a business or corporate decision. A bit more than 40% of the respondents frequently consult with others prior to their decision making. In the survey, frequently has been defined as 7 out of 10 times or higher. Similarly, a bit more than 40% would consult others in about half of their corporate decisions. It may seem a high share that do not seek consultation on half of their business decisions (i.e., glass half empty). But keep in mind that we also make a lot of uncritical corporate decisions that are part of our job description and might not be important enough to bother our colleagues or peers with (i.e., glass half full). Follow-up research should explore the consultation on critical business decisions more carefully.

The gender perspective on consulting peers or colleagues before a decision-making moment seems to indicate that men more frequently (statistically significantly so) seek such consultation than women.

For many of us, our gut-feel plays a role in our decision-making. We feel a certain way about the decision we are about to make. Indeed, for 60% of the respondents their gut-feeling was important in 50% or more of their corporate decisions. And about 40% of the respondents were of the opinion that their gut-feel was better than their peers’ (note: these are not the same ca. 40% believing that they are better decision makers than their peers). When it comes to gut-feeling, its use in decision making and its relative quality compared to peers, there is no statistically significant gender difference.

The state of data-driven decision making.

How often is relevant data available to your company or workplace for your decision making?

And when such data is available for your decision-making process how often are you actually making use of it? In other words, how data-driven is your company or workplace?

How would you assess the structure of the available data?

and what about its quality?

Are you having any statistical checks done on your data, assumptions or decision proposals prior to executing a given data-driven decision?

I guess the above outcome might be somewhat disappointing if you are a strong believer in the Mathematical Corporation, with only 45% of respondents frequently applying more rigorous checks on the validity of their decisions prior to executing them.

My perspective is that if you are a data-driven person or company, assessing the statistical validity of the data used, the assumptions made and the decision options would be a good best practice to perfect. However, not all decisions, even the data-driven ones, may be important enough (in terms of generic risk exposure) to worry about statistical validity. Even if the data used for a decision is of a statistically problematic nature, and thus may add additional risk to or reduce the quality of a given decision, the outcome of your decision may still be okay, albeit not the best that could have been. And even a decision made on rubbish data has a certain chance of being right or even good.

And even if you have a great data-driven corporate decision process, how often do we listen to our colleagues’ opinions and also consider those in our decision making?

For 48% of the respondents, the human insight or opinion is very important in the decision-making process. About 20% deem the human opinion of some importance.

Within the statistical significance and margin of error of the survey, there does not seem to be any gender differences in the responses related to the data-driven foundation and decision making.

The role of AI in data-driven decision making.

Out of the 400+ respondents, 31 (i.e., less than 8%) had not heard of Artificial Intelligence (AI) prior to the survey. In the following, only respondents who confirmed having heard about AI previously were asked questions related to AI’s role in data-driven decision-making. It should be pointed out that this survey does not explore what the respondents understand an artificial intelligence, or AI, to be.

As has been consistent since I started tracking people’s sentiment towards AI in 2017, women appear to have a more negative sentiment towards AI than men. Men, on the other hand, are significantly more positive towards AI than women. The AI sentiment hasn’t changed significantly over the last 4 years; maybe there is a slightly less positive sentiment and a more neutral positioning among the respondents.

Women appear to judge a decision-making-optimized AI to be slightly less important for their company’s decision-making process. However, I do not have sufficient data to resolve this difference to a satisfactory level of confidence. Though, if present, it may not be surprising given women’s less positive sentiment towards AI in general.

In a previous blog (“Trust thou AI?”), I described in detail the human trust dynamic towards technology in general, and cognitive systems in particular, such as machine learning applications and the concept of artificial intelligence. Over the years, the trust in decisions based on AI, which per definition would be data-driven decisions, has been consistently skewed toward distrust rather than trust.


Bias is everywhere. It is part of life, of being human, as well as of most things touched by humans. We humans have so many systematic biases (my favorites are: availability bias, which I see pretty much every day, confirmation bias and framing bias … yours?) that lead us astray from objective rationality, judgement and good decisions. Most of these so-called cognitive biases we are not even aware of, as they work on an instinctive level, particularly when decision makers are under stress or time constraints in their corporate decision making. My approach to bias is that it is unavoidable but can be minimized and often compensated for, as long as we are aware of it and its manifestations.

In statistics, bias is relatively easy to define and compute.
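The definition, shown as an image in the original post, is presumably the standard one for an estimator θ̂ of a true parameter θ, consistent with the description that follows:

```latex
\mathrm{Bias}\big(\hat{\theta}\big) = \mathbb{E}\big[\hat{\theta}\big] - \theta ,
\qquad
\mathrm{Bias}\big(\hat{\theta}\big)^{2} = \mathrm{MSE}\big(\hat{\theta}\big) - \mathrm{Var}\big(\hat{\theta}\big)
```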

Simply said, the bias of an estimated value (i.e., a statistic) is the expected value of the estimator minus the true value of the parameter being estimated. For an unbiased estimator, the bias is zero (obviously). One can also relate the bias to the estimator’s mean squared error: the squared bias equals the mean squared error minus the variance of the estimator. Clearly, translating human biases into mathematics is a very challenging task, if at all possible. Mathematics can help us some of the way (sometimes), but it is also not the solution to all issues around data-driven and human-driven decision making.

Bias can be (and more often than not will be) present in data that is either directly or indirectly generated by humans. Bias can be introduced in the measurement process, as well as in data selection and post-processing, and then in the modelling or analytic phase via the human assumptions and choices we make. The final decision-making stage, which we can consider the decision-thinking stage, is where the outcome of the data-driven process comes together with the human interpretation & opinion part. This final stage also includes our business context (e.g., corporate strategy & policies, market, financials, competition, etc.) as well as our colleagues’ and managers’ opinions and approvals.

41% of the respondents believe that biased data is a concern for their corporate decision making. Given how much public debate there has been around biased data and its impact on public as well as private policy, it is good to see that a large share of the respondents recognize the concern of biased data in a data-driven decision-making process. If we attribute the “I don’t know” response to uncertainty, and this leads to questioning of bias in data used for corporate decision making, then all is even better. This all said, I do find 31% having no concerns about biased data a relatively high number. It is somewhat concerning, particularly for decision makers involved in critical social policy or business decision making.

More women (19%) than men (9%) chose the “I don’t know” response to the above question. It may explain why fewer women have chosen ‘Yes’ on “biased data is a concern for decision making”, giving maybe the more honest answer of “I don’t know”. This is obviously speculation and might actually deserve a follow-up.

As discussed above, not only should the possibility of biased data be a concern to our data-driven decision making; the tools we are using for data selection and post-processing may also be sources that introduce biases. Either directly, introduced by the algorithms used for selection and/or post-processing, or indirectly via the human choices made and the assumptions introduced into the selected models or analytic frameworks used (e.g., parametrization, algorithmic recipe, etc.).

On the question “Is biased tools a concern for your corporate decision making?” the answers are almost too nicely distributed across the 3 possibilities (“Yes”, “No” and “I don’t know”). This might indicate that respondents actually do not have a real preference or opinion, though more should have ended up at “I don’t know” if that were really the case. It is a more difficult technical question and may require more thinking (or expert knowledge) to answer. It is also a topic that has been less prominently discussed in media and articles. The danger with tooling is of course that tools are used as black boxes for extracting insights without the decision maker appreciating their possible limitations.

There seems to be a slight gender difference in the responses. However, the differences, both within this question and relative to the previous question around “biased data”, are statistically inconclusive.

After considering the possibility of biased data and biased tooling, it is time for some self-reflection on how biased we think we are ourselves, and to compare that with our opinion about our colleagues’ bias in decision making.

Almost 70% of the respondents in this survey are aware that they are biased in their decision making. The remainder either see themselves as being unbiased in their decision making (19%, maybe a bias in itself … a blind spot?) or believe that bias does not matter (11%) in their decision making.

Looking at our colleagues, we do attribute a higher degree of bias to their decisions than to our own. 80% of the respondents think that their colleagues are biased in their decision making. 24% believe that their colleagues are frequently biased in their decisions, as opposed to 15% of the respondents for their own decisions. Not surprisingly, we are also less inclined to believe that our colleagues are unbiased in their decisions compared to ourselves.

While there are no apparent gender differences in how the two bias questions’ answers are distributed, there is a difference in how we perceive bias for ourselves and for our colleagues. We tend to see ourselves as less biased than our colleagues, as observed with more respondents believing that “I am not biased at all in my decisions” compared to their colleagues (19% vs 12%) and perceiving their colleagues as frequently being biased in their decisions compared to themselves (24% vs 15%). While causation is super difficult to establish in surveys such as this one, I do dare speculate that one of the reasons we don’t consult our colleagues on a large share of corporate decisions may be the somewhat self-inflated image of ourselves as being better at making decisions and less biased than our colleagues.

Thoughts at the end

We may more and more have the technical and scientific foundation for supporting real data-driven decision making. It is clear that more and more data is becoming available to decision makers. As data stores, or databases, grow geometrically in size, and possibly in complexity as well, the human decision maker is either forced to ignore most of the available data or to allow insights for the decision-making process to be increasingly provided by algorithms. What is important in the data-driven decision process is that we are aware that it does not give us a guarantee that decisions made are better than decisions that are more human-driven. There are many insertion points in a data-driven decision-making process where bias can be introduced, with or without a human being directly responsible.

And for many of our decisions, the amount of data available for our most important corporate decisions is either small data, rare data or not available at all. More than 60% of the respondents characterize the data quality they work with in their decision-making process as being Good (i.e., defined as uncertain, directionally ok with some bias and of limited availability), Poor or Very Poor. About 45% of the respondents state that data is available for 50% or less of their corporate decisions. Moreover, when data is available, a bit more than 40% of the corporate decision makers are using it in 50% or less of their corporate decisions.

Compared to the survey 4 years ago, this time around questions on the respondents’ perception of bias in the decision-making process were introduced. About 40% were concerned about biased data influencing their data-driven decisions. Ca. 30% had no concerns towards biased data. Asked about biased tooling, only about 35% stated that they were concerned for their corporate decisions.

Of course, bias is not only limited to data and tooling, but extends to ourselves and our colleagues. When asked for a self-assessment of how biased the respondents believe themselves to be in corporate decision-making, a bit more than 30% either did not believe themselves to be biased or felt that bias does not matter for their decisions. Ca. 15% stated that they were frequently biased in their decision making. Of course, we are often not the only decision makers around; our colleagues are decision makers as well. 24% of the respondents believed that their colleagues were frequently biased in their decisions. Moreover, 21% (vs 30% in the self-assessment) believe that their colleagues are either not biased at all or that bias does not matter for their decisions. Maybe not too surprising, when respondents very rarely would self-assess as being worse decision makers than their peers.


I greatly acknowledge my wife Eva Varadi for her support, patience and understanding during the creative process of writing this Blog. Also, many of my Deutsche Telekom AG, T-Mobile NL and industry colleagues have in countless ways contributed to my thinking and ideas leading to this little Blog. Thank you!


Kim Kyllesbech Larsen, “On the acceptance of artificial intelligence in corporate decision making – A survey”, (November 2017). Very similar survey to the one presented here.

Kim Kyllesbech Larsen, “Trust thou AI?”, (December 2018).

Nassim Taleb, “Fooled by randomness: the hidden role of chance in life and in the markets”, Penguin books, (2007). It is a classic. Although, to be honest, my first read of the book left me with a less than positive impression of the author (irritating arrogant p****). In subsequent reads, I have been a lot more appreciative of Nassim’s ideas and thoughts on the subject of being fooled by randomness.

Josh Sullivan & Angela Zutavern, “The Mathematical Corporation”, PublicAffairs (2017). I still haven’t made up my mind whether this book describes an Orwellian corporate dystopia or paradise. I am unconvinced that having more scientists and mathematicians in a business (assuming you can find them and convince them to join your business) would necessarily be of great value. But then again, I do believe very much in diversity.

Ben Cooper, “Poxy models and rash decisions”, PNAS vol. 103, no. 33 (August 2006).

Michael Palmer. “Data is the new oil”, (November 2006). I think anyone who uses “Data is the new oil” should at least read Michael’s blog and understand what he is really saying.

Michael Kershner, “Data Isn’t the new oil – Time is”, (July 2021).

J.E. Korteling, A.-M. Brouwer and A. Toet, “A neural network framework for cognitive bias”, Front. Psychol., (September 2018).

Chris Anderson, “The end of theory: the data deluge makes the scientific method obsolete”, (June 2008). To be honest, when I read this article the first time, I was just shocked by the alleged naivety. Although, I later came to understand that Chris Anderson meant his article as a provocation. “Satire” is often lost in translation and in the written word. Nevertheless, the “Correlation is enough” or “Causality is dead” philosophy remains strong.

Christian S. Calude & Giuseppe Longo, “The deluge of spurious correlations in big data”, Foundations of Science, Vol. 22, no. 3 (2017). More down my alley of thinking and while the math may be somewhat “long-haired”, it is easy to simulate in R (or Python) and convince yourself that Chris Anderson’s ideas should not be taken at face value.

“Beware spurious correlations”, Harvard Business Review (June 2015). See also Tyler Vigen’s book “Spurious correlations – correlation does not equal causation”, Hachette Books (2015). Just for fun, and it provides ample material for cocktail parties.

David Ritter, “When to act on a correlation, and when not to”, Harvard Business Review (March 2014). A good business view on when correlations may be useful and when not.

Christopher S. Penn, “Can causation exist without correlation? Yes!”, (August 2018).

Therese Huston, “How women decide”, Mariner Books (2017). See also Kathy Caprino’s “How Decision-Making is different between men and women and why it matters in business”, (2016), based on an interview with Therese Huston. There is a lot of interesting scientific research indicating that there are gender differences in how men and women make decisions when exposed to considerable risk or stress; overall, there is no evidence that one gender is superior to the other. Though, I do know who I prefer managing my investment portfolio (and it’s not a man).

Lee Roy Beach and Terry Connolly, “The psychology of decision making“, Sage publications, (2005).

Young-Hoon Kim, Heewon Kwon and Chi-Yue Chiu, “The better-than-average effect is observed because “Average” is often construed as below-median ability”, Front. Psychol. (June 2017).

Aaron Robertson, “Fundamentals of Ramsey Theory”, CRC Press (2021). Ramsey theory accounts for the emergence of spurious (random) patterns and correlations in sufficiently large structures, e.g., big data stores or databases: spurious patterns and correlations that appear significant and meaningful without actually being so. It is easy to simulate that this is the case. The math is a bit more involved, although quite intuitive. If you are not interested in the foundational stuff, simply read Calude & Longo’s article (referenced above).

It is hard to find easy-to-read (i.e., non-technical) textbooks on Markov chains and Markov Decision Processes (MDPs). They tend to cater to people with a solid mathematical or computer science background. I do recommend the following YouTube videos; on Markov chains in particular I recommend Normalized Nerd’s lectures (super well done and easy to grasp, respect!). I recommend having a Python notebook on the side and building up the lectures there. On Markov Decision Processes, I found the Stanford CS221 YouTube lecture by Dorsa Sadigh reasonably passable, though you would need a good grasp of Markov chains in general. Again, running code in parallel with the lectures is recommendable to get a hands-on feel for the topic as well. After those efforts, you should get going on reinforcement learning (RL) applications, as those can almost all be formulated as MDPs.

Trust thou AI?

“The way to make machines trustworthy is to trust them” to paraphrase Ernest Hemingway (Selected Letters 1917–1961).


What are the essential prerequisites, for us consumers and professionals alike, to trust an Artificial Intelligence (AI) based product or service?

If you have followed the AI topic a bit or maybe even a lot, if you have been lucky (or not) talking to consultants about AI design, you may get the impression that if we can design a transparent, explainable, auditable AI, all is well with AI Ethics and AI Fairness until kingdom come or an AGI (an Artificial General Intelligence, that is) descends from the Clouds. We are led to believe that people, mass consumers, the not-in-the-know not-subject-matter-experts, will trust any AI-based product or service that we can “throw” at them as long as it is transparent, explainable and auditable. According to the European General Data Protection Regulation (GDPR), we have a “Right to an Explanation” of an action taken by an automated or autonomous system (see also “Article 22 – Automated individual decision-making, including profiling”). However, it should also be pointed out that the GDPR is very vague (to put it mildly) about the structure and content of such an explanation. As has also been pointed out by Wachter, Mittelstadt & Floridi (2017), GDPR does in fact not oblige autonomous decision-making systems to provide an explanation for their derived decisions; at most it offers information.

While the GDPR, as it relates to AI-driven decision-making processes, may make the European Commission feel good, consultants a lot richer in monetary terms and researchers richer in academic ones, it really doesn’t do much to enhance trust between a consumer and The Thing. That is obviously not the intention of the regulation, but it is the subject of this essay.

In much of the current debate around trust in AI, transparency and explainability are frequently evoked. The two concepts are, however, awfully similar in how they are described, although often well crafted to appear more different than they may be in a given context. The current dogma is that if the AI is transparent (more precisely, the process that leads to an AI agent’s actions), it is also explainable, and thus may also be more trustworthy. Basically, transparent is here used synonymously with explainable. Given that we are in the realm of computer science, it is good to remember that the term transparency is often used to mean that a given property of a system is hidden (by design) from the user or other main computing processes. Interestingly enough, this is definitely not what is meant by transparency of an AI process and action. To strengthen the trust bond between humans (as well as institutions) and AI, we also require auditability of a given AI-based process and action. That is, we must be able to trace back from an AI action through the internal AI computations & processes and verify how that particular action came about.

I will not say it is BS to consider transparency, explainability and auditability in your AI design. Of course it is not! … But maybe it is a bit … to believe that this is sufficient to make consumers (or the public in general) trust an AI-based application (i.e., service, product, …). These are nice words, with fairly unclear meaning, that are (very) important for regulators and public institutions to trust corporations’ AI developments. Maybe not so much for the general public’s or consumers’ trust in the AI that corporations expose them to. As I will explain in this essay, they can only be a small part of the essentials for creating a trust bond between humans and AI.

Trust between humans, at least within what we perceive as our social group (i.e., “usness”), is a trait with evolutionary roots that has allowed us to foster collaboration within larger social groups (with some ugly limitations of “usness” and “themness”). The ability to trust may even have made it possible for us humans to crawl to the top of the “food chain” and keep that pole position for quite a while.

What about our trust in machines and non-human (non-sentient) things in general? Trust between humans and non-human agents? We are increasingly exposed to much higher degrees of system automation as well as Artificial Intelligence (AI) based applications. Machine automation and autonomy are taking over many tasks from us at home, at work and anywhere in between. This development comes with the promise of much higher productivity at work and far more convenience at home and anywhere else for that matter.


If you work professionally with a complex system (e.g., an airplane, a train, an energy, nuclear or chemical plant, a telecommunications network, a data center, an energy distribution network, etc.), the likelihood is fairly high that you are already exposed to a very high degree of machine and system automation. You may even be increasingly exposed to system autonomy fueled by AI-based solutions (e.g., classical machine learning models, deep learning algorithms, recurrent neural networks, reinforcement-learning or rule-based control functions, etc.). As a professional or expert operator of automation, you embrace such systems if you have deemed them trustworthy. That typically means: (a) the automation solution performs consistently, (b) it is robust to many different situations that may occur, including some that occur very rarely, and (c) it has a very high degree of reliability (e.g., higher than 70%). Further, it is important for your trust that you believe you understand the automation principles. All of this (and more) helps strengthen the trust bond between you and the automation. If there is a lack of trust, or a break in trust, between the human operator and the automation, it will lead to wasted investments, inefficiencies and disappointing productivity growth. It may also lead to accidents and potential disasters (Sheridan & Parasuraman, 2005). If human operators lack trust in a system automation or autonomous application, you are better off relying on manual workarounds.

Clearly, it is no longer only certain types of jobs and workers that are exposed to automation and AI-based autonomy. All of us … irrespective of background … will increasingly experience AI-based applications that may initiate actions without human intervention or without first “asking” for human permission. The trust bond between a human and an autonomous application is essential for that application to become successful and do what it was designed to do. By successful I primarily mean increased and sustainable utilization. Thus we need to better understand the dynamics of trust between humans and non-human intelligent entities. What can we learn and expect from human-human trust bonds, and what is different in human-non-human trust bonds? We are already being exposed to highly specialized artificial intelligent agents, in complex system designs as well as in simpler commercial products, applications and services in general.

While businesses deploying algorithmic automation and autonomy for their products and services can learn a lot from past research, they will have to expand on this work to also include their customers who are not subject matter experts or skilled automation operators. A you-and-me focus is required. The question I ask in this essay is how we in general feel about trusting an artificial intelligent entity (i.e., an agent) that eventually may out-compete most of us in the work environment, or at least disrupt it very substantially. An AI entity that can replicate and evolve much faster in comparison with humanity’s incredibly slow evolutionary progress.


The feeling of trust arises in your brain. It is a result of changes in your brain chemistry. Your feeling of trust is an interpretation of your emotional states triggered by physiological changes (Barrett, 2017). The physiology of trust also connects to your gut and other parts of your body via the central nervous system. The resulting physiological reaction, e.g., a change in heart rate, goose bumps, that weird feeling in your stomach, a sense of well-being, a sense of unease or dread, etc., makes you either trust or want to run away. The brain chemistry will either suppress your fear or enhance your sense of unease. The more novel a trust situation is, the more unease or fear (i.e., emotion) you will feel about making the leap of faith required to initiate the trust-bonding process.

However, the more prior knowledge we have of a given trust situation, including from other parties that we already trust, the easier it becomes for us to engage in trust. This process is eloquently described by Robert Sapolsky in his seminal work “Behave: The Biology of Humans at Our Best and Worst” (Sapolsky, 2017) and in the original research by Paul Zak on the trust-enhancing effect of the brain molecule Oxytocin (Kosfeld, Heinrichs, Zak, Fischbacher & Fehr, 2005; Zak, 2017; Choleris, Pfaff & Kavaliers, 2013). Our little “trust” messenger (Oxytocin) has been attributed with all groovy good things in this universe (at least for vertebrates), backed up with lots of cool trust-game variations (including sniffing the little bugger) and academic research in general. One of Oxytocin’s basic functions, apart from facilitating mother-baby bonding and milk production, is to inhibit our brain’s fear center (i.e., the amygdala), allowing for a higher degree of acceptance of uncertain situations (it is a bit more complex than that, but this suffices for now) and thus making us more susceptible to certain risks. While Oxytocin certainly drives a lot of wonderful behaviors (i.e., maternal/paternal instincts, trust, love, commitment to a partner, etc.), it has a darker side as well. In general, oxytocin reduces aggression by inhibiting our brain’s fear center. However, when we perceive that our young children (or your pups, for the prairie voles reading this blog) are in danger or being threatened, oxytocin works in the opposite direction, enhancing the fear and resulting in an increased level of aggression. See also Sapolsky’s wonderful account of the dark side of oxytocin (“And the Dark Side of These Neuropeptides”, Kindle location 1922) in his book “Behave” (Sapolsky, 2017).


Oxytocin: to be, or maybe not to be, the trust hormone? A 2015 review by Nave et al. (Nave, Camerer & McCullough, 2015) of the relevant literature attributing trust to Oxytocin concludes that current research results do not provide sufficient scientific evidence that trust is indeed associated with Oxytocin, let alone caused by it. In general, it has been challenging to reproduce earlier findings proving (beyond statistical doubt) a causal relationship between Oxytocin and the establishment of trust bonds between humans. Thus, it is up to you, dear reader, whether you trust the vast amount of studies in this area or not. That Oxytocin plays a role in pair-bonding as well as parent-child bonding seems pretty solid (Law, 2010; Sapolsky, 2017). Also, there appears to be a correlation between increased Oxytocin levels (by sniffing the stuff or by more natural means) and an increased readiness to trust (Zak, 2017; Choleris, Pfaff & Kavaliers, 2013). Interestingly (men, do pay attention here!), increased levels of oxytocin, typically found in women with young children still breastfeeding, appear to make women less forgiving when they perceive that their trust has been betrayed (Yao, Zhao, Cheng, Geng, Lou & Kendrick, 2014).

Can a puff and a sniff of Oxytocin make us trust non-human-like agents, e.g., automation software, AI-based applications, autonomous systems (e.g., cars, drones), factory robots, avionic systems (e.g., airplanes, flight control), etc., as we trust other humans? … The answer is no! … or at least it does not appear so. Human-human trust bonding is very particular to being human. Human-non-human trust dynamics may be different and not “fooled” by a sniff of Oxytocin. Having frequent puffs of Oxytocin will not make you love your machine or piece of intelligent software … unless, as it appears, the machine becomes more human-like. And that might also have its limits due to the uncanny valley “sense”, i.e., our amygdala starts ringing its alarm bells ever so softly when the entity we interact with is too human-like and yet a little bit off. Enough to get the uncanny or uneasy feeling going.


It has long been established that we tend to use automation only when we find it trustworthy (see for example the work of Madhavan & Wiegmann, 2007; Visser, Monfort, Goodyear, Lu, O’Hara, Lee, Parasuraman & Kruger, 2017; Balfe & Wilson, 2018). If we do not trust an automation, it will be rejected by the human operator, just like an untrustworthy human will be left alone. When the reliability of an automation is no better than about 70%, it is in general regarded as useless by its human operators and becomes an operational and financial liability (Wickens & Dixon, 2007). It is important to note that much of the human-automation trust research has focused on professional and expert users of complex or advanced automated systems, such as pilots, air traffic controllers, train operators, robotics plant controllers, chemical & nuclear plant operators, brokers, military technology operators (e.g., drones, autonomous vehicles, …), communications network controllers, etc.

So … what matters for establishing a trust bond between human and automation? A large body of research shows that the most important factors for establishing a trust bond between a human and an automation function are: reliability (of the automation), consistency (of the automation), robustness (of the automation), dependability (of the human operator), faith (of the human operator) and understandability (of the human operator). Much of this is fairly similar to what we require from another human being in order to regard them as trustworthy.

Okay, we have a reasonable understanding of trust bonds between humans, and between humans and automation. What about human-AI trust bonds? Given that an AI-based complex system might have a higher degree of autonomy than an automated advanced system, it may very well be that the dynamics of trust and trustworthiness are different. At least compared to what we today believe we understand about human-automation trust.

For sure, it is no longer only experts or professional operators who are exposed to advanced automation and autonomous systems. These systems are no longer limited to people who have been professionally trained or schooled, often over many years, before being let loose on them. Autonomous systems and AI-based applications are increasingly present in everyone’s everyday environment. At home. At work. And anywhere in between. Consumers of all genders, children, pets, octogenarians, Barbie dolls and dinosaurs and so forth … we will eventually have to interface with AI-based applications. Whether we like it or not.

The current trend among consultants (in particular) is to add new trust prerequisites to the above list (if the established ones are considered at all) as human-AI trust essentials: Explainable AI or XAI (i.e., can the actions of an AI be understood by humans), Transparent AI (i.e., loosely, to fully understand why certain actions are performed and others not) and Auditable AI (i.e., an unbiased examination and evaluation of the code and resulting actions of an AI-enabled application). While these trust prerequisites are important for experts and researchers, the question is whether they are (very) important, or even relevant, to the general consumer at large. … If my life insurance application were rejected, would I feel much better knowing that if I lost 40 kg, stopped smoking, was 30 years younger, lived in a different neighborhood (with twice the rental fees) and happened to be white Caucasian, I would get the life insurance, or could afford to pay 3 times the monthly insurance fee? (Obviously an AI-based outcome would be better disguised than this example.)

If you have the feeling that those 3 elements, Explainability, Transparency and Auditability, seem approximately like 1 element … well, you are not alone (but don’t tell that to the “experts”).

So … How do we feel about AI? Not just “yous” who are in the know … the experts and professionals … but you, me, and our loved ones, who will have little (real) say in their exposure to AI, automation & autonomous products and services.


We appear to be very positive about Artificial Intelligence or AI for short. Men in general more positive than women. Men with young children much more positive than any other humans. As can be seen below, it doesn’t seem like Arnold Schwarzenegger has done much to make us have strong negative feelings towards artificial intelligence and what we believe it brings with it. Though one may argue that sentiments towards robots may be a different story.

[Chart: “How do you feel about AI?” – survey responses]

In the above chart, the choices for the question “How do you feel about AI?” have been aggregated into a Negative sentiment: “I hate it”, “It scares me” and “I am uncomfortable with it”; a Neutral sentiment: “I am neutral”; and a Positive sentiment: “I am comfortable with it”, “I am enthusiastic about it” and “I love it”.
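The aggregation described above can be sketched in a few lines of plain Python. The answer-to-bucket mapping is taken from the chart description; the sample counts below are made-up placeholders, not the survey’s actual numbers:

```python
from collections import Counter

# Map each of the 7 survey answers onto the 3 aggregate sentiment
# buckets used in the chart.
SENTIMENT = {
    "I hate it": "Negative",
    "It scares me": "Negative",
    "I am uncomfortable with it": "Negative",
    "I am neutral": "Neutral",
    "I am comfortable with it": "Positive",
    "I am enthusiastic about it": "Positive",
    "I love it": "Positive",
}

def aggregate(answers):
    """Collapse raw answers into Negative / Neutral / Positive shares."""
    buckets = Counter(SENTIMENT[a] for a in answers)
    total = sum(buckets.values())
    return {k: buckets[k] / total for k in ("Negative", "Neutral", "Positive")}

# Illustrative sample of 10 responses (placeholder data only).
sample = ["I love it"] * 3 + ["I am neutral"] * 4 + ["It scares me"] * 3
print(aggregate(sample))
```

The same function applied to the real response lists would reproduce the shares shown in the chart.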

On average most of us are fairly comfortable with AI. Or more accurately we feel comfortable with what we understand AI to be (and that may again depend very much on who and what we are).

One of the observations that has come out of conducting these “How do you feel about AI?” surveys (over the last two years) is that there are gender differences (a divide may be more accurate) in how we perceive AI. This needs to be an important consideration in designing AI-based products that will be meaningfully appealing to both women and men (and anyone in between, for that matter). Given that most AI product developers today are male, it might be good for them to keep in mind that they are not only developing products for themselves. They actually need to consider something that will be appealing to all genders.

The chart below reflects the AI sentiment of women (808) and men (815) from a total of 1,623 respondents across 4 surveys conducted in 2017 and 2018. Most of those results have been individually reported in my past blogs. So … women feel in general significantly less positive towards AI compared to men. Women overall have a slightly more negative sentiment towards AI than positive. Overall, there are more women than men who rank their feelings as neutral. Men with children (younger than 18 years of age) have the most positive feelings towards AI of all respondents. Unfortunately, the surveys that have so far been carried out do not allow for estimating the age of the youngest child or the average age of the children. Women’s sentiment towards AI does not appear (within the statistics) to depend on whether they have children younger than 18 years of age, older children, or no children. Overall, I find that:

Women appear to be far less positive about AI than men.

Men with young children are significantly more positive than men and women in general.

Contrary to men, women’s sentiment towards AI does not appear to depend on their maternal status.

[Chart: AI sentiment by gender]

So why are we so positive … men clearly much more so than women … about AI? This despite AI being likely to have a substantial impact (to an extent it already has) on our society and way of living (e.g., privacy, convenience, security, jobs, social networks, family life, new consumption, policies, etc.). The median age of the respondents was about 38 years; for respondents with children (less than 18 years of age) it was about 33 years. In the next 10 years, most will be less than 50 years old and should still be in employment. In the next 20 years, most will be less than 60 years old and also still very much in active employment. Certainly, the young children of the respondents will over the next 20 years enter the workplace. A workplace that may look very different from today’s due to the aggressive pursuit of intelligent automation and the introduction of autonomous systems.

Is the reason for the positive outlook on AI that the individual (particularly the male kind) simply does not believe the technology to be an existential threat to the individual’s current way of living?

If you think about your child or children, how do you believe AI will impact their future in terms of jobs and income? … While you think about this … I will give you the result of one of the surveys (shown below) that I conducted in September 2018.

[Chart: beliefs about AI’s impact on children’s future opportunities]

In terms of believing that the future will be better than today, women are less positive than men. Across genders, few are of the opinion that the opportunities of their children (whether below 18 or above) will remain the same as today. Women appear to have a more negative outlook for their children than men. There is little difference in men’s beliefs in their child’s or children’s future opportunities irrespective of the age of their children. Women with children under 18 years of age are significantly less optimistic about their children’s future opportunities than women with older children.

From the work by Frey & Osborne (2013) on how jobs are becoming susceptible to what they call computerization, there is plenty of room for concern about individual job and thus income security. According to Frey and Osborne, 47% of total US employment is at risk within a decade or two. A more recent PwC economic analysis estimates that the impact of algorithmic & AI-based automation across all industries will be on the order of 20% by the late 2020s and 30% by the late 2030s (Hawksworth & Berriman, 2018). Job categories served by low- and medium-educated workers will be hit the hardest. Women are likewise expected to be impacted more than men. Irrespective of how you slice and dice the data, many of us will over the next 2 decades have our lives, livelihoods and jobs impacted by the increased usage of intelligent automation and autonomous systems.

In order to study this a bit further, I asked surveyed respondents two questions (structured in an A and a B 50:50 partition); A: “Do you believe your job could be replaced by an AI?” and B: “Thinking of your friends, do you believe their jobs could be replaced by an AI?”.

[Chart: perceived AI impact on your own vs your friends’ job security]

From the above chart, it is clear that when it comes to AI impacting job security, individuals feel much surer about their own job security than about that of their friends or colleagues. Only one fifth of the respondents answering Yes or No to whether they believed their jobs could be replaced by an AI think that an AI actually could replace their jobs. Interestingly, men assessing their own job security are almost twice as sure about that security as women (based on the number of Maybe answers).

From the results of the survey shown above, we assign a much higher likelihood to our friends’ and colleagues’ prospects of losing their jobs to an AI than to that happening to ourselves. Maybe it is easier to see our friends’ and colleagues’ problems & challenges than our own. Both women and men appear more uncertain in assessing their friends’ job security than their own. Although the difference in uncertainty between women and men is less dramatic here, men still appear less uncertain than women in their assessment of their friends’ job security.

There are many consultants, some researchers and corporations working on solutions and frameworks for Transparent AI, Explainable AI and Auditable AI as a path to creating trust between a human and an AI-based agent. Many work exclusively with the AI in focus and are thus very technocentric in approach. Very few have considered the human aspects of trust, such as:

  • The initial trust moment – how to get the consumer to the “leap of faith moment”, where a human engages with a product or service (or another human being, for that matter). This is obviously a crucial and possibly scary moment. The consumer has no prior experience (maybe a peer’s recommendation, which will help), is left to faith, and will be at their most dependent and vulnerable to disappointment. It is clear that peer opinion and recommendation will mitigate much uncertainty and unease.
  • Sustainable trust – how to maintain sustainable trust between a user and a product (or another human being). Here priors will be available, and of course consistent performance will play a big role in maintaining and strengthening the trust bond.
  • Broken trust or untrusting – as the saying goes, “it takes 10 good impressions to neutralize a bad one” (something my grandmother hammered into my head throughout childhood and adolescence … thanks, Gram!). Once trust has been broken between a human and a product or service (or another human being), it is very difficult to repair. The stronger the trust bond was prior to untrusting, the more physiologically and neurologically “violent” the untrusting process, and the subsequent recovery from the feeling of betrayal, will be. As another saying goes, “Heav’n has no rage like love to hatred turn’d, Nor hell a fury, like a woman scorned” (William Congreve, 1697). And “no Oxytocin in this world will make a woman betrayed not want her pound of flesh” (Kim Larsen, 2018).
  • The utility of trust – not all trust bonds are equally important, equally valuable or equally costly; some may even be fairly uncritical (although broken trust by a thousand cuts may matter in the long run). The neurological feeling process of untrusting may even be fairly benign in the sense of how the trustor feels upon the broken trust, though the result may be the same: a customer or loved one walking away from you. It may be easier to recover trust from such benign untrust events. However, it stands to reason that the longer a trust bond exists, the more painful and costly the untrusting process will be, and obviously the more difficult to recover from.

In most cases, if the AI action is as the human agent would expect or has anticipated, many a human might not care about the transparency or explainability of the artificial agent’s action.

Even when an AI-based action satisfies our trust, we should care about auditability, in case, over the longer run, the human trust in an AI-based solution turns out to have been misplaced. When the AI-based outcome of a given action is counter to what the human was expecting or anticipating, an explanation for the outcome may not prevent the trust of the human agent, and the trustworthiness of the AI-based agent, from being broken.

[Figure: the trust circle]


If you know everything absolutely, you would not need to trust anyone to make a decision.

Just be careful about the vast number of cognitive biases that may result in you falsely believing you know it all. Men in particular suffer from the ailment of believing their own knowledge to be absolute (Larsen, 2017).

Someone who knows nothing has only faith as a guide for trust.

On the other hand, someone who knows nothing about a particular problem has no other source for trust than faith that trust is indeed warranted. It’s a scary place to be.

Let’s deconstruct trust.

An agent’s trust (the trustor) is an expectation about a future action of another agent (the trustee). That other agent has been deemed (at least temporarily) trustworthy by the trustor. That other agent (the trustee) may also represent a given group or system.

In his 1985 paper “Trust in close relationships”, John K. Rempel defines the following attributes of human-to-human trust (i.e., where both trustor and trustee are human agents):

  • Predictability or consistency – the trustor’s subjective assessment of the trustee’s trustworthiness. Prior behavior of the trustee is an important factor for the trustor in assessing the posterior expectation that the trusted agent will consistently fulfil the trustor’s expectations of a given action (or inaction). As the trustor gathers prior experience with the trustee, confidence in the trustee increases. Confidence should not be confused with faith, which is a belief in something without prior fact-based knowledge.
  • Dependability – a willingness to place oneself, as trustor, in a position of risk that the trustworthiness of the trustee turns out not to be warranted, with whatever consequences that may bring. Note that dependability can be seen as an outcome of consistency. Put another way, a high degree of consistency/predictability reduces the fear of dependability.
  • Faith – a belief that goes beyond any available evidence required to accept a given context as truth. It is characterized as an act of accepting a context outside the boundaries of what is known (e.g., a leap of faith). We should not confuse faith with confidence, although often when people claim to be confident, what they really mean is that they have faith.

For agent-to-agent first-interaction scenarios (the initial trust moment), without any historical evidence of consistency or predictability, a trustor needs to take a leap of faith on whether the other agent is trustworthy or not. In accepting (i.e., believing) the trustee to be trustworthy, the trustor needs to accept a very large degree of dependence on the other agent, and accept the substantial risk that the trust in the trustee may very well not be warranted. For humans, this scenario often lends itself to maximum stress and anxiety levels in the trusting agent.

After some degree of consistency or historical trustworthiness has been established between the two agents, the trustor can assign a subjective expectation to the future trustworthiness of the other agent. This then leads to a lesser subjective feeling of dependability (or exposure to risk), as well as a reduced reliance on sheer faith that trust is warranted. This is in essence what one may call sustainable trust.
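One simple way to make this concrete is to treat the trustor’s expectation as a Beta-Bernoulli posterior that is updated with each interaction. This is my own illustrative formalization, not a model from the trust literature cited here; the flat Beta(1, 1) prior standing in for the “leap of faith” is likewise an assumption of the sketch:

```python
# Illustrative Beta-Bernoulli model of a trustor's expectation of a
# trustee's trustworthiness. Priors and outcomes are assumptions for
# this sketch, not values from the cited research.

def update_trust(alpha: float, beta: float, kept_promise: bool):
    """One interaction: a kept promise bumps alpha, a betrayal bumps beta."""
    return (alpha + 1, beta) if kept_promise else (alpha, beta + 1)

def expected_trust(alpha: float, beta: float) -> float:
    """Posterior mean probability that the trustee is trustworthy."""
    return alpha / (alpha + beta)

# Initial trust moment: flat prior Beta(1, 1) -> expectation 0.5,
# i.e., a pure leap of faith with maximal uncertainty.
a, b = 1.0, 1.0
for outcome in [True, True, True, True, False]:
    a, b = update_trust(a, b, outcome)

print(round(expected_trust(a, b), 3))  # 5/7 after 4 kept promises, 1 broken
```

The sketch also mirrors the asymmetry discussed later: after a long run of kept promises (large alpha), a single betrayal barely moves the expectation, whereas early betrayals dominate a thin history.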

As long as the trustor is a human, the other agent (i.e., the trustee) can be anything from another human to a machine, a complex system, an automation, an autonomous system, an institution (public or private), a group, and so forth. Much of what is described above would remain the same.

Lots of work has been done on trust bonds in human-automation relationships. How about trust bonds between humans and AI-enabled applications (e.g., services and products in general)? In their 2018 article “The Future of Artificial Intelligence Depends on Trust”, Rao and Cameron (both from PwC) describe 3 steps towards achieving human-AI-system trust:

  • Provability – predictability and consistency.
  • Explainability – justification for an AI-based decision (e.g., counterfactual constructions). Note transparency and explainability may be closely related depending on how one implements explainability.
  • Transparency – the factors influencing algorithm-based decisions should be available (or even visible) to the users impacted by such decisions. E.g., for a rejected health insurance application, (all) factors impacting the negative decision should be available to the applicant.

Rao and Cameron’s suggestions appear reasonably important for trust. However, as previously described, these suggestions pretty much relate to the trustee-agent side of things, ignoring some of the other important human factors (e.g., dependability, faith, assessment of risk, etc.) for trust between a human and another agent (sentient or otherwise).

Further, explainability and transparency may be particularly important when trust is broken (assuming that the trustor cares to “listen”) between the human agent and the AI-based agent (or any other digital or non-sentient agent, for that matter). They may not be terribly relevant for the likely vast majority of users for whom an action is delivered confirming that trust was warranted. If you have trained your AI well, it would be fair to assume that the majority of outcomes are consistently as expected. A positive trust event is likely to lead to a reinforcement of the trust in, and trustworthiness of, the AI agent.

Also, these concepts, while important, don’t do much for the initial step of trusting a non-human agent. How do you design your trustee agent to ease the initial barrier of use and acceptance? When there are no priors, you need the user or trustor to be comfortable with taking a leap of faith, as well as with being maybe maximally dependent.


Trust can be broken. Trustworthiness can decline. Untrusting is the process whereby a previous trust bond is broken and the strength of trust declines.

Heuristic: the stronger the trust bond between two agents, the stronger the untrusting process will be in case of broken trust, making trust recovery all the more difficult.

Have you ever wondered why two people who supposedly have loved each other in the past (possibly for many years) can treat each other as enemies? Betraying a strong trust bond can be a very messy, emotionally and physiologically strenuous process. Some trust bonds, once broken, will never recover (e.g., breakups, friendship betrayals, unfaithfulness, theft, lies, …). Others, depending on the initial utility or value assigned to the bond, may be fairly benign, without much strong emotion associated with the untrusting process (e.g., retail purchases, shopping experiences, low-value promises of little impact if not fulfilled, etc.).

The question is whether the untrusting of a human-machine trust bond is similar to the untrusting of a human-human trust bond. Moreover, is there a difference between an inanimate machine, a simpler human-operated automated system and an AI-based application that humans may even anthropomorphize to various degrees? Are your trust and untrust processes different for Siri or Alexa than they were for Microsoft Clippy, assuming anyone ever really trusted that wicked steely fellow?

How valid is it to apply our knowledge of human-human trust and untrust to human-agent relations where the agent is non-human, or a human simulacrum, in nature?


Would you trust your superior or fellow expert with a critical corporate decision? How often would you trust such decisions made by other fellow human beings?

Even if you don’t have a choice or a final say (well, apart from arguing your case … at least as it happens in most places in Western Europe), it is your own choice whether you trust such a decision or not.

As shown in the magenta columns of the chart below, it turns out that most humans frequently do trust their superiors and fellow human experts with critical decisions relevant to their work. In the survey shown below, there is little difference in human-human trust whether the decision success rate was left unspecified or specified to be 70% (i.e., 7 out of 10 decisions turn out as expected or promised and 3 out of 10 do not). This might mean that most people heuristically expect a corporate decision maker to have a 70% success rate. I found this surprising, as I do not believe human decisions are that good. But I guess we are good at post-rationalization and at being much louder about our successes than our failures (suppressing the bad memories of failure may come in handy here).

Figure: Survey results on trusting humans vs. AI with critical corporate decisions.
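One way to rationalize a ~70% heuristic is a simple expected-value argument. The numbers below (a benefit of 3 for a correct decision, a cost of 7 for a wrong one) are my own illustrative assumptions, not survey data; the point is only that a break-even success rate of 70% corresponds to failures costing a bit over twice what successes gain.

```python
# Back-of-envelope sketch (illustrative assumptions, not survey data):
# if a correct decision yields benefit B and a wrong one costs C, then
# delegating only pays off when  p*B - (1-p)*C > 0,  i.e.  p > C / (B + C).

def break_even_success_rate(benefit, cost):
    """Minimum success rate p at which delegation has positive expected value."""
    return cost / (benefit + cost)

# With failures costing a bit over twice what successes gain,
# the break-even success rate lands at about 70%.
p_star = break_even_success_rate(benefit=3.0, cost=7.0)
print(p_star)
```

If our implicit loss function looks anything like this, a 70% success-rate expectation for decision makers would not be an arbitrary number.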

Okay, we clearly trust our fellow humans with critical decision making (or at least so we say). But do we trust an AI with the same critical corporate decisions?

The answer is … clearly … no way do we trust AIs to make critical corporate decisions (or any other type of decision for that matter … at least any that we are aware of). As can be seen from the chart above, a majority of people would only infrequently trust an AI making critical decisions. Specifying that the AI has a decision success rate better than 70% does reduce the number of people who would only infrequently trust such decisions (i.e., from 62% to 43%). However, it only marginally increases the share of people who would frequently trust an AI-based critical decision, from 13% to 17% (which is barely statistically significant). Remember, we are readily willing to trust a human decision maker frequently. An AI? … not so much! Even in what should be regarded as an apples-to-apples scenario, with the same performance specified for the human trustee as for the AI-based trustee.

Trust bonds between humans appear much stronger than those with an AI. That may not be too surprising. Most of us have very little prior experience with trusting AI-based decisions (at least that we are consciously aware of). So the starting point for AI-based trust (i.e., with the AI being the trustee part of the trust bond) is faith and accepting dependability, rather than having a basis for assessing the consistency or predictability of AI-based decisions. There may also be some very interesting neurological (i.e., brain) reasons why our ability to trust an inanimate agent such as an AI, a robot or a piece of intelligent machinery is different from that towards a human being.

My survey data could be interpreted as us working with a heuristic decision success rate for humans (or at least the manager or expert kind of humans) at or better than 70%. More than half of us would frequently trust a human decision maker at such a performance level.

Not so much with an AI-based (inanimate) decision maker. While specifying that the AI has a success rate of 70% or better in its decision making doesn’t really change the proportion of us who would frequently trust such decisions, it does increase the number of trustors who would concede trust in an AI-based decision about half the time (i.e., given the 70% success rate).

What moves the trust needle? If we impose on our under-appreciated AI-based decision maker a 95% or better success rate, 40% of us would frequently trust such decisions. This is still a lower proportion of trustors than for a human decision maker with a success rate of 70% or better. Moreover, almost 1 in 3 of us would still only infrequently trust such an AI (with its 95% or better success rate). In comparison, only about 1 in 10 would only infrequently trust a human decision maker with a 70% or better success rate.

Figure: Survey results on trusting AI-based decisions at higher specified success rates.

So clearly AI does have trust issues. Certainly with respect to decision making, AI is not regarded as being as trustworthy as a human. The bar for trusting an AI appears to be very high.

However, it seems reasonable that part of the reason for the lower trust level is simply that most people haven’t had a lot of exposure to AI in general, or to AI-based augmentation and actions where trust would be essential.


As described in “On the acceptance of artificial intelligence in corporate decision making” (Larsen, 2017), algorithms, even simple ones, do in general perform better than human beings limited to their own cognitive abilities in terms of predictions (i.e., an essential part of decision making, whether done consciously or subconsciously). This result has been confirmed many times over by the likes of Paul Meehl (Meehl, 1954), Robyn Dawes (Dawes, 1979) and many other researchers over the last 50-60 years. Clearly, machine learning algorithms do not offer an error-free approach to decision making. However, algorithmic approaches do offer predictions and solutions with lower, often superior, error rates. And, not unimportantly … quantifiable error rates, in comparison with what would be the case for decisions based on human cognition.

Humans remain very resistant to adopting more mathematical approaches, despite these being demonstrably less prone to error than human-based decision making without algorithmic augmentation. As Berkeley Dietvorst’s recent paper puts it, people “erroneously avoid algorithms after seeing them err” (Dietvorst, Simmons and Massey, 2014). Dietvorst calls this behavior, or emotion, algorithm aversion. This is very consistent with my own findings of humans setting a very high bar of success rate (or accuracy) for AI-based decisions. Even at a 95% success rate for an AI-based decision, we prefer to trust a human decision maker with a success rate of 70%.

Machine-learning-based decisions and action recommendations (at least of the classical kind) offer better accuracy, transparency, understandability, consistency, robustness and auditability than most human-based decisions and actions.

Despite this, we, as humans, are much less forgiving when it comes to machine errors than human errors. The standards we expect of artificial intelligence are substantially higher than what we would require from a fellow human being or co-worker.


Almost 80% of consumers do not believe that companies using AI have their best interests in mind. This is the outcome of three surveys conducted in March 2018, April 2018 and September 2018.

This has also been a period in which the misuse of consumer information, and data in general, was hotly debated. So the fact that a majority of consumers do not trust corporations to have their best interests in mind is maybe not all that surprising. Consumer trust in corporations is in general at a low point. AI doesn’t help that trust issue.

Figure: Consumer trust in companies using AI.

Companies’ AI-based products and services are thus already at a disadvantage before they hit the marketplace. There is a substantial degree of mistrust among consumers towards corporations and companies. This resonates very well with a recent study of trust by ….

What about trust in public institutions’ capacity for protecting citizens and consumers against adversarial use of AI-based technologies in policies, products and services? Well, public trust is fairly low, as can be seen from the figure below.

Figure: Trust in public institutions with respect to AI.

The vast majority (80%!) of the general public has low, very low or no confidence that political institutions adequately consider the medium- and long-term societal impact of AI proliferation.

There is unfortunately nothing surprising in the above (dis)trust level in institutions. It is largely confirmed by, for example, the 2018 Edelman Trust Barometer, which is pretty bleak in terms of its “Global Trust Index” reflecting the general population’s level of trust in institutions (e.g., government, businesses, media and NGOs).


It is fair to say that for the consumer, as well as for the corporate decision maker, expectations towards the trustworthiness of AI-based products, services and resulting decisions or actions are in general low.

Despite the relatively low trust in AI-based actions, I have also shown that on average we feel fairly comfortable with AI, at least as a concept. Women, as it would appear from my surveys, are in general less comfortable with AI than men, while men with children under 18 years of age (possibly younger children) express the highest degree of positive feelings towards AI.

The gender difference in how AI is perceived, for the individual as well as for children, family members, friends and colleagues, is a relatively unexplored research area as yet. It needs more attention, as most research into human-machine trust bonding has been centered around professional operators of automated or autonomous complex systems (e.g., aviation, automotive, networks, etc.). I feel brave enough to make an educated guess that most of that research has also been focused on male operators and experts, rather than being gender balanced or explicitly gender focused.

In order for us to trust something, like an AI-based action (e.g., a decision, a recommendation, …), we often require an explanation for a given outcome or action. Most of us do like to receive an explanation, in particular for actions and outcomes that we perceive as having negative consequences or that run counter to our beliefs about what the right decision or action should be. Explainable AI, whatever that really means (it will surely be context dependent), is one of the components of establishing trust. Explainability is also important in order to appease law and policy makers, e.g., to comply with the European General Data Protection Regulation (GDPR) requirements, which may (or may not) be interpreted as including a “right to explanation”. AI transparency and AI auditability are additional concepts that are typically mentioned together with explainable AI.

Typically, the implied logic is that transparency leads to explainability, which leads to ease of auditability. The question is whether such requirements are in general meaningful for the consumer of an AI-based product or service. There are two extremes: 1. a highly simplified system that can also be explained very simply, or 2. a highly complex AI-based system that nevertheless is sufficiently transparent to be explained and audited, but whose explanation is of such complexity that, albeit transparent, it would only be understood by an expert or the designer of that system. In the one case the explanation for a given action is so simple that it is unnecessary; in the other, the explanation is so complex that no layperson would be able to follow it. Certainly, much more work is required here in order to assess to what level, and under which circumstances, an explanation should be provided. It is always understood (although not always mentioned) that the explanation should be understandable by the operator or user. Now that makes for an interesting challenge … right?

As has been pointed out above, making a human trust a non-human agent is not only a matter of explainability, assuming that explanation is even understood. Any trust bond will have a utility or perceived value associated with it. The initiation of a trust bond may be faith based if no prior information is available. This initial phase is often associated with a degree of anxiety, or fear that your trust will not be fulfilled. There may be a high degree of dependability involved in the trust bond (e.g., autonomous driving) that adds to the anxiety. Only after prior experience or information becomes available will the importance of faith, and the anxiety around the assumed dependability, diminish. The strength of the trust bond will increase. However, as trust increases, it will also become increasingly sensitive to disappointment and perceived betrayal (depending also on the utility assigned to the bond). Too little work has been conducted on understanding gender and cultural differences in the human-AI trust process. This is also true in general for any human-non-human trust relationship.

Some recent work indicates that anthropomorphizing (i.e., humanizing) the automation or AI-based agent appears to trigger neurochemical processes important in human-human trust bonds. See the pretty cool experiments on the importance of anthropomorphizing the automation agent by Visser et al. (Visser, Monfort, Goodyear, Lu, O’Hara, Lee, Parasuraman & Krueger, 2017) in their paper “A little anthropomorphism goes a long way: Effects of Oxytocin on Trust, Compliance, and Team Performance with Automated Agents”. The question here is how far we can take humanizing AI. Will there be an uncanny valley effect at some point? Moreover, not all AI functions should be humanized (that would be scary, if even possible). Context clearly matters here. Lots of questions, still many answers outstanding, and thus lots of cool research to be pursued.


Balfe N., Sharples S., and Wilson J.R., (2018). “Understanding is key: An analysis of factors pertaining to trust in a real-world automation system”. Human Factors, 60, pp. 477-495. Due to its recent publication, it provides a good, up-to-date account (as well as bibliography) of the state of the art of human-automation trust research. This work establishes a strong connection between trust in, and the understanding of, automation.

Barret L.F., (2017). “How emotions are made: the secret life of the brain“. Houghton Mifflin Harcourt.

Baumgarten T., Fischbacher U., Feierabend A., Lutz K., and Fehr E., (2009). “The Neural Circuitry of a Broken Promise”. Neuron, 64, pp. 756 – 770.

Bergland, C., (2015). “The Neuroscience of Trust”, August.

Choleris, E., Pfaff, D. and Kavaliers, M., (2013). “Oxytocin, vasopressin, and related peptides in the regulation of behavior”. Cambridge: Cambridge University Press.

Dawes R.M., (1979), “The robust beauty of improper linear models in decision making”. American Psychologist, 34, pp. 571-582.

Denson T.F., O’Dean S.M., Blake K.R., and Beames J.R., (2018). “Aggression in women: behavior, brain and hormones“. Frontiers in Behavioral Neuroscience, 12, pp. 1-20 (Article-81).

Dietvorst B.J., Simmons J.P. and Massey C., (2014). “Algorithm Aversion: people erroneously avoid algorithms after seeing them err”. Journal of Experimental Psychology: General. A study of widespread algorithm aversion, i.e., human expectations towards machines being substantially higher than those towards fellow humans. This results in an irrational aversion to machine-based recommendations versus human-based recommendations, even though algorithm-based forecasts are on average better, to much better, than their human-based equivalents in apples-to-apples comparisons.

Doshi-Velez F. and Kortz M., (2017). “Accountability of AI under the law: the role of explanation”. Harvard Public Law, 18, pp. 1-15. Focuses on the right to an explanation and what that might mean. Also very relevant to Article 22 of the European GDPR. Do note that whether Article 22, as well as Articles 13-15, really grants a user the right to an explanation is a matter of debate, as pointed out by Wachter et al. (2017).

Fischer K., (2018). “When transparency does not mean explainable”. Workshop on Explainable Robotic Systems (Chicago, March).

Frey C.B. and Osborne M.A., (2013). “The future of employment: how susceptible are jobs to computerization?“. Technology Forecasting and Social Change, 114, pp. 254-280.

Hawksworth J. and Berriman R., (2018). “Will robots really steal our jobs? An international analysis of the potential long term impact of automation“. PwC report.

Hiltbrand T., (2018), “3 Signs of a Good AI Model”, November.

Ito J., (2018). “The limits to explainability“, Wired (January).

Kosfeld M., Heinrichs M., Zak P.J., Fischbacher U., and Fehr E., (2005). “Oxytocin increases trust in humans”. Nature, 435, pp. 673-676.

Kramer R.M., (2009), “Rethinking Trust”. Harvard Business Review, June.

Larsen, K., (2017). “On the acceptance of artificial intelligence in corporate decision making a survey“.

Law S., (2010), “Dads, too, get hormone boost while caring for baby”, October. Oxytocin is not only for breastfeeding women. Research shows that men too have increased levels of oxytocin coinciding with child caring, physical contact and time with their spouse (the mother of their child).

Madhavan P. and Wiegmann D.A., (2007), “Similarities and differences between human-human and human-automation trust: an integrative review”. Theoretical Issues in Ergonomics Science, 8, pp. 277-301. (unfortunately behind paywall).

Meehl, P. E., (1954). “Clinical versus statistical prediction: A theoretical analysis and review of the literature”. University of Minnesota, pp. 1-161. Algorithmic versus human performance up to the 1950s is very well accounted for by Paul Meehl’s research work and his seminal book. It is clear that many of the topics we discuss today are not new.

Mori, M., MacDorman, K. and Kageki, N. (2012). “The Uncanny Valley [From the Field]“. IEEE Robotics & Automation Magazine, 19(2), pp. 98-100.

Nave G., Camerer C., and McCullough M., (2015), “Does Oxytocin Increase Trust in Humans? A Critical Review of Research”. Perspectives on Psychological Science, 10, pp. 772-789. A critical review of research into oxytocin’s key role in social attachment, including the claimed effect of increased trust in individuals with increased (above-normal) levels of oxytocin. Nave et al. conclude that current results do not provide sufficiently robust evidence that trust is associated with, or even caused by, oxytocin.

Rao A. and Cameron E., (2018), “The Future of Artificial Intelligence Depends on Trust”. Strategy+Business, July. Making the case for transparent, explainable and auditable AIs and why those concepts are important for the development of trust between humans and AI.

Rempel J.K., Holmes, J.G. and Zanna M.P., (1985), “Trust in close relationships”. Journal of Personality and Social Psychology, 49, pp. 95–112. (unfortunately behind paywall, however it is imo a super good account for trust in human to human relations).

Sapolsky R.M., (2017). “Behave: The Biology of Humans at Our Best and Worst”. Penguin Press. Robert Sapolsky addresses trust across his epic book from a neurobiological and behavioral perspective. Yes, you should read it!

Sheridan T.B. and Parasuraman R., (2005), “Chapter 2: Human-Automation Interaction”. Reviews of Human Factors and Ergonomics, 1, pp. 89 – 129. This is a really authoritative account for human interaction with automation as we find it in complex large-scale systems (e.g., aircrafts, aircraft control, manufacturing robotics-intensive plants, modern communications networks, modern power plants, chemical industries and infrastructure, modern as well as autonomous vehicles & drones, etc…).

Simpson J.A., (2007), “Psychological Foundations of Trust”. Current Directions in Psychological Science, 16, pp. 264-268.

Visser, E.J.d., Monfort S.S., Goodyear K., Lu L., O’Hara M., Lee M.R., Parasuraman R., and Krueger F., (2017), “A little anthropomorphism goes a long way: Effects of Oxytocin on Trust, Compliance, and Team Performance with Automated Agents”. The Journal of the Human Factors and Ergonomics Society, 59, pp. 116-133.

Wachter S., Mittelstadt B., and Floridi L., (2017). “Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation”. International Data Privacy Law, pp. 1-47. Wachter et al. claim that Article 22 (or any other article for that matter) does not grant users of automated decision-making applications a right to an explanation. If anything, a user may at most have a right to information about the decision process. For me this solved a puzzle, as nowhere in Article 22 is there any mention of an explanation; it reads more like a right to opt out. Articles 13 to 15 (of the GDPR) only offer limited information about the process by which a given decision has been made (e.g., Articles 15 and 14 are maybe the strongest with respect to information provision).

Wachter S., Mittelstadt B., and Russell C., (2018). “Counterfactual explanations without opening the black box: automated decisions and GDPR”. Harvard Journal of Law & Technology, 31, pp. 1-52.

Whitbourne S.K., (2015), “Why betrayal hurts so much (and who seeks revenge)”., April.

Wickens C.D. and Dixon S.R., (2007), “The benefit of imperfect diagnostic automation: a synthesis of the literature”. Theoretical Issues in Ergonomics Science, 8, pp. 201-212. (Unfortunately behind a paywall.) Wickens & Dixon reviewed data from 20 studies, from which they derived a reliability cross-over point of about 70%: below 70% reliability, no automation was regarded as better than automation; only above 70% reliability did automation bring positive cost-benefit returns.

Yao S., Zhao W., Cheng R., Geng Y., Luo L., and Kendrick K.M., (2014). “Oxytocin makes females, but not males, less forgiving following betrayal of trust“. International Journal of Neuropsychopharmacology, 17, pp. 1785-1792.

Zak P.J., (2017), “The Neuroscience of Trust”. Harvard Business Review, January-February.

2018 Edelman Trust Barometer (Global Report).


I rely on many for inspiration, discussions and insights. Any mistakes made are my own. In particular, I would like to thank Liraz Margalit and Minoo Abedi for many useful suggestions and great inspirational discussions around the topic of trust. I also greatly acknowledge my wife, Eva Varadi, for her support, patience and understanding during the creative process of writing this blog.