On the Acceptance of Artificial Intelligence in Corporate Decision Making – A Survey.

SUMMARY OF SURVEY FINDINGS.

A total of 658 corporate decision-makers were surveyed about their confidence in their own decision-making skills and their acceptance of Artificial Intelligence (A.I.), both in general and in augmenting (or replacing) their decision-making. Furthermore, the survey reveals the general perception of the corporate data-driven environment available to decision-makers, e.g., the structure and perceived quality of available data.

A comprehensive overview and analysis of A.I. sentiment, as it relates to corporate decision-making, is provided as a function of Gender, Age, Job-level, Work area, and Education.

Some of the findings of the survey;

  • We believe that we are all better corporate decision-makers than our peers.
  • There is a substantial gender difference in self-confidence (or more likely over-confidence) as it relates to corporate decision-making.
  • The higher a given individual’s corporate position is, the higher the confidence and perceived quality of that individual’s “gut feelings” compared to peers.
  • On average corporate decision-makers are comfortable with A.I.
  • The higher people’s educational level, the more positively A.I. is viewed.
  • Women seem (on average) to be slightly more reserved towards A.I. than men.
  • Significantly more women than men have stronger reservations against A.I..
  • More than 70% of corporate decision-makers believe that A.I. will be important in 50% or more of a company’s decisions.
  • In general corporate decision-makers don’t trust decisions based on A.I. consultation.
  • Owners, Executives, and C-level decision makers are substantially less trusting towards decisions made in consultation with an A.I..
  • More decision-makers would follow A.I. advice than trust decisions based on A.I. consultation.
  • More than 80% of decision makers would abandon an A.I.-based recommendation if disputed by a fellow human.   
  • Corporate Decision makers in general do not fear losing their decision-making influence to A.I.s.

INTRODUCTION.

You don’t need to make an effort to find articles, blogs, social media postings, books, and insights in general on how Artificial Intelligence (hereafter abbreviated A.I.) will provide wonders for all human beings and society, and leapfrog corporate efficiencies and shareholder value for the ones adapting to A.I. (which you would be pretty silly not to do, of course).

Somehow I cannot help but wonder whether there might be a tiny paradox here? Or at the very least a bit of a societal challenge. By the way, a challenge that in my opinion is largely ignored by policymakers and public institutions.

The challenge! How is it possible both to vastly improve people’s lives and, at the same time, aggressively leapfrog corporate value and productivity via intelligence automation?

The question, with many different answers, is how much of work and the workforce A.I. will fundamentally change and/or ultimately replace. What type of work will be impacted by A.I., and how will it shape the development of existing corporations and organizations? This issue is addressed in the 2013 paper “The future of employment: how susceptible are jobs to computerization” by Frey and Osborne. They estimated that 47% of the total number of jobs in the USA (i.e., ca. 75 million out of 160 million) are at high risk of being automated by intelligent algorithms and A.I. over the next decade or two. Given that this analysis was done 4 – 5 years ago, I suspect that these numbers have only grown. Frey and Osborne also clearly point out that many decision-making processes are prone to be significantly augmented by A.I. or taken over outright (e.g., legal, health/diagnostics, fraud detection, etc.). This is contrary to past industrial revolutions, which replaced menial and physical labor with machine labor. While in some parts of the world today (e.g., China) human factory workers are being massively replaced by robots and intelligent automation in general (with reported quantum leaps in productivity), this time around highly specialized and cognitively intensive jobs, requiring college or graduate-level degrees, are also at risk of being replaced by A.I..

It is wise to keep the Friedman doctrine in mind, stating that “The social responsibility of business is to increase its profits” (i.e., Milton Friedman, New York Times Magazine, 1970 … Milton was not a great believer in corporate social responsibility in a millennial sense, I guess;-). In other words, a corporation’s only goal is to increase its profits within the rules of law. Following this doctrine, it might be compelling to pursue aggressive A.I.-driven automation leading to workforce reduction and ultimate replacement (e.g., China manufacturing).

Obviously, today through taxes (in general) and salaries, it is possible to maintain a degree of social responsibility, albeit indirectly via the individuals working for corporations or businesses. In the case of the zero-human-touch corporation resulting from a structural replacement of human labor by A.I.s, that indirect path to social responsibility might disappear, assuming such a corporate strategy really would optimize profit sustainability over time rather than just its internal cost structure. I suspect that one of the bigger challenges to society will be that it is very possible, on a local level, to hugely maximize profit via zero-human-touch corporations, e.g., China manufacturing aggressively pursuing automation. Profit maximization can be maintained as long as goods or services are sold somewhere else with a stable socio-economic fabric (i.e., geographical arbitrage), or there exists a group of people on the local level not impacted by the loss of work and income. Obviously, if your workers are an integral part of your business model, massively laying them off might not be the best idea for profit maximization (i.e., who cares that you have slashed 50% of your cost if nobody can afford to buy your product or service because you put them out of work).

The intelligent machine age will see the remaining part of factory workers being replaced by A.I.. Also, many tasks requiring a high degree of cognitive involvement, as well as higher education, will be augmented and eventually replaced by A.I..

Having a graduate degree might soon no longer be a guarantee for job security.

CORPORATE DECISION MAKING – HUMAN VS A.I. OR HUMAN + A.I.?

From a corporate decision-making perspective there are two main directions to take (and a mixture in between);

  1. A.I. augments the corporate decision-makers in their decisions.
  2. A.I. takes over major parts of the corporate decision-making process.

And so … it begins …

I got really intrigued by a recent article in Harvard Business Review titled “AI May Soon Replace Even the Most Elite Consultants” by Barry Libert and Megan Beck (both consultants/advisors), making the case that A.I. could replace the role of elite consultants as they are supposedly used today. One of my favorite quotes from this thought-provoking article is: “Perhaps sooner than we think, CEOs could be asking, ‘Alexa, what is my product line profitability?’ or ‘Which customers should I target, and how?’ rather than calling on elite consultants”. I really hope that a CEO would not need an elite consultant for such answers … but it might be true that corporations frequently use expensive consultants for what turn out to be silly tasks.

Obviously … the cynic in me says … the CEO could not only save the expensive elite consultant but also considerable internal elite resources, e.g., CMO, Sales Director, Marketing Managers, Pricing Specialists, Financial Controllers, etc. (just to name a few in the corporate food chain). That sounds pretty cool! Imagine the salaries and costs that could be saved here! Wow! … Though, I suspect that he (i.e., the CEO) might still need some (new) elite & likely hilariously expensive A.I. consultants instead (maybe Barry and Megan would be up for that task;-)?

Moreover, there is an inherent assumption in the assertion that most corporate decisions, or at least the important ones taking up the time of CxOs and senior management, come with a high-quality, voluminous amount of data that would naturally lend itself to a data-driven, algorithmically augmented decision process. In my opinion, this is far from reality. Many decisions that corporate decision-makers are bound to make will be based on tiny to small amounts of often uncertain or highly outdated data, lending themselves poorly to the typical arsenal of data-driven decision-making and big-data-based algorithmic approaches. The assumptions backing up such corporate decisions will be based on “gut feelings” (backed up by Excel and nice PowerPoints) and theory of (corporate) mind, and will be largely directional rather than hard science. There is obviously nothing that hinders decision-makers from applying the same approaches we would apply to ideal data-driven analysis and decision-making, as long as the decision-maker understands the limitations, risk, and uncertainty that such an approach brings in the context of a given decision. Particularly when the underlying data is tiny to small and inherently of poor quality (e.g., because of age, uncertainty, apples and bananas, out-of-context, …).
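To make the “tiny to small data” caveat concrete, here is a minimal Python sketch (with purely made-up numbers) of how wide the uncertainty on a decision metric remains when it is estimated from only a handful of observations; a simple bootstrap is one way to at least quantify that uncertainty before leaning on the number.

```python
import numpy as np

rng = np.random.default_rng(42)

# Purely illustrative: 8 historical observations of, say, monthly uplift (%)
# from a past initiative, i.e., the kind of "tiny data" many decisions rest on.
tiny_sample = np.array([2.1, -0.5, 3.4, 1.2, 0.8, 4.0, -1.1, 2.6])

# Bootstrap the mean to quantify how uncertain the point estimate really is.
boot_means = np.array([
    rng.choice(tiny_sample, size=tiny_sample.size, replace=True).mean()
    for _ in range(10_000)
])

low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"point estimate: {tiny_sample.mean():.2f}%")
print(f"95% bootstrap interval: [{low:.2f}%, {high:.2f}%]")
# With only 8 data points the interval is several percentage points wide;
# any "data-driven" recommendation should carry that uncertainty along with it.
```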

Bridgewater Associates LP, the largest hedge-fund company in the world with ca. 1,700 employees and 150 Billion US$ of assets under management, is currently working on automating most of the firm’s management. It would appear that one of the most important roles of the current workforce at Bridgewater is to provide the training ground for a corporate A.I. that can take over the management of that workforce. This vision and strategy is the brainchild of Ray Dalio, the founder, chairman, and co-CIO of Bridgewater Associates.

So What’s the Future? (or WTF? for short, to gently borrow the term from O’Reilly’s wonderful book “WTF?: What’s the Future and Why It’s Up to Us”) … Is this the future?

CEO_and_his_AI_2

WTF!

What about replacing the CEO with an A.I.? … Run everything as a DAO, a decentralized autonomous organization, based on smart contracts and orchestrated by a Chief Artificial Officer (CAO). Sounds even more like science fiction (or a horror movie, depending on taste) … but Jack Ma, the founder of Alibaba, has speculated that within the next 30 years, the Time Magazine cover for the best CEO of the year will likely be an A.I. or CAO.

So maybe the future looks more like this;

empty_office_w_ai.jpg

WTF?

Will Intelligent Algorithms make CEOs irrelevant in the not-too-distant future?

Will the CEO be replaced by the CAO? … WTF! … Well, time will show!

With the adoption of intelligent algorithms and a corporate-wide pursuit of aggressive automation, what will an A.I.-augmented organization look like? Josh Sullivan and Angela Zutavern, in their wonderful book (particularly for a person having a degree in mathematics and physics) “The mathematical corporation: where machine intelligence and human ingenuity achieve the Impossible”, provide a vision of how this next-generation corporation might look … The Mathematical Corporation … it is a place where algorithmically augmented decision-making is intimately integrated into the corporate decision-making process … I am not 100% (maybe not even 20%) convinced that the term “Mathematical Corporation” will find wide jubilance (with the possible exception of STEM folks … maybe) … If I am wrong, I would argue that this alone is already on the way to achieving the impossible.

The world of the Mathematical Corporation is a world where human decision-making is augmented, as opposed to replaced, by mathematics and algorithms applied to huge amounts of available data (that no mortal human could possibly make sense of in the same comprehensive way as a mathematical algorithm) … It is a positive world for Homo sapiens sapiens, or at least for the ones who are able to adapt and become Homo Sapiens Digitalis Intelligere … Sullivan and Zutavern state: “The supercharged human ingenuity you will wield in the real world will stem from the thought-like operation of machines in the digital one.” (emphasis my own) … and then the caveat … “Only leaders who learn to assemble the pieces and tap their potential will realize the benefits of this marriage, however” (emphasis my own). Sooo … the future is bright for the ones who are able (and willing) to become the New Human augmented by Digital Intelligence … For the rest … please read Charles Darwin and pray for universal basic income.

Again (and again) we meet the inherent assumption that most corporate decisions can be fitted into an ideal data-driven algorithmic process, lending themselves “easily” to A.I.. This does not fit the reality of many corporate decisions, including many important and critical ones. Applied machine learning practitioners, with “dirt” on their hands (and up their elbows), know that in practical terms there is nothing easy about getting data prepared for machine learning … it’s hard work with no instant success formula. It is a largely iterative and manual (labor-intensive) process to come to a result that is actually applicable to real-world problems.
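To illustrate the point, below is a minimal, hypothetical sketch (in Python/pandas) of a single cleaning pass; the file name, column names, and rules are my own assumptions, and in practice each step tends to be revisited several times before the data is genuinely model-ready.

```python
import pandas as pd

# Hypothetical raw extract; column names and rules are illustrative assumptions.
raw = pd.read_csv("crm_extract.csv")

cleaned = (
    raw
    .drop_duplicates(subset="customer_id")            # remove duplicate records
    .assign(signup_date=lambda d: pd.to_datetime(d["signup_date"], errors="coerce"))
    .dropna(subset=["signup_date", "monthly_spend"])  # drop rows that cannot be used
    .query("monthly_spend >= 0")                      # remove obvious data errors
)

# Even after this pass, label quality, outdated records, and "apples vs bananas"
# category mixes usually require manual inspection before any model is trained.
print(cleaned.describe(include="all"))
```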

But wait a minute … how likely is it that decision-makers will actually adapt towards a mathematical corporation? Will they actually trust and follow A.I.-based recommendations or just discard such “foolishness”?

Algorithmic aversion may turn ugly and become an A.I. allergy among workforces that stand to be replaced or “upgraded” by augmentation.

The question really is whether applying an algorithmic approach to tiny or small amounts of data still provides a better basis for decisions than leaving the algorithm out completely. In other words, does it beat relying solely on the decision-maker’s own wetware cognitive decision process and inferences, which are often based on a theory of (corporate) mind?

Let us first establish that even relatively simple mathematical forecasting procedures and algorithms provide better decisions and insights than those based purely on human intuition and experience. In other words: algorithmic approaches, even simple ones, will augment a human-based decision (although I will also immediately say that this assumes the algorithmic approach has been correctly implemented and that its inherent uncertainty, error rate, and bias have all been correctly considered … sorry, even here there is no “free lunch”).

There is a whole body of literature on the topic of algorithmic versus human performance and the human adoption of more mathematical approaches to forecasting and decision-making. This work goes back to the 50s and into the 80s, with Paul Meehl’s research work and seminal book “Clinical versus statistical prediction: a theoretical analysis and a review of the evidence”, and through the work of Robyn M. Dawes (see Cool and Relevant Reads below) and the like.

Algorithms, even simple ones, do perform better in terms of predictions (i.e., an essential part of decision-making, whether done consciously or subconsciously) than human beings limited to their own cognitive abilities. This result has been confirmed many times over by the likes of Paul Meehl, Robyn Dawes, and many other researchers over the last 50 – 60 years. Importantly though, machine learning algorithms do not offer an error-free approach to decision-making. However, algorithmic approaches do offer predictions and solutions with lower (and, not unimportantly, quantifiable) error rates than would be the case for purely cognition-based decisions.
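In the spirit of Dawes’ “improper linear models”, the sketch below shows how little machinery such a simple algorithm needs: standardize a few decision cues and add them up with equal (unit) weights. The cues, values, and signs are hypothetical; the point is only that even this crude recipe tends to be a surprisingly strong baseline against unaided intuition.

```python
import numpy as np

# Hypothetical cues for ranking, say, investment proposals: one row per proposal,
# one column per cue the decision-maker already looks at (margin, growth, churn risk).
cues = np.array([
    [0.12, 0.30, 0.05],
    [0.08, 0.45, 0.20],
    [0.20, 0.10, 0.02],
    [0.05, 0.60, 0.15],
])
signs = np.array([+1, +1, -1])  # higher margin/growth is good, higher churn risk is bad

# Dawes-style "improper" model: z-score each cue and sum with unit weights.
z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
scores = (z * signs).sum(axis=1)

print("unit-weighted scores:", scores.round(2))
print("proposals ranked best-to-worst:", np.argsort(-scores))
```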

No wonder Homo sapiens sapiens have grounds to be allergic to intelligent algorithms … Most of us have problems with peers being smarter than us … although this luckily happens extremely rarely, as we will see in the data of the Survey presented below (at least if you ask people’s own opinion). The challenge around algorithmic aversion is addressed by Berkeley Dietvorst et al. in a more recent 2014 paper, “Algorithm Aversion: people erroneously avoid algorithms after seeing them err” (see references in the paper as well). This paper addresses algorithmic aversion in experts and laypeople in detail. People, in general, remain very resistant to adopting more mathematical approaches despite such approaches being demonstrably less prone to error than human decision-making without algorithmic augmentation. This holds true even for simple algorithmic approaches, as explored at great length by, for example, Robyn Dawes and co-workers. As argued in the paper by Dietvorst et al., “we know very little about when and why people exhibit algorithmic aversion” … However, one thing is very clear;

We, as humans, are much less forgiving when it comes to machine errors than human errors.

The standard we expect of artificial intelligence is substantially higher than what we would require from a fellow human being or co-worker.

However, it is also true that minds and cultures often change in synchronicity, and that what was unthinkable some time ago can be the new normal some time later.

And obviously, even the best algorithmic approaches or the smartest A.I. implementations will make errors. Either because we are at the limit of the Bayes optimal error or due to the limitations of the training applied in the algorithmic learning process … Bad Robot! … That obviously is not the point. Humans make mistakes and err as well. We are prone to “tons” of various cognitive biases (as described so well by Kahneman & Tversky back in the 80s) and are pretty lousy at handling too much complexity.

What? … Bad at handling complexity? Well … yes, we are! In general, the human mind appears to have a capacity limit for processing information of around 7 chunks or pieces of information, plus or minus 2, as George Miller describes in his influential 1956 paper “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information”. Since Miller’s work back in 1956, the magic number has remained around 7 (or 4 – 11), albeit we now have a more nuanced view of how informational chunks are grouped together to effectively increase our handling of complex problems. Isn’t this just of academic interest? … Liraz Margalit, Head of Behavioral Research at Clicktale, back in July wrote a wonderful blog, backed up by experimental evidence, on how choices can become overwhelming and why it makes sense for businesses to make it easier for customers to choose (see “Choices can become overwhelming, so make it easier for customers”). I wish telcos and other online retailers would follow Liraz’s advice of simplifying the options presented to the potential online customer. The complexity of presentation or recommendation can be dealt with easily in the background by an intelligent algorithm (i.e., A.I.).

Do intelligent algorithms, or A.I., suffer from similar limitations in complexity handling or from a gazillion cognitive biases? Handling complexity … obviously not … I hope we do agree here … So what about biases introducing errors into the decision process (note: bias here not in the machine learning sense, which implies under-fitting to available data, but in the more expansive sense of the word)? Sure, algorithms can be (and possibly often are, to an extent) “biased” in the sense of a systematic error introduced in training the algorithm, for example by unfair sampling of a population (e.g., leaving out results from women, or singling out groups of a population while ignoring data from the remainder, etc.). Often algorithmic biases are introduced unintentionally, simply by the structure of the data used for training the A.I.. Some recent accounts of A.I. biases are the ones provided by Motherboard, which found that Google’s sentiment analyzer thinks being gay is bad, or that training data had been labeled (by humans) in a way that would teach the A.I. to be sexist and racist.

An example of potential A.I. bias: for corporate decision-making, it would not be too strange that past training data would reflect a dominance of male decisions. It is a scientifically well-established fact that men make decisions more frequently, even when a decision would be counterproductive or irrational (in terms of risk and value). Men are prone to a higher degree of over-confidence in their decision-making, which results in higher losses (or lower gains) over time compared to women. Thus, using training data dominantly representing male corporate decisions might, to a degree, naturally bias the A.I. algorithm towards a similarly male-dominated decision logic. Unless great care is taken in de-biasing the data, which might mean much less data available for training, or in using synthesized data of idealized rational decision logic (i.e., much easier said than done). Furthermore, given that humans are very good at post-rationalizing bad decisions, the danger is that available data labeled by human decision-makers might not be entirely free of bias itself, irrespectively.
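As a minimal illustration of the kind of sanity check implied above, the sketch below measures how skewed a (hypothetical) historical decision log is by gender and derives simple inverse-frequency sample weights; the column names and the reweighting choice are assumptions of mine, and reweighting does nothing about biased labels, which is the harder problem flagged in the text.

```python
import pandas as pd

# Hypothetical log of past corporate decisions used as training data.
decisions = pd.DataFrame({
    "decision_maker_gender": ["M", "M", "M", "M", "F", "M", "F", "M"],
    "outcome_good":          [1,   0,   1,   0,   1,   1,   1,   0],
})

# 1) Expose the skew before training anything.
share = decisions["decision_maker_gender"].value_counts(normalize=True)
print("share of training records by gender:\n", share)

# 2) One simple (and far from sufficient) mitigation: inverse-frequency weights,
#    so a model trained with these sample weights does not just learn the
#    majority group's decision style by default.
decisions["weight"] = 1.0 / decisions["decision_maker_gender"].map(share)
print(decisions)
```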

Human biases are often acquired within a cultural context and through the underlying neurological workings of our brains. So overall there are good reasons why mathematical algorithms outperform, or at the very least match, the human decision-maker or forecaster in most situations. For a wonderful account of the neurobiology of human behavior, do find time to read Robert Sapolsky’s “Behave: the biology of Humans at our best and worst”, which provides a comprehensive (and entertaining) account of some of the fundamental reasons why we humans behave as we do and probably can do very little about it (btw. I recommend the Audible version as well, which is brilliantly read by Michael Goldstrom).

Another reason for a potential A.I. aversion among decision-makers (if being faster, better, and more accurate were not reason enough) is the argument that we don’t understand what is going on inside the applied machine learning algorithms. For the majority of decision-makers, who have not had exposure to reasonably advanced mathematics or simply don’t care much about that discipline, even simpler algorithms might be difficult to understand. And it quickly gets much more complex from there (e.g., the deep learning field). It might not really matter much that some of the world’s top A.I. experts argue that understanding does not matter and that it is okay to use intelligent algorithms in a black-box sense.

The cynic in me (mea culpa) would argue that most decision-makers don’t understand their own brain very well (or might not even be consciously aware of its role in decision-making 😉) and that certainly doesn’t prevent them from making many decisions. In this sense, the brain is a black box. So A.I. performance and its capability of handling large and complex data volumes should be a pretty good reason for not worrying too much about understanding the process of A.I. reasoning.

Why? Because my A.I. says so! (not entirely comfortable with that either I guess).

In summary, why might humans be prone to A.I. allergy or algorithmic aversion, apart from the fact that we don’t like ‘smart-asses’;

  • A.I. is much better at handling large-scale complexity than humans (i.e., the human limit seems to be somewhere between 4 – 11 chunks of information).
  • A.I. is likely to be substantially less biased compared to the plethora of human cognitive and societal biases.
  • A.I. would take the fun part out of decision-making (e.g., risk-taking and the anticipatory reward).
  • A.I. is a threat to our jobs (whether a perceived or real threat does not really matter).
  • Humans do not like (get very uncomfortable with) what they do not understand (at least if they are conscious about it, e.g., our brains are usually not a big issue for us).

doubt

It is clear that with the trend of increasing computer and storage power at ever-lower cost, married with highly affordable, ubiquitous broadband coverage (i.e., fixed and mobile), and twice married with an insane amount of data readily available in the digital domain, algorithmic approaches providing increased convenience and augmentation of everyday civil as well as corporate life become highly attractive.

The development of A.I. performance is likely to increase in a super-linear fashion, following improvements in computer and storage performance. The wet biological brain of Homo sapiens sapiens, not so much (obviously).

It is no longer unthinkable, nor too far out in the future, that blockchain-enabled decentralized autonomous organization technologies (i.e., DAOs) combined with a very high degree of A.I.-driven automation could result in almost zero-human-touch corporations. Matthew Mather has described a possible darker future based on such principles in his super exciting novel “Darknet”, where an A.I.-boosted DAO conspires to become a world-dominating business with presidential aspirations (there might be some upside to that scenario compared to today’s political reality around the world … hmmm).

So where does all this leave us … Homo Sapiens Sapiens?

How will algorithms and complex mathematics change corporate decision-making, which today is done exclusively with the help of a beautiful, complex biological machine … the human brain?

Might there be a corporate advantage of augmenting or maybe eventually replacing the emotional neurobiological homo sapiens sapiens brain, with an A.I.-driven digital brain?

Assuming we will have a choice … will we, as humans, accept being augmented by Digital Rainmen? … Will the CxOs and upper management stop thinking and exclusively make use of the Digital Intelligence, the A.I., available to them in the near- to medium-term future? (note: near and medium could still be far away in some A.I. gurus’ opinions).

Lots of questions! Time to try to get some answers!

To gauge corporate managers’ perception of their own wet brain decision-making capability, their decision-making corporate environment, and their opinion of having their decision-making process augmented by A.I., I designed a 3 – 4 minute survey with SurveyMonkey.com.

survey.jpg

THE SURVEY.

The survey consists of 24 questions and takes on average a little less than 4 minutes to complete. The questions are structured around 4 main themes;

  • General information about you (e.g., gender, age, job level, education level).
  • Your corporate decision-making skills.
  • The quality of data used in your decision-making process.
  • Your acceptance of A.I. as it relates to your corporate decision-making processes.

Over the course of the data collection presented here, I have gathered 658 responses across 3 Collector groups, each an open collection stream drawing data from various sources.

  • Collector Group 1 (CG1): SurveyMonkey Audience Response option. SurveyMonkey in this case gathered responses from 354 respondents in the United States, between 18 and 100+ years of age and with an income above 75 thousand US$. Age balancing was basic and gender balancing was based on the current census. The data was collected between September 3rd and September 6th, 2017. This is a paid service with a cost of approximately 1,040 euros, or roughly 3 euros per respondent. From a statistics perspective, this is the most diverse or least biased (apart from being from the USA) response data used in this work. When I refer to the Reference Group (RG), this is the group I mean.
  • Collector Group 2 (CG2): My own social media connections from LinkedIn, Facebook, and Twitter. This amounts to 113 responses. This sample is heavily gender-skewed towards males (62 males vs 31 females). Furthermore, a majority of responses here have a background in the telecommunications and media industry. Most of this sample consists of respondents with a graduate-level degree (77) or a 4-year college degree (19).
  • Collector Group 3 (CG3): This group consists of 191 responses primarily from the European telecom industry (but does not overlap with CG2). Again this response sample is largely biased towards males (156 responses) with a 4-year college degree or graduate-level degree (128 responses).

The data will be made available on GitHub allowing others to reproduce the conclusions made here as well as provide other insights than addressed in this blog.
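Once the data is up, reproducing the basic filtering described below should look roughly like this sketch; the file name and column labels are placeholders, since the repository layout is not fixed yet.

```python
import pandas as pd

# Placeholder file name; the actual GitHub repository and column names may differ.
responses = pd.read_csv("survey_responses.csv")

# Reproduce the filtering described in the text: drop retirees, and (for the
# A.I.-related questions) respondents who had not heard of A.I. before the survey.
working = responses[responses["job_level"] != "Retired"]
ai_aware = working[working["heard_of_ai"] == "Yes"]

print(len(responses), "raw responses")                            # 658 in the text
print(len(working), "after removing retirees")                    # 608 in the text
print(len(ai_aware), "after also removing 'not heard of A.I.'")   # 569 in the text
```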

For this blog, I will focus on the survey results across the above 3 Collector Groups and will not discuss the individual groups, with the only exception of SurveyMonkey’s own Collector, i.e., the Reference Group. Irrespective, the 3 groups’ responses are statistically largely similar (i.e., at the 95% confidence level) in their response distributions, with very few exceptions.

Out of the Reference Group, 50 respondents identified themselves as Retired. These responses have not been considered in the analysis. In the SurveyMonkey audience response (i.e., the Reference Group), 95 respondents did not match the provided current job-level options and chose the Other category with an alternative specification.

Thus 608 responses to the Survey are left for further analysis.

GENDER DISTRIBUTION

After filtering out retirees, we are left with 608 respondents in the Survey. The Reference Group has a reasonably balanced gender mix of 46% female and 54% male. The other Collector groups CG2 and CG3 are much more skewed towards males (e.g., 27% and 18% female mix respectively). The reason for this bias is the substantially lower representation of women in technology-heavy units of the telecom & media industry which is represented in CG2 and CG3.

However, both in its totality and in the separate Collector Groups, there is sufficient gender-based data to make statistically valid inferences at the 95% confidence level.

Chart_Q1

This Survey’s gender distribution. Note that the nnn, mm% (quantity comma separated percentage share, e.g., 402, 66%) on the bars provides the frequency (i.e., the quantity of a particular type, e.g., 402 Males) and the relative amount in percentage (e.g., 66% Males).

JOB LEVEL DISTRIBUTION

The purpose of this Survey was to try to capture corporate management decision-making. This is very nicely reflected in the job-level distribution of the participating respondents.

At least 335 (or 55%) of the respondents are in Middle Management or higher. In total 80 (or 13%) characterize their current job level as Owner, Executive, or C-Level management.

Chart_Q4

This Survey’s job-level distribution. The nnn, mm% (quantity comma separated percentage share) on the bars provides the frequency (i.e., the quantity of a particular type, e.g., 80) and the relative amount in percentage (e.g., 13%).

The absolute numbers per job-level category above do allow us to statistically analyze the possible differences in corporate decision-making perception, sensitivity towards A.I. in general and A.I.-driven augmentation in particular between the different management categories sampled here.

In this question of job level, women are under-represented (compared to their overall share of respondents, i.e., 34%) in the senior and middle management categories with 25% and 30%. This bias is also present in the Reference Group with women also being under-represented in the “Owner/Executive/C-level” category.

Does a C-level leader perceive A.I.-augmented decision-making differently than a senior manager? and what about those two categories compared to middle management?

EDUCATIONAL DISTRIBUTION

The educational level of the respondents to this survey is very high. More than 70% of the respondents have a 4-year college degree or higher, and 47% have a graduate-level degree. This might be important to consider when we get deeper into opinions on decision-making and A.I. sentiment.

Chart_Q3

This Survey’s highest-level education distribution. The nnn, mm% (quantity comma separated percentage) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

The absolute response numbers for “Primary school” (3) and “Some high school, but no diploma” (17) are not sufficiently high to carry statistical significance in comparative analysis. Those distributions are on an individual level not considered for any conclusions or comparative inferences.

AGE DISTRIBUTION

The average age of this survey’s respondents is approximately 45 years of age. The age distribution between males and females is very similar. It is clear that the sample has a definite age bias. This is reflected across all the Collector Groups including the Reference Group, where the average age is closer to 47 after the Retired Group has been filtered out.

Chart_Q2

This Survey’s age distribution. The nnn, mm% (quantity comma separated percentage) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

Note that the absolute response numbers for age groups “17 or younger” (2) and “18 – 20” (4) are not sufficiently high to carry statistical significance in comparative analysis.

The cynic might question why it is so relevant to understand the opinion of, and sentiment towards, A.I. in a sample with such a relatively high average age.

Over the next 10 years, it is likely that many of those in the group below 55 will either remain in their management functions or have been promoted to senior management or executive/C-level. Even the Mark Zuckerbergs of today do age (i.e., Mark Z will in 10 years’ time be 43 and Yann LeCun 67, and I just had age-selective amnesia …). Thus their decision-making skills would still be largely in use over a period where A.I. is likely to become an increasingly important tool in the corporate decision-making process, augmenting and in many areas replacing the human decision-maker.

Jump

WE ARE ALL BETTER CORPORATE DECISION-MAKERS THAN OUR PEERS.

It is a well-established “fact” that we humans are all less risky and more skillful drivers than our fellow drivers. This was systematically confirmed in the seminal paper by Ola Svenson back in 1981. Well at least we as individuals pretty much all believe so (allegedly) … I certainly do too, so others must be wrong! ;-). In the study by Svenson, 88% of US participants in the research believed themselves to be safer than the median (i.e., frequency distribution midpoint or 50% of quantities falls below and 50% above). Talk about self-confidence or maybe more accurately over-confidence.

So to paraphrase Ola Svenson’s statement into a question relevant to corporate decision-making… Are we as corporate decision-makers better at making less risky and much better decisions than our peers?

Chart_Q7_1

The nnn, mm% (quantity comma separated percentage, e.g., 329, 54%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

And the answer is overwhelming … YES! (even if it of course makes little statistical or reality sense).

We are as corporate decision-makers all (or almost all) better than our peers. At least that is our perception.

Only 3% (THREE PERCENT!) ranked their decision-making skills below average; 54% ranked themselves above average. If you impose a normal distribution (i.e., splitting the remaining ~43% who rated themselves as average evenly around the midpoint), it would even be reasonably fair to state that ca. 75% of respondents assess their corporate decision-making skills to be better than their peers’ (or above the median in a statistical sense).

Chart_Q7_2

It is interesting, although not surprising or novel, that male self-confidence in general is higher than that of female respondents. Of course, self-confidence is a very nice (too nice maybe) word for over-confidence in one’s own ability to make good or better decisions.

Statistically, only for CG1 (i.e., the SurveyMonkey audience response) is the overall response distribution for females significantly lower (at 95% confidence) than that of male respondents. In other words, females are to a lesser degree than males over-confident in their own decision-making skills.
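For readers who want to check this kind of claim once the data is published, a Mann-Whitney U test on the ordinal self-rating scale is one reasonable choice; the 1 – 5 encoding of the answer categories and the example ratings below are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical encoding: 1 = "well below average" ... 5 = "well above average".
male_ratings   = np.array([4, 5, 4, 3, 4, 5, 4, 4, 3, 5])
female_ratings = np.array([3, 4, 3, 4, 3, 4, 3, 3, 4, 3])

# One-sided test: are female self-ratings stochastically lower than male ones?
stat, p_value = mannwhitneyu(female_ratings, male_ratings, alternative="less")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# p < 0.05 corresponds to the "significant at 95% confidence" wording used here.
```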

There are several perspectives on gender differences as they relate to confidence (call it self-confidence or over-confidence). We have the classical work by Maccoby and Jacklin (“The Psychology of Sex Differences” from 1974), whose take is, in my opinion, somewhat pessimistic, or perhaps outdated since the time of their exhaustive research work: “Lack of self-confidence in achievement-related tasks is, indeed, a rather general feminine trait. The problem may lie, at least in part, in the tendency for women to perceive themselves as having less control over their own fates than do men”. Sarah Burnett (a contemporary of Maccoby & Jacklin), in her beautiful account (yeah, I like it better;-) of gender differences and self-confidence (from 1978), “Where Angels fear to tread: An analysis of sex differences in self-confidence”, concludes: “If there is a “real” sex difference in self-confidence, it could well lie in the fact that women are reluctant to forecast success for themselves in the absence of reliable supporting evidence; men, perhaps because of their wider range of experiences, their “machismo,” their penchant for risk, or whatever, seem less hesitant.”.

Finally, I want to refer to an equally interesting account of gender differences as they relate to self-confidence and implied risk. Barber and Odean’s super interesting article “Boys will be boys: Gender, Overconfidence, and common stock investments”, based on stock trading behavior, clearly shows that men are significantly more confident than women in their ability to choose the “winning” stock. The work of Barber and Odean (as well as other works in this field) also shows that men in general incur higher losses than women investors. This has been attributed to the substantially (and statistically significantly) higher degree of male over-confidence compared to that of females.

And oh boy! … if you are very brave, let an adolescent male, or a single male, make your corporate decisions. You might be in for a really interestingly scary ride. Why? Your typical adolescent, between ca. 15 and 25ish years of age, has an underdeveloped frontal cortex & executive system. Simply put, an adolescent does not have enough rational control to put up red flags when engaging in risk-taking. Single or unmated people have in general lower levels of oxytocin and vasopressin (i.e., neuropeptides) than what is found in couples. Both vasopressin and oxytocin are known to lower or moderate risk-taking and to increase pro-social behavior (e.g., particularly true for males).

boy will be boy risk.jpg

Both men and women are subject to substantial over-confidence in their corporate decision-making skills.

Men show a higher degree of over-confidence, compared to women, in their corporate decision-making skills.

Women working in a male-dominated environment (e.g., engineering) are at least as over-confident in their abilities to make corporate decisions as their male peers.

Talk about a gender gap!?

gender.jpg

So which job-level group has the highest opinion about their own corporate decision-making skills? Which group has the overwhelmingly largest degree of over-confidence (or self-confidence if we want to be nice) bias across all job levels? … hmmmm …

Well, no surprise (maybe?) … Owners, Executives, and C-level leaders outshine all other job levels in their decision-making confidence compared to their peers. Interestingly, not only is the average significantly higher for Owner/Executive/C-level respondents, but their variation (“collective doubt”) is also significantly lower than that of any other job-level group.

It is not only that “Boys will be Boys” should worry us … maybe “CEOs will be CEOs” should as well? 😉

Chart_Q7_3

Also a reasonably clear trend (with the exception of Senior Management and Middle Management, which are statistically similar): the lower an individual is in the corporate hierarchy, the less expressed self-confidence that individual appears to have, e.g., Entry-level managers at 60% versus Executives at 89%, a staggering difference of almost 30 percentage points in self-confidence.

The higher in the corporate hierarchy an individual is, the higher is that individual’s degree of confidence in her or his decision-making skills.

So we have established that just like individuals have confidence in their own driving skills compared to peers, the same appears to hold true for corporate decision-making skills. We are all better than our peers. But what about decisions based on that wonderful “gut feeling” or intuition … or as I have often heard it expressed: “I feel it on my water” (no further elaboration will be given).

According to Wiktionary: “gut feeling (plural gut feelings) (idiomatic) an instinct or intuition; an immediate or basic feeling or reaction without a logical rationale.” .

The characteristics of gut feelings, instinct or intuition are;

  • Arrive rapidly without deliberate rational thought.
  • Triggered by past experience and learnings.
  • A sense of confidence and trust in feelings.
  • Difficult to rationalize.
  • “Behind the scene” or a sense of the sub-conscious process.
  • etc..

Ultimately, what is going on with our ‘gut feelings’ is thought to be the result of an intimate interplay between the autonomic nervous system, the (ventromedial) prefrontal cortex, and the amygdala (among other actors in the limbic system). It is believed to be a manifestation of bodily feelings associated with emotions. This could be an increase in heartbeat, an uneasy feeling in the gut, ‘goosebumps’, etc. It is that feeling in the body we get in cases of unease, discovery, or confrontation with the unexpected, and so forth. Thus there is a well-established (although maybe less well-understood) brain–body coupling, or feedback, responsible for those bodily feelings that signal to the brain to be on the lookout in the immediate future. This process has been well described in Antonio Damasio’s 1994 book “Descartes’ Error: Emotion, Reason and the Human Brain” (and in countless scientific publications since then). Another way of looking at gut feelings or gut instinct is Daniel Kahneman’s dual system of fast and slow thinking. The fast system is in many ways a metaphor for that gut feeling or intuition. This is often also called the affect heuristic, which allows us to make decisions or solve problems very rapidly.

The emotional state that gut feelings are associated with can greatly influence the decision-making of individuals. There are many situations where gut instinct or feelings are beneficial for the decision-maker. As has been argued by Antonio Damasio and others, “emotionless decisions are not default good decisions”, and of course there is the too-much-of-a-good-thing caveat: “too many feelings/emotions are also detrimental to good decisions” (e.g., people are terrible decision-makers under stress). There needs to be a proper balance between the mind’s affective processes (i.e., typically residing within the limbic system) and the frontal cortex’s cognitively controlled processes.

Much of the data-driven philosophy, including the ideas around the mathematical corporation, is about decoupling emotions and feelings from the decision-making process. After all, an algorithm doesn’t have that intricate interplay between emotion, feeling, and “rational” reasoning that a human does (e.g., it doesn’t have a limbic system). An A.I. may not be burdened by a sh*t load of cognitive biases in its decision-making process (note: this does not mean an A.I. cannot be biased; it most often will be if the data it has been subjected to is biased … which most data typically is). So that is swell! …?… Maybe not! As Antonio Damasio has shown, lack of emotions can easily paralyze or mess up decision-making (see his “Descartes’ Error: …” or study psychopaths’ decision-making).

So … How prevalent is decision-making based on instinct or gut feelings? (or how willing are respondents to admit that they are using feelings, instinct, or sense of direction in this super duper data-driven world of ours … or at least the aspiration of a data-driven decision-making world).

Chart_Q9_1

The nnn, mm% (quantity comma separated percentage, e.g., 22, 4%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

The above responses show that a bit more than 50% of the business decisions taken rely (to an extent) on gut feelings. I should point out that within the surveyed response data there is no clear statistical evidence of a difference between sub-groups (e.g., male vs female, job level, education).

I refrain from passing judgment on the above result, as I can say that I have, as a scientist, benefitted from such gut feelings or intuitive leaps in the past. I do think it is important to point out that this process remains an integral part of most human decisions, irrespective of how increasingly data-driven (or mathematical) our businesses have become.

Gary Klein (in “Sources of Power: how people make decisions”) estimates that in 80 plus percent of time-pressured situations decision makers rely on intuition or gut feelings rather than deliberate rational choice. Burke & Miller in their 1999 paper “Taking the mystery out of intuitive decision making” surveyed 60 experienced professionals holding significant positions in major organizations across various industries in the US. Burke and Miller’s survey results were that 12% of surveyed professionals answered that they always used intuition in their decision-making, 47% often, 30% sometimes, 7% seldom, and 3% rarely. This is not that different from the reported survey results above on the frequency of the use of gut feelings in corporate decision-making (although the scales might not be completely comparable).

So how do we assess the quality of our “gut feelings” in comparison with our peers?

Chart_Q10_1

The nnn, mm% (quantity comma separated percentage, e.g., 343, 56%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

Maybe not too surprising, as Question 10 closely resembles that of Question 7, respondents in general perceive their gut feelings as being better than their peers.

The perhaps interesting observation here is that the gender difference is not statistically apparent in the responses to Question 10. While there was a clear statistical difference in self-confidence (i.e., Question 7) between women and men, this is not apparent in the self-judgment of the quality of one’s gut feelings in comparison to peers.

Chart_Q10_2

Parroting the decision-making skill confidence question (i.e., Question 7), the survey data on the quality of one’s own “gut feelings” do indicate a dependency on the role in the corporate hierarchy. The higher the corporate position, the higher the perceived quality of “gut feelings” in comparison with peers.

When it comes to self-assessment of an individual’s “gut feelings” quality compared to peers there is no apparent gender difference.

The higher a given individual’s corporate position is, the higher is the confidence in or perceived quality of the individual’s “gut feelings” compared to peers.

Finally, do corporate decision-makers like to make decisions?

Chart_Q6

The nnn, mm% (quantity comma separated percentage, e.g., 222, 37%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

Overwhelmingly, respondents do like or enjoy making (corporate) decisions. I should point out that the question posed here might lead respondents towards the positive end of decision-making sentiment. In retrospect, Question 6 could have been asked in a more neutral fashion (e.g., “How do you feel about making decisions relevant to your company” or similar).

Why is it relevant to understand individuals’ self-confidence in, and sentiment towards, their own decision-making?

First of all, it might reveal an uncomfortable degree of over-confidence in corporate decision-making that more algorithmic approaches could address. It might point towards a substantial degree of bias in the corporate decision-making process that in practice ignores relevant available data. Again, A.I. methodologies might provide for a more balanced decision-making process by neutralizing some of the individual bias that typically weighs heavily on corporate decisions. On a very basic level, it might further provide some realistic expectations for the general adoption of algorithmic approaches to data-driven decision-making. A successful A.I. policy and strategy certainly stands or falls with individual decision-makers’ perception of its value to them as individuals as well as to the corporation they are paid to manage and lead.

woman_decision_making.jpg

CORPORATE DATA-DRIVEN DECISION MAKING.

The newish buzz of the corporation (unless you are with Amazon, Facebook, or Google, in which case it is a pretty old buzz) … data-driven decision-making, algorithmic augmentation of data analysis and the resulting decision-making, and the move towards the so-called mathematical organization are resulting in expressed (or unexpressed) strategies (but often very little or poor policy) that permeate medium and large corporations today (and are pretty much non-existent for small ones).

The impression we “corporate peasants” are often given by (some) A.I. Gurus (and usually affiliated with Management Consulting or from firms light-years ahead of the pack) is that in the near-future algorithmic approaches will be able to substantially augment and in many instances replace decision-making processes and makers. That all should be data-driven and that data-driven decision-making is the holy grail. The A.I. Gurus are often acting as the new Latin speakers of the Age of Enlightenment (for the ones enjoying the satirical plays of the Enlightenment have a look at Ludwig Holberg’s “Erasmus Montanus” written in 1722).

The fact is that many corporate decisions, even important ones, are not or cannot be based on huge amounts of relevant data. Often data is not available, simply not relevant, or outdated. Applying algorithmic or machine-learning approaches might then be highly inefficient and lead to a false sense of comfort that more human-driven decisions may not suffer from (although there is likely a whole host of other biases playing a role irrespective).

Human decision-makers make mistakes (males more than females). The more decisions the more mistakes. Such mistakes can be costly. Even catastrophically so. Often the root cause is that the human decision-maker is over-confident (possibly to the extreme as we have seen above) in his or her ability to make good decisions considering the associated risks and uncertainties.

Generalization from small or tiny data is something the human brain is a master of, even when the brain demonstrably / probabilistically has no basis for such generalizations.

When confronted with large and huge amounts of often complex data, cognitively the human brain simply cannot cope and will dramatically simplify down to a level that it can handle.

ocean_of_data.jpg

Anyway, let’s break the data-driven decision-making down into the available data, the structure of the available data, and of course the perceived quality.

Chart_Q11

The nnn, mm% (quantity comma separated percentage, e.g., 191, 31%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

About 70% of the respondents frequently (i.e., in ca. 70% or more of their decisions) or always use available data in their decision-making process. After all, why would a decision-maker not use available data? … Well, it might depend on the quality of that data! … and to be fair, from the question it is difficult to gauge with what weight available data is included in the decision process vis-à-vis gut feelings and other cognitive biases, e.g., over-confidence in the interpretation of available data.

Most corporate decision-makers consider available data for most of their decisions.

So that’s great when data is available. What about how frequently data is actually available to be considered in the decision-making process?

Chart_Q12

The nnn, mm% (quantity comma separated percentage, e.g., 76, 13%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

So data would be available for about 57% of the decision-makers in at least 70% of their decisions. While 31% of respondents always consider available data in almost all of their decisions (i.e., Question 11), only 13% of respondents have data available for almost all the corporate decisions they need to make.

A little more than half of corporate decision-makers have data available for most of their decisions.

Ca. 30% of the surveyed respondents have data available for half their decisions. However, only 19% of the respondents consider the available data for approximately half of their decisions.

There is a relatively large disconnect between data being available for corporate decision-making and the data actually being used.

This might indicate several issues in the data-driven decision process

  • Available data is perceived as poor quality.
  • Available data is perceived as being too complex to contribute to the decision process.
  • A certain degree of immaturity in how to include data in the decision process.
  • Too high reliance on gut feelings and overconfidence bias ignoring available data.
  • etc.

Let us have a look at the perceived quality of the available data;

Chart_Q13

The nnn, mm% (quantity comma separated percentage, e.g., 316, 52%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

From the above categorization of data quality, one would expect that a little less than 40% of respondents have a possible straightforward path to a data-driven decision of reasonable to high quality. Approximately 60% or more would either not be able to rely on an algorithmic data-driven process or, if pursuing this avenue, would need to be very careful in their interpretation of the available data and the analysis based on it; they should expect a relatively high degree of uncertainty and systemic risk in their decision. In particular, comparative scenarios or alternatives, often considered in corporate decisions, could be rather meaningless if the underlying data is of relatively poor quality. A data-driven or mathematical decision process will not change that.

A majority of corporate decisions rely on data that might not be very well suited for advanced data-driven algorithmic approaches.

GIGO (i.e., garbage in garbage out) is still a very valid dogma even in a data-driven decision-making process augmented by algorithms or other mathematical tools.

Chart_Q14

The nnn, mm% (quantity comma separated percentage, e.g., 248, 41%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

When it comes to important decisions, 50+% of the respondents rely on data that is either large-data (46%) or big-data (6%) driven. The glass-half-full perspective is that this bodes well for at least half of all important corporate decisions: it should be possible to apply advanced algorithms or machine-learning approaches that would augment the human decision-making process. The glass-half-empty perspective is that for the other half of important decisions, we may not have the luck that the mathematical corporate philosophy could offer. The challenge obviously is how relevant mathematical approaches can be to important corporate decisions where small, tiny, or no relevant data is available. Would the application of pre-trained models, trained on larger but unrelated data sets, be of use? Maybe this remains a domain where “wishful” thinking models (e.g., normal business models & business cases), gut feelings, and inflated self-confidence remain the prevalent methods of coming to a decision.

Would it not be great if your competition had no higher-quality data available for their decision-making processes than what is available to your business? At least if you have a level playing field in terms of available data, and the quality of such data is about the same, the rest would be up to the ingenuity of the respective decision-makers, including the quality of the applied algorithmic processes.

Chart_Q16

The nnn, mm% (quantity comma separated percentage, e.g., 448, 74%) on the bars provides the frequency (i.e., the quantity of a particular type) and the relative amount in percentage.

Compared to respondents’ self-assessment of their own decision-making skills and the quality of their gut feelings (compared to peers), they appear more careful in judging the data quality of their corporate competitors. 74% assess that there is little difference between the underlying quality of data available to their competitors and that available to their own decision-making process.

Most corporate decision-makers expect business competitors’ data to be of the same quality as that available to them.

It is easy to lose sight of human opinion when discussing data-driven decision processes and decision-making, particularly as such processes become more automated and enhanced by algorithmic or applied machine-learning approaches. It might become easy to ignore human insights when you have a solid mathematical-statistical framework to process your data and provide recommendations for the best decisions based on the available data, taking the data-driven organization to its possible extreme.

How important are human insights or human opinion augmentations to data-driven insights?

Chart_Q15

The nnn, mm% labels on the bars (e.g., 202, 33%) give the frequency (i.e., the count of a particular answer) and its relative share in percent.

The glass-is-half-full interpretation of the above result of this survey would be that more than 50% of the respondents find it important to consider human opinions beyond what comes out of their data-driven process. In other words, enrich their data-driven analysis with (alternative) human opinions.

The glass-is-half-empty interpretation is that almost 50% of the respondents only augment their decision-making process in less than 50% of their decisions with (alternative) human opinions.

Obviously, when decision-makers believe they are better at making decisions than their peers, there might not be such a great incentive for seeking alternative human opinions to what a decision-maker has already concluded to be the best way forward based on available data (or “gut instinct”).

The question is whether A.I.-augmented decision-making could be a game changer in how corporations make decisions? Will an algorithmic data-driven approach provide the framework for better and more valuable decisions than is the case today, where largely human-driven decision-making, with all its cognitive biases, rules? Will silicon-based decision-making overtake biology-based decision making and will such decisions be better?

How does the human corporate decision-maker perceive artificial intelligence? Is A.I. perceived as a threat? As an opportunity? or a bit of both?

As Tim O’Reilly might say WTF? or my grandmother WTF!

scary_ai.jpg

THOSE A.I. “SUCKERS” ARE NOT GOING TO MESS WITH MY DECISIONS!?

Firstly, the survey revealed, not surprisingly, that most of the respondents had heard about Artificial Intelligence prior to the survey; a little more than 90% of the respondents had heard of A.I..

In the following, respondents who have not heard of A.I. have been filtered out of the analysis. This is in addition to filtering out respondents who gave “Retired” as their job level. In total, 39 respondents (6.4%) had not heard of A.I. at the time of the survey. This leaves a remaining sample of 569 out of the original 658 (i.e., 50 were retired and an additional 39 had not heard of A.I. prior to this survey).
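As a rough illustration, the sketch below shows how this sample filtering could be reproduced in Python/pandas. The file name and the column names (“job_level” for Q4, “heard_of_ai” for Q17) are hypothetical placeholders for however the raw survey export is organized.

```python
# Minimal sketch of the sample filtering described above (hypothetical file and column names).
import pandas as pd

responses = pd.read_csv("survey_responses.csv")           # raw export, 658 respondents

# Drop respondents who reported "Retired" as their job level (Q4) ...
working = responses[responses["job_level"] != "Retired"]

# ... and, for the A.I. questions, those who had not heard of A.I. (Q17).
ai_sample = working[working["heard_of_ai"] == "Yes"]

print(len(responses), len(working), len(ai_sample))       # expected: 658, 608, 569
```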

So … How do we feel about A.I.?

Chart_Q18

The nnn, mm% labels on the bars (e.g., 178, 31%) give the frequency (i.e., the count of a particular answer) and its relative share in percent.

The average sentiment and standard deviation (in parentheses) across all respondents (i.e., those who have heard of A.I.) were 2.65 (0.88). This average score indicates a sentiment between “I am neutral” and “I am very comfortable with it”.

The survey did reveal a statistically significant gender difference in the sentiment towards A.I.. Women’s (i.e., 2.82 (0.85)) sentiment towards A.I. is more neutral than men’s (i.e., 2.57 (0.88)). This is also reflected in proportionally more women indicating more negative sentiments (i.e., “I am uncomfortable with it” or “I hate it”).
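For readers curious how such a significance test might look, here is a minimal sketch under the assumption that the Q18 answers are coded 1 (“I love it”) to 5 (“I hate it”). It reuses the hypothetical ai_sample DataFrame from the filtering sketch above, and the column names (“q18_sentiment”, “gender”) are likewise assumptions. The survey text does not state which test was actually used; a Welch t-test (or a Mann-Whitney U test for ordinal data) is simply one common choice.

```python
# Hedged sketch: compare male vs. female Q18 sentiment scores (1 = "I love it" ... 5 = "I hate it").
from scipy import stats

likert = {"I love it": 1, "I am very comfortable with it": 2, "I am neutral": 3,
          "I am uncomfortable with it": 4, "I hate it": 5}

scores = ai_sample.assign(q18_score=ai_sample["q18_sentiment"].map(likert))
men    = scores.loc[scores["gender"] == "Male",   "q18_score"]
women  = scores.loc[scores["gender"] == "Female", "q18_score"]

print(men.mean(), men.std(), women.mean(), women.std())   # group means and standard deviations

# Welch's t-test (no equal-variance assumption); p < 0.05 would indicate a
# statistically significant gender difference at the 95% confidence level.
t_stat, p_value = stats.ttest_ind(men, women, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```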

If we ignore the neutral category of 46%, which might swing to either side pending future information and experience, 41% of the respondents have a very positive sentiment towards A.I. (i.e., they are either “very comfortable” with or “love” A.I.). Only 13% of the respondents express unease (11%) or outright hate (2%) towards A.I.. Here too a gender difference is observed: 17% of women expressed concern about A.I. compared to 12% of men.

Many more people are positive towards A.I. than negative.

Women seem on average to be slightly more reserved towards A.I. than men. Although significantly more women than men have stronger reservations against A.I..

From a job-level perspective, “Owner/Executive/C-Level” respondents have the 2nd most positive average attitude towards A.I., followed by “Senior Management”. However, the lowest standard deviation is found for “Senior Management”, which might indicate a higher degree of conformity towards A.I. than found in any other job-level category, including the “Owner/Executive/C-Level” category. What is interesting, and at least for me not self-explanatory, is that the “Entry Level” category appears to have the most positive attitude towards A.I., a difference that is statistically significant within a 95% confidence level. This aspect will be further explored in an upcoming analysis.

From an education perspective, respondents with a graduate-level degree are more positive in their attitude towards A.I. than, for example, respondents with a 4-year college degree or respondents with some college education but no degree. These findings are likewise statistically significant within a 95% confidence level. Differences between the other categories are apparent (e.g., the mean score systematically worsens with less education) but are, distribution-wise, not statistically significant (within a 95% confidence level).

The higher the educational level of people the more positive is A.I. viewed.

Furthermore, the higher the educational level the less likely are people to have stronger reservations against A.I..

I wanted to check whether there might be any difference in a respondent’s answer depending on whether the question emphasizes that the A.I. is a “decision-making optimized” A.I. (i.e., the B-variant) or keeps the question general, without the emphasis on the A.I. having been optimized for decision-making (i.e., the A-variant).

Questions 19 to 24 are run as an A/B test. The intention is to check whether there is a difference in a respondent’s answer based on the A-variant (“an A.I.”) or the B-variant (“a decision-making optimized A.I.”). Approximately 50% of respondents got the A-variant and the remainder (i.e., ca. 50%) got the B-variant.

In the following, I will present the responses as a consolidated view of both the A- and B-variants, if there is no statistical difference between the A- and B-distributions within 95% confidence.
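As a sketch of how such an A/B comparison could be carried out (the survey text does not state the exact test used), one option is a chi-square test of homogeneity on the contingency table of answer counts for the two variants. The column names (“variant”, “q19_answer”) in the hypothetical ai_sample DataFrame from the earlier sketch are assumptions.

```python
# Hedged sketch: test whether the A- and B-variant answer distributions differ (Question 19 as example).
import pandas as pd
from scipy.stats import chi2_contingency

table = pd.crosstab(ai_sample["variant"], ai_sample["q19_answer"])   # rows: "A"/"B"; columns: answer options

chi2, p_value, dof, expected = chi2_contingency(table)
if p_value > 0.05:
    print("No significant A/B difference at 95% confidence -> pool the answers")
else:
    print("A and B distributions differ -> report the variants separately")
```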

Imagine an A.I. would be available to your company. It might even be a decision-making optimized A.I., trained to your corporate decision-making history (btw it would be reasonably useless to you if it wasn’t trained on relevant data) as well as publicly available data on decision outcomes. It might even have some fancy econometric and psychometric logic that tests decision space for rationality and cognitive biases of proposed decisions. Such a tool will not only be able to tell you whether your proposed decision is sound but also provide you with recommendations for better decisions and how your competitors might respond.

Thus, instead of being fairly mono-dimensional in the considerations around a given corporate decision, this A.I. would provide a holistic picture (or scenarios) of a given decision’s most likely impact on the future: the value over the short, medium, and long term, competitive responses, etc..

Would that not be great …?

Chart_Q19_AB

The nnn, mm% labels on the bars (e.g., 220, 39%) give the frequency (i.e., the count of a particular answer) and its relative share in percent.

Within 95% confidence, there is no statistical difference between the distribution of A answers and B answers. Thus I have chosen to show both together. At a deeper level, e.g., job level, age, or other responder characteristics, there are also in general no statistical differences between A-variant and B-variant distributions.

Almost 30% of the respondents believe that an A.I. would be unimportant or irrelevant to their company’s decision-making process. About the same percentage (i.e., 32%) believe that it would be very important (30%) or always used (2%). About 40% expect it to be used in approximately half of all decisions. The latter would obviously be a quantum leap in A.I. adaptation compared to today, where that number is very low.

1 in 3 decision-makers do not expect A.I. to become important in their corporate decisions.

Senior Management is more optimistic about the importance of A.I. in the corporate decision-making process than their leadership (i.e., the “Owner / Executive / C-level” category), while Middle Management is statistically less inclined. Again, “Entry Level” respondents are found to be more bullish towards A.I. than higher management.

More than 70% of corporate decision-makers believe that A.I. will be important in 50% or more of a company’s decisions.

Upper Management and Entry Level respondents believe more strongly in A.I. adaptation in the corporate decision-making process.

Okay! But would you trust a decision based on an A.I. consultation? This could, of course, involve a human decision-maker’s decision augmented by an A.I. consultation rather than a more human-driven decision-making process.

Chart_Q20_AB

The nnn, mm% labels on the bars (e.g., 227, 40%) give the frequency (i.e., the count of a particular answer) and its relative share in percent.

Only 3% of the respondents would always trust a decision based on A.I. consultation, 27% would do so frequently, and 40% only about half the time (i.e., they might as well flip a coin).

About 30% of the respondents would infrequently or never trust a decision based on A.I. consultation.

Before pondering on the job-level dependency, note that there is no statistical difference between A and B answer distributions. This also holds true in general on the deeper respondent level.

Does any one particular group have A.I. trust issues? … hmmm

Chart_Q20_AB_job-level

Clearly, the “Owner/Executive/C-level” respondent category, which is a pretty important category in a company’s decision-making process, seems to have the greatest degree of trust issues towards A.I.: 31% of “Owner/Executive/C-level” respondents would never or infrequently trust a decision based on A.I. consultation. For me this is a wow! and, if generally true for corporations, it might signal some barriers towards wide adaptation of A.I.s in companies’ decision-making processes.

However, it is also fair to note that the “Executive” category also has the largest variance in response across the trust scale used here compared to any of the other job-level categories.
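For illustration, the job-level breakdown shown in the chart above could be reproduced with a simple normalized cross-tabulation; as before, the ai_sample DataFrame and the column names (“job_level”, “q20_trust”) are hypothetical.

```python
# Hedged sketch: share of each Q20 trust answer per job level, in percent.
import pandas as pd

trust_by_level = (
    pd.crosstab(ai_sample["job_level"], ai_sample["q20_trust"], normalize="index")
    .mul(100)
    .round(1)
)
print(trust_by_level)   # e.g., the "Never" + "Infrequently" share for "Owner/Executive/C-level"
```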

As is a recurrent theme, Upper Management and Entry-Level managers are overwhelmingly (and significantly) more trusting towards decisions based on A.I. consultation than their colleagues in other management categories.

Owners, Executives, and C-level decision makers are substantially (and significantly) less trusting towards decisions made in consultation with an A.I..

The current verdict seems to be that corporate decision-makers don’t really trust the “suckers” (=A.I.).

So the next one should be easy. How often would you follow a human’s advice that differs from your A.I.’s recommendation? (In retrospect I really should also have asked how often a respondent would follow an A.I.’s advice that differs from the respondent’s own opinion … but alas, that is for an upcoming survey.)

Chart_Q23_AB

The nnn, mm% labels on the bars (e.g., 295, 52%) give the frequency (i.e., the count of a particular answer) and its relative share in percent.

It appears that about 50% of respondents would prefer “to flip a coin” to determine whether to follow the human advice or to go ahead with the A.I. recommendation. Okay … this is of course not what was asked, and maybe also not what is meant. However, if a decision-maker, in half of the cases where an A.I.-recommended decision is disputed by a fellow human, follows either the one or the other … then they might as well flip a coin.

It is maybe good to remind the reader again that algorithmic approaches in general perform better than human-based decisions or, on the downside, at least as well.

Approximately 30% of our decision-makers would follow the human advice rather than continue with the A.I. recommendation. Less than 20% would be relatively bold and go ahead with an A.I. recommendation disputed by a fellow human.

Let’s just ask again … Does any particular job level have trust issues?

Chart_Q23_AB_job-level

20% of the “Owner/Executive/C-level” respondents would only infrequently follow a fellow human’s advice that differs from an A.I. recommendation. Note that in Question 20 (above), 33% of the Executives (i.e., “Owner/Executive/C-level”) would trust a decision based on A.I. consultation, with 28% doing so frequently. This appears consistent, as A.I. recommendations subject to human dispute would result in fewer of them being pursued.

Irrespective, the majority is in the “flip a coin” category, which might mean that they trust neither the A.I. nor the humans … this will be pursued more thoroughly in a follow-up analysis going deeper into the available data and in more refined surveys.

Assume you have an A.I. available to consult and to guide your decisions. It is an integral (or maybe not so integral?) part of your company’s data-driven decision-making process. How often would you follow such an A.I.’s advice?

Chart_Q22_AB

The nnn, mm% labels on the bars (e.g., 262, 46%) give the frequency (i.e., the count of a particular answer) and its relative share in percent.

Remember that 40% of the respondents would trust a decision based on A.I. consultation about half the time (i.e., what the cynic might call a “coin flip strategy”). 27% would trust such a decision frequently and 24% infrequently.

Would you follow advice based on something you doubt? Well, the result of Question 22 could to an extent be interpreted in this way. 31% of respondents would frequently follow the A.I. advice, which is only marginally higher than the 27% that would frequently trust a decision based on A.I. consultation (i.e., Question 20). 46% would follow the A.I. advice about half the time they are in such a situation. Finally, 16% would follow the advice only infrequently, although 24% of the respondents would only infrequently trust a decision based on A.I. consultation. There is a difference between following advice and trusting it, as history also teaches us, I suppose.

More decision-makers would follow A.I. advice than trust decisions based on A.I. consultation.

From a job-level perspective, the responses are reasonably consistent with the previous two questions addressed above;

Chart_Q22_AB_job-level

Consistent in the sense that irrespective of the above specific trust in decisions based on A.I. consultation, respondents would still go ahead and follow advice based on A.I..

Coming to the end of this survey, it is fair to ask the question of whether a company with an A.I. available for its corporate decision-making process would actually need the decision maker.

So … are you needed, you think? … Yes, after the results of the above Questions 19 – 23 it does become a bit rhetorical …

Chart_Q24_AB

The nnn, mm% labels on the bars (e.g., 137, 24%) give the frequency (i.e., the count of a particular answer) and its relative share in percent.

So … 65% of the decision-makers believe that their decision-making skills will remain needed by their companies. 24% expect that their skills would still be needed about half of the time, and 10% expect it to be infrequent or never.

There are no statistically significant differences between job levels in their answers to this question.

There is a strong sense among decision-makers that their decision-making skills will continue to be required by their companies, irrespective of A.I.s being available to their company’s decision-making processes.

worry_free.jpg

ACKNOWLEDGEMENT

I gratefully acknowledge my wife Eva Varadi for her support, patience, and understanding during the creative process of writing this blog. Without her support, I really would not be able to do this, or it would take until long past my expiration date to finish.

COOL AND RELEVANT READS.

  1. Carl Benedikt Frey and Michael A. Osborne, The future of employment: how susceptible are jobs to computerization?, (2013).
  2. Barry Libert & Megan Beck, “AI May Soon Replace Even the Most Elite Consultants”, Harvard Business Review (July 2017). If an elite consultant can be replaced by Alexa (Amazon’s A.I.) or another A.I. bot that is basically a Wikipedia with a voice, then obviously that consultant should be replaced. But maybe, more importantly, so should the CxO wasting money on an elite consultant acting as a biological Wikipedia (imho).
  3. Tim O’Reilly, “WTF?: What’s the Future and Why It’s Up to Us” (HarperCollins, 2017). A must-read by a great & knowledgable storyteller.
  4. Ajay Agrawal, Joshua Gans & Avi Goldfarb, “How AI Will Change the Way We Make Decisions”, Harvard Business Review (July 2017). The devil is in the detail and not all corporate decisions would easily be taken over by A.I. (e.g., decisions that are based on tiny amounts of data). However, it really is a trade-off of how much human error/risk you can tolerate versus an A.I. error (e.g., false positives, false negatives, ..) on various types of decisions.
  5. Josh Sullivan and Angela Zutavern, “The Mathematical Corporation: Where Machine Intelligence and Human Ingenuity Achieve the Impossible.” (PublicAffairs, 2017). With my physics & math background, I somehow cannot help being intrigued by the concept. However, I also believe it is far more visionary than practical to implement. WTF … we will see.
  6. Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey, “Algorithm Aversion: people erroneously avoid algorithms after seeing them err”, Journal of Experimental Psychology: General (2014). A study of the widespread algorithm aversion, i.e., human expectations towards machines are substantially higher than towards fellow humans. This results in an irrational aversion to machine-based recommendations versus human-based recommendations, even though algorithm-based forecasts are on average better to much better than their human-based equivalents in apples-to-apples comparisons.
  7. Robyn M. Dawes, “The robust beauty of improper linear models in decision making”, American Psychologist (July 1979) 571.
  8. Motherboard, “Copyright law makes artificial intelligence bias worse”, October 31 (2017).
  9. Amanda Levendowski, “How copyright law can fix artificial intelligence’s implicit bias problem“, Washington Law Review, forthcoming. Latest review 14 October 2017. The latest draft version can be downloaded from the URL link provided. The draft paper provides an authoritative account of the issues around biases arising from training A.I. on available datasets (in private as well as public domain). Also, some interesting ideas on how copyright might mitigate some of the A.I. bias risks we certainly see in today’s implementations.
  10. Daniel Kahneman, Paul Slovic & Amos Tversky, “Judgment under uncertainty: heuristics and biases” Cambridge University Press (1982). A book that I have read and re-read many times. Keep finding inspiration from every chapter of that book. For me it is in many ways a better one than the later “Thinking fast and slow” from Daniel Kahneman though certainly also a must-read.
  11. Robert Sapolsky’s “Behave: the biology of Humans at our best and worst” , Penguin Random House UK (2017). Robert has been my companion throughout the summer and fall. I have read his book a couple of times and have it in its Audible version as well. It is not only insanely entertaining but also very thought-provoking as it relates to our behavior and why we humans at times are so bad decision-makers.
  12. Tim Swanson, “Great Chain of Numbers”, (2014). Providing an excellent overview of what is already possible to day with smart contracts and blockchain-enabled DAOs (i.e., Distributed Autonomous Organizations) and so forth. Obviously, also shows what the future could look like.
  13. Timothy Short, “Blockchain – The Comprehensive Guide to Mastering the Hidden Economy” (2016). Note: this doesn’t seem to be available in Kindle format any longer. A great starting point for understanding blockchain technologies.
  14. Matthew Mather, “Darknet”, (2014). A dark account for DAOs, Blockchain and A.I. conspiring and going rogue.
  15. Ola Svenson, “Are we all less risky and more skillful than our fellow drivers?”, Acta Psychologica 47 (1981), 143. Seminal paper that systematically proved over-confidence bias.
  16. Sarah A. Burnett, “Where angels fear to tread: An analysis of sex differences in self-confidence”, Rice University Studies, Vol. 64 (Winter 1978) 101.
  17. Brad M. Barber and Terrance Odean, “Boys will be boys: gender, overconfidence, and common stock investment”, Quarterly Journal of Economics (Feb 2001) 261. Cool paper that shows you should rather ask for a female stock advisor than a male. Particularly if he is single.
  18. Gary Klein, “Sources of power: how people make decisions” , MIT Press, (1998).
  19. Lisa A. Burke and Monica K. Miller, “Taking the mystery out of intuitive decision making”, Academy of Management Executive, Vol. 13 (1999) 91.
  20. Don A. Moore and Paul J. Healy, “The trouble with Overconfidence”, Carnegie Mellon University, Research Showcase.
  21. Antonio R. Damasio, “Descartes’ Error: Emotion, Reason and the Human Brain”, Avon Books (1994). This is a very interesting account of human emotions, reason, and decision-making and of how our brain supports and messes the whole thing up. In order to appreciate Damasio’s work it is important to understand the distinction between Emotions (what a 3rd-party observer can see) and Feelings (what an individual senses). I am likely at fault for occasionally mixing up the two concepts.
  22. Barnaby D. Dunn, Tim Dalgleish, and Andrew D. Lawrence, “The somatic marker hypothesis: a critical evaluation”, Neuroscience and Biobehavioral Reviews (2005) 1 – 33. Antonio Damasio’s somatic marker hypothesis, from around 1991, has been (and remains) very influential as an explanation of Brain – Body coupling or feedback, although the idea is not scientifically proven in all its aspects and is often prone to various interpretations. You will find in this paper a comprehensive reference list to the most important literature in this field.
  23. Manuel G. Bedia and Ezequiel Di Paolo, “Unreliable gut feelings can lead to correct decisions: the somatic marker hypothesis in non-linear decision chains”, Frontiers in Psychology (October 2012), Article 384.
  24. Elizabeth A. Phelps, Karolina M. Lempert, and Peter Sokol-Hessner, “Emotion and Decision Making: Multiple Modulatory Neural Circuits”, Annual Review Neuroscience (2014), 263. The more modern neuroscientific perspective of emotion and decision-making compared to the more classical duality between emotion and reasoning.
  25. George A. Miller, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information”, Psychological Review, Vol. 101 (1994) 343. First published back in 1956. This is really a seminal paper and a must-read for people who want to appreciate our cognitive limits in handling complexity.
  26. Alan Baddeley, “The magic number seven: still magic after all these years”, Psychological Review, Vol. 101 (1994) 353. Reflecting on the state of research since Miller’s original paper in 1956.
  27. Nelson Cowan, “The magical number 4 in short-term memory: a reconsideration of mental storage capacity”, Behavioral and Brain Sciences, 24 (2000) 87. A great and systematic overview of storage capacity, categorization, and methods to increase such (e.g., chunking or grouping).

books.jpg

APPENDIX – THE SURVEY QUESTIONNAIRE.

You are welcome to take the survey using the following link;

Perceived quality and acceptance of Human & Artificial Intelligence Augmentation in Corporate Decision Making.

and yes new responses will be collected under a separate Collector Group.

The questionnaire consists of 24 questions, roughly structured as

  1. General information about you.
  2. Your corporate decision making skills.
  3. The quality of data used in your decision-making process.
  4. Your acceptance of A.I. as it relates to your corporate decision-making processes.

The typical time spent on answering the 24 questions is a bit less than 4 minutes.

  • Q1 – What is your gender?
    • Female.
    • Male.
  • Q2 – What is your age?
    • 17 or younger.
    • 18 – 20.
    • 21 – 29.
    • 30 – 39.
    • 40 – 49.
    • 50 – 59.
    • 60 or older.
  • Q3 – What is the highest level of school that you have completed?
    • Primary school.
    • Some high school, but no diploma.
    • High school diploma (or GED).
    • Some college, but no degree.
    • 2-year college degree.
    • 4-year college degree.
    • Graduate-level degree.
    • None of the above.
  • Q4 – Which of the following best describes your current job level?
    • Owner/Executive/C-level.
    • Senior Management.
    • Middle Management.
    • Intermediate.
    • Entry Level.
    • Other (please specify).
  • Q5 – What department do you work in?
    • Accounting.
    • Administrative.
    • Customer Service.
    • Marketing.
    • Operations.
    • Human Resources.
    • Sales.
    • Finance.
    • Legal.
    • IT
    • Engineering.
    • Product.
    • Research & Development.
    • International.
    • Business Intelligence.
    • Manufacturing.
    • Public Relations.
    • Other.
  • Q6 – Do you enjoy making decisions relevant to your company?
    • I hate making decisions.
    • I do not enjoy making decisions.
    • I am okay with making decisions.
    • I enjoy making decisions.
    • I love making decisions.
  • Q7 – How would you characterize your decision making skills in comparison with your peers?
    • Below average.
    • Average.
    • Above average.
  • Q8 – Do you consult with others before making a decision?
    • I rarely consult others (e.g., 3 out of 10 times or lower).
    • Approximately half of my decisions have been consulted with others.
    • I frequently consult others (e.g., 7 out of 10 times or higher).
  • Q9 – Do you rely on “gut feelings” when making corporate decisions?
    • Never.
    • Infrequently.
    • Approximately half of my decisions.
    • Frequently.
    • Always.
  • Q10 – How would you characterize your “gut feelings” compared to your peers?
    • Worse.
    • Average.
    • Better.
  • Q11 – How often is available data considered in your corporate decisions?
    • Data is never considered.
    • Infrequently.
    • For approximately half of my decisions data is considered.
    • Frequently.
    • Data is always considered.
  • Q12 – How often is data available for your corporate decisions?
    • Never or very rarely.
    • Infrequently.
    • For approximately half of my decisions.
    • Frequently.
    • Very frequently or always.
  • Q13 – When data is available, how would you characterize the quality of that data?
    • Very poor (i.e., no basis for decisions).
    • Poor (i.e., uncertain, error prone, biased, very limited data available).
    • Good (i.e., uncertain but can be relied upon, some bias, limited data available).
    • High (i.e., reliable, sizable data available, limited uncertainty).
    • Very high (i.e., meets the most stringent tests of data quality, large amounts of data).
  • Q14 – How would you characterize your most important decisions in terms of the use of available & relevant data?
    • Never data-driven (i.e., no relevant data available).
    • Rare-data driven (i.e., tiny amount of relevant data available).
    • Small-data driven (i.e., little relevant data available).
    • Large-data driven (i.e., large amounts of relevant data available).
    • Big-data driven (i.e., huge amount of relevant data available).
  • Q15 – How important is human opinion compared to data-driven insights in your decision making?
    • It is irrelevant.
    • It is of some importance.
    • About half of my decisions are based on human insights.
    • It is very important.
    • It is exclusively used for my decisions.
  • Q16 – How would you characterize the quality of the data available to you and used in important corporate decisions compared to your competition?
    • Worse.
    • About the same.
    • Better.
  • Q17 – Have you heard of Artificial Intelligence (A.I.)?
    • No.
    • Yes.
  • Q18 – How would you best describe your feelings toward A.I.?
    • I love it.
    • I am very comfortable with it.
    • I am neutral.
    • I am uncomfortable with it.
    • I hate it.

The following questions are broken into an A and a B part. Approximately 50% of respondents will be presented with either the A or the B variant. I am particularly interested in understanding whether respondents change their sentiment towards A.I. depending on whether the question is neutral towards A.I. (the A-path) or specifically mentions that the A.I. is decision-making optimized (the B-path).

  • Q19A (~ 50%) – If an A.I. would be available to your company, how important do you think it would be in your company’s decision making processes?
    • Irrelevant.
    • Not Important.
    • Important in about half of all decisions.
    • Very important.
    • Always used.
  • Q19B (~50%) –  If a decision-making optimized A.I. would be available to your company, how important do you think it would be in your company’s decision making processes?
    • Irrelevant.
    • Not Important.
    • Important in about half of all decisions.
    • Very important.
    • Always used.
  • Q20A (~50%) – Would you trust a decision based on A.I. consultation?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q20B (~50%) – Would you trust a decision based on a decision-making optimized A.I. consultation?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q21A (~50%) –  If an A.I. would be available to you, how frequently do you think this A.I. would be consulted in your decision making process?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q21B (~50%) –  If a decision-making optimized A.I. would be available to you, how frequently do you think this A.I. would be consulted in your decision making process?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q22A (~50%) – If an A.I. would be available to guide your decisions, how often would you follow its advice?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q22B (~50%) – If a decision-making optimized A.I. would be available to guide your decisions, how often would you follow its advice?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q23A (~50%) – If an A.I. would be available to guide your decisions, how often would you follow Human advice different from your A.I.’s recommendation?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q23B (~50%) – If a decision-making optimized A.I. would be available to guide your decisions, how often would you follow Human advice different from your A.I.’s recommendation?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q24A (~50%) – If an A.I. would be available to your company, do you think your company still would need your decision making skills?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.
  • Q24B (~50%) – If a decision-making optimized A.I. would be available to your company, do you think your company still would need your decision making skills?
    • Never.
    • Infrequently.
    • About half the time.
    • Frequently.
    • Always.