Theoretical foundation
Intelligence is the ability to accomplish complex goals, learn, reason, and adaptively perform effective actions within an environment. Gottfredson (1997)
Or in short: the capacity to acquire and apply knowledge.
Human intelligence
Human intelligence “covers the capacity to learn, reason, and adaptively perform effective actions within an environment, based on existing knowledge. This allows humans to adapt to changing environments and act towards achieving their goals.” Dellermann et al. (2019, 632)
Sternberg et al. (1985) propose three distinct dimensions:
- Componential intelligence refers to the ability to break problems apart and to see solutions that others often miss.
- Experiential intelligence refers to the ability to learn and adapt through evolutionary experience.
- Contextual intelligence refers to the capacity to create an ideal fit between oneself and one’s environment through adaptation, shaping, and selection.
Thinking as a group
Collective intelligence
Collective intelligence refers to “[…] groups of individuals acting collectively in ways that seem intelligent.” Malone (2015, 3)
The concept implies that, under certain conditions, a (large) group of average, homogeneous individuals can outperform any individual of the group or even a single expert (Leimeister 2010).
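This “wisdom of crowds” effect can be illustrated with a small simulation (the true value, group size, and noise level below are illustrative assumptions, not figures from the cited studies): averaging many independent, unbiased but noisy estimates typically lands much closer to the truth than a typical individual estimate.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0   # the quantity the group tries to estimate
N = 200              # number of group members (illustrative)
NOISE_STD = 15.0     # spread of individual errors (illustrative)

# Each member gives an independent, unbiased but noisy estimate.
estimates = [random.gauss(TRUE_VALUE, NOISE_STD) for _ in range(N)]

group_estimate = sum(estimates) / N
group_error = abs(group_estimate - TRUE_VALUE)
mean_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / N

print(f"average individual error: {mean_individual_error:.2f}")
print(f"error of group average:   {group_error:.2f}")
```

Note that the effect depends on the errors being independent and roughly unbiased; strongly correlated errors (e.g., shared misinformation) erode the advantage of the group.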
Originally, research studied how groups of people act and think “as a whole”, e.g., through various coordination and decision-making mechanisms.
Today, research increasingly focuses on hybrid collective intelligence, exploring how heterogeneous agents (i.e., humans and machines) can be connected so that they combine their complementary intelligence and act more intelligently (Malone 2015).
Artificial intelligence
The term artificial intelligence is used to describe systems that perform “[…] activities that we associate with human thinking, activities such as decision-making, problem solving, learning […]” Bellman (1978, 3)
The basic idea behind this concept is that systems become capable of analyzing their environment and adapting to new circumstances within it.
AI can be defined as “[…] the art of creating machines that perform functions that require intelligence when performed by people […]” Kurzweil et al. (1990, 580:117)
Or in short: to replicate the human mind.
Hybrid intelligence
The idea is to combine the complementary capabilities of humans and computers to augment each other. Dellermann et al. (2019)
Complementary strengths
Humans are flexible, creative, empathic, and can adapt to various settings. This allows, for instance, human domain experts to deal with so-called “broken-leg” predictions that deviate from the currently known probability distribution. However, they are restricted by bounded rationality, which prevents them from aggregating information perfectly and drawing conclusions from it. Machines, on the other hand, are particularly good at solving repetitive tasks that require fast processing of huge amounts of data, at recognizing complex patterns, and at weighing multiple factors following consistent rules of probability theory (Dellermann et al. 2019).
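The “consistent rules of probability theory” can be made concrete with the odds form of Bayes’ rule: a machine combines independent pieces of evidence exactly, where humans tend to over- or under-weight them. The prior and likelihood ratios below are made-up numbers for illustration only.

```python
def bayes_update(prior: float, likelihood_ratios: list[float]) -> float:
    """Combine independent evidence via odds-form Bayes' rule.

    prior: initial probability of the hypothesis.
    likelihood_ratios: P(evidence | H) / P(evidence | not H), one per item.
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr          # each independent evidence item multiplies the odds
    return odds / (1.0 + odds)

# Illustrative numbers: a 50% prior and two independent pieces of evidence.
posterior = bayes_update(0.5, [3.0, 2.0])
print(round(posterior, 3))  # 0.857 (odds 1 * 3 * 2 = 6, p = 6/7)
```

The machine applies the same multiplication every time; a human judge weighing the same two evidence items would typically show both bias and inconsistency across repeated cases.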
However, there is also a technology-centric perspective that assumes that true intelligence can ultimately only be found in well-developed and mature (general) AI systems. Humans are biologically limited in their information processing and reasoning abilities and exhibit many types of cognitive biases, while computers offer virtually infinite possibilities to develop rational intelligence at human levels and beyond (Peeters et al. 2021).
Kahneman (2011) proposed a two-system model of human cognition, which he called System 1 and System 2.
System 1 is an intuitive, automatic, and fast mode of thinking that operates outside of our conscious awareness. It is responsible for generating impressions, making quick judgments, and executing routine tasks with minimal effort.
System 2, on the other hand, is a more analytical, controlled, and deliberate mode of thinking that requires conscious effort and attention. It is responsible for problem-solving, critical thinking, and decision-making.
While System 1 operates quickly and automatically, it can be prone to biases and errors, particularly in complex or unfamiliar situations. System 2, though slower and more effortful, can help us avoid these biases and make more accurate decisions.
Definition
Hybrid intelligence is defined as the ability to achieve complex goals by combining human and artificial intelligence, thereby reaching superior results to those each of them could have accomplished separately, and continuously improve by learning from each other. Dellermann et al. (2019, 640)
Main characteristics of hybrid intelligence are:
- Collectively means that tasks are performed jointly and that activities are conditionally dependent on one another.
- Superior results means that neither AI nor humans could have achieved the outcome without the other.
- Continuous learning means that all components of the socio-technical system learn from each other through experience.
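A toy sketch of the “superior results” property (the cases, predictions, and confidence values are entirely invented): a human and an AI are each right on different kinds of cases, and a simple confidence-based routing rule outperforms both of them alone.

```python
# Each case: (true_label, human_pred, human_conf, ai_pred, ai_conf)
cases = [
    ("A", "A", 0.90, "B", 0.60),  # contextual case: human is right and confident
    ("B", "B", 0.90, "A", 0.60),
    ("A", "B", 0.50, "A", 0.95),  # data-heavy case: AI is right and confident
    ("B", "A", 0.50, "B", 0.95),
]

def accuracy(preds: list[str]) -> float:
    return sum(p == c[0] for c, p in zip(cases, preds)) / len(cases)

human_preds  = [c[1] for c in cases]
ai_preds     = [c[3] for c in cases]
# Hybrid rule: defer to whichever agent is more confident on this case.
hybrid_preds = [c[1] if c[2] >= c[4] else c[3] for c in cases]

print(accuracy(human_preds), accuracy(ai_preds), accuracy(hybrid_preds))
```

On this toy data, human and AI each reach 50% alone, while the hybrid rule reaches 100%; real complementarity is harder to obtain, since it requires well-calibrated confidence on both sides.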
Visualization
Implications
According to Peeters et al. (2021), the following conclusions can be drawn:
- Intelligence should not be studied at the level of individual humans or AI-machines, but at the group level of humans and AI-machines working together.
- Increasing the intelligence of a system should be achieved by increasing the quality of the interaction between its constituents rather than the intelligence of the constituents themselves.
- Both human as well as artificial intelligence can be regarded as very shallow when considered in isolation.
- No AI is an island.
Empirical evidence
General observations
Peeters et al. (2021) see the following evidence supporting a hybrid intelligence perspective:
- In various domains, unforeseen emergent effects at the systemic level can be observed (e.g., sustaining biases with hiring software and other decision support systems)
- One of the biggest challenges is how to seamlessly integrate AI systems in human processes and workflows (e.g., autonomous cars and robots)
- Personal conversational assistants and other AI rely on many other webservices to create value (i.e., networked systems)
- At the level of teams, AI applications and humans together form human–agent teams (e.g., RPA integrated in a team)
- Observability¹, predictability², explainability³, and directability⁴ are important requirements for the effective design of hybrid intelligence
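One way to read these four requirements is as an interface contract that every actor in a human-agent team should satisfy. The protocol below is a hypothetical sketch of such a contract, not an API from the cited literature.

```python
from abc import ABC, abstractmethod

class TeamActor(ABC):
    """Hypothetical contract for an actor in a human-agent team."""

    @abstractmethod
    def status(self) -> dict:
        """Observability: expose own state and task knowledge to teammates."""

    @abstractmethod
    def predict_next_action(self) -> str:
        """Predictability: announce likely behavior so others can rely on it."""

    @abstractmethod
    def explain(self, action: str) -> str:
        """Explainability: justify a past action to teammates."""

    @abstractmethod
    def redirect(self, instruction: str) -> None:
        """Directability: accept (re-)direction from teammates."""
```

Framing the requirements as an interface makes the design question explicit: a component that cannot report its status, forecast its behavior, explain itself, or be redirected simply does not qualify as a teammate.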
Impact of explainable AI on human cognition
Bauer, Zahn, and Hinz (2023) show that AI systems that provide explanations (XAI) in addition to predictions⁵ may
- draw users’ attention excessively to the explanations (i.e., those that confirm their prior beliefs⁶) rather than adhering to the prediction,
- diminish employees’ decision-making performance for the task at hand,
- lead individuals to carry over learned patterns to other domains (e.g., biased explanations),
- decrease individual level-noise in the decision-making process (i.e., an individual’s decisions become more consistent),
- additionally foster differences in the decision-making process across subgroups of users that possess heterogeneous priors.
A focus on the explanation as well as increased decision variance can substantially contribute to errors and ultimately harm business performance (see, e.g., Kahneman, Sibony, and Sunstein 2021).
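The “level noise” finding can be made concrete: noise in Kahneman’s sense is the variability (e.g., the standard deviation) of one judge’s decisions across comparable cases. The scores below are invented purely for illustration.

```python
from statistics import stdev

# Hypothetical scores one employee assigns to five comparable cases.
without_xai = [4, 9, 3, 8, 6]   # inconsistent judgments: high level noise
with_xai    = [6, 7, 6, 7, 6]   # more consistent judgments: lower level noise

print(f"level noise without XAI: {stdev(without_xai):.2f}")
print(f"level noise with XAI:    {stdev(with_xai):.2f}")
```

Lower within-person noise is the upside; the downside reported in the same study is that biased explanations can push subgroups of users with different priors toward systematically different decision patterns.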
Examples
- Robots in de klas (Dutch: “Robots in the classroom”)
- A team consisting of a remedial teacher, an educational therapist, and a Nao robot collaborate to support a child with learning difficulties. The robot provides expertise and advice while also helping the child stay focused and engaged.
- Spawn
- The musician Holly Herndon created “Spawn,” an AI system that generates music distinct from her usual style. By using Spawn as a tool, Herndon avoids repeating her previous works and expands the possibilities of her music.
What examples do come to your mind?
Q&A
Literature
Footnotes
Observability means that an actor should make its status, its knowledge of the team, task, and environment observable to others.↩︎
Predictability means that an actor should behave predictably such that others can rely on them when considering their own actions.↩︎
Directability means that actors should have the opportunity to (re-)direct each other’s behavior.↩︎
Explainability means that agents should be capable of explaining their behavior to others :::↩︎
Will become a regulatory standard in many domains↩︎
A phenomenon called confirmation bias↩︎