Intelligence and affectiveness

Future Technologies & Media (FTM)

Andy Weeger

Neu-Ulm University of Applied Sciences

April 24, 2024

Revision

Hypothesis 1

Emerging information technologies enable multimodal and immersive systems.

Multimodality

Multimodality refers to the use of multiple modes of communication to create meaning.

Multimodality implies that the use of several means of communication contributes to a better overall understanding of a message.

Immersion

Immersion refers to the state of being deeply engaged, absorbed, or submerged in an environment, either physically or mentally.

Immersion implies that the consciousness of the immersed person is detached from their physical self. Immersiveness is the quality or degree of being immersive.

Interdependency

Stimuli that determine the immersiveness of environments created by technology are multimodal.

These stimuli include visual, auditory, tactile, olfactory, and interactive cues.

Hypothesis 2

Emerging information technologies enable intelligent and affective systems.

Intelligence

Discussion

What do we mean by intelligence?

Provide a description that outlines what intelligence could mean.
Take 5 minutes to reflect in small groups.

Human intelligence

Human intelligence “covers the capacity to learn, reason, and adaptively perform effective actions within an environment, based on existing knowledge. This allows humans to adapt to changing environments and act towards achieving their goals.” Dellermann et al. (2019, 632)

Sternberg et al. (1985) propose three distinctive dimensions:

  • Componential intelligence
    the ability to break problems apart and to see solutions that are not obvious
  • Experiential intelligence
    the ability to learn and adapt through experience
  • Contextual intelligence
    the ability to create an ideal fit between oneself and the environment through adaptation, shaping, and selection

Artificial intelligence

‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as content, predictions, recommendations, or decisions that can influence physical or virtual environments (European Commission 2024).

Based on this definition, three main properties of intelligent agents can be distinguished:

Capacity to work in a complex environment¹, cognitive abilities², and complex behavior³.
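
To make these properties concrete, the following minimal sketch shows an agent program that senses a (complex) environment, applies a simple cognitive step, and produces behavior. All class and rule names are illustrative assumptions, not taken from the cited sources:

```python
class ReflexAgent:
    """Maps percepts to actions using simple condition-action rules."""

    def __init__(self, rules):
        self.rules = rules          # cognitive ability: stored knowledge
        self.percept_history = []   # memory of observations

    def perceive(self, percept):
        """Sense the (complex) environment."""
        self.percept_history.append(percept)

    def decide(self):
        """Cognitive step: map the latest percept to an action."""
        current = self.percept_history[-1]
        return self.rules.get(current, "do_nothing")


# A trivial thermostat-like agent as usage example.
agent = ReflexAgent({"too_cold": "heat_on", "too_hot": "heat_off"})
agent.perceive("too_cold")
print(agent.decide())  # -> heat_on
```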

Working in complex environments

Agents and environments

Necessary components to interact with complex environments

Example

Example of an intelligent system

Task environment

When designing an intelligent system, the task environment (i.e., the problem) must be specified as fully as possible, including

the performance measure,
the environment,
the actuators,
and the sensors.

Russell and Norvig (2022) abbreviate this specification of the task environment as PEAS (Performance, Environment, Actuators, Sensors).
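
Written down as a data structure, a PEAS description might look as follows. The vacuum-cleaning robot is a common textbook illustration; the concrete field values here are assumptions, not taken from Russell and Norvig (2022):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Task environment description (Performance, Environment, Actuators, Sensors)."""
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

# Assumed values for a household vacuum-cleaning robot.
vacuum_robot = PEAS(
    performance_measure=["cleanliness", "energy use", "time taken"],
    environment=["rooms", "furniture", "dirt", "people", "pets"],
    actuators=["wheels", "brushes", "suction unit"],
    sensors=["camera", "bump sensor", "dirt sensor", "cliff sensor"],
)
print(vacuum_robot.sensors)
```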

Properties

Task environments can be categorized along the following dimensions (Russell and Norvig 2022, 62–64); a worked categorization of a simple example follows the list:

  • Fully observable ⇠⇢ partially observable
  • Single agent ⇠⇢ multi-agent
  • Deterministic ⇠⇢ nondeterministic
  • Episodic ⇠⇢ sequential
  • Static ⇠⇢ dynamic
  • Discrete ⇠⇢ continuous
  • Known ⇠⇢ unknown
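
As a worked illustration (deliberately not the chess or car case from the exercise below), the simple two-square vacuum world could be categorized as follows; the labels are one defensible reading, recorded as a plain Python dictionary:

```python
# One possible categorization of the simple two-square vacuum world along
# the dimensions of Russell and Norvig (2022) -- an assumption for
# illustration, not the only defensible reading.
vacuum_world = {
    "observable":    "fully",   # the agent senses its location and dirt
    "agents":        "single",  # no other acting entities
    "deterministic": True,      # actions have predictable effects
    "episodic":      False,     # sequential: the cleaning order matters
    "static":        True,      # the world does not change while deliberating
    "discrete":      True,      # finite states and actions
    "known":         True,      # the rules of the environment are given
}
print(vacuum_world)
```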

Exercise

Describe the task environment of a chess player and an autonomous car.

Form small groups, take 15 minutes for this exercise, and prepare to present your findings.

Autonomous vs. advisor system

Types of intelligent systems in terms of their interaction with the environment (Molina 2020)

Cognitive abilities

Processing mental information

A cognitive ability is an ability that requires the processing of mental information (Carroll 1993).

It refers to the skills involved in performing tasks associated with perception, understanding, reasoning, judgment, and language.

There are four types of cognitive abilities:

Attention, memory, logic and reasoning, auditory and visual processing.

Primary cognitive abilities

Primary cognitive abilities of intelligent systems based on Molina (2020)

Exercise

Describe the basic cognitive abilities of an autonomous car.

Form small groups and take 10 minutes for this exercise.
Prepare to present your findings.

Deliberation and reactive behavior

Different types of behavior require different “thinking systems” based on Molina (2020)

Multiagent systems

Metacognitive abilities by means of multiagent systems based on Molina (2020)

Complex behavior

Properties

To realize complex behavior, the components of an intelligent system (i.e., perception, deliberation, action control, and interaction) must have the following properties (to some extent):

Rationality
Learning
Introspection

Rationality

A rational agent is one that does the right thing.

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has (Russell and Norvig 2022, 58).
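
Read as pseudocode, this definition is an argmax over actions of expected performance. A minimal sketch, in which the outcome model and the performance measure are placeholders that a concrete agent would have to supply:

```python
def rational_action(actions, percepts, outcome_model, performance):
    """Select the action that maximizes *expected* performance.

    outcome_model(action, percepts) yields (outcome, probability) pairs and
    performance(outcome) scores an outcome -- both are placeholders that a
    concrete agent would have to supply.
    """
    def expected_performance(action):
        return sum(prob * performance(outcome)
                   for outcome, prob in outcome_model(action, percepts))

    return max(actions, key=expected_performance)
```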

It can be quite hard to formulate a performance measure correctly, however:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere once we have started it […] we had better be quite sure that the purpose put into the machine is the purpose which we really desire (Wiener 1960, 1358)

Exercise

Under which circumstances does a vacuum-cleaning agent act rationally?

Rationality and perfection

Rationality ≠ perfection

  • Rationality maximizes expected performance
  • Perfection maximizes actual performance
  • Perfection requires omniscience
  • Rational choice depends only on the percept sequence to date

Rationality and cognitive abilities

Rational decisions affect different cognitive abilities (Molina 2020)

Learning

Learning agents are those that can improve their behavior through diligent study of past experiences and predictions of the future (Russell and Norvig 2022, 668).

A learning agent

  • uses so-called machine learning (ML), if it is a computer;
  • improves performance based on experience (i.e., observations of the world);
  • is required when the designer lacks omniscience (i.e., in unknown environments) and/or
  • has no idea how to program a solution themselves (e.g., recognizing faces)

Learning types

Supervised learning
Involves learning a function from examples ➞ training and test data

Unsupervised learning
The agent has to learn patterns in the input ➞ identification of categories or classifications

Reinforcement learning
The agent must learn from punishments or rewards ➞ learning by trial and error (all three paradigms are contrasted in the sketch below)
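
The three paradigms can be contrasted in a few lines of Python. The sketch below uses scikit-learn for the first two and a hand-rolled value update for the third; all data and rewards are made up for illustration:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: learn a function from labeled examples.
X_train = [[1, 1], [2, 1], [8, 9], [9, 8]]   # toy feature vectors
y_train = [0, 0, 1, 1]                       # known labels
classifier = LogisticRegression().fit(X_train, y_train)
print(classifier.predict([[8, 8]]))          # -> [1]

# Unsupervised learning: find structure in unlabeled input.
clusters = KMeans(n_clusters=2, n_init=10).fit(X_train)
print(clusters.labels_)                      # two discovered groups

# Reinforcement learning: learn action values from rewards (trial and error).
# Minimal tabular update for a single state; alpha is the learning rate.
q_values = {"left": 0.0, "right": 0.0}
alpha = 0.5
for action, reward in [("left", -1), ("right", +1), ("right", +1)]:
    q_values[action] += alpha * (reward - q_values[action])
print(max(q_values, key=q_values.get))       # -> right
```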

Learning and cognitive abilities

Adaptation through learning can affect different cognitive abilities (Molina 2020)

Introspection

Introspection refers to the capability to analyze one’s own cognitive abilities.

The system uses an observable model of its own abilities.
This model is used to simulate self-awareness processes.

Introspection allows the system …

  • … to judge its own actions and, thus, provides learning opportunities
    (e.g., analyzing past outputs to identify errors or biases; see the sketch below) and
  • … to generate explanations and, thus, to justify decisions to the user
    (e.g., explainable AI that shows how a system arrives at a solution)
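
A minimal sketch of such self-monitoring, with all names being illustrative assumptions: the agent records its own decisions, judges them against observed outcomes, and generates simple explanations for the user.

```python
class IntrospectiveAgent:
    """Keeps an observable model of its own behavior (a decision log)."""

    def __init__(self):
        self.decision_log = []

    def decide(self, percept, action, confidence):
        """Record every decision so it can be analyzed later."""
        self.decision_log.append(
            {"percept": percept, "action": action, "confidence": confidence}
        )
        return action

    def error_rate(self, outcomes):
        """Judge past actions against observed outcomes (learning signal)."""
        wrong = sum(1 for d, ok in zip(self.decision_log, outcomes) if not ok)
        return wrong / len(self.decision_log)

    def explain(self, index):
        """Generate a simple justification of a past decision for the user."""
        d = self.decision_log[index]
        return (f"Chose '{d['action']}' after perceiving '{d['percept']}' "
                f"with confidence {d['confidence']:.0%}.")


agent = IntrospectiveAgent()
agent.decide("obstacle ahead", "brake", 0.9)
print(agent.explain(0))           # justify the decision
print(agent.error_rate([True]))   # -> 0.0 (the action turned out fine)
```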

Summary

The properties of an intelligent system are

Capacity to work in a complex environment
Interaction with the environment and other agents

Cognitive abilities
Perception, action control, deliberation, and interaction

Complex behavior
Acting rationally, adaptation through learning, and introspection

Exercise

Select an intelligent system and analyze it using the properties outlined here.

Form small groups and take 20 minutes to work on a comprehensive analysis.
Prepare a short presentation and be ready to deliver it.

Affect

Affective computing

Computing that relates to, arises from, or deliberately influences emotion (Picard 2000).

Objectives

Assigning systems “the human-like capabilities of observation, interpretation and generation of affect features”⁴ (Tao and Tan 2005, 981)

The goal is to simulate empathy: affective systems should interpret the emotional states of humans and adapt their behavior to them, giving an appropriate response to those emotions (i.e., emotion-aware systems).

Properties

Emotion recognition
Interpreting the emotional states of humans

Emotion expression
Ability to simulate human affects (e.g., ‘emotional modality’)

Adequate response to emotion
Linking emotion recognition and expression, e.g., to reinforce the meaning of messages

Emotional signals

Facial expression, posture, speech, force or rhythm of keystrokes, and temperature changes (e.g., of the hand on a mouse) can signify changes in a user’s emotional state.

These can be detected and interpreted by an affective system.

Affective systems can use some of these signals to simulate emotions.
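
As a hedged sketch of the recognition step, the toy classifier below maps two made-up physiological features to emotional states; real affective systems derive far richer features from many more signals:

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [heart rate (bpm), skin conductance (microsiemens)].
# The labels and values are purely illustrative.
X = [[60, 2.0], [65, 2.5], [95, 8.0], [100, 9.0], [90, 3.0], [85, 2.8]]
y = ["calm", "calm", "stressed", "stressed", "excited", "excited"]

model = DecisionTreeClassifier().fit(X, y)

# Interpret a new reading of the user's signals.
print(model.predict([[98, 8.5]]))  # -> ['stressed']
```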

Basic emotions

Ekman et al. (1987) categorized emotions into six groups:

Fear, surprise, disgust, anger, happiness, and sadness.

All of these can be expressed facially.

Examples

  • Facial expression analysis
    Using computer vision and machine learning to analyze facial expressions and determine the emotional state of a person.
  • Voice analysis
    Analyzing the tone, pitch, and other characteristics of a person’s voice to determine their emotional state.
  • Physiological sensing
    Using wearable devices to monitor physiological signals such as heart rate, skin conductance, and body temperature to detect emotional responses.
  • Emotion simulation
    Developing systems that can generate emotional responses, such as a virtual assistant that can express empathy or a chatbot that can adapt its tone based on the user’s emotional state.

Exercise

Search for real-life use cases for affective computing.

Form small groups and take 15 minutes for your research.
Prepare a short presentation of a use case and the technologies that enable it.
Relate them to the basic properties of affective systems.
Argue why affective computing is effective in this use case.

Hybrid intelligence

Homework

Form small groups and synthesize your findings from reading Dellermann et al. (2019) by finding answers to the following questions:

  1. How can hybrid intelligence be defined?
  2. What are the main characteristics of hybrid intelligence?
  3. What are the complementary strengths of humans and machines?
  4. What implications does the concept have for practice?

Take 15 minutes to synthesize your findings and to create a short presentation.

Concept

The idea is to combine the complementary capabilities of humans and computers to augment each other.

Complementary strengths

Human intelligence

Flexibility & transfer

Empathy & creativity

Eventualities

Common sense

Intuition

Artificial intelligence

Pattern recognition

Probabilistic

Consistency

Speed & efficiency

Analysis

Definition

Hybrid intelligence is defined as the ability to achieve complex goals by combining human and artificial intelligence, thereby reaching superior results to those each of them could have accomplished separately, and continuously improve by learning from each other. Dellermann et al. (2019, 640)

Main characteristics of hybrid intelligence are:

  • Collectively
    Tasks are performed collectively and activities are conditionally dependent
  • Superior results
    Neither AI nor humans could have achieved the outcome without the other
  • Continuous learning
    All components of the socio-technical system learn from each other through experience

Visualization

Distribution of roles in hybrid intelligence (Dellermann et al. 2019, 640)

General observations

Peeters et al. (2021) see the following evidence in support of a hybrid intelligence perspective:

  • In various domains, unforeseen emergent effects at the systemic level can be observed
    (e.g., sustaining biases with hiring software and other decision support systems)
  • One of the biggest challenges is to seamlessly integrate AI systems in human processes and workflows (e.g., autonomous cars and robots)
  • At the level of teams, AI applications and humans together form human–agent teams
    (e.g., RPA integrated in a team)

Observability⁵, predictability⁶, explainability⁷, and directability⁸ are important requirements for the effective design of hybrid intelligence.

Implications

According to Peeters et al. (2021), the following conclusions can be drawn:

  • Intelligence should not be studied at the level of individual humans or AI-machines, but at the group level of humans and AI-machines working together.
  • Increasing the intelligence of a system should be achieved by increasing the quality of the interaction between its constituents rather than the intelligence of the constituents themselves.
  • Both human and artificial intelligence can be regarded as very shallow when considered in isolation.
  • No AI is an island.

Examples

Robots in de klas (“Robots in the classroom”)
A team consisting of a remedial teacher, an educational therapist, and a Nao robot collaborates to support a child with learning difficulties. The robot provides expertise and advice while also helping the child stay focused and engaged.
Spawn
The musician Holly Herndon created “Spawn,” an AI system that generates unique music different from her usual style. By using Spawn as a tool, Herndon avoids merely repeating her previous works and expands the possibilities of her music.

What examples come to your mind?

Explainable AI and human cognition

Bauer, von Zahn, and Hinz (2023) show that AI systems that provide explanations (XAI) in addition to predictions⁹ may

  • draw users’ attention excessively to the explanations (i.e., those that confirm their prior beliefs¹⁰) rather than adhering to the prediction,
  • diminish employees’ decision-making performance for the task at hand,
  • lead individuals to carry over learned patterns to other domains (e.g., biased explanations),
  • decrease individual-level noise in the decision-making process (i.e., an individual’s decisions become more consistent), and
  • foster differences in the decision-making process across subgroups of users that possess heterogeneous priors.

A focus on the explanation as well as increased decision variance can substantially contribute to errors and ultimately harm business performance (see, e.g., Kahneman, Sibony, and Sunstein 2021).
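
For concreteness, one common way to pair predictions with explanations is feature-importance reporting. The sketch below uses scikit-learn’s permutation importance on made-up loan data; it is only one of many possible XAI techniques and not the setup studied by Bauer, von Zahn, and Hinz (2023):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy loan-approval data: [income, debt]; values and labels are made up.
X = np.array([[50, 5], [80, 10], [20, 15], [30, 25], [90, 5], [25, 20]])
y = np.array([1, 1, 0, 0, 1, 0])

model = RandomForestClassifier(random_state=0).fit(X, y)

# Prediction ...
print(model.predict([[60, 8]]))

# ... plus explanation: how strongly does each feature drive performance?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["income", "debt"], result.importances_mean):
    print(f"{name}: {importance:.2f}")
```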

Q&A

Literature

Adami, Elisabetta. 2016. “Introducing Multimodality.” The Oxford Handbook of Language and Society, 451–72.
Bauer, Kevin, Moritz von Zahn, and Oliver Hinz. 2023. “Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users’ Information Processing.” Information Systems Research.
Carroll, John Bissell. 1993. Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge University Press.
Dellermann, Dominik, Philipp Ebel, Matthias Söllner, and Jan Marco Leimeister. 2019. “Hybrid Intelligence.” Business & Information Systems Engineering 61: 637–43.
Ekman, Paul, Wallace V Friesen, Maureen O’Sullivan, Anthony Chan, Irene Diacoyanni-Tarlatzis, Karl Heider, Rainer Krause, et al. 1987. “Universals and Cultural Differences in the Judgments of Facial Expressions of Emotion.” Journal of Personality and Social Psychology 53 (4): 712.
European Commission. 2024. “Artificial Intelligence in the European Commission — a Strategic Vision to Foster the Development and Use of Lawful, Safe and Trustworthy Artificial Intelligence Systems in the European Commission” C(2024) 380.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. Macmillan.
Kahneman, Daniel, Olivier Sibony, and Cass R Sunstein. 2021. Noise: A Flaw in Human Judgment. Hachette UK.
Lee, Hyuck-Gi, Sungwon Chung, and Won-Hee Lee. 2013. “Presence in Virtual Golf Simulators: The Effects of Presence on Perceived Enjoyment, Perceived Value, and Behavioral Intention.” New Media & Society 15 (6): 930–46.
Molina, Martin. 2020. “Intelligent Systems.” Master Course (Lecture Slides).
Peeters, Marieke MM, Jurriaan van Diggelen, Karel Van Den Bosch, Adelbert Bronkhorst, Mark A Neerincx, Jan Maarten Schraagen, and Stephan Raaijmakers. 2021. “Hybrid Collective Intelligence in a Human–AI Society.” AI & Society 36: 217–38.
Picard, Rosalind W. 2000. Affective Computing. MIT press.
Russell, Stuart, and Peter Norvig. 2022. Artificial Intelligence: A Modern Approach. Harlow: Pearson Education.
Sternberg, Robert J et al. 1985. Beyond IQ: A Triarchic Theory of Human Intelligence. CUP Archive.
Suh, Ayoung, and Jane Prophet. 2018. “The State of Immersive Technology Research: A Literature Analysis.” Computers in Human Behavior 86: 77–90.
Tao, Jianhua, and Tieniu Tan. 2005. “Affective Computing: A Review.” In International Conference on Affective Computing and Intelligent Interaction, 981–95. Springer.
Wiener, Norbert. 1960. “Some Moral and Technical Consequences of Automation.” Science 131 (3410): 1355–58.
Footnotes
  1. The capacity to work in a complex environment is described as agency

  2. Cognitive abilities are, for instance, perception and language

  3. Intelligent behavior is reflected, for instance, by rationality and learning

  4. “Affect” is basically a synonym for emotion.

  5. Observability means that an actor should make its status, its knowledge of the team, task, and environment observable to others.

  6. Predictability means that an actor should behave predictably such that others can rely on them when considering their own actions.

  7. Explainability means that agents should be capable of explaining their behavior to others.

  8. Directability means that actors should have the opportunity to (re-)direct each other’s behavior.

  9. Will likely become a regulatory standard in many domains

  10. A phenomenon called confirmation bias