Intelligence and Affection

Future Technologies & Media (FTM)

Andy Weeger

Neu-Ulm University of Applied Sciences

April 19, 2026

Revision

Hypothesis 1

Emerging information technologies enable multimodal and immersive systems.

Multimodality

Multimodality refers to the use of multiple modes of communication to create meaning.

Multimodality implies that the use of several means of communication contributes to a better overall understanding of a message.

Immersion

Immersion refers to the state of being deeply engaged, absorbed, or submerged in an environment, either physically or mentally.

Immersion implies that the consciousness of the immersed person is detached from their physical self. Immersiveness is the quality or degree of being immersive.

Interdependency

Stimuli that determine the immersiveness of environments created by technology are multimodal.

Visual, auditory, tactile, olfactory, and interactive.

Hypothesis 2

Emerging information technologies enable intelligent and affective systems.

Intelligence

Discussion

What do we mean by intelligence?

Provide a description that outlines what intelligence could mean.

Human intelligence

Human intelligence “covers the capacity to learn, reason, and adaptively perform effective actions within an environment, based on existing knowledge. This allows humans to adapt to changing environments and act towards achieving their goals.” Dellermann et al. (2019, p. 632)

Sternberg et al. (1985) propose three distinct dimensions:

  • Componential (analytical) intelligence
    the ability to break down complex information and apply logical processes to find the most efficient solution
  • Experiential (creative) intelligence
    the ability to synthesize prior knowledge to navigate novel situations and automate new tasks
  • Contextual (practical) intelligence
    the ability to read environmental demands and adapt behavior (or the environment) to achieve success

Artificial intelligence

‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as content, predictions, recommendations, or decisions that can influence physical or virtual environments (European Commission, 2024).

Based on this definition, three main properties of intelligent agents can be distinguished:

Capacity to work in a complex environment1, cognitive abilities2, and complex behavior3.

Complex environments

Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
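To make the definition concrete, here is a minimal sketch (in Python, with invented names) of the percept-action loop: a thermostat-style agent reads a temperature through its sensor and switches a heater through its actuator. It illustrates the agent abstraction only; it is not a reference implementation.

```python
import random


class ThermostatAgent:
    """Minimal agent: perceives a temperature, acts on a heater."""

    def __init__(self, target: float = 21.0):
        self.target = target

    def perceive(self, environment: dict) -> float:
        # Sensor: read the current room temperature
        return environment["temperature"]

    def act(self, percept: float) -> str:
        # Actuator: switch the heater based on the percept
        return "heat_on" if percept < self.target else "heat_off"


def simulate(steps: int = 5) -> None:
    env = {"temperature": 18.0}
    agent = ThermostatAgent()
    for _ in range(steps):
        action = agent.act(agent.perceive(env))
        # The environment reacts to the agent's action (with some noise)
        env["temperature"] += 0.8 if action == "heat_on" else -0.3
        env["temperature"] += random.uniform(-0.1, 0.1)
        print(f"{action:8s} -> {env['temperature']:.1f} °C")


if __name__ == "__main__":
    simulate()
```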

Agents and environments

Necessary components to interact with complex environments

Example

Example of an intelligent system

Task environment

When designing an intelligent system, the task environment (i.e., the problem) must be specified as fully as possible, including

  • the performance measure,
  • the environment,
  • the actuators, and
  • the sensors.

Russell & Norvig (2022) call the task environment PEAS (performance, environment, actuators, sensors).
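As a hedged illustration, a PEAS description can be captured in a small data structure. The example values loosely follow the automated-taxi example from Russell & Norvig (2022); the `PEAS` dataclass and its field names are my own illustrative choices.

```python
from dataclasses import dataclass, field


@dataclass
class PEAS:
    """Task environment description: Performance, Environment, Actuators, Sensors."""
    performance_measure: list[str] = field(default_factory=list)
    environment: list[str] = field(default_factory=list)
    actuators: list[str] = field(default_factory=list)
    sensors: list[str] = field(default_factory=list)


# Roughly following the automated-taxi example (illustrative, not exhaustive)
automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers", "weather"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "radar", "speedometer", "GPS", "accelerometer", "microphone"],
)

print(automated_taxi.performance_measure)
```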

Properties

Task environments can be categorized along the following dimensions (Russell & Norvig, 2022, pp. 62–64):

  • Fully observable ⇠⇢ partially observable
  • Single agent ⇠⇢ multi-agent
  • Deterministic ⇠⇢ nondeterministic
  • Episodic ⇠⇢ sequential
  • Static ⇠⇢ dynamic
  • Discrete ⇠⇢ continuous
  • Known ⇠⇢ unknown

Exercise

Describe the task environment of a chess player and an autonomous car.

15:00

Autonomous vs. advisor system

Types of intelligent systems in terms of their interaction with the environment (Molina, 2020)

Cognitive abilities

Definition

Cognitive abilities—or “thinking skills”—are mental capacities that enable us to acquire knowledge, process information, and solve problems. They involve processing mental information through (Carroll, 1993):

perception, attention, memory, reasoning, language, and executive functions

Implementation in AI systems

Primary cognitive abilities of intelligent systems

Discussion

What are the basic cognitive abilities of a chatbot? How are they technically implemented?

Think on your own, then share your thoughts.

05:00

Deliberation and reactive behavior

Different types of behavior require different “thinking systems” based on Molina (2020)

Multiagent systems

Metacognitive abilities by means of multiagent systems, based on Molina (2020)

Exercise

Find two real-world examples of multi-agent systems (one digital, one physical)

For each system, document:

  • What are the individual agents?
  • How do they jointly perceive and deliberate?
  • How do agents interact/coordinate?
10:00

Complex behavior

Properties

Complex behavior refers to the observable patterns of actions and responses that an AI system exhibits (i.e., what the system does), particularly sophisticated, adaptive, or emergent conduct that goes beyond simple stimulus-response patterns.

To realize complex behavior, the components of an intelligent system (i.e., what the system can do cognitively) must have the following properties (to some extent):

Autonomy, rationality,
learning, and introspection

Rationality

A rational agent is one that does the right thing—it is goal directed.

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. Russell & Norvig (2022, p. 58)
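A minimal sketch of what "maximizing expected performance" means computationally: the agent evaluates each available action under its (possibly uncertain) beliefs and picks the one with the highest expected value. The probabilities and performance scores below are invented purely for illustration.

```python
# Illustrative only: beliefs (probabilities) and performance scores are invented numbers.
# expected_performance(action) = sum over outcomes of P(outcome | action) * performance(outcome)

actions = {
    "take_umbrella": {"rain": (0.3, 8), "sun": (0.7, 6)},    # outcome: (probability, performance)
    "leave_umbrella": {"rain": (0.3, -5), "sun": (0.7, 10)},
}


def expected_performance(outcomes: dict[str, tuple[float, float]]) -> float:
    return sum(p * value for p, value in outcomes.values())


best_action = max(actions, key=lambda a: expected_performance(actions[a]))
for action, outcomes in actions.items():
    print(f"{action:15s} expected performance = {expected_performance(outcomes):.2f}")
print("rational choice:", best_action)
```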

It can be quite hard to formulate a performance measure correctly, however:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere once we have started it […] we had better be quite sure that the purpose put into the machine is the purpose which we really desire. Wiener (1960, p. 1358)

Discussion

Under which circumstances does a chatbot act rationally?

Rationality and perfection

Rationality != perfection

  • Rationality maximizes expected performance
  • Perfection maximizes actual performance
  • Perfection requires omniscience
  • Rational choice depends only on the percept sequence to date

Rationality and cognitive abilities

Rational decisions affect different cognitive abilities (Molina, 2020)

Learning

Learning agents are those that can improve their behavior through diligent study of past experiences and predictions of the future. Russell & Norvig (2022, p. 668)

A learning agent

  • uses so-called machine learning (ML) if it is a computer;
  • improves performance based on experience (i.e., observations of the world);
  • is required when the designer lacks omniscience (i.e., in unknown environments) and/or
  • when the designer has no idea how to program a solution themselves (e.g., recognizing faces)

Learning types

Supervised learning
Involves learning a function from examples ➞ training and test data

Unsupervised learning
The agent has to learn patterns in the input ➞ identification of categories or classifications

Reinforcement learning
The agent must learn from punishments or rewards ➞ learning by trial and error
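The sketch below contrasts the three types in a few lines of Python, using scikit-learn for the first two and a hand-written update rule for a toy two-armed bandit. The data is synthetic; the point is only the difference in feedback the agent receives (labels, no labels, or rewards).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Supervised: examples come with the "right answer" (labels)
X = rng.normal(size=(100, 1))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=100)
model = LinearRegression().fit(X, y)
print("supervised, learned slope (should be ~3):", round(float(model.coef_[0]), 2))

# Unsupervised: only inputs; the agent finds structure (here: two clusters)
X2 = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X2)
print("unsupervised, cluster sizes:", np.bincount(labels))

# Reinforcement: only a reward signal after acting (toy two-armed bandit)
true_reward = {"A": 0.2, "B": 0.8}   # hidden from the agent
estimates = {"A": 0.0, "B": 0.0}
for _ in range(200):
    # epsilon-greedy: mostly exploit the current best estimate, sometimes explore
    action = max(estimates, key=estimates.get) if rng.random() > 0.1 else rng.choice(["A", "B"])
    reward = float(rng.random() < true_reward[action])
    estimates[action] += 0.1 * (reward - estimates[action])   # incremental value update
print("reinforcement, value estimates:", {k: round(v, 2) for k, v in estimates.items()})
```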

Learning and cognitive abilities

Adaptation through learning can affect different cognitive abilities (Molina, 2020)

Introspection

Introspection refers to the capability to analyze one’s cognitive abilities.

The system uses an observable model of its own abilities.
This model is used to simulate self-awareness processes.

Introspection allows the system …

  • … to judge its own actions and, thus, provides learning opportunities
    (e.g., analyzing past outputs to identify errors or biases) and
  • … to generate explanations and, thus, to justify decisions to the user
    (e.g., explainable AI — showing how a system arrives at a solution)
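A crude sketch of such an introspection loop, assuming a hypothetical generate() placeholder in place of any real language-model API: the system produces an answer, critiques its own output, and revises it based on that self-assessment.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call (not a real API)."""
    return f"[model output for: {prompt[:40]}...]"


def answer_with_introspection(question: str, max_rounds: int = 2) -> str:
    answer = generate(question)
    for _ in range(max_rounds):
        # The system inspects its own output and looks for errors or biases ...
        critique = generate(f"List possible errors or biases in this answer: {answer}")
        # ... and uses that self-assessment to revise (and to explain itself to the user)
        answer = generate(f"Revise the answer '{answer}' taking into account: {critique}")
    return answer


print(answer_with_introspection("Why is the sky blue?"))
```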

Summary

The properties of an intelligent system are

Capacity to work in a
complex environment

Interaction with the environment and other agents

Cognitive abilities
Perception, action control, deliberation, and interaction

Complex behavior
Acting autonomously, rationally, adaptation through learning, and introspection

Homework

Select an intelligent system and analyse it using the properties outlined here.

Note: This is an excellent task to prepare for the exam.

Read Dellermann et al. (2019).

Recap

Discussion

What is intelligence?

Intelligence

Intelligence is the ability to accomplish complex goals, learn, reason, and adaptively perform effective actions within an environment. Gottfredson (1997)

Artificial intelligence

The term artificial intelligence describes systems that perform “[…] activities that we associate with human thinking, activities such as decision-making, problem solving, learning […]” Bellman (1978, p. 3)

AI can be defined as “[…] the art of creating machines that perform functions that require intelligence when performed by people […]” Kurzweil et al. (1990, p. 117)

The basic idea: systems that can analyse their environment, adapt to new circumstances, and act in ways that advance specified goals without explicit programming for every situation.

This requires agency (i.e., the capacity to work in complex environments), thinking skills (i.e., cognitive abilities) as well as observable patterns of actions and responses (i.e., complex behavior).

Anthropocentrism

Are we defining intelligence, or just human-ness?

  • Anthropocentric bias
    We use the human mind as the “Gold Standard” for all intelligence.
  • Cognitive narrow-mindedness
    We often ignore forms of intelligence that don’t solve “human” problems (e.g., navigation without landmarks, multi-agent coordination in insects).
  • The mimicry trap
    We build AI to pass human tests (e.g., Turing, Bar Exam), which prioritizes imitation over novel synthetic cognition.

Mimicking the mind

Current AI architectures seem to be inspired by cognitive architectures such as the one proposed by Dual Process Theory (Kahneman, 2011).

Feature              System 1 (Intuitive)               System 2 (Analytical)
Human equivalent     Fast, automatic, “gut feeling”     Slow, effortful, logical reasoning
AI (LLM)             Next-token prediction (instinct)   Chain-of-thought (reasoning)
Computational cost   Low / immediate                    High (requires more “tokens”)

Table 1: Cognitive architecture based on Kahneman (2011)

Observation: By forcing LLMs to “think before they speak” (using reasoning traces), we are literally programming a digital version of human introspection.
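As a rough illustration, the sketch below contrasts a "System 1"-style direct completion with a "System 2"-style prompt that requests an explicit reasoning trace before the final answer. call_llm() is a hypothetical placeholder, not a specific vendor API; the example question is Kahneman's bat-and-ball puzzle.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for any LLM call; replace with a real client."""
    return "<completion>"


question = "A bat and a ball cost 1.10 in total. The bat costs 1.00 more than the ball. Price of the ball?"

# "System 1": answer immediately, cheap but intuition-driven
fast = call_llm(f"Answer with a single number only: {question}")

# "System 2": force an explicit reasoning trace before the final answer (more tokens, slower)
slow = call_llm(
    "Think step by step and show your reasoning before giving the final answer.\n"
    f"Question: {question}"
)

print("fast:", fast)
print("slow:", slow)
```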

Thinking as a group

Alternative intelligences

What if human-like isn’t the only way to be smart?

  • Swarm intelligence: Ants and bees solve massive optimization problems through collective simple actions rather than individual complex thought.
  • Decentralized intelligence: Can “thinking” happen without a central CPU? Think of the octopus, where two-thirds of its neurons are located in its limbs.
  • Non-biological scales: Mycelial (fungal) networks that process information across entire ecosystems over months, rather than seconds.

By building AI to mimic our brains, are we missing out on “Alien” forms of intelligence that could solve problems humans can’t even perceive?

Hybrid intelligence

Collective intelligence

Collective intelligence refers to “[…] groups of individuals acting collectively in ways that seem intelligent.” Malone (2015, p. 3)

The concept implies that under certain conditions, a (large) group of homogeneous individuals can outperform any single individual or even a single expert (Leimeister, 2010).

Today, research increasingly focuses on hybrid collective intelligence: connecting heterogeneous agents (e.g., humans and machines) so that they combine complementary intelligence and act more intelligently together (Malone, 2015).

Exercise

Synthesize your findings from reading Dellermann et al. (2019) by finding answers to the following questions:

  1. How can hybrid intelligence be defined?
  2. What are main characteristics of hybrid intelligence?
  3. What are complementary strengths of humans and machines?
  4. What implications does that concept have for practice?
10:00

Concept

The idea is to combine the complementary capabilities of humans and computers to augment each other.

Complementary strengths

Human intelligence

  • Flexibility & transfer
  • Empathy & creativity
  • Eventualities
  • Common sense
  • Intuition

Artificial intelligence

  • Pattern recognition
  • Probabilistic
  • Consistency
  • Speed & efficiency
  • Analysis

Definition

Hybrid intelligence is defined as the ability to achieve complex goals by combining human and artificial intelligence, thereby reaching superior results to those each of them could have accomplished separately, and continuously improve by learning from each other. Dellermann et al. (2019, p. 640)

Main characteristics of hybrid intelligence are:

  • Collectively
    Tasks are performed collectively and activities are conditionally dependent
  • Superior results
    Neither AI nor humans could have achieved the outcome without the other
  • Continuous learning
    All components of the socio-technical system learn from each other through experience

Visualization

Distribution of roles in hybrid intelligence (Dellermann et al., 2019, p. 640)

The automation–augmentation paradox

Raisch & Krakowski (2021) argue that automation and augmentation are not opposing strategies — they are interdependent:

  • Overemphasising automation (machines replacing humans) creates reinforcing cycles that erode human capability, ultimately making humans less able to provide value when it matters most
  • Overemphasising augmentation (humans plus machines) can under-exploit AI capabilities and leave significant efficiency potential unrealised

Effective AI deployment requires holding both logics simultaneously, managing their tensions across time and space.

The question is not “automate or augment?”
— but “when, where, and how to combine both?”

From tools to teammates

Seeber et al. (2020) highlight a fundamental shift in how AI systems are positioned in organisations:

               Traditional AI                  AI as Teammates
Role           Tool to be used                 Active collaboration partner
Interaction    Responds to commands            Engages proactively
Function       Task automation                 Complex problem-solving
Agency         Limited / directed              Autonomous with initiative
Integration    Technical system integration    Social & team integration

Examples

Robots in de klas (“robots in the classroom”)
A team consisting of a remedial teacher, an educational therapist, and a Nao robot collaborate to support a child with learning difficulties. The robot provides expertise and advice while also helping the child stay focused and engaged.
Spawn
The musician Holly Herndon created “Spawn,” an AI system that generates unique music different from her usual style. By using Spawn as a tool, Holly is able to avoid repeating her previous works and to expand the possibilities of her music.
GitHub Copilot
In collaborative coding, Copilot can engage in back-and-forth dialogue about software design decisions, propose implementations, and explain reasoning about technical approaches - moving beyond simply generating code.

Delegation

Fügener et al. (2022) conducted experiments on human-AI prediction tasks and found:

Human-AI teams achieve superior performance only when AI delegates to humans, not vice versa.

Human metaknowledge, i.e., the ability to assess your own reliability in a specific context (“knowing what you know”), seems to be the critical variable:

  • AI can assess its own certainty well and delegates effectively (even to low-performing humans) because it knows what it knows and what it doesn’t
  • Humans, by contrast, lack metaknowledge: they cannot accurately judge their own reliability, leading to poor delegation decisions despite genuine willingness to collaborate
  • This metaknowledge deficit is unconscious and cannot be explained by algorithm aversion — subjects tried to follow delegation strategies diligently and appreciated the AI support
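The delegation logic can be sketched as a simple confidence threshold: the AI keeps instances it is confident about and hands the rest to the human. The confidence values and threshold below are invented; the point is merely that effective delegation presupposes a calibrated self-assessment, which is exactly what humans were found to lack.

```python
# Illustrative instances: (instance id, AI confidence in its own prediction)
instances = [("img_01", 0.97), ("img_02", 0.55), ("img_03", 0.88), ("img_04", 0.42)]

CONFIDENCE_THRESHOLD = 0.75  # invented value; would be calibrated empirically


def route(ai_confidence: float) -> str:
    # The AI decides cases it is confident about and delegates the rest to the human
    return "AI decides" if ai_confidence >= CONFIDENCE_THRESHOLD else "delegate to human"


for instance_id, confidence in instances:
    print(f"{instance_id}: confidence={confidence:.2f} -> {route(confidence)}")
```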

Critical design areas

Seeber et al. (2020) identify three interconnected design areas for AI teammates:

  1. Machine artifact design: the AI system itself (appearance, capabilities, interaction modalities)
  2. Collaboration design: how humans and AI work together (team composition, task allocation, workflows, communication protocols)
  3. Institution design: the broader context (responsibility frameworks, liability, training requirements, governance structures)

These areas are interdependent: decisions in one area constrain and shape the others. Effective design requires a holistic rather than purely technical approach.

Implications for hybrid intelligence

According to Peeters et al. (2021):

  • Intelligence should be studied at the group level of humans and AI-machines working together — not at the level of individual components
  • Increasing system intelligence means increasing the quality of interaction between components — not merely improving individual components
  • Both human and artificial intelligence are shallow when considered in isolation
  • No AI is an island — value emerges from the system, not the artefact

Affection

Affective computing

Computing that relates to, arises from or deliberately influences emotion.

Objectives

Assigning systems “the human-like capabilities of observation, interpretation and generation of affect features”4 (Tao & Tan, 2005, p. 981)

The goal is to simulate empathy: affective systems are designed to interpret the emotional states of humans and adapt their behavior to them, giving an appropriate response for those emotions (i.e., emotion aware systems).

Properties

Emotion recognition
Interpreting the emotional states of humans

Emotion expression
Ability to simulate human affects (e.g. ‘emotional modality’)

Adequate response to emotion
Linking emotion recognition and expression e.g., to reinforce the meaning of messages
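A toy sketch that links the three properties: a keyword-based "recognizer" stands in for emotion recognition, and the response strategy (emotion expression) adapts to the detected state. Real affective systems would use trained classifiers over the signals discussed next; everything here is purely illustrative.

```python
def recognize_emotion(message: str) -> str:
    """Toy emotion recognition based on keywords (illustrative only)."""
    text = message.lower()
    if any(word in text for word in ("angry", "furious", "unacceptable")):
        return "anger"
    if any(word in text for word in ("thanks", "great", "love")):
        return "happiness"
    return "neutral"


def respond(message: str) -> str:
    """Adequate response: adapt the tone (emotion expression) to the recognized emotion."""
    emotion = recognize_emotion(message)
    openers = {
        "anger": "I am sorry this is frustrating. Let's fix it right away:",
        "happiness": "Glad to hear it! Here is the next step:",
        "neutral": "Sure, here is what I found:",
    }
    return f"{openers[emotion]} ..."


print(respond("This is unacceptable, the device broke again!"))
```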

Emotional signals

Facial expression, posture, speech, force or rhythm of keystrokes, and temperature changes (e.g., of the hand on the mouse) can signify changes in a user’s emotional state.

These can be detected and interpreted by an affective system.

Affective systems can use some of these to simulate emotions.

Basic emotions

Ekman et al. (1987) categorized emotions into 6 groups:

Fear, surprise, disgust, anger, happiness, and sadness

All of these can be facially expressed.

The circumplex model

While Ekman et al. (1987) use categories, many AI systems use a dimensional approach to map human affect.

Valence
How positive (pleasant) or negative (unpleasant) is the emotion?

Arousal
How intense is the physical/mental energy (from calm to excited)?

The circumplex model (Russell, 1980) allows the system to map “nuance.” A user isn’t just “sad”; they might be “slightly frustrated” (low valence, medium arousal) or “severely depressed” (very low valence, low arousal).
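A small sketch of how a dimensional representation might be used in practice: an affective state is a (valence, arousal) pair, and a coarse label is derived from the quadrant it falls into. The mapping and the example readings are illustrative, not an established coding scheme.

```python
def circumplex_label(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) point in [-1, 1]^2 to a coarse quadrant label (illustrative)."""
    if valence >= 0 and arousal >= 0:
        return "excited / happy"
    if valence >= 0 and arousal < 0:
        return "calm / content"
    if valence < 0 and arousal >= 0:
        return "angry / frustrated"
    return "sad / depressed"


# Example readings (invented): slight frustration vs. severe low mood
print(circumplex_label(valence=-0.3, arousal=0.4))   # angry / frustrated
print(circumplex_label(valence=-0.8, arousal=-0.6))  # sad / depressed
```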

Examples

  • Facial expression analysis
    Using computer vision and machine learning to analyze facial expressions and determine the emotional state of a person.
  • Voice analysis
    Analyzing the tone, pitch, and other characteristics of a person’s voice to determine their emotional state.
  • Physiological sensing
    Using wearable devices to monitor physiological signals such as heart rate, skin conductance, and body temperature to detect emotional responses.
  • Emotion simulation
    Developing systems that can generate emotional responses, such as a virtual assistant that can express empathy or a chatbot that can adapt its tone based on the user’s emotional state.

The CASA paradigm

Computers Are Social Actors.

Research by Reeves & Nass (1996) shows that humans naturally and unconsciously treat computers and AI as if they were real people.

  • We apply social rules to machines (e.g., being polite to a voice assistant).
  • We feel social pressure from affective displays (e.g., feeling “guilty” if an AI avatar looks sad).

Implication: If an AI doesn’t have an “affective” layer, it feels “broken” or “rude” to the human brain, even if the logic is perfect.

Affection x hybrid teams

In hybrid intelligence, trust is the most critical variable.

Affective systems are used to build rapport, that is the feeling of being in sync with a teammate.

  • Trust calibration
    An AI that expresses “uncertainty” (a form of affect) through its tone helps humans know when not to rely on it.
  • Empathy as utility
    In high-stress environments (e.g., healthcare or cockpits), an AI that recognizes “cognitive load” or “stress” and adjusts its tone can prevent human burnout.

Exercise

Think of/identify a real-life use case for affective computing.

  • Relate it to the basic properties of affective systems.
  • Argue why affective computing is effective in this use case.
10:00

The Uncanny Valley

More “affection” is not always better.

The Uncanny Valley describes the dip in human comfort when an AI/Robot becomes “almost human” but not quite—leading to a sense of eeriness or revulsion (Mori et al., 2012).

Some ethical questions to be asked:

  • Should a system “feign” empathy to sell a product?
  • Does a company have the right to “read” your stress levels via your keyboard rhythm or webcam?
  • Can an AI accurately read “anger” across different cultures?

Q&A

Homework

Listen to the Decoder podcast episode with Google CEO Sundar Pichai on AI-powered search and the future of the web and reflect on the problems with the Internet and responses discussed.

Literature

Adami, E. (2016). Introducing multimodality. The Oxford Handbook of Language and Society, 451–472.
Bellman, R. (1978). An introduction to artificial intelligence: Can computers think? Thomson Course Technology.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press.
Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61, 637–643.
Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., Krause, R., LeCompte, W. A., Pitcairn, T., Ricci-Bitti, P. E., et al. (1987). Universals and cultural differences in the judgments of facial expressions of emotion. Journal of Personality and Social Psychology, 53(4), 712.
European Commission. (2024). Artificial intelligence in the European Commission — a strategic vision to foster the development and use of lawful, safe and trustworthy artificial intelligence systems in the European Commission. C(2024) 380.
Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2022). Cognitive challenges in Human–Artificial Intelligence collaboration: Investigating the path toward productive delegation. Information Systems Research, 33(2), 678–696. https://doi.org/10.1287/isre.2021.1079
Gottfredson, L. S. (1997). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. In Intelligence (1; Vol. 24, pp. 13–23). JAI.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Kurzweil, R., Richter, R., Kurzweil, R., & Schneider, M. L. (1990). The age of intelligent machines (Vol. 580). MIT Press.
Lee, H.-G., Chung, S., & Lee, W.-H. (2013). Presence in virtual golf simulators: The effects of presence on perceived enjoyment, perceived value, and behavioral intention. New Media & Society, 15(6), 930–946.
Leimeister, J. M. (2010). Collective intelligence. Business & Information Systems Engineering, 2, 245–248.
Malone, T. W., & Bernstein, M. S. (Eds.). (2015). Handbook of collective intelligence. The MIT Press.
Molina, M. (2020). Intelligent systems. Master Course (Lecture Slides).
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.
Peeters, M. M., Diggelen, J. van, Van Den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human–AI society. AI & Society, 36, 217–238.
Picard, R. W. (2000). Affective computing. MIT press.
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people. Cambridge University Press.
Russell, S., & Norvig, P. (2022). Artificial intelligence: A modern approach. Pearson Education.
Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161.
Seeber, I., Bittner, E., Briggs, R. O., De Vreede, T., De Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., et al. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174.
Sternberg, R. J. et al. (1985). Beyond IQ: A triarchic theory of human intelligence. CUP Archive.
Suh, A., & Prophet, J. (2018). The state of immersive technology research: A literature analysis. Computers in Human Behavior, 86, 77–90.
Tao, J., & Tan, T. (2005). Affective computing: A review. International Conference on Affective Computing and Intelligent Interaction, 981–995.
Wiener, N. (1960). Some moral and technical consequences of automation. Science, 131(3410), 1355–1358.
Footnotes
  1. The capacity to work in a complex environment is described as agency

  2. Cognitive abilities are, for instance, perception and language

  3. Behavior refers to the observable patterns of actions and responses that an AI system exhibits. Autonomy, adaptiveness, goal-directedness, emergence, and context-sensitivity make it complex

  4. “Affect” is basically a synonym for emotion.