Future Technologies & Media (FTM)
Neu-Ulm University of Applied Sciences
April 19, 2026
Emerging information technologies enable multimodal and immersive systems.
Multimodality refers to the use of multiple modes of communication to create meaning.
Multimodality implies that the use of several means of communication contributes to a better overall understanding of a message.
Immersion refers to the state of being deeply engaged, absorbed, or submerged in an environment, either physically or mentally.
Immersion implies that the consciousness of the immersed person is detached from their physical self. Immersiveness is the quality or degree of being immersive.
Stimuli that determine the immersiveness of environments created by technology are multimodal: visual, auditory, tactile, olfactory, and interactive.
Emerging information technologies enable intelligent and affective systems.
What do we mean by intelligence?
Provide a description that outlines what intelligence could mean.
Human intelligence “covers the capacity to learn, reason, and adaptively perform effective actions within an environment, based on existing knowledge. This allows humans to adapt to changing environments and act towards achieving their goals.” Dellermann et al. (2019, p. 632)
Sternberg et al. (1985) propose three distinctive dimensions: analytical (componential), creative (experiential), and practical (contextual) intelligence.
‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (European Commission, 2024).
Based on this definition, three main properties of intelligent agents can be distinguished: operating with varying levels of autonomy, exhibiting adaptiveness after deployment, and inferring outputs for explicit or implicit objectives.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
When designing an intelligent system, the task environment (i.e., the problem) must be specified as fully as possible, including
the performance measure,
the environment,
the actuators,
and the sensors.
Russell & Norvig (2022) call this the PEAS description (Performance, Environment, Actuators, Sensors).
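The sensor–actuator loop and a PEAS description can be sketched in a few lines of Python. The sketch below uses the two-square vacuum world from Russell & Norvig as a toy task environment; the function names, percept tuples, and performance-measure wording are illustrative, not from the slides.

```python
# Minimal sketch of the agent abstraction: sensors -> agent program -> actuators.

def reflex_vacuum_agent(percept):
    """Agent program: maps the current percept to an action."""
    location, status = percept          # sensors report location and dirt status
    if status == "Dirty":
        return "Suck"                   # performance measure rewards clean squares
    return "Right" if location == "A" else "Left"

# PEAS description of this task environment, written out as data:
peas = {
    "Performance": "one point per clean square per time step",
    "Environment": "two squares, A and B, each clean or dirty",
    "Actuators":   ["Left", "Right", "Suck"],
    "Sensors":     ["location", "dirt status"],
}

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))   # -> Right
```

The agent program itself is purely reactive; the PEAS dictionary is the designer's specification of the problem, not something the agent consults at run time.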
Task environments can be categorized along the following dimensions (Russell & Norvig, 2022, pp. 62–64): fully vs. partially observable, single- vs. multi-agent, deterministic vs. nondeterministic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous, and known vs. unknown.
Describe the task environment of a chess player and an autonomous car.
15:00
Cognitive abilities—or “thinking skills”—are mental capacities that enable us to acquire knowledge, process information, and solve problems. They involve processing mental information through (Carroll, 1993):
perception, attention, memory, reasoning, language, and executive functions
What are the basic cognitive abilities of a chatbot? How are they technically implemented?
Think on your own, then share your thoughts.
05:00
Find two real-world examples of multi-agent systems (one digital, one physical)
For each system, document:
10:00
Complex behavior refers to the observable patterns of actions and responses that an AI system exhibits (i.e., what the system does), particularly sophisticated, adaptive, or emergent conduct that goes beyond simple stimulus-response patterns.
To realize complex behavior, the components of an intelligent system (i.e., what the system can do cognitively) must have the following properties (to some extent):
Autonomy, rationality, learning, and introspection
A rational agent is one that does the right thing—it is goal directed.
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. Russell & Norvig (2022, p. 58)
It can be quite hard to formulate a performance measure correctly, however:
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere once we have started it […] we had better be quite sure that the purpose put into the machine is the purpose which we really desire. Wiener (1960, p. 1358)
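The rational-agent definition above reduces, in the simplest single-step case, to an argmax over expected utility. The actions, outcome probabilities, and utility values in this sketch are made up purely for illustration:

```python
# Sketch of rational action selection: pick the action that maximizes
# expected performance, given the agent's beliefs about outcomes.

def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome)."""
    return sum(p * utility[o] for o, p in outcome_probs[action].items())

def rational_action(actions, outcome_probs, utility):
    """Choose the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Toy beliefs: "move" reaches the goal more often than "wait".
outcome_probs = {
    "wait": {"goal_reached": 0.1, "no_progress": 0.9},
    "move": {"goal_reached": 0.7, "no_progress": 0.3},
}
utility = {"goal_reached": 10.0, "no_progress": 0.0}

print(rational_action(["wait", "move"], outcome_probs, utility))  # -> move
```

Wiener's warning applies directly here: if the `utility` table encodes the wrong purpose, the agent will rationally maximize the wrong thing.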
Under which circumstances does a chatbot act rationally?
Rationality != perfection
Learning agents are those that can improve their behavior through diligent study of past experiences and predictions of the future Russell & Norvig (2022, p. 668)
A learning agent
Supervised learning
Involves learning a function from examples ➞ training and test data
Unsupervised learning
The agent has to learn patterns in the input ➞ identification of categories or classifications
Reinforcement learning
The agent must learn from punishments or rewards ➞ learning by trial and error
Introspection refers to the capability to analyze one’s own cognitive abilities.
The system uses an observable model of its own abilities.
This model is used to simulate self-awareness processes.
Introspection allows the system …
The properties of an intelligent system are
Capacity to work in a complex environment
Interaction with the environment and other agents
Cognitive abilities
Perception, action control, deliberation, and interaction
Complex behavior
Acting autonomously, rationally, adaptation through learning, and introspection
Select an intelligent system and analyse it using the properties outlined here.
Note: This is an excellent task to prepare for the exam.
Read Dellermann et al. (2019).
What is intelligence?
Intelligence is the ability to accomplish complex goals, learn, reason, and adaptively perform effective actions within an environment. Gottfredson (1997)
The term artificial intelligence describes systems that perform “[…] activities that we associate with human thinking, activities such as decision-making, problem solving, learning […]” Bellman (1978, p. 3)
AI can be defined as “[…] the art of creating machines that perform functions that require intelligence when performed by people […]” Kurzweil et al. (1990, p. 117)
The basic idea: systems that can analyse their environment, adapt to new circumstances, and act in ways that advance specified goals without explicit programming for every situation.
This requires agency (i.e., the capacity to work in complex environments), thinking skills (i.e., cognitive abilities) as well as observable patterns of actions and responses (i.e., complex behavior).
Are we defining intelligence, or just human-ness?
Current AI architectures seem to be inspired by cognitive architectures such as the one proposed by Dual Process Theory (Kahneman, 2011).
| Feature | System 1 (Intuitive) | System 2 (Analytical) |
|---|---|---|
| Human Equivalent | Fast, automatic, “gut feeling” | Slow, effortful, logical reasoning |
| AI (LLM) | Next-token prediction (Instinct) | Chain-of-Thought (Reasoning) |
| Computational Cost | Low / immediate | High (requires more “tokens”) |
Observation: By forcing LLMs to “think before they speak” (using reasoning traces), we are literally programming a digital version of human introspection.
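As a minimal illustration of the two regimes in the table, the sketch below only constructs the two kinds of prompt; the exact wording is an assumption and no model is actually called:

```python
# System 1 vs. System 2 prompting for an LLM, shown as prompt construction only.
# The "step by step" instruction elicits a chain-of-thought reasoning trace,
# trading extra output tokens (cost) for more deliberate answers.

question = (
    "A bat and a ball cost 1.10 in total. The bat costs 1.00 more "
    "than the ball. How much does the ball cost?"
)

# "System 1" style: ask for an immediate answer.
system1_prompt = f"{question}\nAnswer with a single number."

# "System 2" style: force an explicit reasoning trace before the answer.
system2_prompt = (
    f"{question}\n"
    "Let's think step by step, writing out each intermediate step "
    "before giving the final answer."
)

print(len(system1_prompt), len(system2_prompt))  # System 2 costs more tokens
```

The longer prompt (and the much longer response it elicits) is the "High (requires more tokens)" row of the table made concrete.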
What if human-like isn’t the only way to be smart?
By building AI to mimic our brains, are we missing out on “Alien” forms of intelligence that could solve problems humans can’t even perceive?
Collective intelligence refers to “[…] groups of individuals acting collectively in ways that seem intelligent.” Malone (2015, p. 3)
The concept implies that under certain conditions, a (large) group of homogeneous individuals can outperform any single individual or even a single expert (Leimeister, 2010).
Today, research increasingly focuses on hybrid collective intelligence: connecting heterogeneous agents (e.g., humans and machines) so that they combine complementary intelligence and act more intelligently together (Malone, 2015).
Synthesize your findings from reading Dellermann et al. (2019) by finding answers to the following questions:
10:00
The idea is to combine the complementary capabilities of humans and computers to augment each other.
| Human intelligence | Artificial intelligence |
|---|---|
| Flexibility & transfer | Pattern recognition |
| Empathy & creativity | Probabilistic |
| Eventualities | Consistency |
| Common sense | Speed & efficiency |
| Intuition | Analysis |
Hybrid intelligence is defined as the ability to achieve complex goals by combining human and artificial intelligence, thereby reaching superior results to those each of them could have accomplished separately, and continuously improve by learning from each other. Dellermann et al. (2019, p. 640)
Main characteristics of hybrid intelligence are:
Raisch & Krakowski (2021) argue that automation and augmentation are not opposing strategies — they are interdependent:
Effective AI deployment requires holding both logics simultaneously, managing their tensions across time and space
The question is not “automate or augment?”
— but “when, where, and how to combine both?”
Seeber et al. (2020) highlight a fundamental shift in how AI systems are positioned in organisations:
| Traditional AI | AI as Teammates |
|---|---|
| Role: Tool to be used | Role: Active collaboration partner |
| Interaction: Responds to commands | Interaction: Engages proactively |
| Function: Task automation | Function: Complex problem-solving |
| Agency: Limited / directed | Agency: Autonomous with initiative |
| Integration: Technical system integration | Integration: Social & team integration |
Fügener et al. (2022) conducted experiments on human-AI prediction tasks and found:
Human-AI teams achieve superior performance only when AI delegates to humans, not vice versa.
Human metaknowledge, i.e., the ability to assess your own reliability in a specific context (“knowing what you know”), seems to be the critical variable.
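One way to operationalize AI-to-human delegation is a confidence threshold: the AI decides the cases it is sure about and routes the rest to the human. The toy classifier and the threshold below are illustrative assumptions, not the experimental setup of Fügener et al. (2022):

```python
# Sketch of a confidence-based delegation policy (AI delegates to human).

def ai_predict(x):
    """Toy 'classifier' on inputs in [0, 1]: returns (label, confidence).
    Confidence grows with distance from the 0.5 decision boundary."""
    confidence = abs(x - 0.5) * 2
    label = "positive" if x >= 0.5 else "negative"
    return label, confidence

def delegate(x, human_label, threshold=0.6):
    """AI answers when confident; otherwise the human's judgment is used."""
    label, confidence = ai_predict(x)
    if confidence >= threshold:
        return label, "AI"          # confident case: AI decides itself
    return human_label, "human"     # low confidence: delegate to the human

print(delegate(0.95, human_label="negative"))   # -> ('positive', 'AI')
print(delegate(0.55, human_label="negative"))   # -> ('negative', 'human')
```

The policy works only to the degree that the confidence score is calibrated, which is the machine-side analogue of the metaknowledge problem described above.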
Seeber et al. (2020) identify three interconnected design areas for AI teammates:
These areas are interdependent: decisions in one area constrain and shape the others. Effective design requires a holistic rather than purely technical approach.
According to Peeters et al. (2021):
“Computing that relates to, arises from, or deliberately influences emotions” (Picard, 1997).
Assigning systems “the human-like capabilities of observation, interpretation and generation of affect features” (Tao & Tan, 2005, p. 981)
The goal is to simulate empathy: affective systems are designed to interpret the emotional states of humans and adapt their behavior to them, giving an appropriate response for those emotions (i.e., emotion aware systems).
Emotion recognition
Interpreting the emotional states of humans
Emotion expression
Ability to simulate human affects (e.g. ‘emotional modality’)
Adequate response to emotion
Linking emotion recognition and expression e.g., to reinforce the meaning of messages
Facial expression, posture, speech, force or rhythm of keystrokes, and temperature changes (e.g., a hand on the mouse) can signify changes in a user’s emotional state.
These can be detected and interpreted by an affective system.
Affective systems can use some of these to simulate emotions.
Ekman et al. (1987) categorized emotions into 6 groups:
Fear, surprise, disgust, anger, happiness, and sadness
All of these can be facially expressed.
While Ekman et al. (1987) use categories, many AI systems use a dimensional approach to map human affect.
Valence
How positive (pleasant) or negative (unpleasant) is the emotion?
Arousal
How intense is the physical/mental energy (from calm to excited)?
The circumplex model (Russell, 1980) allows the system to map “nuance.” A user isn’t just “sad”; they might be “slightly frustrated” (low valence, medium arousal) or “severely depressed” (very low valence, low arousal).
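A minimal sketch of this dimensional mapping: place an affective state as a point in the valence/arousal plane and return the nearest labeled anchor. The anchor coordinates below are rough illustrative placements, not Russell's (1980) empirical positions:

```python
import math

# Illustrative emotion anchors as (valence, arousal) points in [-1, 1] x [-1, 1].
ANCHORS = {
    "happy":      ( 0.8,  0.5),
    "excited":    ( 0.6,  0.9),
    "calm":       ( 0.6, -0.7),
    "sad":        (-0.7, -0.5),
    "frustrated": (-0.6,  0.5),
    "angry":      (-0.8,  0.9),
}

def nearest_emotion(valence, arousal):
    """Map a (valence, arousal) point to the closest labeled anchor."""
    return min(
        ANCHORS,
        key=lambda e: math.dist((valence, arousal), ANCHORS[e]),
    )

print(nearest_emotion(-0.4, 0.3))    # slightly negative, medium arousal -> frustrated
print(nearest_emotion(-0.9, -0.6))   # very low valence, low arousal -> sad
```

The payoff of the dimensional view is exactly this nuance: the same discrete label set can be recovered, but intermediate states fall naturally between the anchors.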
Computers Are Social Actors.
Research by Reeves & Nass (1996) shows that humans naturally and unconsciously treat computers and AI as if they were real people.
Implication: If an AI doesn’t have an “affective” layer, it feels “broken” or “rude” to the human brain, even if the logic is perfect.
In hybrid intelligence, trust is the most critical variable.
Affective systems are used to build rapport, that is the feeling of being in sync with a teammate.
Think of/identify a real-life use case for affective computing.
10:00
More “affection” is not always better.
The Uncanny Valley describes the dip in human comfort when an AI/Robot becomes “almost human” but not quite—leading to a sense of eeriness or revulsion (Mori et al., 2012).
Some ethical questions to be asked:
Listen to the Decoder podcast episode with Google CEO Sundar Pichai on AI-powered search and the future of the web and reflect on the problems with the Internet and responses discussed.
The capacity to work in a complex environment is described as agency
Cognitive abilities are, for instance, perception and language
Behavior refers to the observable patterns of actions and responses that an AI system exhibits. Autonomy, adaptiveness, goal-directedness, emergence, and context-sensitivity make it complex.
”Affect” is basically a synonym for emotion.