Business Value Creation with IT/AI (BVC)
Neu-Ulm University of Applied Sciences
March 16, 2026
After completing this unit, you will be able to:
What is intelligence?
Intelligence is the ability to accomplish complex goals, learn, reason, and adaptively perform effective actions within an environment. Gottfredson (1997)
Or more concisely: think and act — humanly and/or rationally.
Human intelligence “covers the capacity to learn, reason, and adaptively perform effective actions within an environment, based on existing knowledge. This allows humans to adapt to changing environments and act towards achieving their goals.” Dellermann et al. (2019, p. 632)
Sternberg et al. (1985) propose three distinct dimensions:
Kahneman (2011) distinguishes two modes of human cognition:
Both modes have blind spots. AI augmentation can (under the right conditions) compensate for System 1 biases without overwhelming System 2 capacity.
Collective intelligence refers to “[…] groups of individuals acting collectively in ways that seem intelligent.” Malone (2015, p. 3)
The concept implies that under certain conditions, a (large) group of homogeneous individuals can outperform any single individual or even a single expert (Leimeister, 2010).
Today, research increasingly focuses on hybrid collective intelligence: connecting heterogeneous agents (e.g., humans and machines) so that they combine complementary intelligence and act more intelligently together (Malone, 2015).
The term artificial intelligence describes systems that perform “[…] activities that we associate with human thinking, activities such as decision-making, problem solving, learning […]” Bellman (1978, p. 3)
AI can be defined as “[…] the art of creating machines that perform functions that require intelligence when performed by people […]” Kurzweil et al. (1990, p. 117)
The basic idea: systems that can analyse their environment, adapt to new circumstances, and act in ways that advance specified goals — without explicit programming for every situation.
What can AI do — and what can’t it?
Think individually for two minutes, then discuss with a partner.
06:00
If we use, to achieve our purposes, a mechanical agency with those operations we cannot interfere once we have started it […] we had better be quite sure that the purpose built into the machine is the purpose which we really desire. Wiener (1960, p. 1358)
Formulating a performance measure correctly is difficult — and a reason to be careful.
Rationality is not the same as perfection.
To understand the engineering limits of AI, we distinguish between three standards:
| Metric | Definition | Info Requirement | Feasibility |
|---|---|---|---|
| Rationality | Maximizing expected performance | Percept sequence + prior knowledge | High: The engineering standard |
| Omniscience | Knowing the actual outcome of actions | Complete future and present data | Impossible: Requires a “crystal ball” |
| Perfection | Maximizing actual performance | Requires Omniscience | Impossible in unpredictable worlds |
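The distinction above can be made concrete: a rational agent maximizes expected utility under its current beliefs, whereas perfection would require knowing the actual outcome in advance. A minimal sketch (the decision problem and all utility values are invented for illustration):

```python
def expected_utility(action, outcomes):
    """Expected performance: utilities of possible outcomes weighted by
    their estimated probabilities (all the agent can know in advance)."""
    return sum(p * u for p, u in outcomes[action])

def rational_choice(outcomes):
    # A rational agent maximizes EXPECTED utility given its beliefs;
    # maximizing ACTUAL utility would require knowing the outcome (omniscience).
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Invented example: take an umbrella given a 40% chance of rain.
outcomes = {
    "umbrella":    [(0.4, 8), (0.6, 6)],   # stays dry; minor hassle otherwise
    "no_umbrella": [(0.4, 0), (0.6, 10)],  # soaked if it rains
}
best = rational_choice(outcomes)  # → "umbrella" (6.8 vs. 6.0 expected)
```

Note that "no_umbrella" may turn out better on a dry day; the rational choice can still lose in any single realization, which is exactly why rationality is not perfection.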
To bridge the gap between initial ignorance and rational behavior, agents must gather information and learn.
Since agents lack omniscience, they must be designed to:
As the environment is usually not completely known a priori and not completely predictable, these are vital parts of rationality (Russell & Norvig, 2022, p. 59).
Agentic AI is an emerging paradigm in AI that refers to autonomous systems designed to pursue complex goals with minimal human intervention. Acharya et al. (2025, p. 18912)
Core characteristics
| Feature | Traditional AI | Agentic AI |
|---|---|---|
| Primary purpose | Task-specific automation | Goal-oriented autonomy |
| Human intervention | High (predefined parameters) | Low (autonomous adaptability) |
| Adaptability | Limited | High |
| Environment interaction | Static or limited context | Dynamic and context-aware |
| Learning type | Primarily supervised | Reinforcement and self-supervised |
| Decision-making | Data-driven, static rules | Autonomous, contextual reasoning |
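The right-hand column of the table can be condensed into a minimal sense-plan-act loop (an illustrative sketch, not a production framework; the class design and step bound are assumptions):

```python
class Agent:
    """Goal-oriented loop that distinguishes agentic AI from
    task-specific automation with predefined parameters."""

    def __init__(self, goal_check, plan, act, max_steps=10):
        self.goal_check = goal_check  # has the goal been reached?
        self.plan = plan              # choose next action from current state
        self.act = act                # execute action, return new state
        self.max_steps = max_steps    # safety bound on autonomy

    def run(self, state):
        for _ in range(self.max_steps):
            if self.goal_check(state):
                return state
            action = self.plan(state)        # contextual reasoning
            state = self.act(state, action)  # dynamic environment interaction
        return state  # stop after max_steps: bounded, not unlimited, autonomy

# Toy goal: drive a counter up to 5.
agent = Agent(
    goal_check=lambda s: s >= 5,
    plan=lambda s: "increment",
    act=lambda s, a: s + 1,
)
result = agent.run(0)  # → 5
```

The `max_steps` bound is deliberate: even in a toy sketch, autonomy is paired with a hard limit, anticipating the governance themes later in this unit.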
Anthropic (2024) discusses five key patterns for designing agentic AI workflows:
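One of these patterns, prompt chaining, decomposes a task into sequential model calls where each step's output feeds the next. A hedged sketch (`call_llm` is a hypothetical stand-in for any model API, not a real library call):

```python
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call an actual model API.
    return f"<output for: {prompt[:40]}>"

def prompt_chain(task: str, steps: list[str]) -> str:
    """Run a fixed sequence of prompts, threading each result forward.
    Gate checks between steps (omitted here) can abort a failing chain early."""
    result = task
    for step in steps:
        result = call_llm(f"{step}\n\nInput:\n{result}")
    return result

summary = prompt_chain(
    "Quarterly sales report text ...",
    ["Extract the key figures.",
     "Draft a one-paragraph summary.",
     "Translate the summary into German."],
)
```

The trade-off: chaining sacrifices latency for accuracy by making each individual call simpler and easier to check.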
Identify a process relevant to your project challenge.
Choose one process step from your project and work through these questions in pairs:
10:00
The idea is to combine the complementary capabilities of humans and computers to augment each other. Dellermann et al. (2019)
Hybrid intelligence is defined as the ability to achieve complex goals by combining human and artificial intelligence, thereby reaching superior results to those each of them could have accomplished separately, and continuously improving by learning from each other. Dellermann et al. (2019, p. 640)
Main characteristics:
Raisch & Krakowski (2021) argue that automation and augmentation are not opposing strategies — they are interdependent:
Effective AI deployment requires holding both logics simultaneously, managing their tensions across time and space.
The question is not “automate or augment?”
— but “when, where, and how to combine both?”
Berente et al. (2021) identify three interdependent dimensions that define the management challenge of AI systems:
These three dimensions interact: higher autonomy + higher inscrutability creates accountability gaps. Learning + higher inscrutability can produce invisible drift in system behaviour.
Seeber et al. (2020) highlight a fundamental shift in how AI systems are positioned in organisations:
| Traditional AI | AI as Teammates |
|---|---|
| Role: Tool to be used | Role: Active collaboration partner |
| Interaction: Responds to commands | Interaction: Engages proactively |
| Function: Task automation | Function: Complex problem-solving |
| Agency: Limited / directed | Agency: Autonomous with initiative |
| Integration: Technical system integration | Integration: Social & team integration |
Seeber et al. (2020) identify three interconnected design areas for AI teammates:
These areas are interdependent: decisions in one area constrain and shape the others. Effective design requires a holistic rather than purely technical approach.
According to Peeters et al. (2021):
The IT value creation process (Soh & Markus, 1995):
IT investments only translate into performance if three linked processes work:
When AI agents close the missing link (i.e., IT usage) — what changes?
The shift from human use to agent action changes where value is created and where it can break down:
| Characteristic | Traditional IT | Agentic AI |
|---|---|---|
| Missing link | Human adoption & use | Agent design & governance |
| Risk | Non-adoption, misuse, workarounds | Misaligned objectives, invisible errors, drift |
| Remedy | Training, UX design, change mgmt | Careful design, monitoring, oversight structures |
| Value driver | Effective human behaviour | System-level performance & accountability |
Herath et al. (2024) derive seven evidence-based design principles from action design research across three business decision contexts:
Fügener et al. (2022) conducted experiments on human-AI prediction tasks and found:
Human-AI teams achieve superior performance only when AI delegates to humans — not vice versa.
Human metaknowledge, i.e., the ability to assess your own reliability in a specific context (“knowing what you know”), seems to be the critical variable:
Traditional AI design assumes “top-down” delegation: humans decide when to hand tasks to AI. However, empirical evidence suggests this is often ineffective (Fügener et al., 2022).
Why human delegation fails
Why AI delegation works
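The AI-to-human delegation direction can be sketched as a confidence gate: the AI decides cases it is confident about and delegates the rest. The model interface and threshold below are illustrative assumptions, not from Fügener et al. (2022):

```python
def ai_predict(case):
    # Hypothetical model returning (prediction, confidence in [0, 1]).
    # Unlike most humans, the model can report a calibrated self-assessment.
    return case["model_label"], case["model_confidence"]

def delegate(cases, human_label, threshold=0.8):
    """Route each case: AI decides when confident, human decides otherwise."""
    decisions = []
    for case in cases:
        pred, conf = ai_predict(case)
        if conf >= threshold:
            decisions.append(("ai", pred))                   # AI decides
        else:
            decisions.append(("human", human_label(case)))   # delegated
    return decisions

cases = [
    {"model_label": "approve", "model_confidence": 0.95},
    {"model_label": "reject",  "model_confidence": 0.55},
]
out = delegate(cases, human_label=lambda c: "review")
# The confident first case stays with the AI; the uncertain one goes to the human.
```

The design hinges on calibration: if `model_confidence` is poorly calibrated, the gate routes the wrong cases, which mirrors the metaknowledge problem on the human side.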
Revilla et al. (2023) conducted a field experiment in retail demand forecasting. Their results reveal the conditionality of hybrid intelligence value:
| Context | Superior Strategy | Explanation |
|---|---|---|
| Short horizon, high uncertainty | Automation (AI only) | AI extracts signal from noise better; humans “tinker at the edges” and add bias |
| Long horizon, low uncertainty | Augmentation (human + AI) | AI/ML model is well-grounded; humans add contextual knowledge the algorithm misses |
| Short horizon, low uncertainty | Adjustable automation | Some contextual knowledge helps, but short-horizon noise limits full augmentation benefit |
| Long horizon, high uncertainty | Mixed | Long horizons favor human input, high uncertainty favors AI; the effects partially offset |
There is no universal best practice. Task context determines optimal collaboration strategy.
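The contingency logic of the table can be condensed into a simple lookup (labels taken from the table; the function interface is an illustrative choice):

```python
# Mapping (forecast horizon, demand uncertainty) -> collaboration strategy,
# following the Revilla et al. (2023) contingency table.
STRATEGY = {
    ("short", "high"): "automation (AI only)",
    ("long",  "low"):  "augmentation (human + AI)",
    ("short", "low"):  "adjustable automation",
    ("long",  "high"): "mixed",
}

def collaboration_strategy(horizon: str, uncertainty: str) -> str:
    return STRATEGY[(horizon, uncertainty)]

collaboration_strategy("short", "high")  # → "automation (AI only)"
```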
Where does agentic AI create value in your project?
Map your project’s core AI component to the IS business value matrix (Schryen, 2013):
Where is the primary value? Where are the risks? What conditions must hold?
10:00
Peeters et al. (2021) identify four properties that human-AI systems must exhibit for effective collaboration:
These properties enable calibrated trust (i.e., humans trusting AI appropriately): neither too much nor too little.
Bauer et al. (2023) show that AI systems providing explanations (XAI) alongside predictions may:
Transparency ≠ better decisions.
How XAI is designed determines whether it helps or hurts.
Different explanation types serve different cognitive needs (Miller, 2019; Wang et al., 2019):
The most effective explanation type depends on user expertise, time pressure, decision stakes, and potential for bias activation.
Hemmer et al. (2025) identify the organisational factors that enable effective human-AI complementarity:
Optimal task allocation:
According to Dennis et al. (2023), AI agents provide three fundamental affordances to human teams:
These affordances enable AI to contribute to team processes in ways that complement human team members and thus enable superior collective outcomes.
As agentic AI systems act autonomously, safety and accountability are critical — not optional (Shavit et al., 2023).
Three compounding factors raise the stakes:
The governance question is not “how do we prevent AI from making mistakes?” — but “how do we detect, correct, and account for mistakes when they inevitably occur?”
Shavit et al. (2023) propose a series of practices for the responsible deployment of agentic AI:
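One of these practices is constraining the agent’s action space and requiring human approval for consequential actions. A minimal sketch with an audit log (action names and the risk classification are invented for illustration):

```python
# High-impact actions require explicit approval; everything is logged so
# that mistakes can be detected, corrected, and accounted for afterwards.
HIGH_IMPACT = {"delete_records", "send_payment"}

def execute(action, payload, approve_fn, log):
    log.append(("requested", action))
    if action in HIGH_IMPACT and not approve_fn(action, payload):
        log.append(("blocked", action))   # auditable refusal
        return None
    log.append(("executed", action))      # auditable execution
    return f"done: {action}"

log = []
execute("summarise_report", {}, approve_fn=lambda a, p: False, log=log)
execute("send_payment", {"amount": 500}, approve_fn=lambda a, p: False, log=log)
# The low-impact summary proceeds; the payment is blocked without approval.
```

The audit log addresses the governance question directly: it does not prevent mistakes, but makes them detectable and attributable after the fact.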
Jarrahi & Ritala (2025) apply principal-agent theory to reframe AI agents as delegated actors rather than autonomous systems:
Three design principles follow:
This framing keeps accountability firmly with the principal — AI is an agent, not an autonomous actor with its own standing.
Papagiannidis et al. (2025) identify a systematic gap between AI principles and AI governance:
Their framework spans four phases:
Agentic AI raises ethical questions that governance frameworks must address:
What governance does your project solution need?
10:00
Agentic AI creates value not through autonomy alone — but through thoughtful design of human-AI interaction and clear governance.
The key connections:
Three questions your project design must answer:
A solution that cannot answer all three questions is not ready for deployment, regardless of its technical performance.
Multi-agent systems, foundation models as agents, and AI-to-AI coordination represent the next wave — raising the same questions of complementarity, value, and governance at a larger scale.
The concepts we have discussed today (hybrid intelligence, managed autonomy, responsible governance) are not specific to current AI technology. They are enduring frameworks for navigating the evolving boundary between human and machine capability.