Introduction to AI (I2AI)
Neu-Ulm University of Applied Sciences
March 17, 2026
When you hear “Generative AI”, what comes to mind?
And what do you think it actually means for a machine to create something?
Neural networks learn to recognize patterns. Generative AI learns to create them.
The shift is fundamental (Goodfellow et al., 2016; Urbach et al., 2026):
Generative AI has rapidly transitioned from a niche research domain to a significant driver of innovation across industries (Urbach et al., 2026).
Two major families of foundational models dominate today:
Beyond standalone models, Agentic AI combines these capabilities with planning, memory, and tool use, enabling AI to act, not just generate.
The introduction of ChatGPT by OpenAI in November 2022 marked a turning point:
ChatGPT is a catalyst, not the full picture.
LLMs are neural networks trained on vast amounts of text, capable of generating coherent, contextually appropriate language (Brown et al., 2020; Vaswani et al., 2017).
Key characteristics
The generation process in an LLM follows a clear probabilistic pipeline (Sanderson, 2024):
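The probabilistic pipeline can be sketched in a few lines: the model assigns a score (logit) to every token in its vocabulary, the scores are turned into probabilities, and the next token is sampled. The vocabulary and logits below are invented for illustration; a real model produces them from the full context.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    Lower temperature sharpens the distribution (more deterministic)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    """Sample one token from the distribution over the vocabulary."""
    probs = softmax(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(vocab, probs):
        cumulative += p
        if r < cumulative:
            return token
    return vocab[-1]

# Hypothetical vocabulary and logits for the context "The cat sat on the"
vocab = ["mat", "dog", "roof", "piano"]
logits = [4.0, 1.5, 2.5, 0.1]

print(sample_next_token(vocab, logits, temperature=0.7))
```

Generation repeats this step, appending each sampled token to the context, until an end-of-sequence token appears.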
LLMs are not trained in a single step; they go through three distinct phases (Ouyang et al., 2022):
Consider the tasks you do in a typical working day. Where could an LLM genuinely help? And where might it do more harm than good?
LLMs are applied across a broad spectrum of domains (Gimpel et al., 2023, 2024):
Despite remarkable capabilities, LLMs have fundamental limitations that any responsible deployment must address (Riemer & Peter, 2023; Verma & Oremus, 2023).
While LLMs generate text token by token, diffusion models generate images, video, and audio through an iterative denoising process inspired by physics (Ho et al., 2020; Urbach et al., 2026).
The core intuition:
The most prominent application of diffusion models is generating images from text descriptions (Rombach et al., 2022):
Diffusion models extend naturally to other modalities:
All three modalities share the same fundamental mechanism: from embedding the prompt, through iterative denoising, to structured output (Liu et al., 2023; Singh, 2023).
From generation to action
"Agentic AI is an emerging paradigm in AI that refers to autonomous systems designed to pursue complex goals with minimal human intervention." (Acharya et al., 2025, p. 18912)
Core characteristics
| Feature | Traditional AI | Agentic AI |
|---|---|---|
| Primary purpose | Task-specific automation | Goal-oriented autonomy |
| Human intervention | High (predefined parameters) | Low (autonomous adaptability) |
| Adaptability | Limited | High |
| Environment interaction | Static or limited context | Dynamic and context-aware |
| Learning type | Primarily supervised | Reinforcement and self-supervised |
| Decision-making | Data-driven, static rules | Autonomous, contextual reasoning |
Four key components transform LLMs into agents (Urbach et al., 2026):
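A minimal agent loop can make these components concrete: an LLM core plans the next action, calls a tool, and records the observation in memory until the goal is reached. Everything below is a hypothetical sketch; the `llm` function is a stub standing in for a real model call, and the tool protocol is invented for illustration.

```python
def llm(prompt: str) -> str:
    """Stub standing in for a real LLM call (assumption, not a real API).
    Decides the next action based on the goal and the history so far."""
    if "calculator" in prompt:       # a tool result is already in memory
        return "FINAL: 714"
    return "TOOL: calculator: 42 * 17"

def calculator(expression: str) -> str:
    """Tool: evaluate a simple 'a * b' arithmetic expression."""
    a, op, b = expression.split()
    return str(int(a) * int(b)) if op == "*" else "unsupported"

def run_agent(goal: str, max_steps: int = 5) -> list:
    memory = []  # memory: record of past (tool, input, observation) steps
    for _ in range(max_steps):
        # planning: the model chooses the next action from goal + memory
        decision = llm(f"Goal: {goal}\nHistory: {memory}")
        if decision.startswith("TOOL:"):
            _, tool, arg = [s.strip() for s in decision.split(":", 2)]
            observation = calculator(arg)  # tool use: act in the world
            memory.append((tool, arg, observation))
        else:
            break  # the model signalled a final answer
    return memory

print(run_agent("What is 42 * 17?"))
```

The loop, not any single call, is what makes the system agentic: each iteration feeds prior observations back into the next planning step.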