Agent Types

🧠 Introduction to AI

Andy Weeger

Neu-Ulm University of Applied Sciences

February 12, 2024

Agents

Agent

Figure 1: Agents interact with environments through sensors and actuators

Rational agent

A rational agent is one that does the right thing.

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has (Russell and Norvig 2022, 58).
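In code, this decision rule is an argmax over actions. The following minimal Python sketch is purely illustrative (the action set and the `expected_performance` stand-in are assumptions, not from Russell and Norvig):

```python
# Illustrative sketch of the rational-agent decision rule:
# pick the action with the highest expected performance, given
# the percept sequence so far. All names here are assumptions.

ACTIONS = ["Suck", "Left", "Right"]  # toy action set (vacuum world)

def expected_performance(percepts, action):
    """Toy stand-in for E[performance | percept sequence, action].
    A real agent would derive this from built-in knowledge and
    the actual performance measure."""
    _location, status = percepts[-1]  # only the latest percept matters here
    if status == "Dirty" and action == "Suck":
        return 10   # removing dirt scores highest
    if status == "Clean" and action != "Suck":
        return 1    # moving on to inspect the other square
    return 0

def rational_action(percepts):
    """For the given percept sequence, select an action that is
    expected to maximize the performance measure."""
    return max(ACTIONS, key=lambda a: expected_performance(percepts, a))

print(rational_action([("A", "Dirty")]))  # -> Suck
```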

It can be quite hard to formulate a performance measure correctly, however:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere once we have started it […] we had better be quite sure that the purpose put into the machine is the purpose which we really desire (Wiener 1960, 1358)

Exercise

Under which circumstances does a vacuum-cleaning agent act rationally?

Rationality

Rationality is not the same as perfection.

  • Rationality maximizes expected performance
  • Perfection maximizes actual performance
  • Perfection requires omniscience
  • Rational choice depends only on the percept sequence to date

Environments

Components

Before designing an agent (the solution), the task environment (the problem) must be specified as fully as possible, including

  • the performance measure (P),
  • the environment (E),
  • the actuators (A), and
  • the sensors (S)

Russell and Norvig (2022) call this the PEAS description of the task environment.
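As a concrete illustration, a PEAS description can be written down as a small data structure. The dataclass and the part-picking-robot entries below are my own sketch in the style of Russell and Norvig's examples:

```python
# Sketch: a PEAS specification as a data structure (Python 3.9+).
# The entries for a part-picking robot follow the style of Russell
# and Norvig's examples; the representation itself is illustrative.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

part_picking_robot = PEAS(
    performance_measure=["percentage of parts in correct bins"],
    environment=["conveyor belt with parts", "bins"],
    actuators=["jointed arm", "gripper"],
    sensors=["camera", "joint angle sensors"],
)

print(part_picking_robot.performance_measure)
```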

Properties

Task environments can be categorized along the following dimensions (Russell and Norvig 2022, 62–64):

  • Fully observable vs. partially observable
  • Single agent vs. multi-agent
  • Deterministic vs. nondeterministic
  • Episodic vs. sequential
  • Static vs. dynamic
  • Discrete vs. continuous
  • Known vs. unknown

The hardest case is partially observable, multi-agent, nondeterministic, sequential, dynamic, and continuous.
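The same dimensions can serve as a checklist when analyzing a concrete task environment. The sketch below characterizes chess with a clock; the data structure is illustrative, the classification follows Russell and Norvig (2022):

```python
# Sketch: the seven dimensions as a checklist (illustrative).
from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    fully_observable: bool
    single_agent: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    known: bool

chess_with_clock = EnvironmentProperties(
    fully_observable=True,  # both players see the whole board
    single_agent=False,     # the opponent is a second agent
    deterministic=True,     # moves have fully predictable effects
    episodic=False,         # each move depends on the game history
    static=False,           # strictly: semidynamic, the clock keeps running
    discrete=True,          # finitely many positions and moves
    known=True,             # the rules are fully known
)
```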

Exercise

Describe the task environment of a taxi driver agent.

Agent types

Simple reflex agents

Figure 2: A simple reflex agent
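In code, a simple reflex agent is a set of condition-action rules applied to the current percept only. A minimal sketch for the two-square vacuum world, following Russell and Norvig's classic example (the Python realization is mine):

```python
# Simple reflex agent for the two-square vacuum world.
# The action depends ONLY on the current percept; the agent has
# no memory of past percepts.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":  # rule 1: dirt -> suck
        return "Suck"
    if location == "A":    # rule 2: square A clean -> move right
        return "Right"
    return "Left"          # rule 3: square B clean -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left
```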

Model-based reflex agents

Figure 3: A model-based reflex agent
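A model-based reflex agent adds an internal state that is updated from each percept. The sketch below is an illustrative extension of the vacuum example: by remembering which squares it has seen clean, the agent can stop once the whole (two-square) world is clean.

```python
# Model-based reflex agent sketch (illustrative): internal state
# lets the agent act on parts of the world it cannot currently see.

class ModelBasedVacuumAgent:
    def __init__(self):
        # internal state: what the agent believes about each square
        self.state = {"A": "Unknown", "B": "Unknown"}

    def act(self, percept):
        location, status = percept
        self.state[location] = status       # sensor model: update belief
        if status == "Dirty":
            self.state[location] = "Clean"  # transition model: Suck cleans
            return "Suck"
        if all(s == "Clean" for s in self.state.values()):
            return "NoOp"                   # both squares known clean: stop
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))  # -> Suck
print(agent.act(("A", "Clean")))  # -> Right
print(agent.act(("B", "Clean")))  # -> NoOp
```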

Goal-based agents

Figure 4: A model-based, goal-based agent
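A goal-based agent uses the transition model to predict where actions lead and selects one whose predicted outcome reaches the goal. A minimal sketch on a toy (acyclic) road map; the place names and the map are invented for illustration:

```python
# Goal-based agent sketch: search the transition model for an action
# from which the goal is reachable. The map is invented and acyclic.

ROADS = {  # transition model: ROADS[state][action] -> successor state
    "Ulm": {"north": "Stuttgart", "east": "Augsburg"},
    "Augsburg": {"east": "Munich"},
    "Stuttgart": {},
    "Munich": {},
}

def goal_based_action(state, goal):
    """Return an action predicted to lead (eventually) to the goal."""
    for action, successor in ROADS[state].items():
        if successor == goal or goal_based_action(successor, goal):
            return action
    return None  # no action reaches the goal from this state

print(goal_based_action("Ulm", "Munich"))  # -> 'east' (via Augsburg)
```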

Utility-based agents

Figure 5: A model-based, utility-based agent
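A utility-based agent compares actions by their expected utility rather than by a binary goal test. In the sketch below, the outcome probabilities and utility values are invented for illustration:

```python
# Utility-based agent sketch: choose the action with the highest
# EXPECTED utility over uncertain outcomes (values are invented).

OUTCOMES = {  # action -> list of (probability, resulting state)
    "highway":   [(0.8, "fast arrival"), (0.2, "traffic jam")],
    "back_road": [(1.0, "slow arrival")],
}

UTILITY = {"fast arrival": 10, "slow arrival": 4, "traffic jam": 0}

def expected_utility(action):
    return sum(p * UTILITY[state] for p, state in OUTCOMES[action])

def utility_based_action():
    return max(OUTCOMES, key=expected_utility)

print(expected_utility("highway"))    # 0.8*10 + 0.2*0 = 8.0
print(expected_utility("back_road"))  # 1.0*4         = 4.0
print(utility_based_action())         # -> 'highway'
```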

Recap

What are the main differences between the agents?

Main differences

In contrast to simple reflex agents, model-based reflex agents keep track of the state of the world: they maintain an internal state that is updated using a transition model (how the world evolves). However, both act on fixed condition-action rules.

Goal-based agents additionally have some sort of goal information that describes desirable situations. This enables them to make goal-based decisions, i.e., decisions that involve consideration of the future, using the transition model to predict the outcomes of actions.

The performance measure of utility-based agents is more general: instead of a binary distinction between “goal achieved” and “goal not achieved”, it allows comparing different world states according to their relative utility or expected utility, respectively.

Example: a goal-based agent for routing selects actions based on a single, binary goal: reaching the destination. A utility-based agent also considers additional goals, such as spending as little time as possible on the road, spending as little money as possible, or having the best scenery on the trip, and tries to maximize overall utility across these goals. In this example, reaching the destination remains the ultimate goal; without achieving it, utility would be zero. Utility then increases or decreases depending on how the chosen actions affect the achievement of the other goals, whose relative importance needs to be weighed.
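This weighing can be made explicit with a weighted utility function over the sub-goals. In the sketch below, the routes, scores, and weights are invented for illustration; as described above, a route that fails to reach the destination has utility zero:

```python
# Sketch of the routing example: utility is zero unless the
# destination is reached; otherwise it is a weighted sum over the
# remaining goals. Routes, scores, and weights are invented.

WEIGHTS = {"time": 0.5, "cost": 0.3, "scenery": 0.2}  # relative importance

ROUTES = {  # route -> (reaches destination?, score in [0, 1] per goal)
    "highway":    (True,  {"time": 0.9, "cost": 0.4, "scenery": 0.2}),
    "coast_road": (True,  {"time": 0.3, "cost": 0.7, "scenery": 1.0}),
    "dead_end":   (False, {"time": 1.0, "cost": 1.0, "scenery": 1.0}),
}

def route_utility(route):
    reaches_destination, scores = ROUTES[route]
    if not reaches_destination:
        return 0.0  # the ultimate goal dominates everything else
    return sum(WEIGHTS[goal] * scores[goal] for goal in WEIGHTS)

best = max(ROUTES, key=route_utility)
print(best, route_utility(best))  # -> highway 0.61
```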

Learning agents

Figure 6: A learning agent
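A learning agent adds a critic, which judges behavior against the performance standard, and a learning element, which improves the performance element accordingly. The skeleton below is a deliberately simple illustration; the running-average update rule and the 10% exploration rate are my own choices, not a specific algorithm from the source.

```python
# Learning agent skeleton (illustrative): the performance element
# acts on learned action values, the critic supplies a reward, and
# the learning element updates the values as a running average.
import random

class LearningVacuumAgent:
    def __init__(self, actions=("Suck", "Left", "Right")):
        self.values = {a: 0.0 for a in actions}  # learned action values
        self.counts = {a: 0 for a in actions}

    def act(self):
        """Performance element, with a simple problem generator:
        10% of the time, explore a random action."""
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        """Learning element: running average of the critic's rewards."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

agent = LearningVacuumAgent()
for _ in range(100):
    action = agent.act()
    reward = 1.0 if action == "Suck" else 0.0  # toy performance standard
    agent.learn(action, reward)
print(max(agent.values, key=agent.values.get))  # usually -> 'Suck'
```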

✏️ Exercises

Concepts

Define in your own words the following terms:

  • Rationality
  • Autonomy
  • Agent
  • Environment
  • Sensor
  • Actuator
  • Percept
  • Agent function
  • Agent program

Agent types

Explain the differences between the following agent types in your own words. Describe the component(s) that is/are specific to each type.

  • Reflex agent
  • Model-based agent
  • Goal-based agent
  • Utility-based agent
  • Learning agent

Vacuum cleaner

Under which circumstances does a robotic vacuum cleaner act rationally?

Describe the task environment of such an agent.

PEAS

For each of the following agents, specify the performance measure, the environment, the actuators, and the sensors.

  • Microwave oven
  • Chess program
  • Autonomous supply delivery

Performance measure

Describe a task environment in which the performance measure is easy to specify completely and correctly, and one in which it is not.

Assertions

For each of the following assertions, say whether it is true or false and support your answer with examples or counterexamples where appropriate.

  1. An agent that senses only partial information about the state cannot be perfectly rational.
  2. There exist task environments in which no pure reflex agent can behave rationally.
  3. There exists a task environment in which every agent is rational.
  4. Every agent is rational in an unobservable environment.
  5. A perfectly rational poker-playing agent never loses.

Task environment

For each of the following activities, characterize the task environment in terms of the properties discussed in the lecture notes.

  • Playing soccer
  • Exploring the subsurface oceans of Titan
  • Shopping for used AI books on the internet
  • Playing a tennis match

Task environment #2

For each of the following task environment properties, rank the example task environments from most to least according to how well the environment satisfies the property.

Lay out any assumptions you make to reach your conclusions.

  1. Fully observable: driving; document classification; tutoring a student in calculus; skin cancer diagnosis from images
  2. Continuous: driving; spoken conversation; written conversation; climate engineering by stratospheric aerosol injection
  3. Stochastic: driving; sudoku; poker; soccer
  4. Static: chat room; checkers; tax planning; tennis

Literature

Russell, Stuart, and Peter Norvig. 2022. Artificial Intelligence: A Modern Approach. Harlow: Pearson Education.
Wiener, Norbert. 1960. “Some Moral and Technical Consequences of Automation.” Science 131 (3410): 1355–58.