1.0 - Introduction to AI and Rational Agents

1.1 - What is AI?

1.1.1 - The Turing Test

Figure 1 - The Turing Test. From @fatihbildirici on medium.com

1.1.2 - What is AI? Definitions

2.0 - The History of Artificial Intelligence

Figure 2 - The History of Artificial Intelligence. From the Queensland Brain Institute (QBI), UQ

2.1 - Fear of AI

2.2 - Open Problems

3.0 - Intelligent (Computational) Agents

3.1 - Examples of Agents

3.2 - Goals of Artificial Intelligence

3.2.1 - In This Course

3.3 - Intelligent Agents Acting in an Environment

🧠 Recall our goal: To build a useful, intelligent agent

3.4 - Agents Acting in an Environment: Inputs and Outputs

Agent | Agent Inputs | Agent Outputs
Autonomous Vehicle | |
Air-Conditioner Thermostat and Controller Agent | |
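The input–output view in the table above can be sketched as a simple agent program. The thermostat rule and the numeric thresholds below are illustrative assumptions, not the controller from the notes:

```python
# Minimal sketch of an agent as a function from percepts to actions.
# The rule and thresholds are assumptions for illustration.

def thermostat_agent(percept: float, target: float = 22.0) -> str:
    """Map a temperature percept (degrees C) to a control action."""
    if percept < target - 1.0:
        return "heat"
    if percept > target + 1.0:
        return "cool"
    return "off"

# The agent loop: perceive, act, repeat (here over a fixed trace).
for temp in [18.5, 22.0, 25.3]:
    print(temp, "->", thermostat_agent(temp))
```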

4.0 - The Agent Design Problem

Goal: To build a useful, intelligent agent.
Figure 3 - The Agent Design Problem. The diagram on the left can be simplified to an agent that perceives its environment and acts accordingly.

4.1 - Agent Design Components

The following components are required to solve an agent design problem:

4.1.1 - Agent Design Components - Utility Function
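As a rough sketch of the idea: a utility function maps outcomes to real numbers, and a rational agent chooses the action with the highest expected utility. The outcomes, probabilities, and utility values below are made-up assumptions for illustration:

```python
# Hypothetical utility values over outcomes (illustrative only).
utility = {"arrive_on_time": 10.0, "arrive_late": 2.0, "crash": -1000.0}

# Assumed outcome distributions for two candidate actions.
actions = {
    "drive_fast": {"arrive_on_time": 0.7, "arrive_late": 0.1, "crash": 0.2},
    "drive_safely": {"arrive_on_time": 0.5, "arrive_late": 0.5, "crash": 0.0},
}

def expected_utility(outcome_probs: dict) -> float:
    """Expected utility of an action: sum of P(outcome) * U(outcome)."""
    return sum(p * utility[o] for o, p in outcome_probs.items())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # drive_safely (EU 6.0 vs -192.8 for drive_fast)
```

Even a small crash probability dominates the comparison here, which is why the utility numbers, not just the goals, determine the rational choice.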

4.2 - 8-Puzzle
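One common way to represent the 8-puzzle (a sketch; the flat-tuple encoding is one of several reasonable choices) is a 9-element tuple read row by row, with 0 marking the blank, plus a successor function that generates legal moves:

```python
def moves(state: tuple) -> list:
    """Return states reachable by sliding one tile into the blank.
    state is a 9-tuple read row by row; 0 marks the blank."""
    b = state.index(0)            # position of the blank
    row, col = divmod(b, 3)
    result = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            t = 3 * r + c         # tile that slides into the blank
        else:
            continue
        s = list(state)
        s[b], s[t] = s[t], s[b]   # swap blank with neighbouring tile
        result.append(tuple(s))
    return result

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)   # blank in the centre
print(len(moves(start)))  # 4
```

With the blank in the centre there are four successors; in a corner, only two, which is the branching structure a search algorithm explores.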

---

5.0 - Dimensions of Complexity

From P & M Chapter 1.5

Dimension | Values
Modularity | Flat, Modular, Hierarchical
Planning Horizon | How far away the goal is / how far ahead you can plan
Representation | States, Features, Relations
Computational Limits | Perfect Rationality, Bounded Rationality
Learning | Knowledge is given, Knowledge is learned
Sensing Uncertainty | Fully observable, Partially observable
Effect Uncertainty | Deterministic, Stochastic
Preference | Goals, Complex preferences
Number of Agents | Single agent, Multiple agents
Interaction | Offline, Online

5.1.1 - Dimensions of Complexity - Modularity

Is the system structured as a single flat level, as interacting modules, or as a hierarchy of modules?

5.1.2 - Dimensions of Complexity - Planning Horizon

How far the agent looks into the future when deciding what to do

5.1.3 - Dimensions of Complexity - Representation

5.1.4 - Dimensions of Complexity - Computational Limits

5.1.5 - Dimensions of Complexity - Learning (from Experience)

5.1.6 - Dimensions of Complexity - Uncertainty

There are two dimensions for uncertainty:

In this course, we restrict our focus to probabilistic models of uncertainty. Why?

Sensing Uncertainty

Whether an agent can determine the state from its stimuli

Effect Uncertainty

If an agent knows the initial state and its action, could it predict the resulting state?
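The probabilistic view of sensing uncertainty can be sketched with a one-step Bayes update: the agent cannot observe the state directly, so it maintains a belief P(state) and updates it from a noisy observation. The states, prior, and sensor model below are illustrative assumptions:

```python
# Belief update under sensing uncertainty via Bayes' rule.
# All numbers are made up for illustration.

prior = {"door_open": 0.5, "door_closed": 0.5}

# Sensor model: P(observation = "sees_open" | state).
likelihood = {"door_open": 0.8, "door_closed": 0.3}

# Posterior is proportional to likelihood * prior, then normalised.
unnorm = {s: likelihood[s] * prior[s] for s in prior}
z = sum(unnorm.values())
posterior = {s: p / z for s, p in unnorm.items()}

print(posterior)  # door_open comes out to about 0.727
```

The same machinery handles effect uncertainty: a stochastic action is a distribution P(resulting state | state, action) instead of a sensor model.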

5.1.8 - Dimensions of Complexity - Preferences

5.1.9 - Dimensions of Complexity - Number of Agents

5.1.10 - Dimensions of Complexity - Interaction

When does the agent reason to determine what to do?
