AI is an attempt to build intelligent computers, but what is intelligence?
Thinking like humans - Building something like a brain! But how does a brain work?
Maybe machines can reach intelligence by a different route than the human brain
Thinking rationally - Automated reasoning and logic are foundational stuff in AI
It is unclear whether logic really captures the kind of knowledge that people have or need help with.
Plus, it's really hard to search through logical statements
Act like humans - The Turing Test - can a human tell if a computer is a computer?
Do we really want computers to act like humans?
Acting rationally - AKA intelligent agents (Approach taken in R&N and P&M texts)
1.1.1 - The Turing Test
In the Turing test, the computer is asked questions by a human interrogator
Computer passes the test if the interrogator cannot tell whether the responses come from a human or computer
The Turing test simplifies the question "is the machine intelligent" into "can the machine imitate a human?"
Turing's idea to try and define "(artificial) intelligence" more concretely has yielded useful results
Chat bots: ELIZA, A.L.I.C.E., automated online assistants, etc.
CAPTCHA: Completely Automated Public Turing Test to tell Computers and Humans Apart.
Turing test, but the "interrogator" is a computer
1.1.2 - What is AI? Definitions
OECD: An AI system is a machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to
Perceive real and/or virtual environments;
Abstract these perceptions into models through analysis in an automated manner (e.g. with machine learning), or manually; and
Use model inference to formulate options for outcomes
AI systems are designed to operate with varying levels of autonomy
Association for the Advancement of Artificial Intelligence (AAAI) offers this on its home page
AI is the scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines
Poole and Mackworth say
AI is the synthesis and analysis of computational agents that act intelligently
We say:
AI is the study and development of algorithms for solving problems that we typically associate with intelligence.
AI is a diverse collection of topics. In this course we address core methods and models, which have found widespread application and serve as building blocks in more sophisticated AI systems.
2.0 - The History of Artificial Intelligence
Charles Babbage and Ada Lovelace develop the first mechanical machines that perform computation
These machines were considered intelligent in that they could perform computations more complex than a human could carry out by hand
The first mathematical model of neural networks was proposed in 1943 (McCulloch and Pitts)
Alan Turing was a pioneer for Artificial Intelligence methods
Developed the Turing Machine and the Turing Test
The term Artificial Intelligence was coined in the 1955 proposal for the Dartmouth conference (held in 1956)
ELIZA, an early natural language processing program, was developed in the mid-1960s
1997 - the AI system Deep Blue beats the world chess champion, Garry Kasparov
IBM Watson beats champions in game show Jeopardy!
2.1 - Fear of AI
As AI becomes more prevalent, the fear of it increases too
Similar to the industrial revolution
Jobs being replaced or made obsolete shifts job opportunities and creates new types of jobs.
2.2 - Open Problems
Handling uncertainty (e.g. self-driving cars)
Explainable AI
3.0 - Intelligent (Computational) Agents
An agent is something that acts in an environment
An agent acts intelligently if:
Its actions are appropriate for its goals and circumstances
It is flexible to changing environments and goals
It learns from experience
It makes appropriate choices given perceptual and computational limitations
3.1 - Examples of Agents
Organisations: Microsoft, Facebook, Government of Australia, UQ, ITEE
Intelligent agents always make the best decision given the available resources (knowledge, time, computational power, and memory)
Best: Maximise certain performance measure(s), usually represented as a utility function. More on this throughout the semester.
3.2.1 - In This Course
We are interested in building software systems (called agents) that behave rationally
i.e. systems that accomplish what they are supposed to do, well, given the available resources
We don't worry about how closely systems resemble humans, or about philosophical questions of what "intelligence" is (not that these questions are uninteresting!)
But we may use inspirations from humans or other "intelligent" beings or systems
3.3 - Intelligent Agents Acting in an Environment
Recall our goal: To build a useful, intelligent agent
To start with:
Computers perceive the world using sensors
Agents maintain models/representations of the world and use them for reasoning
Computers can learn from data
To achieve our goal, we need to define the agent in a way that we can program it
The problem of constructing an agent is called the agent design problem
Simply, it's about defining the components of the agent, so that when the agent acts rationally, it will accomplish the task it is supposed to perform, and do it well
3.4 - Agents Acting in an Environment: Inputs and Outputs
An agent performs an action in the environment
The environment generates a percept / stimuli / observation
The percept generated by the environment may depend on the sequence of actions the agent has done.
Agent Inputs
Abilities: What can the agent do? The set of possible actions it can perform
Goals: What is the agent working toward? What it wants, its desires, its values ...
Prior Knowledge: Information about the world before interacting with it. What it knows and believes initially, what it doesn't get from experiences
Past Experiences: Information about the world from previous interactions
History of Stimuli
(Current) Stimuli: What it receives from the environment now (observations, percepts)
Past Experiences: What it has received in the past.
Agent Outputs
Actions: Allows the agent to explore and exploit the environment to achieve its goals.
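The input/output cycle above can be sketched as a minimal agent-environment loop. This is an illustrative sketch, not a standard API; the class and method names are made up:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal agent interface: choose an action from the current percept."""

    @abstractmethod
    def select_action(self, percept):
        """Return the next action, given the latest percept."""

class Environment(ABC):
    """Minimal environment interface: actions in, percepts out."""

    @abstractmethod
    def step(self, action):
        """Apply the action and return the resulting percept."""

def run(agent, env, initial_percept, num_steps):
    """The agent-environment loop: act, observe, repeat."""
    percept = initial_percept
    for _ in range(num_steps):
        action = agent.select_action(percept)
        percept = env.step(action)
    return percept
```

Note that the percept returned by `step` can depend on the whole history of actions the environment has seen, matching the point above.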
Autonomous Vehicle
Abilities: Steer, accelerate, brake
Goals: Safety, get to destination, timeliness
Prior Knowledge: Street maps, what signs mean, what to stop for
Stimuli: Vision, laser, GPS, voice commands
Past Experiences: How braking and steering affects direction and speed
Air-Conditioner Thermostat and Controller Agent
Abilities: Turn air-conditioner on or off
Goals: Comfortable temperature, save energy, save money
Prior Knowledge: 24 hour cycle, weekends
Stimuli: Temperature, set temperature, who is home, outside temperature, rooftop PV generation
Past Experiences: When people come and go, who likes what temperature, building thermal dynamics
4.0 - The Agent Design Problem
Goal: To build a useful, intelligent agent.
4.1 - Agent Design Components
The following components are required to solve an agent design problem:
Action Space (A): The set of all possible actions that the agent can perform
Percept Space (P): The set of all possible things that the agent can perceive
State Space (S): The set of all possible configurations of the world the agent is operating in
World Dynamics / Transition Function (T: S × A → S): A function that specifies how the configuration of the world changes when the agent performs an action on it
Perception Function (Z: S → P): A function that maps a state to a perception
Utility Function (U: S → ℝ): A function that maps a state (or sequence of states) to a real number, indicating how desirable it is for the agent to occupy that state or sequence of states
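The six components above can be bundled into a single object. A minimal sketch, where the class and field names are illustrative rather than standard:

```python
from dataclasses import dataclass
from typing import Callable, Hashable

@dataclass(frozen=True)
class AgentDesignProblem:
    """Bundles the six agent-design components (names are illustrative)."""
    action_space: frozenset                               # A: possible actions
    percept_space: frozenset                              # P: possible percepts
    state_space: frozenset                                # S: world configurations
    transition: Callable[[Hashable, Hashable], Hashable]  # T: S x A -> S
    perception: Callable[[Hashable], Hashable]            # Z: S -> P
    utility: Callable[[Hashable], float]                  # U: S -> R
```

For example, a trivial two-state "light switch" world would use `state_space=frozenset({0, 1})`, `transition=lambda s, a: 1 - s`, and `perception=lambda s: s`.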
4.1.1 - Agent Design Components - Utility Function
An agent that performs rationally is optimising one (or more) performance criteria
A rational agent selects an action that it believes will maximise its performance criteria, given the available knowledge, time, and computational resources
We define the performance of an agent using a Utility Function U: S → ℝ
A function that maps a state (or a sequence of states) to a real number, indicating how desirable it is for the agent to occupy that state / sequence of states
Crafting the utility function is a key step in the agent design process.
4.2 - 8-Puzzle
Action Space (A): Move the empty cell left (L), right (R), up (U) or down (D)
Percept Space (P): The sequence of numbers in the left-right and up-down direction, where the empty cell is marked with an underscore
State Space (S): Same as P (but this is not always the case)
World Dynamics (T): The change from one state to another, given a particular movement of the empty cell
Can be represented as a table.
Percept Function (Z: S → P): Identity map
Utility Function (U): +1 for the goal state; 0 for all other states
Alternatively: +1 for each tile in the correct position
The 8-puzzle utility function cannot be the number of steps to completion, since determining that number requires solving the puzzle first.
It is much easier to quantify utility as the number of tiles in the correct position
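The 8-puzzle components above can be sketched in code. This is a minimal illustration: a state is a tuple of 9 entries read left-to-right, top-to-bottom, with 0 standing in for the empty cell (a representation choice, not part of the spec above):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

# Index offset of the empty cell for each action in A.
MOVES = {"L": -1, "R": 1, "U": -3, "D": 3}

def transition(state, action):
    """T: move the empty cell; return the state unchanged if the move is illegal."""
    i = state.index(0)
    row, col = divmod(i, 3)
    if (action == "L" and col == 0) or (action == "R" and col == 2) \
            or (action == "U" and row == 0) or (action == "D" and row == 2):
        return state  # move would push the empty cell off the board
    j = i + MOVES[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def utility(state):
    """U (alternative form): +1 for each tile in its goal position."""
    return sum(1 for a, b in zip(state, GOAL) if a == b and a != 0)
```

Since S here equals P, the percept function Z is just the identity, as noted above.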
---
5.0 - Dimensions of Complexity
An alternative technique to the Agent Design Problem
Research proceeds by making simplifying assumptions, and gradually reducing them
Each simplifying assumption gives a dimension of complexity
Multiple values in a dimension: from simple to complex
Simplifying assumptions can be related in various combinations
Much of the history of AI can be seen as starting from the simple and adding in complexity in some of these dimensions
From P & M Chapter 1.5
Modularity: Flat, Modular, Hierarchical
Planning Horizon: How far away the goal is / how far ahead you can plan
Representation: States, Features, Relations
Computational Limits: Perfect Rationality, Bounded Rationality
Learning: Knowledge is given, Knowledge is learned
Sensing Uncertainty: Fully observable, Partially observable
Effect Uncertainty: Deterministic, Stochastic
Preference: Goals, Complex preferences
Number of Agents: Single agent, Multiple agents
Interaction: Offline, Online
5.1.1 - Dimensions of Complexity - Modularity
Flat: Model at one level of abstraction
Modular: Model with interacting modules that can be understood separately
Hierarchical: Model with modules that are (recursively) decomposed into modules
Flat representations are adequate for simple systems
Complex biological systems, computer systems, organisations are all hierarchical
Is the environment continuous or discrete?
A flat description is typically either continuous or discrete (not both)
Hierarchical reasoning is often a hybrid of continuous and discrete
5.1.2 - Dimensions of Complexity - Planning Horizon
How far the agent looks into the future when deciding what to do
Static: World does not change (while the agent is making a decision)
Finite Stage: Agent reasons about a fixed finite number of time steps
Indefinite Stage: Agent reasons about a finite, but not predetermined number of time steps
Infinite Stage: The agent plans for going on forever (i.e. process oriented)
5.1.3 - Dimensions of Complexity - Representation
Much of modern AI is about finding correct representations and exploiting the compactness for computational gains
An agent can reason in terms of:
Explicit states: A state is one way the world could be
Features or Propositions: States can be described using features
Individuals and Relations: There is a feature for each relationship on each tuple of individuals
Often an agent can reason without knowing the individuals or when there are infinitely many individuals.
5.1.4 - Dimensions of Complexity - Computational Limits
Perfect Rationality: The agent can determine the best course of action, without taking into account its limited computational resources
Bounded Rationality: The agent must make good decisions based on its perceptual, computational, and memory limitations
5.1.5 - Dimensions of Complexity - Learning (from Experience)
Whether the model is fully specified a priori:
Knowledge is given
Knowledge is learned from data or past experience
Always a mix of prior (innate, programmed) knowledge and learning (nature v nurture)
5.1.6 - Dimensions of Complexity - Uncertainty
There are two dimensions for uncertainty:
Sensing uncertainty: noisy or incomplete perception of the state
Effect uncertainty: uncertainty in the outcome of actions
In this course, we restrict our focus to probabilistic models of uncertainty. Why?
Agents need to act even if they are uncertain
Predictions are needed to decide what to do
Definitive predictions: You will be run over tomorrow
Point probabilities: Probability you will be run over tomorrow is 0.002 if you are careful and 0.05 if you are not careful.
Probability ranges: You will be run over with probability in range [0.001, 0.34]
Acting is gambling: Agents who don't use probabilities will lose to those who do
Probabilities can be learned from data and prior knowledge.
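The point-probability example above can be turned into a decision by maximising expected utility. The probabilities are from the notes; the utility numbers below are made up purely for illustration:

```python
# Probability of being run over, per action (from the notes above).
P_RUN_OVER = {"careful": 0.002, "not_careful": 0.05}

# Hypothetical utilities: being run over is very bad; being careless
# saves a little effort when nothing goes wrong.
U_RUN_OVER = -1000.0
U_SAFE = {"careful": 0.0, "not_careful": 5.0}

def expected_utility(action):
    """Expected utility = sum over outcomes of P(outcome) * U(outcome)."""
    p = P_RUN_OVER[action]
    return p * U_RUN_OVER + (1 - p) * U_SAFE[action]

# A rational agent picks the action with the highest expected utility.
best = max(P_RUN_OVER, key=expected_utility)
```

With these (assumed) numbers, "careful" wins: its expected utility is -2, versus about -45.25 for "not_careful", even though carelessness pays off slightly when nothing happens.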
Sensing Uncertainty
Whether an agent can determine the state from its stimuli
Fully-Observable: The agent can observe the state of the world
Partially-Observable: There can be a number of states that are possible given the agent's stimuli
Effect Uncertainty
If an agent knows the initial state and its action, could it predict the resulting state?
Deterministic: The resulting state is determined from the action and the state
Stochastic: There is uncertainty about the resulting state.
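The deterministic/stochastic distinction can be shown with two toy transition functions. A minimal sketch, assuming a simple integer-valued state and a made-up 0.8 success probability:

```python
import random

def det_transition(state, action):
    """Deterministic: the next state is fully determined by (state, action)."""
    return state + action

def stochastic_transition(state, action, rng):
    """Stochastic: the action succeeds with probability 0.8 (an assumed
    value for illustration); otherwise the state is unchanged."""
    return state + action if rng.random() < 0.8 else state
```

The deterministic version always returns the same next state, while repeated calls to the stochastic version with the same arguments can yield different outcomes.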
5.1.8 - Dimensions of Complexity - Preferences
What does the agent try to achieve?
Achievement goal: A goal to achieve. This can be a complex logical formula
Complex preferences: May involve trade-offs between various desiderata (things wanted or needed), perhaps at different times
Ordinal: Only the order matters
Cardinal: Absolute values
5.1.9 - Dimensions of Complexity - Number of Agents
Are there multiple reasoning agents that need to be taken into account?
Single agent reasoning: Any other agents are part of the environment
Multiple agent reasoning: An agent reasons strategically about the reasoning of other agents
Agents can have their own goals: Cooperative, competitive, or goals can be independent of each other
5.1.10 - Dimensions of Complexity - Interaction
When does the agent reason to determine what to do?
Reason Offline Before acting
Reason Online While interacting with the environment