Module 1
Introduction to Artificial Intelligence
• Homo sapiens: The name is Latin for “wise man”.
• Philosophy of AI - “Can a machine think and behave like humans do?”
• In simple words, Artificial Intelligence is a way of making a computer, a computer-
controlled robot, or a software system think intelligently, in a manner similar to how
intelligent humans think.
• Artificial intelligence (AI) is an area of computer science that emphasizes the
creation of intelligent machines that work and react like humans.
• AI is accomplished by studying how the human brain thinks and how humans learn,
decide, and work while trying to solve a problem, and then using the outcomes of this
study as a basis for developing intelligent software and systems.
1. What is AI?
Views of AI fall into four categories:
i. Thinking humanly
ii. Thinking rationally
iii. Acting humanly
iv. Acting rationally
Thinking Humanly
“The exciting new effort to make computers think ... machines with minds, in the full and literal sense.” (Haugeland, 1985)
“[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ...” (Bellman, 1978)
Thinking Rationally
“The study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
“The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)
Acting Humanly
“The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
Acting Rationally
“Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
“AI ... is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)
Figure 1.1 Some definitions of artificial intelligence, organized into four categories.
What can AI do today? A concise answer is difficult because there are so many activities in
so many subfields. Here we sample a few applications; others appear throughout the book.
Robotic vehicles: A driverless robotic car named STANLEY sped through the rough
terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005
DARPA Grand Challenge. STANLEY is a Volkswagen Touareg outfitted with cameras, radar,
and laser rangefinders to sense the environment and onboard software to command the steer-
ing, braking, and acceleration (Thrun, 2006). The following year CMU’s BOSS won the Ur-
ban Challenge, safely driving in traffic through the streets of a closed Air Force base, obeying
traffic rules and avoiding pedestrians and other vehicles.
Speech recognition: A traveler calling United Airlines to book a flight can have the en-
tire conversation guided by an automated speech recognition and dialog management system.
Autonomous planning and scheduling: A hundred million miles from Earth, NASA’s
Remote Agent program became the first on-board autonomous planning program to control
the scheduling of operations for a spacecraft (Jonsson et al., 2000). REMOTE AGENT
generated plans from high-level goals specified from the ground and monitored the execution of
those plans—detecting, diagnosing, and recovering from problems as they occurred.
Its successor program MAPGEN (Al-Chang et al., 2004) plans the daily operations for NASA’s
Mars Exploration Rovers, and MEXAR2 (Cesta et al., 2007) did mission planning—both
logistics and science planning—for the European Space Agency’s Mars Express mission in
2008.
Game playing: IBM’s DEEP BLUE became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5
in an exhibition match (Goodman and Keene, 1997). Kasparov said that he felt a “new
kind of intelligence” across the board from him. Newsweek magazine described the
match as “The brain’s last stand.” The value of IBM’s stock increased by $18 billion.
Human champions studied Kasparov’s loss and were able to draw a few matches in
subsequent years, but the most recent human-computer matches have been won
convincingly by the computer.
Spam fighting: Each day, learning algorithms classify over a billion messages as
spam, saving the recipient from having to waste time deleting what, for many users, could
comprise 80% or 90% of all messages, if not classified away by algorithms. Because the
spammers are continually updating their tactics, it is difficult for a static programmed
approach to keep up, and learning algorithms work best (Sahami et al., 1998; Goodman
and Heckerman, 2004).
Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do
automated logistics planning and scheduling for transportation. This involved up to
50,000 vehicles, cargo, and people at a time, and had to account for starting points,
destinations, routes, and conflict resolution among all parameters. The AI planning
techniques generated in hours a plan that would have taken weeks with older methods.
The Defense Advanced Research Projects Agency (DARPA) stated that this single
application more than paid back DARPA’s 30-year investment in AI.
Robotics: The iRobot Corporation has sold over two million Roomba robotic
vacuum cleaners for home use. The company also deploys the more rugged PackBot to
Iraq and Afghanistan, where it is used to handle hazardous materials, clear explosives,
and identify the location of snipers.
Machine Translation: A computer program automatically translates from Arabic
to English, allowing an English speaker to see the headline “Ardogan Confirms That
Turkey Would Not Accept Any Pressure, Urging Them to Recognize Cyprus.” The
program uses a statistical model built from examples of Arabic-to-English translations and
from examples of English text totaling two trillion words (Brants et al., 2007). None of
the computer scientists on the team speak Arabic, but they do understand statistics and
machine learning algorithms.
These are just a few examples of artificial intelligence systems that exist today.
Not magic or science fiction—but rather science, engineering, and mathematics, to
which this book provides an introduction.
21AI54-Principles of Artificial Intelligence
Intelligent Agents
1. Agents and environment
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators. This simple idea is illustrated in
Figure 2.1.
Figure 2.3 Partial tabulation of a simple agent function for the vacuum-cleaner world
shown in Figure 2.2.
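To make the tabulation concrete, here is a minimal Python sketch of the agent function as an explicit lookup table keyed by the percept sequence. The square names A and B, the percept encoding, and the partial table entries follow the convention of Figure 2.3; the rest is an illustrative assumption, not code from the text.

```python
# A partial tabulation of the vacuum-world agent function (cf. Figure 2.3).
# Each key is a percept sequence; each percept is (location, status).
agent_table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... the full table continues for longer percept sequences
}

def table_driven_agent(percepts, table):
    """Look up the action for the entire percept sequence seen so far."""
    return table.get(tuple(percepts), None)

print(table_driven_agent([("A", "Dirty")], agent_table))  # -> "Suck"
```

The table grows exponentially with the length of the percept sequence, which is why the later agent designs replace the explicit table with a compact program.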
2. Concept of Rationality
A rational agent is one that does the right thing—conceptually speaking, every entry in
the table for the agent function is filled out correctly.
Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
A definition of a rational agent: For each possible percept sequence, a rational
agent should select an action that is expected to maximize its performance measure, given
the evidence provided by the percept sequence and whatever built-in knowledge the agent
has.
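As a schematic rendering of this definition (not an implementation from the text), action selection can be written as a maximization over actions; expected_performance below is a hypothetical stand-in for the agent's built-in knowledge combined with the evidence in the percept sequence.

```python
# Rational action selection, schematically: choose the action whose expected
# performance is highest given the percept history. expected_performance is
# a hypothetical placeholder, not a function defined in the text.
def rational_action(actions, percept_sequence, expected_performance):
    return max(actions, key=lambda a: expected_performance(a, percept_sequence))
```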
Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the
other square if not; this is the agent function tabulated in Figure 2.3. Is this a rational agent?
That depends! First, we need to say what the performance measure is, what is known about
the environment, and what sensors and actuators the agent has. Let us assume the following:
• The performance measure awards one point for each clean square at each time step,
over a “lifetime” of 1000 time steps.
• The “geography” of the environment is known a priori (Figure 2.2) but the dirt distri-
bution and the initial location of the agent are not. Clean squares stay clean and sucking
cleans the current square. The Left and Right actions move the agent left and right
except when this would take the agent outside the environment, in which case the agent
remains where it is.
• The only available actions are Left , Right , and Suck .
• The agent correctly perceives its location and whether that location contains dirt.
We claim that under these circumstances the agent is indeed rational.
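One way to check this claim informally is to simulate the agent under exactly the assumptions listed above. The following Python sketch (the names and world encoding are our own) runs the reflex vacuum agent for 1000 time steps and awards one point per clean square per step.

```python
import random

def reflex_vacuum_agent(location, status):
    """Suck if the current square is dirty, otherwise move to the other square
    (the behavior tabulated in Figure 2.3)."""
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run_episode(steps=1000):
    # Dirt distribution and initial location are unknown a priori: randomize.
    world = {"A": random.choice(["Clean", "Dirty"]),
             "B": random.choice(["Clean", "Dirty"])}
    location = random.choice(["A", "B"])
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, world[location])
        if action == "Suck":
            world[location] = "Clean"   # sucking cleans the current square
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        # Performance measure: one point for each clean square at each step.
        score += sum(1 for s in world.values() if s == "Clean")
    return score

print(run_episode())  # close to the maximum of 2000 once both squares are clean
```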
What is the performance measure to which we would like our automated driver to aspire?
Desirable qualities include getting to the correct destination; minimizing fuel consumption
and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and
disturbances to other drivers; maximizing safety and passenger comfort; and maximizing
profits. Obviously, some of these goals conflict, so tradeoffs will be required.
What is the driving environment that the taxi will face? Any taxi driver must deal with a
variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads
contain other traffic, pedestrians, stray animals, road works, police cars, puddles, and
potholes. The taxi must also interact with potential and actual passengers.
Note:
The simplest environment is fully observable, single-agent, deterministic, episodic,
static, and discrete. Example: the simple vacuum-cleaner world.
The table in Figure 2.3 represents the agent function explicitly. Example: the simple vacuum cleaner.
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All of these agents can improve their performance and generate better actions over
time. The classes are given below (a code sketch of the simplest class follows the list):
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
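Here is a minimal sketch of the first class, the simple reflex agent: it maps the current percept directly to an action through condition-action rules, ignoring percept history. The rule format and percept encoding are illustrative assumptions, not from the text.

```python
# A simple reflex agent: condition-action rules applied to the current
# percept only, with no internal state. Percept fields and rule/action names
# here are hypothetical, chosen to match the braking example discussed next.
def simple_reflex_agent(percept, rules):
    for condition, action in rules:
        if condition(percept):
            return action
    return "NoOp"  # no rule matched

# Example rule: brake when the car in front shows brake lights.
rules = [(lambda p: p.get("brake_lights_ahead"), "initiate-braking")]
print(simple_reflex_agent({"brake_lights_ahead": True}, rules))  # initiate-braking
```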
Model-based reflex agents
For the braking problem, the internal state is not too extensive—just the previous
frame from the camera, allowing the agent to detect when two red lights at the edge of
the vehicle go on or off simultaneously.
For other driving tasks such as changing lanes, the agent needs to keep track of where
the other cars are if it can’t see them all at once. And for any driving to be possible at
all, the agent needs to keep track of where its keys are.
Updating this internal state information as time goes by requires two kinds of
knowledge to be encoded in the agent program.
First, we need some information about how the world evolves independently of the
agent—for example, that an overtaking car generally will be closer behind than it was
a moment ago.
Second, we need some information about how the agent’s own actions affect the
world—for example, that when the agent turns the steering wheel clockwise, the car
turns to the right, or that after driving for five minutes northbound on the freeway, one
is usually about five miles north of where one was five minutes ago.
This knowledge about “how the world works”—whether implemented in simple
Boolean circuits or in complete scientific theories—is called a model of the world. An
agent that uses such a model is called a model-based agent.
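The structure just described can be sketched as follows; update_state stands in for the two kinds of knowledge above (how the world evolves independently of the agent, and how the agent's own actions affect the world), and the class layout is our own illustrative rendering, not code from the text.

```python
# A sketch of a model-based reflex agent. The internal state is updated from
# the old state, the last action, and the new percept, using the model of
# "how the world works"; condition-action rules then fire on the new state.
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                   # the agent's best guess about the world
        self.last_action = None
        self.update_state = update_state  # encodes the model of the world
        self.rules = rules                # (condition, action) pairs over states

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"
```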
Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
• They choose an action, so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such consideration of different scenarios
is called searching and planning, which makes an agent proactive.
• Sometimes goal-based action selection is straightforward: for example when goal
satisfaction results immediately from a single action.
• Sometimes it will be trickier: for example, when the agent has to consider long
sequences of twists and turns to find a way to achieve the goal.
• Search and planning are the subfields of AI devoted to finding action sequences that
achieve the agent’s goals.
Example: The reflex agent brakes when it sees brake lights. A goal-based agent, in
principle, could reason that if the car in front has its brake lights on, it will slow down.
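Since goal-based action selection reduces to finding an action sequence that reaches a goal state, it can be sketched as a search. The breadth-first formulation below and the toy two-square usage are our own illustration, assuming hashable states and a successors function.

```python
from collections import deque

def plan_to_goal(start, is_goal, successors):
    """Breadth-first search returning a list of actions that reaches a goal
    state, or None if no such sequence exists."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # no action sequence achieves the goal

# Toy usage: reach square "B" in the two-square vacuum world.
succ = lambda s: [("Right", "B")] if s == "A" else [("Left", "A")]
print(plan_to_goal("A", lambda s: s == "B", succ))  # ['Right']
```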
Utility-based agents
• These agents are similar to goal-based agents but add an extra component, a utility
measurement, which provides a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve them.
• The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.
• The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
Advantages of utility-based agents over goal-based agents:
• With conflicting goals, the utility function specifies the appropriate tradeoff.
• With several goals, none of which can be achieved with certainty, utility selects the proper
tradeoff between the importance of the goals and the likelihood of success.
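A hedged sketch of that idea: score each action by the expected utility of its possible outcomes and pick the best. transition_model and utility are hypothetical placeholders for the agent's model of action outcomes and its utility function.

```python
# Utility-based action selection, schematically: compute the expected utility
# of each action over its possible outcomes and choose the maximum. This
# handles both conflicting goals and uncertain outcomes in one number.
def best_action(state, actions, transition_model, utility):
    def expected_utility(action):
        # transition_model(state, action) yields (probability, next_state) pairs
        return sum(p * utility(s) for p, s in transition_model(state, action))
    return max(actions, key=expected_utility)
```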
Learning Agents
Problem: The previous agent programs describe methods for selecting actions, but how are
these agent programs programmed? Programming them by hand is inefficient and ineffective!
Solution: Build learning machines and then teach them (rather than instruct them).
Advantage: Robustness of the agent program toward initially unknown environments.
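The standard decomposition of a learning agent (a performance element that selects actions, a critic that grades outcomes, and a learning element that improves the performance element) can be sketched as below; the component interfaces are illustrative assumptions, not an implementation from the text.

```python
# A sketch of a learning agent: the performance element selects actions, the
# critic evaluates the result against the performance standard, and the
# learning element uses that feedback to improve the performance element.
class LearningAgent:
    def __init__(self, performance_element, critic, learning_element):
        self.performance_element = performance_element  # selects actions
        self.critic = critic                            # evaluates outcomes
        self.learning_element = learning_element        # improves the selector

    def step(self, percept):
        action = self.performance_element(percept)
        feedback = self.critic(percept, action)
        self.learning_element(self.performance_element, feedback)
        return action
```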