HISTORY OF AI
Maturation of Artificial Intelligence (1943-1952)
• Year 1943: The first work now recognized as AI was done by Warren
McCulloch and Walter Pitts, who proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying
the connection strength between neurons. His rule is now
called Hebbian learning.
• Year 1950: Alan Turing, an English mathematician who pioneered machine
learning, published "Computing Machinery and Intelligence", in which he
proposed a test of a machine's ability to exhibit intelligent behavior
equivalent to human intelligence, now called the Turing test.
The birth of Artificial Intelligence (1952-1956)
• Year 1955: Allen Newell and Herbert A. Simon created the "first
artificial intelligence program", named "Logic Theorist". The program
proved 38 of 52 mathematics theorems and found new and more elegant proofs
for some of them.
• Year 1956: The term "Artificial Intelligence" was first adopted by American
computer scientist John McCarthy at the Dartmouth Conference, where AI was
established as an academic field for the first time.
The golden years: early enthusiasm (1956-1974)
• Year 1966: Researchers emphasized developing algorithms that could solve
mathematical problems. Joseph Weizenbaum created the first chatbot, named
ELIZA.
• Year 1972: The first intelligent humanoid robot, named WABOT-1, was built
in Japan.
The first AI winter (1974-1980)
A boom of AI (1980-1987)
• Year 1980: After the AI winter, AI came back with "expert systems":
programs that emulate the decision-making ability of a human expert.
• Also in 1980, the first national conference of the American Association for
Artificial Intelligence (AAAI) was held at Stanford University.
The second AI winter (1987-1993)
The emergence of intelligent agents (1993-2011)
• Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov,
becoming the first computer to defeat a reigning world chess champion.
• Year 2002: For the first time, AI entered the home in the form of the
Roomba, a robotic vacuum cleaner.
• Year 2006: AI entered the business world; companies like Facebook,
Twitter, and Netflix started using AI.
Deep learning, big data and artificial general intelligence (2011-present)
• Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to
solve complex questions as well as riddles. Watson proved that it could
understand natural language and solve tricky questions quickly.
• Year 2012: Google launched the Android feature "Google Now", which could
provide information to the user as predictions.
• Year 2014: The chatbot "Eugene Goostman" won a competition based on the
famous "Turing test".
• Year 2018: The "Project Debater" from IBM debated on complex topics
with two master debaters and also performed extremely well.
• Also in 2018, Google demonstrated "Duplex", a virtual-assistant AI program
that booked a hairdresser appointment over the phone; the person on the
other end did not notice she was talking to a machine.
APPLICATIONS OF AI
Intelligent Agents
Rational agents are central to our approach to artificial intelligence.
1. Agents and Environments
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
Human agent: eyes, ears, and other organs as sensors; hands, legs, and
vocal tract as actuators.
Robotic agent: cameras and range finders as sensors; various motors as
actuators.
Software agent: file contents, network packets, and keyboard input as
sensory inputs; writing files, sending packets, and displaying output as
actions.
The term percept refers to the content an agent's sensors are perceiving.
An agent’s percept sequence is the complete history of everything the agent has
ever perceived.
In general, an agent's choice of action at any given instant can depend on
its built-in knowledge and on the entire percept sequence observed to date,
but not on anything it hasn't perceived.
An agent’s behaviour is described by the agent function that maps any given
percept sequence to an action.
Internally, the agent function for an artificial agent will be implemented by an
agent program.
The agent function is an abstract mathematical description; the agent program is a
concrete implementation, running within some physical system.
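To make the distinction concrete, here is a minimal sketch in Python. The
two-location vacuum world it assumes (locations "A"/"B", actions "Suck",
"Right", "NoOp") is a hypothetical illustration, not part of the text: the
agent function appears as an explicit table over percept sequences, while
the agent program is the code that consults it one percept at a time.

    # Agent function: an abstract mapping from complete percept sequences
    # to actions. Here a small fragment of it, written as an explicit table.
    AGENT_FUNCTION = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }

    def make_agent_program():
        # Agent program: a concrete implementation that receives one percept
        # per call and maintains the percept history itself.
        percepts = []  # internal record of the percept sequence so far

        def program(percept):
            percepts.append(percept)
            return AGENT_FUNCTION.get(tuple(percepts), "NoOp")

        return program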
Good Behaviour: Rationality
A rational agent is one that does the right thing. When an agent is placed
in an environment, it generates a sequence of actions according to the
percepts it receives. This sequence of actions causes the environment to go
through a sequence of states. If the sequence is desirable, then the agent
has performed well. This notion of desirability is captured by a
performance measure that evaluates any given sequence of environment states.
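As an illustration, a performance measure can be written as a function over
a sequence of environment states. The sketch below is hypothetical (in the
spirit of a vacuum-cleaning environment): it awards one point for every
clean square at every time step, so it rewards keeping the environment
clean over the whole sequence.

    def performance_measure(state_sequence):
        # One point per clean square per time step.
        return sum(
            sum(1 for status in state.values() if status == "Clean")
            for state in state_sequence
        )

    # Each environment state maps square names to "Clean"/"Dirty".
    history = [{"A": "Dirty", "B": "Dirty"},
               {"A": "Clean", "B": "Dirty"},
               {"A": "Clean", "B": "Clean"}]
    print(performance_measure(history))  # -> 3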
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
Definition of rational agent:
“For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.”
The Nature of Environments:
Task environments are essentially the "problems" to which rational
agents are the "solutions".
The nature of the task environment directly affects the appropriate design
for the agent program.
Specifying the task environment:
To design a rational agent, we must first specify the task environment:
the performance measure, the environment, and the agent's actuators and
sensors.
PEAS (Performance, Environment, Actuators, Sensors) description
Task environment for an automated taxi (PEAS description):
• Performance measure: safe, fast, legal, comfortable trip, maximize profits.
• Environment: roads, other traffic, pedestrians, customers.
• Actuators: steering, accelerator, brake, signal, horn, display.
• Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer,
engine sensors, keyboard.
More examples: a medical diagnosis system has as its performance measure a
healthy patient and reduced costs; its environment is the patient, hospital,
and staff; its actuators display questions, tests, diagnoses, and
treatments; its sensors are touchscreen or voice entry of symptoms and
findings.
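A PEAS description can also be recorded as a small data structure. The
sketch below is illustrative only; its field values repeat the
automated-taxi description above.

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        # PEAS description of a task environment.
        performance: list[str]
        environment: list[str]
        actuators: list[str]
        sensors: list[str]

    automated_taxi = PEAS(
        performance=["safe", "fast", "legal", "comfortable trip",
                     "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering", "accelerator", "brake", "signal", "horn",
                   "display"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
                 "accelerometer", "engine sensors", "keyboard"],
    )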
Properties of task environments:
The range of task environments that might arise in AI is obviously vast.
We can, however, identify a fairly small number of dimensions along
which task environments can be categorized.
These dimensions determine, to a large extent, the appropriate agent
design and the applicability of each of the principal families of techniques
for agent implementation.
FULLY OBSERVABLE VS. PARTIALLY OBSERVABLE:
If an agent’s sensors give it access to the complete state of the
environment at each point in time, then we say that the task
environment is fully observable.
An environment might be partially observable because of noisy and
inaccurate sensors or because parts of the state are simply missing from
the sensor data.
SINGLE-AGENT VS. MULTIAGENT:
An agent solving a crossword puzzle by itself is clearly in a single-agent
environment, whereas an agent playing chess is in a two-agent environment.
Chess is a competitive multiagent environment. On the other hand, in
the taxi-driving environment, avoiding collisions maximizes the
performance measure of all agents, so it is a partially cooperative
multiagent environment.
DETERMINISTIC VS. NONDETERMINISTIC:
If the next state of the environment is completely determined by the
current state and the action executed by the agent(s), then we say the
environment is deterministic; otherwise, it is nondeterministic.
EPISODIC VS. SEQUENTIAL:
In an episodic task environment, the agent’s experience is divided into
atomic episodes.
In each episode the agent receives a percept and then performs a single
action, and the next episode does not depend on the actions taken in
previous episodes. An agent that spots defective parts on an assembly
line, for example, works in an episodic environment.
In sequential environments, on the other hand, the current decision could
affect all future decisions. Chess and taxi driving are sequential.
STATIC VS. DYNAMIC:
If the environment can change while an agent is deliberating, then we say
the environment is dynamic for that agent; otherwise, it is static.
If the environment itself does not change with the passage of time but the
agent's performance score does, then we say the environment is
semidynamic. Chess, when played with a clock, is semidynamic.
DISCRETE VS. CONTINUOUS:
The discrete/continuous distinction applies to the state of the
environment, to the way time is handled, and to the percepts and
actions of the agent.
For example, the chess environment has a finite number of distinct
states (excluding the clock).
Chess also has a discrete set of percepts and actions. Taxi driving is
a continuous-state and continuous-time problem: the speed and
location of the taxi and of the other vehicles sweep through a range
of continuous values and do so smoothly over time.
Taxi-driving actions are also continuous (steering angles, etc.). Input
from digital cameras is discrete, strictly speaking, but is typically
treated as representing continuously varying intensities and
locations.
KNOWN VS. UNKNOWN:
• This distinction refers not to the environment itself but to the agent’s (or
designer’s) state of knowledge about the “laws of physics” of the
environment. In a known environment, the outcomes (or outcome
probabilities if the environment is nondeterministic) for all actions are
given.
• Obviously, if the environment is unknown, the agent will have to learn how
it works in order to make good decisions.
• The distinction between known and unknown environments is not the
same as the one between fully and partially observable environments. It is
quite possible for a known environment to be partially observable—for
example, in solitaire card games, I know the rules but am still unable to see
the cards that have not yet been turned over.
• Conversely, an unknown environment can be fully observable—in a new
video game, the screen may show the entire game state but I still don’t
know what the buttons do until I try them.
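The dimensions above can be summarized in code. The sketch below is an
illustrative record only, with the classifications taken from the
discussion in this section (crossword puzzles, chess with a clock, and
taxi driving).

    from dataclasses import dataclass

    @dataclass
    class EnvProperties:
        observable: str       # "fully" or "partially"
        agents: str           # "single" or "multi"
        deterministic: bool   # False = nondeterministic
        episodic: bool        # False = sequential
        dynamics: str         # "static", "semidynamic", or "dynamic"
        discrete: bool        # False = continuous

    crossword    = EnvProperties("fully", "single", True, False, "static", True)
    chess_clock  = EnvProperties("fully", "multi", True, False, "semidynamic", True)
    taxi_driving = EnvProperties("partially", "multi", False, False, "dynamic", False)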
The Structure of Agents
• So far we have described agents by their behavior: the action that is
performed after any given sequence of percepts.
• The job of AI is to design an agent program that implements the agent function—
the mapping from percepts to actions.
• Agent architecture: the computing device, with physical sensors and
actuators, on which the agent program runs.
agent = architecture + program
• If the program is going to recommend actions like Walk, the architecture had better
have legs.
• The architecture might be just an ordinary PC, or it might be a robotic car
with several onboard computers, cameras, and other sensors.
• In general, the architecture makes the percepts from the sensors available to the
program, runs the program, and feeds the program’s action choices to the actuators
as they are generated.
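That last point can be sketched as a simple run loop; the sensor and
actuator interfaces here are hypothetical placeholders, not part of the
text.

    def run_agent(sensors, program, actuators, steps=1000):
        # Architecture loop: sensors -> agent program -> actuators.
        # sensors() returns the current percept; actuators(action) executes
        # the chosen action in the environment.
        for _ in range(steps):
            percept = sensors()         # make the percept available to the program
            action = program(percept)   # run the program to choose an action
            actuators(action)           # feed the action choice to the actuators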
Agent programs:
They take the current percept as input from the sensors and return an
action to the actuators.
Note the difference between the agent program, which takes the current
percept as input, and the agent function, which may depend on the entire
percept history.
We describe agent programs in a simple pseudocode language; a table-driven
version is sketched below. The table represents explicitly the agent
function that the agent program embodies.
To build a rational agent in this way, we as designers must construct a
table that contains the appropriate action for every possible percept
sequence.
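A Python rendering of the table-driven agent idea, as a sketch: the table,
supplied by the designer, is indexed by complete percept sequences.

    def make_table_driven_agent(table):
        # TABLE-DRIVEN-AGENT: keep the full percept sequence and look the
        # action up in a table indexed by percept sequences.
        percepts = []  # persistent: the percept sequence, initially empty

        def agent_program(percept):
            percepts.append(percept)           # append percept to percepts
            return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

        return agent_program

The obvious drawback is the table's size: it needs one entry for every
possible percept sequence, which is why the agent designs listed next
avoid explicit tables.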
There are four basic kinds of agent programs:
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents.
Simple reflex agents:
These agents select actions on the basis of the current percept, ignoring
the rest of the percept history. They act according to condition-action
rules, such as: if car-in-front-is-braking then initiate-braking.
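A minimal sketch of a simple reflex agent built from such condition-action
rules, using the braking rule above; the percept encoding is hypothetical.

    def simple_reflex_agent(percept):
        # Select an action from the current percept alone; no history is kept.
        if percept.get("car_in_front_is_braking"):  # condition
            return "initiate-braking"               # action
        return "continue"

    print(simple_reflex_agent({"car_in_front_is_braking": True}))  # initiate-braking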