IML Unit 1
Introduction to AI
Introduction : What Is AI?, The Foundations of Artificial Intelligence, The History of
Artificial Intelligence, The State of the Art, Risks and Benefits of AI, Intelligent Agents:
Agents and Environments, Good Behavior: The Concept of Rationality, The Nature of
Environments, The Structure of Agents, Representation of AI Problems, Production System
Artificial Intelligence refers to the simulation of human intelligence in machines that are
programmed to think, reason, and learn like humans. Rather than being explicitly
programmed for specific tasks, AI systems use algorithms and vast amounts of data to
recognize patterns, make decisions, and improve their performance over time.
The Foundations of Artificial Intelligence
1. Data and Knowledge
The foundation of artificial intelligence is data. An AI system needs huge amounts of data to
find patterns and acquire insights in order to "learn" and make decisions. Knowledge
representation, on the other hand, aims to organize this information so that computers can
understand, store, and retrieve it. Knowledge and data together form the core of artificial
intelligence, allowing it to make sense of unprocessed data.
2. Algorithms
An AI system can assess data and draw conclusions from it because of algorithms, which are
detailed instructions or sets of rules. These algorithms have become increasingly
sophisticated, enabling AI to perform complex tasks such as language translation and image
recognition. Algorithms provide the basis of artificial intelligence, allowing it to learn from
data and make predictions.
3. Mathematics
The foundation for understanding and evaluating data patterns is mathematics, especially
statistics and calculus. Statistical techniques help in the development of models that capture
relationships in data, enabling AI to make sensible choices. AI would have very little
predictive capacity without mathematics.
4. Ethics
In AI, ethical issues are becoming increasingly important. By establishing guidelines for
how AI should function and be used in society, concerns about privacy, bias, and
accountability help to build the field's foundation. Ethical standards ensure that advancements
in AI protect individual rights and are consistent with the values of society.
Together, these fundamental components form a strong framework that supports artificial
intelligence. Artificial intelligence systems wouldn't be able to carry out significant, practical
tasks without this foundation.
Potential risks of AI
1. Opacity
AI systems often lack transparency, making it hard to know how they make decisions. The
law requires that users be informed whenever they interact with AI and that they have
access to information on how the AI works and what the associated risks are.
2. Lack of accountability
Without clear guidelines and rules, AI can be used irresponsibly with no consequences.
The act sets out rules and penalties for non-compliance, ensuring that developers and
users are held accountable for their actions.
3. Risks to safety
AI systems, especially in critical sectors such as transport or infrastructure, can pose
major health and safety risks if they make errors or are poorly designed. The law sets
out strict requirements, such as risk assessments, human oversight, technical compliance,
and periodic reviews.
Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: A sensor is a device that detects changes in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of a machine that convert energy into motion. The
actuators are responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.
Effectors: Effectors are the devices that affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and a display screen.
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using sensors
and actuators to achieve goals. An intelligent agent may learn from the environment to
achieve its goals. A thermostat is an example of an intelligent agent.
A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way
that maximizes its performance measure over all possible actions.
A rational agent is said to perform the right things. AI is about creating rational agents, which
are used in game theory and decision theory for various real-world scenarios.
For an AI agent, rational action is most important because in reinforcement learning, an
agent receives a positive reward for each best possible action and a negative reward for each
wrong action.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be
judged on the basis of the performance measure that defines the criterion of success, the
agent's prior knowledge of its environment, the actions the agent can perform, and the
agent's percept sequence to date.
Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program. It can be
viewed as:
1. Agent = Architecture + Agent Program
2. Agent function: f : P* → A, which maps every percept sequence in P* to an action in A.
Architecture: The architecture is the machinery, equipped with sensors and actuators, on
which the agent program runs.
Agent program: An agent program is an implementation of the agent function. The agent
program executes on the physical architecture to produce the function f.
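To make the idea of an agent program concrete, here is a minimal sketch in Python of a simple agent for the classic two-square vacuum world; the percept names, the rule table, and the function names are illustrative assumptions, not part of any standard library.

```python
# Minimal sketch of an agent program for f: P* -> A in the two-square
# vacuum world. Percept names and the rule table are illustrative assumptions.

def make_vacuum_agent():
    percepts = []  # accumulated percept sequence P*

    # In this tiny world the chosen action depends only on the latest percept,
    # so the table is keyed by (location, status); a full table-driven agent
    # would key on tuple(percepts) instead.
    table = {
        ("A", "Dirty"): "Suck",
        ("B", "Dirty"): "Suck",
        ("A", "Clean"): "Right",
        ("B", "Clean"): "Left",
    }

    def program(percept):
        percepts.append(percept)           # record the percept history
        return table.get(percept, "NoOp")  # map the percept to an action
    return program

agent = make_vacuum_agent()
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right
```

The closure plays the role of the agent program, while the machine it runs on (here, the Python interpreter) stands in for the architecture.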
PEAS Representation
PEAS is a model used to describe the task environment in which an AI agent works. When
we define an AI agent or rational agent, we can group its properties under the PEAS
representation model. It is made up of four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here the performance measure is the objective for the success of an agent's behavior.
The environment is where the agent lives and operates; it provides the agent with something
to sense and act upon. An environment is mostly said to be non-deterministic.
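As a hedged illustration, the PEAS description of an automated taxi driver (a commonly cited textbook example) can be written down as a plain data structure; the dictionary layout below is just one convenient convention, not a required format.

```python
# PEAS description of an automated taxi driver, written as a plain dictionary.
# The entries follow the commonly cited textbook example; the structure is
# only an illustrative convention.
taxi_peas = {
    "Performance": ["safe trip", "fast", "legal", "comfortable", "maximize profit"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors":     ["cameras", "GPS", "speedometer", "sonar", "odometer"],
}

for component, items in taxi_peas.items():
    print(f"{component}: {', '.join(items)}")
```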
Features of Environment
As per Russell and Norvig, an environment can have various features from the point of view
of an agent:
1. Fully Observable vs Partially Observable:
o If an agent's sensors can sense or access the complete state of the environment at each
point in time, then it is a fully observable environment; otherwise, it is partially
observable. For reference, imagine a chess-playing agent. In this case, the agent can fully
observe the state of the chessboard at all times. Its sensors (in this case, vision or the
ability to access the board's state) provide complete information about the current position
of all pieces. This is a fully observable environment because the agent has perfect
information about the state of the world.
o A fully observable environment is convenient because there is no need to maintain an
internal state to keep track of the history of the world. For reference, consider a self-driving
car navigating a busy city. While the car has sensors like cameras, lidar, and radar, it
can't see everything at all times. Buildings, other vehicles, and pedestrians can
obstruct its sensors. In this scenario, the car's environment is partially observable
because it doesn't have complete and constant access to all relevant information. It
needs to maintain an internal state and history to make informed decisions even when
some information is temporarily unavailable.
o If an agent has no sensors in an environment, then such an environment is called
unobservable. For reference, think about an agent designed to predict earthquakes but
placed in a sealed, windowless room with no sensors or access to external data. In this
situation, the environment is unobservable because the agent has no way to gather
information about the outside world. It can't sense any aspect of its environment,
making it completely unobservable.
2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine the next state
of the environment, then such an environment is called a deterministic environment.
For reference, Chess is a classic example of a deterministic environment. In chess, the
rules are well-defined, and each move made by a player has a clear and predictable
outcome based on those rules. If you move a pawn from one square to another, the
resulting state of the chessboard is entirely determined by that action, as is your
opponent's response. There's no randomness or uncertainty in the outcomes of chess
moves because they follow strict rules. In a deterministic environment like chess,
knowing the current state and the actions taken allows you to completely determine
the next state.
o A stochastic environment is random and cannot be determined completely by an
agent. For reference, the stock market is an example of a stochastic environment. It's
highly influenced by a multitude of unpredictable factors, including economic events,
investor sentiment, and news. While there are patterns and trends, the exact behavior
of stock prices is inherently random and cannot be completely determined by any
individual or agent. Even with access to extensive data and analysis tools, stock
market movements can exhibit a high degree of unpredictability. Random events and
market sentiment play significant roles, introducing uncertainty.
o In a deterministic, fully observable environment, an agent does not need to worry
about uncertainty.
3. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action. For example, Tic-Tac-Toe is a classic example of an
episodic environment. In this game, two players take turns placing their symbols (X
or O) on a 3x3 grid. Each move can be chosen by looking only at the current board, and the
goal is to form a line of three symbols horizontally, vertically, or diagonally. The
game consists of a series of one-shot actions where the current state of the board is the
only thing that matters for the next move. There is no need for the players to remember the
sequence of past moves, because the current board already summarizes everything needed
for the next move. The game is self-contained and episodic.
o However, in a Sequential environment, an agent requires memory of past actions to
determine the next best actions. For example, Chess is an example of a sequential
environment. Unlike Tic-Tac-Toe, chess is a complex game where the outcome of
each move depends on a sequence of previous moves. In chess, players must consider
the history of the game, as the current position of pieces, previous moves, and
potential future moves all influence the best course of action. To play chess
effectively, players need to maintain a memory of past actions, anticipate future
moves, and plan their strategies accordingly. It's a sequential environment because the
sequence of actions and the history of the game significantly impact decision-making.
4. Single-agent vs Multi-agent:
o If only one agent is involved in an environment, and operating by itself then such an
environment is called a single-agent environment. For example, Solitaire is a classic
example of a single-agent environment. When you play Solitaire, you're the only
agent involved. You make all the decisions and actions to achieve a goal, which is to
arrange a deck of cards in a specific way. There are no other agents or players
interacting with you. It's a solitary game where the outcome depends solely on your
decisions and moves. In this single-agent environment, the agent doesn't need to
consider the actions or decisions of other entities.
o However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment. For reference, a soccer match is an
example of a multi-agent environment. In a soccer game, there are two teams, each
consisting of multiple players (agents). These players work together to achieve
common goals (scoring goals and preventing the opposing team from scoring). Each
player has their own set of actions and decisions, and they interact with both their
teammates and the opposing team. The outcome of the game depends on the
coordinated actions and strategies of all the agents on the field. It's a multi-agent
environment because there are multiple autonomous entities (players) interacting in a
shared environment.
o The agent design problems in the multi-agent environment are different from single-
agent environments.
5. Static vs Dynamic:
o If the environment can change while an agent is deliberating, then such an environment
is called a dynamic environment; otherwise, it is called a static environment.
o Static environments are easy to deal with because an agent does not need to continue
looking at the world while deciding on an action. For reference, a crossword puzzle is
an example of a static environment. When you work on a crossword puzzle, the
puzzle itself doesn't change while you're thinking about your next move. The
arrangement of clues and empty squares remains constant throughout your problem-
solving process. You can take your time to deliberate and find the best word to fill in
each blank, and the puzzle's state remains unaltered during this process. It's a static
environment because there are no changes in the puzzle based on your deliberations.
o However, for a dynamic environment, agents need to keep looking at the world at
each action. For reference, taxi driving is an example of a dynamic environment.
When you're driving a taxi, the environment is constantly changing. The road
conditions, traffic, pedestrians, and other vehicles all contribute to the dynamic nature
of this environment. As a taxi driver, you need to keep a constant watch on the road
and adapt your actions in real time based on the changing circumstances. The
environment can change rapidly, requiring your continuous attention and decision-
making. It's a dynamic environment because it evolves while you're deliberating and
taking action.
6. Discrete vs Continuous:
o If there are a finite number of percepts and actions that can be performed in an
environment, then such an environment is called a discrete environment; otherwise, it is
called a continuous environment.
o Chess is an example of a discrete environment. In chess, there are a finite number of
distinct chess pieces (e.g., pawns, rooks, knights) and a finite number of squares on
the chessboard. The rules of chess define clear, discrete moves that a player can make.
Each piece can be in a specific location on the board, and players take turns making
individual, well-defined moves. The state of the chessboard is discrete and can be
described by the positions of the pieces on the board.
o Controlling a robotic arm to perform precise movements in a factory setting is an
example of a continuous environment. In this context, the robot arm's position and
orientation can exist along a continuous spectrum. There are virtually infinite possible
positions and orientations for the robotic arm within its workspace. The control inputs
to move the arm, such as adjusting joint angles or applying forces, can also vary
continuously. Agents in this environment must operate within a continuous state and
action space, and they need to make precise, continuous adjustments to achieve their
goals.
7. Known vs Unknown:
o Known and unknown are not actually features of the environment itself; they describe the
agent's state of knowledge about how to perform an action.
o In a known environment, the results of all actions are known to the agent. While in an
unknown environment, an agent needs to learn how it works in order to perform an
action.
o It is quite possible for a known environment to be partially observable and an
unknown environment to be fully observable.
o The opening theory in chess can be considered as a known environment for
experienced chess players. Chess has a vast body of knowledge regarding opening
moves, strategies, and responses. Experienced players are familiar with established
openings, and they have studied various sequences of moves and their outcomes.
When they make their initial moves in a game, they have a good understanding of the
potential consequences based on their knowledge of known openings.
o Imagine a scenario where a rover or drone is sent to explore an alien planet with no
prior knowledge or maps of the terrain. In this unknown environment, the agent (rover
or drone) has to explore and learn about the terrain as it goes along. It doesn't have
prior knowledge of the landscape, potential hazards, or valuable resources. The agent
needs to use sensors and data it collects during exploration to build a map and
understand how the terrain works. It operates in an unknown environment because the
results and consequences of its actions are not initially known, and it must learn from
its experiences.
8. Accessible vs Inaccessible:
o If an agent can obtain complete and accurate information about the state of the
environment, then such an environment is called an accessible environment; otherwise, it is
called inaccessible.
o For example, imagine an empty room equipped with highly accurate temperature
sensors. These sensors can provide real-time temperature measurements at any point
within the room. An agent placed in this room can obtain complete and accurate
information about the temperature at different locations. It can access this information
at any time, allowing it to make decisions based on the precise temperature data. This
environment is accessible because the agent can acquire complete and accurate
information about the state of the room, specifically its temperature.
o For example, consider a scenario where a satellite in space is tasked with monitoring
a specific event taking place on Earth, such as a natural disaster or a remote area's
condition. While the satellite can capture images and data from space, it cannot access
fine-grained information about the event's details. For example, it may see a forest fire
occurring but cannot determine the exact temperature at specific locations within the
fire or identify individual objects on the ground. The satellite's observations provide
valuable data, but the environment it is monitoring (Earth) is vast and complex,
making it impossible to access complete and detailed information about all aspects of
the event. In this case, the Earth's surface is an inaccessible environment for obtaining
fine-grained information about specific events.
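As a rough summary of the examples discussed above, the sketch below restates their classification along the main environment dimensions; the labels simply echo the text, and borderline cases are debatable depending on how each task is modelled.

```python
# Rough classification of example tasks along the main environment dimensions.
# The labels restate the discussion above; borderline cases are debatable and
# depend on how the task is modelled.
environments = {
    #                observability  determinism      episodicity   dynamics   state space
    "Chess":        ("fully",       "deterministic", "sequential", "static",  "discrete"),
    "Crossword":    ("fully",       "deterministic", "sequential", "static",  "discrete"),
    "Taxi driving": ("partially",   "stochastic",    "sequential", "dynamic", "continuous"),
}

for task, properties in environments.items():
    print(f"{task}: " + ", ".join(properties))
```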
Problem representation in AI
1. Define the problem precisely: specify the initial situation and what the final, acceptable
solutions will be.
2. Analyze the problem: examine the various possible techniques for solving it.
3. Isolate and represent the task knowledge that is necessary to solve the problem.
4. Choose the best problem-solving technique and apply it (a small worked example follows
below).
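To make these steps concrete, here is a small sketch, under stated assumptions, that carries out step 1 and step 4 for the classic 4-litre/3-litre water-jug problem: the problem is defined precisely as a state space and then solved with breadth-first search. The problem choice and function names are illustrative, not prescribed by the text.

```python
# Step 1 made concrete: a precise state-space definition of the classic
# 4-litre / 3-litre water-jug problem, followed by step 4 (applying a
# technique -- plain breadth-first search). Names are illustrative.
from collections import deque

INITIAL_STATE = (0, 0)            # (litres in the 4-l jug, litres in the 3-l jug)

def is_goal(state):
    return state[0] == 2          # acceptable solution: exactly 2 litres in the big jug

def successors(state):
    """All states reachable by one legal operator: fill, empty, or pour."""
    x, y = state
    result = {(4, y), (x, 3),     # fill either jug
              (0, y), (x, 0)}     # empty either jug
    pour = min(x, 3 - y); result.add((x - pour, y + pour))   # pour 4-l jug into 3-l jug
    pour = min(y, 4 - x); result.add((x + pour, y - pour))   # pour 3-l jug into 4-l jug
    result.discard(state)         # drop no-op moves
    return result

# Breadth-first search over the state space for a shortest solution path.
frontier, seen = deque([[INITIAL_STATE]]), {INITIAL_STATE}
while frontier:
    path = frontier.popleft()
    if is_goal(path[-1]):
        print(path)               # a shortest sequence of states reaching 2 litres
        break
    for nxt in successors(path[-1]):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(path + [nxt])
```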
o An intelligent agent needs knowledge about the real world for taking decisions
and reasoning to act efficiently.
o Knowledge-based agents are agents that can maintain an internal state of knowledge,
reason over that knowledge, update it after new observations, and take actions. These
agents can represent the world with some formal representation and act intelligently.
o Knowledge-based agents are composed of two main parts:
o Knowledge-base and
o Inference system.
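A minimal sketch of how these two parts fit together is shown below; the KnowledgeBase class and its tell/ask methods are simplified placeholders for a real representation language and inference procedure, not an actual library API.

```python
# Schematic knowledge-based agent: a knowledge base plus an inference step.
# The KnowledgeBase class and its tell/ask methods are simplified placeholders
# for a real representation language and inference procedure.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()      # internal state of knowledge

    def tell(self, sentence):
        """Record a percept or an action in the knowledge base."""
        self.facts.add(sentence)

    def ask(self, time):
        """Infer an action from what is known (trivially stubbed here)."""
        return "Act" if self.facts else "NoOp"

def make_kb_agent(kb):
    time = 0
    def agent(percept):
        nonlocal time
        kb.tell(("percept", percept, time))   # update knowledge with the percept
        action = kb.ask(time)                 # reason over the knowledge base
        kb.tell(("action", action, time))     # remember the chosen action
        time += 1
        return action
    return agent

agent = make_kb_agent(KnowledgeBase())
print(agent("see-wall"))   # -> Act
```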
Production System in AI
A production system refers to a type of rule-based system that is designed to provide a
structured approach to problem solving and decision-making. This framework is
particularly influential in the realm of expert systems, where it simulates human decision-
making processes using a set of predefined rules and facts.
Let's consider an example of Expert System for Medical Diagnosis.
Scenario: A patient comes to a healthcare facility with the following symptoms: fever,
severe headache, sensitivity to light, and stiff neck.
The MediDiagnose system operates in the following manner:
1. Input: A healthcare professional inputs the symptoms into MediDiagnose.
2. Processing:
MediDiagnose reviews its knowledge base for rules that match the given symptoms.
It identifies several potential conditions but recognizes a strong match for
meningitis based on the combination of symptoms.
3. Output:
The system suggests that meningitis could be a possible diagnosis and recommends
further tests to confirm, such as a lumbar puncture.
It also provides a list of other less likely conditions based on the symptoms for
comprehensive differential diagnosis.
MediDiagnose uses its rule-based system to quickly filter through vast amounts of medical
data to provide preliminary diagnoses. This assists doctors in focusing their investigative
efforts more efficiently and potentially speeds up the process of reaching an accurate
diagnosis.
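As a hedged sketch, the meningitis scenario above could be encoded as one if-then rule plus a fact list; the rule content, symptom names, and MediDiagnose behaviour are purely illustrative and not medical advice.

```python
# The MediDiagnose scenario above, written as one if-then rule plus a fact
# list. The rule content is purely illustrative and not medical advice.

facts = {"fever", "severe headache", "sensitivity to light", "stiff neck"}

rules = [
    {
        "if": {"fever", "severe headache", "sensitivity to light", "stiff neck"},
        "then": "possible meningitis -- recommend lumbar puncture",
    },
    {
        "if": {"fever", "severe headache"},
        "then": "possible influenza -- less likely given the other symptoms",
    },
]

# Fire every rule whose conditions are all present in the fact list.
for rule in rules:
    if rule["if"] <= facts:          # subset test: all conditions satisfied
        print(rule["then"])
```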
Key Components of a Production System in AI
The key components of a production system include:
1. Knowledge Base: This is the core repository where all the rules and facts are stored. In
AI, the knowledge base is critical as it contains the domain-specific information and the
if-then rules that dictate how decisions are made or actions are taken.
2. Inference Engine: The inference engine is the mechanism that applies the rules to the
known facts to derive new facts or to make decisions. It scans the rules and decides
which ones are applicable based on the current facts in the working memory. It can
operate in two modes:
Forward Chaining (Data-driven): This method starts with the available data and
uses the inference rules to extract more data until a goal is reached.
Backward Chaining (Goal-driven): This approach starts with a list of goals and
works backwards to determine what data is required to achieve those goals.
3. Working Memory: Sometimes referred to as the fact list, working memory holds the
dynamic information that changes as the system operates. It represents the current state
of knowledge, including facts that are initially known and those that are deduced
throughout the operation of the system.
4. Control Mechanism: This governs the order in which rules are applied by the inference
engine and manages the flow of the process. It ensures that the system responds
appropriately to changes in the working memory and applies rules effectively to reach
conclusions or solutions.
Types of Production Systems
Production systems in AI can be categorized based on how they handle and process
knowledge. This categorization includes Rule-Based Systems, Procedural Systems, and
Declarative Systems, each possessing unique characteristics and applications.
1. Rule-Based Systems
1. Explanation of Rule-Based Reasoning
Rule-based systems operate by applying a set of pre-defined rules to the given data
to deduce new information or make decisions. These rules are generally in the form
of conditional statements (if-then statements) that link conditions with actions or
outcomes.
2. Examples of Rule-Based Systems in AI
Diagnostic Systems: Like medical diagnosis systems that infer diseases from
symptoms.
Fraud Detection Systems: Used in banking and insurance, these systems analyze
transaction patterns to identify potentially fraudulent activities.
2. Procedural Systems
1. Description of Procedural Knowledge
Procedural systems utilize knowledge that describes how to perform specific tasks.
This knowledge is procedural in nature, meaning it focuses on the steps or
procedures required to achieve certain goals or results.
2. Applications of Procedural Systems
Manufacturing Control Systems: Automate production processes by detailing
step-by-step procedures to assemble parts or manage supply chains.
Interactive Voice Response (IVR) Systems: Guide users through a series of steps
to resolve issues or provide information, commonly used in customer service.
3. Declarative Systems
1. Understanding Declarative Knowledge
Declarative systems are based on facts and information about what something is,
rather than how to do something. These systems store knowledge that can be
queried to make decisions or solve problems.
2. Instances of Declarative Systems in AI
Knowledge Bases in AI Assistants: Power virtual assistants like Siri or Alexa,
which retrieve information based on user queries.
Configuration Systems: Used in product customization, where the system decides
on product specifications based on user preferences and declarative rules about
product options.
Each type of production system offers different strengths and is suitable for various
applications, from straightforward rule-based decision-making to complex systems
requiring intricate procedural or declarative reasoning.
How Do Production Systems Function?
The operation of a production system in AI follows a cyclic pattern:
Match: The inference engine checks which rules are triggered based on the current
facts in the working memory.
Select: From the triggered rules, the system (often through the control mechanism)
selects one based on a set of criteria, such as specificity, recency, or priority.
Execute: The selected rule is executed, which typically modifies the facts in the
working memory, either by adding new facts, changing existing ones, or removing
some.
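A compact sketch of this match-select-execute cycle is shown below, using forward chaining with rule priority as the conflict-resolution criterion; the rules, priorities, and working-memory contents are illustrative assumptions (backward chaining would instead start from a goal and work in reverse).

```python
# Minimal forward-chaining production cycle: match, select, execute.
# Rules, priorities, and facts are illustrative; selection is by priority.

working_memory = {"fever", "stiff neck"}

rules = [
    {"name": "R1", "if": {"fever", "stiff neck"}, "add": {"suspect meningitis"}, "priority": 2},
    {"name": "R2", "if": {"suspect meningitis"},  "add": {"order lumbar puncture"}, "priority": 1},
]

fired = set()
while True:
    # Match: rules whose conditions hold and that have not fired yet.
    conflict_set = [r for r in rules
                    if r["if"] <= working_memory and r["name"] not in fired]
    if not conflict_set:
        break
    # Select: conflict resolution by highest priority.
    rule = max(conflict_set, key=lambda r: r["priority"])
    # Execute: apply the rule's action, modifying the working memory.
    working_memory |= rule["add"]
    fired.add(rule["name"])
    print(f"fired {rule['name']} -> {sorted(working_memory)}")
```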
Applications of Production Systems in AI
Production systems are used across various domains where decision-making can be
encapsulated into clear, logical rules:
Expert Systems: For diagnosing medical conditions, offering financial advice, or
making environmental assessments.
Automated Planning: Used in logistics to optimize routes and schedules based on
current data and objectives.
Game AI: Manages non-player character behavior and decision-making in complex
game environments.