Sensors: A sensor is a device which detects changes in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.
Actuators: Actuators are the components of a machine that convert energy into motion.
Actuators are responsible only for moving and controlling a system. An actuator
can be an electric motor, a gear, a rail, etc.
Effectors: Effectors are the devices which actually affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and a display screen.
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using
sensors and actuators to achieve goals. An intelligent agent may learn from the
environment to achieve its goals. A thermostat is an example of an intelligent agent.
There are four main aspects that need to be taken into consideration when designing
an intelligent agent.
● Percepts
This is the information that the agent receives from its environment.
● Actions
This is what the agent needs to do or can do to achieve its objectives.
● Goals
This is the outcome that the agent is trying to achieve.
● Environment
The final aspect is the environment in which the agent will be working.
TYPES OF AGENTS
A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and
other body parts for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
A rational agent is an agent which has clear preferences, models uncertainty, and acts in
a way that maximizes its performance measure over all possible actions.
A rational agent is said to do the right thing. AI is about creating rational agents
that apply game theory and decision theory to various real-world scenarios.
Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function.
The structure of an intelligent agent is a combination of architecture and agent
program. It can be viewed as:
Agent = Architecture + Agent program
Following are the main three terms involved in the structure of an AI agent:
Architecture: Architecture is the machinery that the AI agent executes on.
Agent Function: The agent function maps a percept sequence to an action, i.e., f: P* → A.
Agent program: The agent program is an implementation of the agent function. The agent
program executes on the physical architecture to produce the function f.
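As a sketch of these terms, the thermostat agent mentioned earlier can be written as a tiny agent program: the function below is an illustrative implementation of f, and the temperature thresholds are chosen purely for the example.

```python
# Minimal sketch of Agent = Architecture + Agent program, for a hypothetical
# thermostat: the agent function f maps a percept (current temperature in °C)
# to an action. The thresholds 20 and 24 are illustrative assumptions.

def thermostat_agent(percept):
    """Agent program implementing f: P* -> A for a thermostat."""
    temperature = percept
    if temperature < 20:
        return "heat_on"
    elif temperature > 24:
        return "heat_off"
    else:
        return "no_op"

print(thermostat_agent(18))  # -> heat_on
print(thermostat_agent(26))  # -> heat_off
```

The architecture here would be the physical thermostat hardware the program runs on; the table of thresholds is the agent's (very small) knowledge.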
What is Artificial Intelligence?
Artificial Intelligence is a way of making a computer, a computer-controlled
robot, or software think intelligently, in a manner similar to how intelligent
humans think.
Goals of AI :
❖ To Create Expert Systems: Systems which exhibit intelligent behavior and can learn,
demonstrate, explain, and advise their users.
❖ To Implement Human Intelligence in Machines: Creating systems that understand,
think, learn, and behave like humans.
AI has two major roles:
● Study the intelligent behavior of humans.
● Represent those actions using computers.
What is knowledge representation?
Humans are best at understanding, reasoning, and interpreting knowledge. Humans
know things, and based on that knowledge they perform various
actions in the real world. How machines do all these things comes under knowledge
representation and reasoning. Hence we can describe knowledge representation as
follows:
○ Knowledge representation and reasoning (KR, KRR) is the part of Artificial
Intelligence concerned with how AI agents think and how thinking
contributes to the intelligent behavior of agents.
○ It is responsible for representing information about the real world so that a
computer can understand it and utilize this knowledge to solve complex
real-world problems, such as diagnosing a medical condition or communicating
with humans in natural language.
○ It also describes how we can represent knowledge in artificial
intelligence. Knowledge representation is not just storing data in a
database; it also enables an intelligent machine to learn from that
knowledge and experience so that it can behave intelligently like a human.
What to Represent:
Following are the kinds of knowledge which need to be represented in AI systems:
○ Objects: All the facts about objects in our world domain. E.g., guitars have
strings; trumpets are brass instruments.
○ Events: Events are the actions which occur in our world.
○ Performance: It describes behavior which involves knowledge about how to do
things.
○ Meta-knowledge: It is knowledge about knowledge(what we know).
○ Facts: Facts are the truths about the real world and what we represent.
○ Knowledge-Base: The central component of a knowledge-based agent is the
knowledge base, represented as KB. The knowledge base is a group of
sentences (here, "sentence" is used as a technical term; it is not identical to a
sentence in the English language).
Knowledge: Knowledge is awareness or familiarity gained by experiences of facts, data,
and situations. Following are the types of knowledge in artificial intelligence:
Types of knowledge
Following are the various types of knowledge
1. Declarative Knowledge:
○ Declarative knowledge is to know about something.
○ It includes concepts, facts, and objects.
○ It is also called descriptive knowledge and expressed in declarative sentences.
○ It is simpler than procedural knowledge.
2. Procedural Knowledge
○ It is also known as imperative knowledge.
○ Procedural knowledge is a type of knowledge which is responsible for knowing
how to do something.
○ It can be directly applied to any task.
○ It includes rules, strategies, procedures, agendas, etc.
○ Procedural knowledge depends on the task on which it can be applied.
3. Meta-knowledge:
○ Knowledge about the other types of knowledge is called Meta-knowledge.
4. Heuristic knowledge:
○ Heuristic knowledge represents the knowledge of experts in a field or
subject.
○ Heuristic knowledge consists of rules of thumb based on previous experience and
awareness of approaches that tend to work well but are not guaranteed.
5. Structural knowledge:
○ Structural knowledge is basic knowledge used in problem-solving.
○ It describes relationships between various concepts such as kind of, part of, and
grouping of something.
○ It describes the relationship that exists between concepts or objects.
The relation between knowledge and intelligence:
Knowledge of the real world plays a vital role in intelligence, and the same is true for
creating artificial intelligence. Knowledge plays an important role in demonstrating intelligent
behavior in AI agents. An agent is only able to act accurately on some input when it
has some knowledge or experience of that input.
Suppose you met a person who is speaking a language you don't
know; how would you be able to act on that? The same applies to the intelligent
behavior of agents.
As we can see in the diagram below, there is a decision maker which acts by sensing the
environment and using knowledge. But if the knowledge part is not present, it
cannot display intelligent behavior.
AI knowledge cycle:
An Artificial intelligence system has the following components for displaying
intelligent behavior:
○ Perception
○ Learning
○ Knowledge Representation and Reasoning
○ Planning
○ Execution
Perception Block
This will help the AI system gain information regarding its surroundings
through various sensors, thus making the AI system familiar with its
environment and helping it interact with it. These senses can be in the
form of typical structured data or other forms such as video, audio, text,
time, temperature, or any other sensor-based input.
Learning Block
The knowledge gained will help the AI system to run the deep learning
algorithms. These algorithms are written in the learning block, making
the AI system transfer the necessary information from the perception
block to the learning block for learning (training).
Knowledge and Reasoning Block
As mentioned earlier, we use knowledge and, based on it, we reason
and then make decisions. Thus, these two blocks are responsible for
acting like humans: they go through all the knowledge data and find the
relevant pieces to be provided to the learning model whenever
required.
Planning and Execution Block
These two blocks, though independent, can work in tandem. These blocks
take the information from the knowledge block and the reasoning block
and, based on it, execute certain actions. Thus, knowledge representation
is extremely useful for AI systems to work intelligently.
Approaches to knowledge representation:
There are mainly four approaches to knowledge representation, which are given below:
1. Simple relational knowledge:
○ It is the simplest way of storing facts, using the relational method: each
fact about a set of objects is set out systematically in columns.
○ This approach of knowledge representation is famous in database systems
where the relationship between different entities is represented.
○ This approach has little opportunity for inference.
Example: The following is the simple relational knowledge representation.
Player Weight Age
Player1 65 23
Player2 58 18
Player3 75 24
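This table can be sketched in code as rows of facts; retrieval is straightforward, but, as noted, the approach offers little opportunity for inference. The names and values below mirror the table above.

```python
# A sketch of simple relational knowledge: each fact about a player is a row,
# mirroring the Player/Weight/Age table. Values are from the example table.

players = [
    {"player": "Player1", "weight": 65, "age": 23},
    {"player": "Player2", "weight": 58, "age": 18},
    {"player": "Player3", "weight": 75, "age": 24},
]

# Retrieval is easy; inference is limited to what the rows state directly.
adults = [p["player"] for p in players if p["age"] >= 21]
print(adults)  # -> ['Player1', 'Player3']
```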
2. Inheritable knowledge:
○ In the inheritable knowledge approach, all data is stored in a hierarchy of
classes.
○ All classes should be arranged in a generalized form or a hierarchical manner.
○ In this approach, we apply inheritance property.
○ Elements inherit values from other members of a class.
○ This approach contains inheritable knowledge which shows a relation between
instance and class, and it is called instance relation.
○ Every individual frame can represent the collection of attributes and its value.
○ In this approach, objects and values are represented in Boxed nodes.
○ We use Arrows which point from objects to their values.
○ Example:
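Since the boxed-nodes-and-arrows diagram is not reproduced here, a minimal code sketch can stand in for it: inheritable knowledge as a class hierarchy, where instances inherit attribute values from the classes above them. The classes and attributes below are illustrative, not from the text.

```python
# Sketch of inheritable knowledge: values flow down a class hierarchy, and an
# instance (the "instance relation") inherits from every ancestor class.

class Person:
    legs = 2            # value every Person inherits

class Athlete(Person):
    is_fit = True       # added at a more specific level

class Cricketer(Athlete):
    sport = "cricket"

# The instance inherits values from all levels of the hierarchy.
player = Cricketer()
print(player.legs, player.is_fit, player.sport)  # -> 2 True cricket
```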
3. Inferential knowledge:
○ Inferential knowledge approach represents knowledge in the form of formal
logics.
○ This approach can be used to derive more facts.
○ It guarantees correctness.
○ Example: Let's suppose there are two statements:
a. Marcus is a man
b. All men are mortal
Then it can be represented as:
man(Marcus)
∀x man(x) → mortal(x)
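The Marcus example can be sketched as simple forward inference: applying the rule man(x) → mortal(x) to the known facts derives mortal(Marcus). The encoding below is an illustrative toy, not a general theorem prover.

```python
# Facts are (predicate, argument) pairs; a rule says "for all x: P(x) -> Q(x)".
facts = {("man", "Marcus")}
rules = [("man", "mortal")]  # for all x: man(x) -> mortal(x)

# Forward chaining: apply every rule to every matching fact until fixpoint.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for pred, arg in list(facts):
            if pred == premise and (conclusion, arg) not in facts:
                facts.add((conclusion, arg))
                changed = True

print(("mortal", "Marcus") in facts)  # -> True
```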
4. Procedural knowledge:
○ Procedural knowledge approach uses small programs and codes which describe
how to do specific things, and how to proceed.
○ In this approach, one important rule is used which is If-Then rule.
○ In this knowledge, we can use various coding languages such as LISP language
and Prolog language.
○ We can easily represent heuristic or domain-specific knowledge using this
approach.
○ But it is not necessary that we can represent all cases in this approach.
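A minimal sketch of the procedural approach, using If-Then rules in plain Python rather than LISP or Prolog; the diagnosis rules below are purely illustrative.

```python
# Procedural knowledge as If-Then rules: each rule encodes *how* to reach a
# conclusion from the input. The medical rules are illustrative assumptions.

def diagnose(symptoms):
    if "fever" in symptoms and "cough" in symptoms:
        return "flu suspected"
    if "fever" in symptoms:
        return "infection suspected"
    return "no diagnosis"

print(diagnose({"fever", "cough"}))  # -> flu suspected
```

As the text notes, such rules capture domain-specific heuristics well, but not every case can be anticipated by an explicit rule.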
Knowledge Representation Techniques in AI
Logical Representation
It is the most basic form of representing knowledge to machines, where a
well-defined syntax with proper rules is used. This syntax must have
no ambiguity in its meaning and must deal with propositions. Thus, this
logical form of representation acts as communication rules, which is why it
is best used when representing facts to a machine. Logical
representation can be of two types:
Propositional Logic: This type of logical representation is also
known as propositional calculus or statement logic. This works
in a Boolean, i.e., True or False method.
Premise: If it's raining, then I can't play soccer.
Conclusion: If I can't play soccer, then it's raining.
Explanation: From the first statement, we are given a condition and a result: "raining" as a
condition and "I can't play soccer" as a result. The entire premise is phrased in such a way
that if the condition is fulfilled, then the result will occur. However, the conclusion shows that
if the result is fulfilled, then the condition will occur. This does not make sense because it is
not necessary for the condition to take place if the result occurs first. This is known as a
converse error.
In a general form, the argument for a converse error is as follows:
● If P occurs, then Q occurs.
● Q occurs.
● Therefore, P also occurs.
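The converse error can be checked with a small truth-table computation: P → Q and Q → P differ exactly when P is false and Q is true, which is why inferring P from Q is invalid.

```python
# Enumerate all truth assignments and find where P -> Q holds but Q -> P fails.
implies = lambda p, q: (not p) or q

counterexamples = [
    (p, q)
    for p in (True, False)
    for q in (True, False)
    if implies(p, q) and not implies(q, p)
]
print(counterexamples)  # -> [(False, True)]
```

In the soccer example, (P=False, Q=True) corresponds to "it is not raining, yet I still can't play soccer" for some other reason.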
First-order Logic: This type of logical representation is also
known as the First Order Predicate Calculus Logic (FOPL).
This logical representation represents the objects in quantifiers
and predicates and is an advanced version of propositional
logic.
First-order logic statements can be divided into two parts:
○ Subject: The subject is the main part of the statement.
○ Predicate: A predicate can be defined as a relation, which binds two atoms
together in a statement.
Consider the statement "x is an integer." It consists of two parts: the first part, x, is the
subject of the statement, and the second part, "is an integer," is known as the predicate.
Quantifiers in First-order logic:
○ A quantifier is a language element which generates quantification, and
quantification specifies the quantity of specimens in the universe of discourse.
○ These are the symbols that permit us to determine or identify the range and scope
of a variable in a logical expression. There are two types of quantifier:
a. Universal quantifier ∀ (for all, everyone, everything)
b. Existential quantifier ∃ (for some, at least one)
1. All birds fly.
In this sentence, the predicate is "fly(bird)."
And since all birds fly, it is represented as follows:
∀x bird(x) → fly(x)
2. Every man respects his parent.
In this question, the predicate is "respect(x, y)," where x=man, and y= parent.
Since there is every man so will use ∀, and it will be represented as follows:
∀x man(x) → respects (x, parent).
3. Some boys play cricket.
In this sentence, the predicate is "play(x, y)," where x = boys and y = game. Since there are
some boys, we use ∃; with an existential quantifier the parts are joined by ∧ rather than →, so it
is represented as:
∃x boys(x) ∧ play(x, cricket)
4. Not all students like both Mathematics and Science.
In this question, the predicate is "like(x, y)," where x= student, and y= subject.
Since not all students are included, we use ∀ with negation; the
representation is as follows:
¬∀ (x) [ student(x) → like(x, Mathematics) ∧ like(x, Science)].
5. Only one student failed in Mathematics.
In this question, the predicate is "failed(x, y)," where x= student, and y= subject.
Since only one student failed in Mathematics, we use the following
representation:
∃(x) [ student(x) ∧ failed (x, Mathematics) ∧ ∀ (y) [¬(x==y) ∧ student(y) →
¬failed (y, Mathematics)]]
As you may or may not have noticed by now, this form of representation
is the basis of most of the programming languages we know of, where we
use semantics to convey information, and this form is highly logical.
However, the downside of this method is that, due to the strict nature of the
representation (because of being highly logical), it is tough to work with,
as it is not very natural and is at times less efficient.
Semantic Networks
In this form, a graphical representation conveys how the objects are
connected, and it is often used with a data network. Semantic
networks consist of nodes/blocks (the objects) and arcs/edges (the
connections) that explain how the objects are connected. This form of
representation is also known as an alternative to the FOPL form of
representation. The relationships found in semantic networks can be
of two types – IS-A and instance (KIND-OF). This form of representation
is more natural than logical. It is simple to understand; however, it suffers
from being computationally expensive and does not have the equivalent of
the quantifiers found in the logical representation.
Production Rules
It is among the most common ways in which knowledge is represented in
AI systems. In the simplest form, it can be understood as a simple if-else
rule-based system and, in a way, is the combination of Propositional and
FOPL logics. However, a more technical understanding of production
rules can be understood by first understanding what this representation
system is comprised of. This system comprises a set of production rules,
a rule applier, a working memory, and a recognize-act cycle. For every input,
conditions are checked against the set of production rules, and upon
finding a suitable rule, an action is committed. This cycle of selecting a
rule based on some conditions and consequently acting to solve the
problem is known as the recognize-act cycle, which takes place for
every input. This method has certain problems, such as the lack of
gaining experience as it doesn’t store the past results and can also be
inefficient as, during execution, many other rules may be active. The cost
of these disadvantages can be redeemed because the rules of this system
are expressed in natural language, where the rules can also be easily
changed and dropped (if required).
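The recognize-act cycle described above can be sketched as follows; the rules, the working-memory contents, and the first-match conflict-resolution strategy are all illustrative choices.

```python
# A toy production system: production rules, a working memory (a set of
# facts), and a recognize-act cycle that fires the first matching rule.

rules = [
    ({"raining"}, "take umbrella"),
    ({"cold", "raining"}, "wear raincoat"),
    ({"sunny"}, "wear sunglasses"),
]

def recognize_act(working_memory):
    # Recognize: find a rule whose conditions hold; Act: fire it.
    for conditions, action in rules:
        if conditions <= working_memory:
            return action
    return None

print(recognize_act({"cold", "raining"}))  # -> take umbrella
```

Note the weakness mentioned in the text: because the first matching rule fires, the more specific raincoat rule never gets a chance while the umbrella rule is active, and nothing from past cycles is remembered.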
Frame Representation
A frame is a record-like structure which consists of a collection of attributes and their
values to describe an entity in the world. Frames are the AI data structure which divides
knowledge into substructures by representing stereotyped situations. A frame consists of a
collection of slots and slot values. These slots may be of any type and size. Slots have
names and values, which are called facets.
Facets: The various aspects of a slot are known as facets. Facets are features of frames
which enable us to put constraints on frames. Example: IF-NEEDED facets are called
when the data of a particular slot is needed. A frame may consist of any number of slots,
a slot may include any number of facets, and a facet may have any number of
values. A frame is also known as slot-filler knowledge representation in artificial
intelligence.
Example: 1
Let's take an example of a frame for a book
Slots Fillers
Title Artificial Intelligence
Genre Computer Science
Author Peter Norvig
Edition Third Edition
Year 1996
Page 1152
Example 2:
Let's suppose we are taking an entity, Peter. Peter is an engineer as a profession, and his
age is 25, he lives in city London, and the country is England. So following is the frame
representation for this:
Slots Fillers
Name Peter
Profession Engineer
Age 25
City London
Country England
Marital status Single
Weight 78
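A frame like the one above can be sketched as a slot/filler structure with an IF-NEEDED facet computed on demand; the slot names follow Example 2, and the IF-NEEDED encoding below is an illustrative assumption.

```python
# Sketch of a frame: slots map to fillers, and a slot may carry an IF-NEEDED
# facet that computes its value only when the slot is actually queried.

peter = {
    "Name": "Peter",
    "Profession": "Engineer",
    "Age": 25,
    "City": {"IF-NEEDED": lambda: "London"},  # facet fired on demand
}

def get_slot(frame, slot):
    value = frame.get(slot)
    if isinstance(value, dict) and "IF-NEEDED" in value:
        return value["IF-NEEDED"]()  # fire the IF-NEEDED facet
    return value

print(get_slot(peter, "City"))  # -> London
print(get_slot(peter, "Age"))   # -> 25
```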
Advantages of frame representation:
1. The frame knowledge representation makes the programming easier by grouping
the related data.
2. The frame representation is comparably flexible and used by many applications
in AI.
3. It is very easy to add slots for new attributes and relations.
4. It is easy to include default data and to search for missing values.
5. Frame representation is easy to understand and visualize.
Disadvantages of frame representation:
1. In a frame system, the inference mechanism cannot be easily processed.
2. Inference cannot proceed smoothly in frame representation.
3. Frame representation takes a very generalized approach.
Requirements for knowledge Representation system:
A good knowledge representation system must possess the following properties.
1. Representational Accuracy:
A KR system should have the ability to represent all kinds of required knowledge.
2. Inferential Adequacy:
A KR system should have the ability to manipulate the representational structures to
produce new knowledge corresponding to the existing structures.
3. Inferential Efficiency:
The ability to direct the inferential knowledge mechanism into the most
productive directions by storing appropriate guides.
4. Acquisitional efficiency- The ability to acquire new knowledge easily using
automatic methods.
Properties of Knowledge Representation
Whenever knowledge representation in AI is discussed, we
discuss creating the knowledge representation system that can
represent the various types of knowledge discussed above.
This system must manifest certain properties that can help us
in assessing the system. Following are these properties-
Representational Adequacy
A major property of a knowledge representation system is that
it is adequate and can make an AI system understand, i.e.,
represent all the knowledge required by it to deal with a
particular field or domain.
Inferential Adequacy
The knowledge representation system should be able to manipulate
the knowledge it already holds in order to derive new knowledge
from the existing knowledge.
Inferential Efficiency
The representation system should be able to incorporate new
knowledge into the existing knowledge efficiently and seamlessly,
directing inference toward the most productive directions.
Acquisitional Efficiency
The final property of the knowledge representation system will
be its ability to gain new knowledge automatically, helping the
AI to add to its current knowledge and consequently become
increasingly smarter and productive.
https://www.analytixlabs.co.in/blog/what-is-knowledge-representation-in-artificial-int
elligence/
GAME PLAYING
Game Playing is an important domain of artificial intelligence. Games don't require much
knowledge; the only knowledge we need to provide is the rules, the legal moves, and the conditions
for winning or losing the game. Both players try to win the game, so both of them try to make
the best move possible at each turn.
Searching techniques like BFS (Breadth-First Search) are not suitable here, as the branching
factor is very high and searching would take a lot of time. So, we need other search procedures.
The goal of game playing in artificial intelligence is to develop algorithms that can learn how to
play games and make decisions that will lead to winning outcomes.
1. One of the earliest examples of successful game-playing AI is the chess program
Deep Blue, developed by IBM, which defeated the world champion Garry Kasparov in 1997
(https://www.chess.com/terms/deep-blue-chess-computer).
2. Since then, AI has been applied to a wide range of games, including two-player
games, multiplayer games, and video games.
3. There are two main approaches to game playing in AI: rule-based systems and machine
learning-based systems.
1. Rule-based systems use a set of fixed rules to play the game.
2. Machine learning-based systems use algorithms to learn from experience and
make decisions based on that experience.
AI algorithms can be used to develop more effective decision-making systems for real-world
applications. The most common search technique in game playing is the minimax search procedure.
It is a depth-first, depth-limited search procedure, used for games like chess and tic-tac-toe.
The minimax algorithm uses two functions –
MOVEGEN: It generates all the possible moves that can be made from the current
position.
STATIC EVALUATION: It returns a value indicating the goodness of a position from the
viewpoint of the two players.
Minimax applies to two-player games, so we call the first player PLAYER1 and the second player
PLAYER2. The value of each node is backed up from its children. For PLAYER1 the
backed-up value is the maximum value of its children, and for PLAYER2 the backed-up value is
the minimum value of its children. The algorithm provides the most promising move to PLAYER1,
assuming that PLAYER2 makes its best move. It is a recursive algorithm, as the same procedure occurs at
each level.
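The backing-up of values can be sketched as a recursive minimax over an explicit game tree; the tree shape and leaf values below are illustrative, not the figures from the text.

```python
# Minimal minimax sketch over an explicit tree. Leaves carry STATIC
# EVALUATION values; internal nodes back up max or min of their children.

def minimax(node, maximizing, tree, values):
    children = tree.get(node)
    if not children:                      # leaf: STATIC EVALUATION value
        return values[node]
    scores = [minimax(c, not maximizing, tree, values) for c in children]
    return max(scores) if maximizing else min(scores)

# Illustrative tree: root A (maximizing), B and C (minimizing), leaves D-G.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
values = {"D": 3, "E": 5, "F": 2, "G": 9}
print(minimax("A", True, tree, values))  # -> 3  (backed-up root value)
```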
Figure 1: Before backing-up of values
Figure 2: After backing-up of values. We assume that PLAYER1 will start the game.
Four levels are generated. The values of nodes H, I, J, K, L, M, N, and O are provided by the STATIC
EVALUATION function. Level 3 is a maximizing level, so all nodes of level 3 take the maximum
values of their children. Level 2 is a minimizing level, so all its nodes take the minimum values of
their children. This process continues. The value of A is 23, which means A should choose
move C to win.
Reference : Artificial Intelligence by Rich and Knight
Advantages of Game Playing in Artificial Intelligence:
1. Advancement of AI: Game playing has been a driving force behind the development of
artificial intelligence.
2. Education and training: Game playing can be used to teach AI techniques and
algorithms to students and professionals.
3. Research: Game playing is an active area of research in AI and provides an opportunity
to study and develop new techniques for decision-making and problem-solving.
4. Real-world applications: The techniques and algorithms developed for game playing can
be applied to real-world applications, such as robotics, autonomous systems, and
decision support systems.
Disadvantages of Game Playing in Artificial Intelligence:
1. Limited scope: The techniques and algorithms developed for game playing may not be
well-suited for other types of applications and may need to be adapted or modified for
different domains.
2. Computational cost: Game playing can be computationally expensive, especially for
complex games.
Alpha-Beta Pruning
○ Alpha-beta pruning is a modified version of the minimax algorithm. It is
an optimization technique for the minimax algorithm.
○ This involves two threshold parameters Alpha and Beta for future
expansion, so it is called alpha-beta pruning. It is also called the
Alpha-Beta Algorithm.
○ Alpha-beta pruning can be applied at any depth of a tree, and
sometimes it prunes not only the tree leaves but also entire sub-trees.
○ The two-parameter can be defined as:
○ Alpha: The best (highest-value) choice we have found so far at any
point along the path of Maximizer. The initial value of alpha is -∞.
○ Beta: The best (lowest-value) choice we have found so far at any
point along the path of Minimizer. The initial value of beta is +∞.
○ Alpha-beta pruning applied to a standard minimax tree returns the
same move as the standard algorithm does, but it removes all the
nodes which do not really affect the final decision and only make the
algorithm slow. Hence, by pruning these nodes, it makes the algorithm
fast.
Condition for Alpha-beta pruning:
The main condition required for alpha-beta pruning is:
α>=β
Key points about alpha-beta pruning:
○ The Max player will only update the value of alpha.
○ The Min player will only update the value of beta.
○ While backtracking the tree, the node values will be passed to upper
nodes instead of values of alpha and beta.
○ We will only pass the alpha, beta values to the child nodes.
Working of Alpha-Beta Pruning:
Let's take an example of a two-player search tree to understand the working of
alpha-beta pruning.
Step 1: At the first step, the Max player starts with its first move from node A, where α =
-∞ and β = +∞. These values of alpha and beta are passed down to node B, where again
α = -∞ and β = +∞, and node B passes the same values to its child D.
Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of
α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α
at node D, and the node value is also 3.
Step 3: The algorithm now backtracks to node B, where the value of β changes, as
it is Min's turn: β = +∞ is compared with the available subsequent
node value, i.e. min(∞, 3) = 3, hence at node B now α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is
node E, and the values α = -∞ and β = 3 are passed along as well.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The
current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node
E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, and the
algorithm does not traverse it; the value at node E is 5.
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A.
At node A, the value of alpha is changed to the maximum available value, 3,
as max(-∞, 3) = 3, with β = +∞. These two values are now passed to the right successor of A,
which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, which is 0
(max(3, 0) = 3), and then with the right child, which is 1 (max(3, 1) = 3);
α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the
value of beta is changed, as it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3
and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G,
is pruned, and the algorithm does not compute the entire sub-tree of G.
Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3.
The final game tree shows the nodes which were
computed and the nodes which were never computed. Hence the optimal value for
the maximizer is 3 for this example.
○ Worst ordering: In some cases, the alpha-beta pruning algorithm does not
prune any of the leaves of the tree and works exactly like the minimax
algorithm. In this case, it also consumes more time because of the
alpha-beta bookkeeping; such a move ordering is called worst ordering. In
this case, the best move occurs on the right side of the tree. The time
complexity for such an ordering is O(b^m).
○ Ideal ordering: The ideal ordering for alpha-beta pruning occurs when
lots of pruning happens in the tree, and the best moves occur on the left
side of the tree. We apply DFS, so it searches the left of the tree first and
can go twice as deep as the minimax algorithm in the same amount of time.
The complexity in ideal ordering is O(b^(m/2)).
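The walkthrough above can be sketched in code; the tree and leaf values below are illustrative, chosen so that, as in the example, a right-hand child is pruned and the root value comes out to 3 (they are not the exact tree from the figures).

```python
# Alpha-beta pruning over an explicit tree: Max updates alpha, Min updates
# beta, and a branch is cut as soon as alpha >= beta.

def alphabeta(node, alpha, beta, maximizing, tree, values):
    children = tree.get(node)
    if not children:                      # leaf: static evaluation value
        return values[node]
    if maximizing:
        best = float("-inf")
        for c in children:
            best = max(best, alphabeta(c, alpha, beta, False, tree, values))
            alpha = max(alpha, best)
            if alpha >= beta:             # prune remaining children
                break
    else:
        best = float("inf")
        for c in children:
            best = min(best, alphabeta(c, alpha, beta, True, tree, values))
            beta = min(beta, best)
            if alpha >= beta:             # prune remaining children
                break
    return best

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
values = {"D": 3, "E": 5, "F": 1, "G": 9}
print(alphabeta("A", float("-inf"), float("inf"), True, tree, values))  # -> 3
```

Here node G is never evaluated: after F returns 1 at the minimizing node C, beta drops to 1 while alpha is already 3, so the α >= β condition fires, just as in Step 7 above.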
What is Planning in AI?
Planning in AI is the process of coming up with a series of actions or procedures
to accomplish a particular goal.
Different types of planning in AI
AI planning comes in different types, each suitable for a particular situation.
Popular types of planning in AI include:
● Classical Planning: In this style of planning, a series of actions is created to
accomplish a goal in a predetermined setting. It assumes that everything is
static and predictable.
● Hierarchical planning: By dividing large problems into smaller ones,
hierarchical planning makes planning more effective. A hierarchy of plans
must be established, with higher-level plans supervising the execution of
lower-level plans.
● Temporal Planning: Temporal planning considers time restrictions and
interdependencies between actions. It ensures that the plan is workable
within a certain time limit by taking into account the duration of tasks.
Components of planning system in AI
A planning system in AI is made up of several crucial parts that cooperate to
produce successful plans. These components of a planning system in AI consist
of:
● Representation: The component that describes how the planning problem
is represented is called representation. The state space, actions,
objectives, and limitations must all be defined.
● Search: To locate a series of steps that will reach the goal, the
search component explores the state space. A variety of search
techniques, including depth-first search and A* search, can be used to
find the best plans.
● Heuristics: Heuristics are used to direct search efforts and gauge the
expense or benefit of certain actions. They aid in locating prospective
routes and enhancing the effectiveness of the planning process.
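The first two components can be sketched together in a toy planner: a representation (states and actions) and a search (BFS over the state space; a heuristic-guided search such as A* could replace it). The shopping-style states and actions are purely illustrative.

```python
from collections import deque

# Representation: each state maps to the (action, next_state) pairs available.
actions = {
    "at_home":  [("walk", "at_shop")],
    "at_shop":  [("buy", "has_milk"), ("walk", "at_home")],
    "has_milk": [("walk_home", "home_with_milk")],
}

# Search: BFS over the state space returns the shortest action sequence.
def plan(start, goal):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for action, nxt in actions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [action]))
    return None  # no plan exists

print(plan("at_home", "home_with_milk"))  # -> ['walk', 'buy', 'walk_home']
```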
Benefits of AI Planning
Numerous advantages of AI planning contribute to the efficacy and efficiency of
artificial intelligence systems. Some key benefits include:
● Resource Allocation: With the help of AI planning, resources can be
distributed in the best way possible, ensuring that they are used effectively
to accomplish the desired objectives.
● Better Decision-Making: AI planning aids in making knowledgeable
judgments by taking a variety of aspects and restrictions into account. It
helps AI systems to weigh several possibilities and decide on the best
course of action.
● Automation of Complex Tasks: AI planning automates complicated tasks
that would otherwise need a lot of human work. It makes it possible for AI
systems to manage complex procedures and optimize them for better
results.
Applications of AI Planning
AI planning is used in many different fields, demonstrating its adaptability and
efficiency. A few significant applications are:
● Robotics: To enable autonomous robots to properly navigate their
surroundings, carry out activities, and achieve goals, planning is crucial.
● Gaming: AI planning is essential to the gaming industry because it enables
game characters to make thoughtful choices and design difficult and
interesting gameplay scenarios.
● Logistics: To optimize routes, timetables, and resource allocation and
achieve effective supply chain management, AI planning is widely utilized
in logistics.
● Healthcare: AI planning is used in the healthcare industry to improve the
quality and efficiency of services by scheduling patients, allocating
resources, and planning treatments.
Challenges in AI Planning
While AI planning has many advantages, several issues still need to be
resolved. Typical challenges include:
● Complexity: Due to the wide state space, multiple possible actions, and
interdependencies between them, planning in complicated domains can be
difficult.
● Uncertainty: One of the biggest challenges in AI planning is coping with
uncertainty. The outcomes of actions cannot always be predicted, so the
planning system must be able to deal with such ambiguous situations.
● Scalability: Scalability becomes a significant barrier as the complexity and
scale of planning problems grow. Planning systems must handle large-scale
problems efficiently.
Strategies for Mastering AI Planning
Adopting strategies that improve planning abilities is crucial if you want to
master AI planning. Here are some strategies to think about:
● Domain Knowledge: Learn everything there is to know about the planning
domain. Better strategies can be made if you are aware of the complexities
and limitations of the domain.
● Algorithm Selection: It is essential to choose the right planning algorithm
for the particular issue at hand. Choosing the best algorithm can have a
big impact on the planning process because different algorithms have
different strengths and disadvantages.
● Improvement through iteration: Planning is an iterative process, and
improvement is essential. Analyze the effectiveness of plans, pinpoint
areas that need improvement, and adjust the planning system as
necessary.
Tools and Techniques for AI Planning
Various tools and strategies that support planning can be used to facilitate AI
planning. Techniques and tools that are commonly used include:
● Automated planners: Automated planning systems such as STRIPS generate
plans from a problem description, while languages such as PDDL provide a
standard framework for specifying planning problems.
● Constraint Programming: Using the strong technique of constraint
programming, complicated planning issues with a variety of constraints
can be modeled and solved.
● Machine Learning: Reinforcement learning is a machine learning technique
that can be used to enhance planning by learning from previous
experiences and refining plans in response to feedback.
Best Practices of AI Planning
Several important aspects are incorporated into best practices for AI planning:
● Clear problem formulation: Clearly state your goals, constraints, and
desired results.
● Optimal representation: Construct an appropriate representation of the
planning domain.
● Algorithm selection: Choose planning algorithms that strike a compromise
between complexity and optimality.
● Iterative improvement: Keep the planning process under constant review
and improvement.
● Management of uncertainty: Include methods, such as probabilistic
modeling, to deal with uncertainty.
● Utilizing human expertise: Ask for and use feedback from others to make
sure your aims are in line with theirs.
● Benchmarking and evaluation: Continually assess performance and
evaluate against pertinent indicators.
● Collaboration: Encourage cooperative planning by incorporating pertinent
parties.
● Scalability: Design planning systems with scalability in mind to effectively
tackle big problems.
● Real-time responsiveness: Create systems that can instantly adjust and
replan in response to shifting circumstances.
● Ethical considerations: Address AI ethical issues, promote fairness, and
ensure accountability in the planning procedures.
● Documentation: Keep thorough records of the planning process for future
reference.
Classical planning
For example, consider a configuration with blocks A, B, and C and a hand that
can pick up any of the blocks: the hand is holding C, B rests on the table, and
A is stacked on top of B. One example of a predicate might be On-Table(B),
which specifies whether object B is on the table or not. To represent this
state, we would need objects for the blocks A, B, and C and could then create
the following predicates:
● In-Hand(C)
● On-Table(B)
● On-Block(A, B)
● Clear(A)
● Clear(C)
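A state like this can be written down directly as a set of ground predicates. The sketch below uses the predicates from the list above; the tuple encoding and the `holds` helper are my own illustrative choices, not part of any standard library:

```python
# Each ground predicate is a tuple: (predicate_name, *arguments).
state = {
    ("In-Hand", "C"),
    ("On-Table", "B"),
    ("On-Block", "A", "B"),
    ("Clear", "A"),
    ("Clear", "C"),
}

def holds(predicate, state):
    """Check whether a ground predicate is true in the given state."""
    return predicate in state

print(holds(("On-Table", "B"), state))  # True
print(holds(("On-Table", "A"), state))  # False
```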
Hierarchical Planning:
Implement goal stack planning for the following configurations from the blocks world.
What is Blocks World Problem?
•There is a table on which some blocks are placed.
•Some blocks may or may not be stacked on other blocks.
•We have a robot arm to pick up or put down the blocks.
•The robot arm can move only one block at a time, and no other block should be
stacked on top of the block which is to be moved by the robot arm.
•Our aim is to change the configuration of the blocks from the Initial State to
the Goal State, both of which are specified below.
What is Goal Stack Planning?
Goal Stack Planning is one of the earliest methods in artificial
intelligence in which we work backwards from the goal state to
the initial state.
We start at the goal state and try to fulfill the preconditions
required to achieve it. These preconditions in turn have their own
set of preconditions, which are required to be satisfied first.
We keep solving these “goals” and “sub-goals” until we finally arrive at
the Initial State. We make use of a stack to hold these goals that
need to be fulfilled, as well as the actions that we need to perform
for the same.
Apart from the “Initial State” and the “Goal State”, we maintain a
“World State” configuration as well. Goal Stack uses this world state
to work its way from Goal State to Initial State. World State on the
other hand starts off as the Initial State and ends up being
transformed into the Goal state.
At the end of this algorithm, we are left with an empty stack and a
sequence of actions that takes us from the Initial State to the Goal
State.
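The loop just described can be summarized in pseudocode (this is a sketch of the standard goal stack procedure, not a complete implementation):

```text
push the compound goal onto the stack
while the stack is not empty:
    top = pop the stack
    if top is a compound goal:
        if every conjunct already holds in the World State: continue
        otherwise push top back, then push each unsatisfied conjunct above it
    else if top is a single predicate:
        if it already holds in the World State: continue
        otherwise choose an operation whose ADD list contains the predicate,
        push that operation, then push its preconditions above it
    else:  # top is an operation
        apply it to the World State (remove its DELETE list, add its ADD list)
        append it to the plan
the accumulated plan transforms the Initial State into the Goal State
```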
Representing the configurations as a list of “predicates”
Predicates can be thought of as a statement which helps us convey the
information about a configuration in Blocks World.
Given below are the list of predicates as well as their intended
meaning
1. ON(A,B) : Block A is on B
2. ONTABLE(A) : A is on table
3. CLEAR(A) : Nothing is on top of A
4. HOLDING(A) : Arm is holding A.
5. ARMEMPTY : Arm is holding nothing
Using these predicates, we represent the Initial State and the Goal
State in our example like this:
Initial State — ON(B,A) ∧ ONTABLE(A) ∧ ONTABLE(C) ∧
ONTABLE(D) ∧ CLEAR(B) ∧ CLEAR(C) ∧ CLEAR(D) ∧
ARMEMPTY
Goal State — ON(C,A) ∧ ON(B,D) ∧ ONTABLE(A) ∧ ONTABLE(D)
∧ CLEAR(B) ∧ CLEAR(C) ∧ ARMEMPTY
Thus a configuration can be thought of as a list of predicates
describing the current scenario.
“Operations” performed by the robot arm
The Robot Arm can perform 4 operations:
1. STACK(X,Y) : Stacking Block X on Block Y
2. UNSTACK(X,Y) : Picking up Block X which is on top of Block Y
3. PICKUP(X) : Picking up Block X which is on top of the table
4. PUTDOWN(X) : Putting Block X down on the table
All the four operations have certain preconditions which need to be
satisfied to perform the same. These preconditions are represented in
the form of predicates.
The effect of these operations is represented using two lists ADD and
DELETE. DELETE List contains the predicates which will cease to be
true once the operation is performed. ADD List on the other hand
contains the predicates which will become true once the operation is
performed.
The Precondition, Add, and Delete lists for each operation are rather
intuitive and are listed below:
1. STACK(X,Y)
   Precondition: CLEAR(Y) ∧ HOLDING(X)
   Delete: CLEAR(Y), HOLDING(X)
   Add: ON(X,Y), ARMEMPTY
2. UNSTACK(X,Y)
   Precondition: ON(X,Y) ∧ CLEAR(X) ∧ ARMEMPTY
   Delete: ON(X,Y), ARMEMPTY
   Add: HOLDING(X), CLEAR(Y)
3. PICKUP(X)
   Precondition: ONTABLE(X) ∧ CLEAR(X) ∧ ARMEMPTY
   Delete: ONTABLE(X), ARMEMPTY
   Add: HOLDING(X)
4. PUTDOWN(X)
   Precondition: HOLDING(X)
   Delete: HOLDING(X)
   Add: ONTABLE(X), ARMEMPTY
For example, to perform the STACK(X,Y) operation i.e. to Stack
Block X on top of Block Y, No other block should be on top of Y
(CLEAR(Y)) and the Robot Arm should be holding the Block X
(HOLDING(X)).
Once the operation is performed, these predicates will cease to be true,
thus they are included in DELETE List as well. (Note : It is not
necessary for the Precondition and DELETE List to be the exact same).
On the other hand, once the operation is performed, The robot arm
will be free (ARMEMPTY) and the block X will be on top of Y
(ON(X,Y)).
The other 3 Operators follow similar logic, and this part is the
cornerstone of Goal Stack Planning.
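As a concrete check on these Precondition, Add, and Delete lists, the sketch below encodes all four operations in Python and applies a plan to the example's Initial State. The encoding and helper names are my own, and the plan shown is a hand-derived solution for this Initial/Goal pair, the kind of plan goal stack planning would produce automatically:

```python
# Each operation returns (preconditions, delete_list, add_list),
# following the lists described above. Predicates are tuples.
def STACK(x, y):
    return ({("CLEAR", y), ("HOLDING", x)},
            {("CLEAR", y), ("HOLDING", x)},
            {("ON", x, y), ("ARMEMPTY",)})

def UNSTACK(x, y):
    return ({("ON", x, y), ("CLEAR", x), ("ARMEMPTY",)},
            {("ON", x, y), ("ARMEMPTY",)},
            {("HOLDING", x), ("CLEAR", y)})

def PICKUP(x):
    return ({("ONTABLE", x), ("CLEAR", x), ("ARMEMPTY",)},
            {("ONTABLE", x), ("ARMEMPTY",)},
            {("HOLDING", x)})

def PUTDOWN(x):
    return ({("HOLDING", x)},
            {("HOLDING", x)},
            {("ONTABLE", x), ("ARMEMPTY",)})

def apply_op(state, op):
    """Check the precondition, then remove the DELETE list and union the ADD list."""
    pre, delete, add = op
    assert pre <= state, "precondition not satisfied"
    return (state - delete) | add

initial = {("ON", "B", "A"), ("ONTABLE", "A"), ("ONTABLE", "C"),
           ("ONTABLE", "D"), ("CLEAR", "B"), ("CLEAR", "C"),
           ("CLEAR", "D"), ("ARMEMPTY",)}

goal = {("ON", "C", "A"), ("ON", "B", "D"), ("ONTABLE", "A"),
        ("ONTABLE", "D"), ("CLEAR", "B"), ("CLEAR", "C"), ("ARMEMPTY",)}

# A hand-derived plan for this Initial/Goal pair.
state = initial
for op in [UNSTACK("B", "A"), STACK("B", "D"), PICKUP("C"), STACK("C", "A")]:
    state = apply_op(state, op)

print(state == goal)  # True
```

Running the plan step by step confirms the note above: after STACK(C,A), the DELETE list removes CLEAR(A) and HOLDING(C) while the ADD list restores ARMEMPTY and asserts ON(C,A), leaving exactly the Goal State.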