Artificial Intelligence
UNIT VI
Planning
Prepared By
Mrs. Jayshri Dhere
Assistant Professor
Artificial Intelligence and Data Science
DYPIEMR, Pune
Syllabus
• Automated Planning,
• Classical Planning,
• Algorithms for Classical Planning,
• Heuristics for Planning,
• Hierarchical Planning,
• Planning and Acting in Nondeterministic Domains,
• Time, Schedules, and Resources,
• Analysis of Planning Approaches,
• Limits of AI,
• Ethics of AI,
• Future of AI,
• AI Components,
• AI Architectures.
PLANNING
Planning in AI can be defined as a decision-making problem: an
intelligent system must choose actions that accomplish a given
target.
Ex. Printer.
Planning is an activity where an agent has to come up with
a sequence of actions to accomplish a target.
The aim of the agent is to find the proper sequence of actions
that leads from the starting state to the goal state and
produces an efficient solution.
What is the Role of Planning in
Artificial Intelligence?
• Artificial intelligence is an important technology of the future.
Intelligent robots, self-driving cars, and smart cities will all use
different aspects of artificial intelligence, and planning is
essential to building any such AI project.
• Planning is the part of Artificial Intelligence that deals with the
tasks and domains of a particular problem. Planning is
considered the logical side of acting.
• Everything we humans do has a definite goal in mind, and all
our actions are oriented towards achieving that goal. Similarly,
planning is done for Artificial Intelligence.
Automated Planning
• In AI, automated planning and scheduling is the process of using
computers to automatically plan and schedule actions and
events.
• This can include planning and scheduling tasks, resources, and
events. Automated planning and scheduling can help
organizations and individuals to optimize their use of resources
and time, and to reduce the need for manual planning and
scheduling.
• Automated planning and scheduling in AI typically uses a variety
of algorithms and methods, including constraint satisfaction,
search, planning graphs, and Markov decision processes.
• These methods can be used to find solutions to planning and
scheduling problems, to optimize plans and schedules, and to
predict future events.
CLASSICAL PLANNING
The problem-solving agent can find sequences of actions that
result in a goal state. But it deals with atomic representations of
states and thus needs good domain-specific heuristics to perform
well.
The hybrid propositional logical agent can find plans without
domain-specific heuristics because it uses domain-independent
heuristics based on the logical structure of the problem, but it
relies on ground (variable-free) propositional inference, which
means that it may be swamped when there are many actions and
states.
For example, in the wumpus world, the simple action of moving a step
forward had to be repeated for all four agent orientations, T time
steps, and current locations.
Classical Planning
• Classical Planning is the planning where an agent takes
advantage of the problem structure to construct
complex plans of an action.
• The agent performs three tasks in classical planning:
1. Planning: The agent plans after knowing what is the
problem.
2. Acting: It decides what action it has to take.
3. Learning: The actions taken by the agent make it
learn new things.
CLASSICAL PLANNING MAKES THE FOLLOWING ASSUMPTIONS
ABOUT THE TASK ENVIRONMENT.
Fully observable - The agent can observe the current state of the environment.
Deterministic - The agent can determine the consequences of its actions.
Finite - There is a finite set of actions that the agent can carry out at every
state in order to achieve the goal.
Static - The environment does not change on its own; external events that
cannot be handled by the agent are not considered.
Discrete - The agent's events are distinct from the starting state to the
ending state in terms of time.
PDDL(Planning Domain Definition
Language)
A language known as PDDL (Planning Domain
Definition Language) is used to represent all
actions in one action schema.
PDDL describes the four basic things needed in a
search problem:
1. Initial state: The representation of a state as a
conjunction of ground, functionless atoms.
2. Actions: Defined by a set of action schemas which
implicitly define the ACTIONS() and RESULT() functions.
3. Result: Obtained from the set of actions used by the
agent.
4. Goal: Like a precondition, it is a conjunction of
literals (each of which is positive or negative).
PDDL(Planning Domain Definition
Language)
The following examples will make PDDL easier to
understand:
• Air cargo transport
• The spare tire problem
• The blocks world and many more
Air cargo transport
• This problem can be illustrated with the help of the following
actions:
• Load: This action is taken to load cargo.
• Unload: This action is taken to unload the cargo when it
reaches its destination.
• Fly: This action is taken to fly from one place to another.
• Therefore, the Air cargo transport problem is based on
loading and unloading the cargo and flying it from one place
to another.
An air cargo transport problem involving loading and unloading
cargo and flying it from place to place.
The problem can be defined with three actions:
Load, Unload, and Fly.
The actions affect two predicates: In(c, p) means that cargo c is
inside plane p, and At(x, a) means that object x (either plane or
cargo) is at airport a. Note that some care must be taken to make sure
the At predicates are maintained properly. When a plane flies from
one airport to another, all the cargo inside the plane goes with it. In
first-order logic it would be easy to quantify over all objects that are
inside the plane. But basic PDDL does not have a universal
quantifier, so we need a different solution.
The approach we use is to say that a piece of cargo ceases to be At
anywhere when it is In a plane; the cargo only becomes At the new
airport when it is unloaded. So At really means “available for use at
a given location.”
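The Load, Unload, and Fly actions and the At/In bookkeeping described above can be sketched as grounded, STRIPS-style operators. This is an illustrative Python encoding, not the book's PDDL; all identifiers (C1, P1, SFO, JFK) are assumptions.

```python
# Hedged sketch: a grounded, STRIPS-style encoding of the air-cargo
# actions. States are sets of literal strings; every action carries a
# precondition set, an add list, and a delete list.

def load(c, p, a):
    return {"pre": {f"At({c},{a})", f"At({p},{a})"},
            "add": {f"In({c},{p})"},
            "del": {f"At({c},{a})"}}     # cargo ceases to be At the airport

def unload(c, p, a):
    return {"pre": {f"In({c},{p})", f"At({p},{a})"},
            "add": {f"At({c},{a})"},     # cargo becomes At the new airport
            "del": {f"In({c},{p})"}}

def fly(p, src, dst):
    return {"pre": {f"At({p},{src})"},
            "add": {f"At({p},{dst})"},
            "del": {f"At({p},{src})"}}

def result(state, action):
    """Apply an action, or return None if its preconditions do not hold."""
    if not action["pre"] <= set(state):
        return None
    return (set(state) - action["del"]) | action["add"]

state = {"At(C1,SFO)", "At(P1,SFO)"}
state = result(state, load("C1", "P1", "SFO"))
state = result(state, fly("P1", "SFO", "JFK"))
state = result(state, unload("C1", "P1", "JFK"))
print(sorted(state))  # ['At(C1,JFK)', 'At(P1,JFK)']
```

Note how the delete list of Load and the add list of Unload implement the "At means available for use" convention: the cargo's At literal disappears while it is In the plane.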
ALGORITHMS FOR CLASSICAL PLANNING
Several algorithms and techniques have been developed for
solving classical planning problems. Here are some of the most
well-known algorithms:
1. Forward State Space Search:
Breadth-First Search (BFS): Explores all possible actions at a
given depth level before moving to the next level.
Uniform-Cost Search (Dijkstra's Algorithm): Expands nodes
based on the cost of the path, ensuring the optimal solution.
FORWARD STATE SPACE SEARCH:
(PROGRESSION)
Planning problems often have large state spaces. Consider an air
cargo problem with 10 airports, where each airport has 5 planes and
20 pieces of cargo. The goal is to move all the cargo at airport A to
airport B.
There is a simple solution to the problem: load the 20 pieces of cargo
into one of the planes at A, fly the plane to B, and unload the cargo.
Finding the solution can be difficult because the average branching
factor is huge: each of the 50 planes can fly to 9 other airports, and
each of the 200 packages can be either unloaded (if it is loaded) or
loaded into any plane at its airport (if it is unloaded). So in any state
there is a minimum of 450 actions
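The forward (progression) search just described can be sketched as a breadth-first planner over a much smaller illustrative domain (one cargo item, one plane, two airports; all names are assumptions):

```python
# Hedged sketch: breadth-first forward (progression) search over
# grounded STRIPS-style actions in a tiny illustrative domain.
from collections import deque

# (name, preconditions, add effects, delete effects)
ACTIONS = [
    ("Load",   {"At(C,A)", "At(P,A)"}, {"In(C,P)"}, {"At(C,A)"}),
    ("Fly",    {"At(P,A)"},            {"At(P,B)"}, {"At(P,A)"}),
    ("Unload", {"In(C,P)", "At(P,B)"}, {"At(C,B)"}, {"In(C,P)"}),
]

def bfs_plan(init, goal):
    """Breadth-first progression search; returns a shortest plan."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # goal literals all hold
            return plan
        for name, pre, add, dele in ACTIONS:
            if pre <= state:                   # action applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

print(bfs_plan({"At(C,A)", "At(P,A)"}, {"At(C,B)"}))
# ['Load', 'Fly', 'Unload']
```

Even here the branching factor matters: BFS enumerates every applicable action at every state, which is exactly why the 450-actions-per-state problem above overwhelms uninformed search.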
BACKWARD (REGRESSION) RELEVANT-STATES
SEARCH:
In regression search we start at the goal and apply the
actions backward until we find a sequence of steps that
reaches the initial state. It is called relevant-states search
because we only consider actions that are relevant to the
goal (or current state). There is a set of relevant states to
consider at each step, not just a single state.
BACKWARD (REGRESSION) RELEVANT-STATES
SEARCH:…
Ex. We start with the goal, which is a conjunction of literals
forming a description of a set of states—for example, the goal
¬Poor ∧ Famous describes those states in which Poor is
false, Famous is true, and any other fluent can have any value.
If there are n ground fluents in a domain, then there are 2^n
ground states (each fluent can be true or false), but 3^n
descriptions of sets of goal states (each fluent can be positive,
negative, or not mentioned).
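A single regression step of this backward search can be sketched as follows; the grounded Unload action in the usage line is illustrative:

```python
# Hedged sketch: one regression step. Given a goal (a set of positive
# literals) and a grounded action, the regressed goal keeps the
# literals the action does not supply and adds the preconditions.
def regress(goal, pre, add, dele):
    """Regressed goal, or None if the action is irrelevant or
    conflicts with the goal (deletes one of its literals)."""
    if not (add & goal):   # relevant: must achieve part of the goal
        return None
    if dele & goal:        # consistent: must not undo the goal
        return None
    return (goal - add) | pre

# Regress the goal At(C,B) through Unload(C, P, B):
g = regress({"At(C,B)"},
            pre={"In(C,P)", "At(P,B)"},
            add={"At(C,B)"},
            dele={"In(C,P)"})
print(sorted(g))  # ['At(P,B)', 'In(C,P)']
```

The relevance test is what keeps the branching factor of regression small: only actions that achieve some goal literal are considered.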
Heuristic Search:
⚫ A* Search: Uses a heuristic function to estimate the cost to reach
the goal and combines it with the cost to reach a particular state. It
ensures optimality when using an admissible heuristic.
⚫ Best-First Search: Expands nodes based on a heuristic function
without considering the cost of the path.
HEURISTICS FOR PLANNING:
Neither forward nor backward search
is efficient without a good heuristic function.
A heuristic function h(s) estimates the distance from a state s
to the goal; if we can derive an admissible heuristic for this
distance (one that does not overestimate), then we can use A*
search to find optimal solutions. An admissible heuristic can be
derived by defining a relaxed problem that is easier to solve.
The exact cost of a solution to this easier problem then
becomes the heuristic for the original problem.
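One common relaxation (an assumption here, since the slide does not name a specific one) is to ignore the delete effects of actions; the number of action layers needed to reach all goal literals in the relaxed problem then never overestimates the true plan length:

```python
# Hedged sketch: the "ignore delete lists" relaxation. Since nothing
# is ever deleted, the set of reached literals grows monotonically,
# and the layer at which the goal first holds is an admissible
# estimate. The tiny domain is illustrative.
def relaxed_layers(state, goal, actions, limit=50):
    """Number of action layers until the goal holds, ignoring deletes."""
    reached = set(state)
    for layer in range(limit):
        if goal <= reached:
            return layer
        new = set(reached)
        for pre, add, _dele in actions:   # delete effects are ignored
            if pre <= reached:
                new |= add
        if new == reached:                # fixpoint: goal unreachable
            return None
        reached = new
    return None

ACTIONS = [({"At(C,A)", "At(P,A)"}, {"In(C,P)"}, {"At(C,A)"}),
           ({"At(P,A)"}, {"At(P,B)"}, {"At(P,A)"}),
           ({"In(C,P)", "At(P,B)"}, {"At(C,B)"}, {"In(C,P)"})]
print(relaxed_layers({"At(C,A)", "At(P,A)"}, {"At(C,B)"}, ACTIONS))  # 2
```

Here the estimate is 2 while the real plan needs 3 steps: relaxation can only underestimate, which is exactly what admissibility requires.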
By definition, there is no way to analyze an atomic state,
and thus it requires some ingenuity by a human analyst to
define good domain-specific heuristics for search problems
with atomic states.
Planning uses a factored representation for states and action
schemas. That makes it possible to define good
domain-independent heuristics, and for programs to
automatically apply a good domain-independent heuristic
to a given problem.
Cooperation: Joint goals and plans
Agents(A, B) declares that there are two agents, A
and B, who are participating in the plan.
Each action explicitly mentions the agent as a
parameter, because we need to keep track of which agent
does what.
A solution to a multiagent planning problem is a joint
plan consisting of actions for each agent.
PLANNING GRAPHS:
All of the heuristics we have suggested can suffer from
inaccuracies. This section shows how a special data
structure called a planning graph can be used to give
better heuristic estimates. These heuristics can be applied
to any of the search techniques we have seen so far.
Alternatively, we can search for a solution over the space
formed by the planning graph, using an algorithm called
GRAPHPLAN.
A planning problem asks if we can reach a goal state from the initial state.
Suppose we are given a tree of all possible actions from the initial state to
successor states, and their successors, and so on. If we indexed this tree
appropriately, we could answer the planning question “can we reach state G
from state S0” immediately, just by looking it up. Of course, the tree is of
exponential size, so this approach is impractical.
A planning graph is a polynomial-size approximation to this tree that
can be constructed quickly. The planning graph can't answer definitively
whether G is reachable from S0, but it can estimate how many steps it
takes to reach G. The estimate is always correct when it reports the goal
is not reachable, and it never overestimates the number of steps, so it
is an admissible heuristic.
ANALYSIS OF PLANNING APPROACHES:
Planning combines the two major areas of AI we have
covered so far: search and logic.
A planner can be seen either as a program that searches
for a solution or as one that (constructively) proves the
existence of a solution.
HIERARCHICAL PLANNING :
Hierarchical Planning is a planning method based on
hierarchical task networks (HTNs).
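The core idea of a hierarchical task network can be sketched as recursive task decomposition; the task and method names below are illustrative assumptions:

```python
# Hedged sketch: HTN-style decomposition. Abstract tasks refine into
# ordered subtasks until only primitive actions remain. Task names
# are illustrative.
METHODS = {
    "BuildHouse": ["GetPermit", "Construct"],
    "Construct":  ["LayFoundation", "BuildWalls", "AddRoof"],
}

def decompose(task):
    """Recursively expand a task into a flat sequence of primitives."""
    if task not in METHODS:        # primitive action: no method applies
        return [task]
    plan = []
    for sub in METHODS[task]:
        plan += decompose(sub)
    return plan

print(decompose("BuildHouse"))
# ['GetPermit', 'LayFoundation', 'BuildWalls', 'AddRoof']
```

A real HTN planner would also check preconditions and choose among several applicable methods per task; this sketch keeps only the decomposition structure.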
PLANNING AND ACTING IN NONDETERMINISTIC DOMAINS
Planning and acting in nondeterministic domains is a
challenging problem in the field of artificial intelligence.
Nondeterministic domains are those in which the
outcome of an action or the state transition is not
entirely predictable and may involve uncertainty or
probabilistic elements. This is common in real-world
scenarios where external factors or events can influence
the outcome. Dealing with such uncertainty requires
specialized approaches. Here are some key concepts and
techniques for planning and acting in nondeterministic
domains
Consider this problem: given a chair and a table, the goal is to
have them match—have the same color. In the initial state we
have two cans of paint, but the colors of the paint and the
furniture are unknown. Only the table is initially in the agent’s
field of view:
Init(Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2) ∧ InView(Table))
Goal(Color(Chair, c) ∧ Color(Table, c))
There are two actions: removing the lid from a paint can and painting an
object using the paint from an open can.
• Markov Decision Processes (MDPs):
MDPs are a mathematical framework for modeling and solving
decision-making problems under uncertainty. They consist of states,
actions, transition probabilities, rewards, and a discount factor. Planning in
MDPs involves finding a policy that maximizes expected cumulative
rewards.
• POMDPs (Partially Observable MDPs):
In some situations, the agent may not have complete information about
the current state. POMDPs extend MDPs to handle partially observable
environments, introducing belief states and observation models.
• Stochastic Actions:
In nondeterministic domains, actions may have stochastic outcomes. Each
action can have a probability distribution over possible outcomes, making
planning and acting more challenging.
• Monte Carlo Methods:
• Monte Carlo methods, such as Monte Carlo Tree Search
(MCTS), are often used to handle non-determinism. They use
random sampling to estimate expected values, making them
suitable for domains with uncertain outcomes.
• Reinforcement Learning:
• Reinforcement learning (RL) algorithms can be used to learn
optimal policies in nondeterministic domains by interacting
with the environment and updating policies based on observed
outcomes. Techniques like Q-learning and policy gradient
methods are applicable.
• Value Iteration and Policy Iteration:
• These classic dynamic programming methods can be adapted
to MDPs with stochastic actions. They involve iterative
processes to find optimal policies or value functions.
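Value iteration can be sketched on a tiny two-state MDP; the states, actions, transition probabilities, and rewards below are all illustrative assumptions:

```python
# Hedged sketch: value iteration on a tiny illustrative MDP.
# P[s][a] is a list of (probability, next_state, reward) outcomes.
P = {
    "s0": {"go":   [(0.8, "s1", 5.0), (0.2, "s0", 0.0)],
           "stay": [(1.0, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 0.0)]},
}
GAMMA = 0.9  # discount factor

def q_value(V, s, a):
    """Expected discounted return of taking a in s, then following V."""
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])

def value_iteration(eps=1e-6):
    """Repeat Bellman backups until the largest update is below eps."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(q_value(V, s, a) for a in P[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration()
# Greedy policy with respect to the converged values:
policy = {s: max(P[s], key=lambda a, s=s: q_value(V, s, a)) for s in P}
print(policy["s0"])  # go
```

Policy iteration alternates the same backup with an explicit policy-evaluation step; both converge to the same optimal policy on this MDP.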
• Model-Based and Model-Free Approaches:
• Model-based methods use a probabilistic model of the
environment to plan and act, while model-free methods
directly learn a policy or value function from interactions with
the environment.
• Uncertainty Handling:
• Techniques like Bayesian networks, decision trees, and
probabilistic graphical models can help represent and reason
about uncertainty in nondeterministic domains.
• Plan Robustness:
• In nondeterministic domains, plans should be designed to be
robust to uncertain outcomes. This might involve considering
multiple contingencies or using replanning strategies.
Analysis of Planning Approaches
The analysis of planning approaches involves evaluating
different methods, algorithms, and techniques used to solve
planning problems in various domains. Below, I'll provide an
analysis of some key planning approaches, highlighting their
strengths and weaknesses:
Classical Planning:
⚫ Strengths:
Well-defined and easy to model for deterministic, fully observable problems.
Many efficient algorithms and heuristic search methods available, such as A*
and SAT-based planners.
⚫ Weaknesses:
Limited applicability to nondeterministic and partially observable domains.
May struggle with large state spaces and complex domains.
Probabilistic Planning:
⚫ Strengths:
Suitable for modeling uncertainty in domains where outcomes are probabilistic.
Can handle partially observable states using POMDPs.
Useful for decision-making in stochastic environments.
⚫ Weaknesses:
Can be computationally expensive, especially for large POMDPs.
Requires knowledge of transition probabilities, which may be difficult to obtain.
Reinforcement Learning:
⚫ Strengths:
Can learn optimal policies from experience in nondeterministic environments.
Highly adaptable and suitable for continuous state and action spaces.
Can handle large state spaces and complex dynamics.
⚫ Weaknesses:
Learning can be data-intensive and time-consuming.
May require substantial exploration to learn optimal policies.
Not guaranteed to find globally optimal solutions.
Heuristic Search:
⚫ Strengths:
Effective for finding solutions in deterministic, fully observable domains.
Guarantees optimality with an admissible heuristic.
⚫ Weaknesses:
May not handle uncertainty and nondeterminism well.
Can struggle in large state spaces or with insufficient heuristics.
Monte Carlo Methods:
⚫ Strengths:
Useful for planning and acting in stochastic environments.
Suitable for estimating expected values and handling uncertain outcomes.
Doesn't require a complete model of the environment.
⚫ Weaknesses:
Relatively slow convergence compared to some other methods.
Results may have high variance, requiring many samples.
Online Planning and Execution:
⚫ Strengths:
Well-suited for dynamic and changing environments.
Can adapt to unexpected events and uncertainties in real-time.
⚫ Weaknesses:
Might not find globally optimal solutions.
Reactive approaches may not perform well in highly dynamic environments.
Multi-Agent Planning:
⚫ Strengths:
Suitable for cooperative or competitive multi-agent scenarios.
Offers game-theoretic approaches for strategic planning.
⚫ Weaknesses:
Complex, as it involves reasoning about the actions and beliefs of multiple
agents.
Scalability issues in large multi-agent environments.
Hybrid Approaches:
⚫ Strengths:
Combine the strengths of different approaches to address the weaknesses of
each.
⚫ Weaknesses:
Design and implementation can be complex.
May require domain-specific expertise.
Hierarchical Planning:
⚫ Strengths:
Effective for decomposing complex problems into manageable subproblems.
Provides a structured approach for handling large-scale domains.
⚫ Weaknesses:
Requires domain knowledge to design the hierarchy.
Hierarchical decomposition may not always be straightforward.
TIME, SCHEDULES, AND RESOURCES
A typical job-shop scheduling problem consists of a set of jobs, each
of which consists of a collection of actions with ordering constraints
among them.
Each action has a duration and a set of resource constraints required
by the action.
Each constraint specifies a type of resource (e.g., bolts, wrenches, or
pilots), the number of that resource required, and whether that
resource is consumable (e.g., the bolts are no longer available for use)
or reusable (e.g., a pilot is occupied during a flight but is available
again when the flight is over).
Resources can also be produced by actions with negative
consumption, including manufacturing, growing, and resupply
actions.
A solution to a job-shop scheduling problem must specify the start
times for each action and must satisfy all the temporal ordering
constraints and resource constraints.
As with search and planning problems, solutions can be evaluated
according to a cost function; this can be quite complicated, with
nonlinear resource costs, time-dependent delay costs, and so on. For
simplicity, we assume that the cost function is just the total duration
of the plan, which is called the makespan.
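With resource constraints set aside, earliest start times and the makespan follow from relaxing the ordering constraints alone; a sketch with illustrative durations for one of the two car jobs:

```python
# Hedged sketch: earliest start times and makespan from ordering
# constraints only (resources ignored). Durations are illustrative,
# not the figure's exact numbers.
DURATION = {"AddEngine1": 30, "AddWheels1": 30, "Inspect1": 10}
BEFORE = [("AddEngine1", "AddWheels1"), ("AddWheels1", "Inspect1")]

def schedule(duration, before):
    """Relax each A-before-B constraint until start times settle."""
    start = {a: 0 for a in duration}
    changed = True
    while changed:
        changed = False
        for a, b in before:
            earliest = start[a] + duration[a]
            if earliest > start[b]:   # B cannot start before A finishes
                start[b] = earliest
                changed = True
    makespan = max(start[a] + duration[a] for a in duration)
    return start, makespan

start, makespan = schedule(DURATION, BEFORE)
print(makespan)  # 70
```

Adding resource constraints (one engine hoist, limited inspectors, consumable lug nuts) can only push start times later, so this ordering-only makespan is a lower bound on the full problem's makespan.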
Figure shows a simple example: a
problem involving the assembly of
two cars. The problem consists of two
jobs, each of the form [AddEngine,
AddWheels, Inspect]. Then the
Resources statement declares that
there are four types of resources, and
gives the number of each type
available at the start: 1 engine hoist, 1
wheel station, 2 inspectors, and 500
lug nuts. The action schemas give the
duration and resource needs of each
action. The lug nuts are consumed as
wheels are added to the car, whereas
the other resources are “borrowed” at
the start of an action and released at
the action’s end.
LIMITS OF AI
1. Lack of Common Sense Understanding: AI systems, including the most
advanced ones, often lack common-sense reasoning and understanding of the
world that humans possess. They may struggle with tasks that require basic,
intuitive knowledge.
2. Limited Contextual Understanding: AI systems can be sensitive to the
phrasing and context of a question or task. They might give incorrect
responses when the context is ambiguous or not adequately specified.
3. Data Dependence: Many AI algorithms, particularly machine learning
models, require vast amounts of high-quality data for training and may
perform poorly when data is scarce or unrepresentative.
4. Bias and Fairness: AI systems can inherit biases present in the training data,
leading to unfair or discriminatory outcomes. Ensuring fairness and
mitigating biases remains a significant challenge.
5. Lack of Creativity and Imagination: AI can excel at pattern
recognition and optimization tasks but still struggles with creativity,
imagination, and innovation.
6. Incompleteness: AI systems might provide partial or incorrect
solutions when faced with problems for which there is no clear
answer in their training data or knowledge base.
7. Resource Intensiveness: Many AI algorithms, especially deep
learning models, are computationally intensive and require powerful
hardware. Training and deploying such models can be costly and
energy-intensive.
8. Interpretable AI: The decision-making processes of some AI
models, especially deep neural networks, can be complex and
difficult to interpret. This makes understanding and explaining AI
decisions a challenge.
9. Security Concerns: AI can be vulnerable to adversarial attacks,
where small, carefully crafted perturbations in data can fool AI
systems into making incorrect decisions. Security and robustness
against attacks are ongoing concerns.
10. Ethical and Legal Issues: The deployment of AI in various
applications raises ethical and legal questions, including issues
related to accountability, transparency, and privacy.
11. Long-term Planning and Understanding: AI systems may struggle
with long-term planning and reasoning, particularly in domains with
complex, multi-step objectives.
12. Emotional and Social Intelligence: AI lacks emotional intelligence
and social understanding, making it challenging to develop AI
systems that can effectively engage with humans in emotionally
nuanced interactions.
13. Generalization to Unseen Scenarios: While AI
models can generalize from training data to some
extent, they may struggle to adapt to entirely new,
unseen scenarios.
14. Unstructured Environments: AI systems are often
designed for specific tasks and may not perform well in
unstructured, unpredictable environments.
15. Lack of Consciousness: AI systems are not conscious
beings; they do not possess self-awareness, emotions, or
subjective experiences.
ETHICS OF AI
The ethics of artificial intelligence (AI) is a critical and evolving field that addresses the
moral and societal implications of AI technologies. Here are some of the key ethical
considerations related to AI:
1. Bias and Fairness:
AI systems can inherit biases present in their training data, leading to discriminatory
outcomes. Ensuring fairness and minimizing bias in AI algorithms is crucial.
2. Transparency and Explainability:
AI systems, particularly deep learning models, can be complex and difficult to interpret.
Ensuring transparency and making AI decisions explainable is important, especially in
critical applications.
3. Accountability and Responsibility:
Determining who is responsible for the actions of AI systems is a complex issue.
Establishing legal and ethical accountability frameworks is vital.
4. Privacy and Data Security:
AI often relies on vast amounts of data, raising concerns about data privacy and security.
Safeguarding personal data and respecting privacy rights are key ethical principles.
5. Autonomy and Control:
AI has the potential to make autonomous decisions, which raises
questions about who should control these systems and when human
intervention is necessary.
6. Safety and Reliability:
Ensuring that AI systems are safe and reliable is essential, particularly
in applications like autonomous vehicles, healthcare, and finance.
7. Job Displacement and Economic Impact:
The automation of tasks through AI can lead to job displacement and
economic consequences. Ethical considerations involve retraining and
supporting workers affected by AI-driven changes.
8. Human Augmentation and Enhancement:
The use of AI to enhance human capabilities, such as through
brain-computer interfaces, raises ethical questions about human identity
and equality.
9. Long-term Implications:
Considering the long-term effects of AI on society, including issues like
the singularity and superintelligent AI, is an ethical concern.
10. Dual Use of AI:
AI technologies can be used for both beneficial and harmful purposes.
Ensuring the responsible use of AI and preventing malicious
applications is a key ethical challenge.
11. AI in Warfare:
The use of AI in autonomous weapons and warfare has ethical
implications, including questions of proportionality, discrimination,
and adherence to international law.
12. AI for Social Good:
Promoting the use of AI for social good, including addressing global
challenges like climate change, healthcare, and poverty, is an ethical
imperative.
13. Global Collaboration:
Collaboration on AI ethics and standards at an international level is
essential to ensure that AI technologies are developed and used
responsibly worldwide.
14. Informed Consent:
When AI systems collect and process personal data, individuals
should be informed and provide consent. Ensuring that users
understand how AI systems use their data is an ethical requirement.
15. Access and Inclusivity:
Ensuring that AI technologies are accessible to all, regardless of factors
like socioeconomic status, ethnicity, or geography, is an ethical goal.
The ethical development and deployment of AI require collaboration
among technologists, policymakers, ethicists, and the public.
FUTURE OF AI
The future of artificial intelligence (AI) is a topic of great interest and
speculation, and while it's challenging to predict with certainty, there
are several trends and directions that are likely to shape the future of
AI:
Continued Advancements in Machine Learning:
⚫ Machine learning techniques, including deep learning, will continue to
evolve, leading to AI systems that are more capable of learning from data
and making sense of complex patterns.
AI in Healthcare:
⚫ AI will play a significant role in healthcare, aiding in diagnosis, drug
discovery, and personalized medicine. Telemedicine and remote health
monitoring will become more common.
AI in Autonomous Systems:
⚫ Autonomous vehicles, drones, and robots will become more prevalent,
transforming transportation, logistics, and manufacturing.
Natural Language Processing (NLP):
⚫ NLP will continue to improve, making AI systems better at understanding
and generating human language. This will impact customer service,
content generation, and more.
AI in Education:
⚫ AI will enhance personalized learning experiences, automate
administrative tasks, and provide intelligent tutoring.
AI in Climate and Environmental Research:
⚫ AI will be used to analyze vast amounts of environmental data, model
climate change, and optimize resource management for sustainability.
AI Ethics and Regulation:
⚫ The development of AI ethics and regulations will continue to address
issues like bias, privacy, accountability, and the responsible use of AI.
Explainable AI (XAI):
⚫ Efforts to make AI more transparent and interpretable will grow, allowing
users to understand how AI systems make decisions.
AI in Finance:
⚫ AI will play a role in risk assessment, fraud detection, algorithmic trading,
and personal finance management.
Quantum Computing and AI:
⚫ The intersection of quantum computing and AI is promising, with the
potential to solve complex problems that are currently intractable.
AI Assistants and Personalization:
⚫ AI-powered virtual assistants will become more integrated into daily life,
offering highly personalized recommendations and services.
AI for Creativity and Art:
⚫ AI-generated art, music, and literature will become more sophisticated,
blurring the line between human and machine creativity.
AI in Cybersecurity:
⚫ AI will be used for threat detection, anomaly detection, and proactive defense
against cyber attacks.
AI in Agriculture:
⚫ AI will optimize farming practices, crop management, and precision agriculture,
contributing to food security.
AI and Social Challenges:
⚫ AI will play a role in addressing social issues, such as poverty, inequality, and public
health, through data analysis and policy recommendations.
AI and Personal Wellness:
⚫ AI-driven health and wellness applications, including mental health support, will
continue to grow.
AI for Disaster Response:
⚫ AI will aid in disaster prediction, response coordination, and recovery efforts.
AI in Entertainment and Gaming:
⚫ AI will create more immersive and interactive gaming experiences and personalized
content recommendations in the entertainment industry.
Global Collaboration and Ethical Development:
⚫ Collaboration among countries and organizations will be essential to ensure that AI is
developed and used ethically and for the benefit of humanity.
AI COMPONENTS
Learning
Learning in AI occurs when machines or computer systems
memorize specific data or new material. Specifically,
advancements in deep machine learning now enable
enhancements in prescriptive and predictive analytics through
the use of operational data.
Machine learning can find hidden correlations in various data.
With this information, the network can create a predictive model
that is able to pinpoint future machine failures in manufacturing.
Machine learning may even predict when the failure will occur.
This can enable companies to know when and how many parts to
order.
AI Reasoning
AI uses the ability to make inferences when applying reasoning
based on commands it is given or other information at its
disposal. For example, virtual assistants will offer restaurant
recommendations based on the specific orders or questions it
receives.
The assistant will use reasoning to decide what restaurants to
suggest based on the questions it received and the nearest
location of various restaurants.
This type of reasoning involves drawing inferences. Inferences
include two categories: deductive and inductive reasoning.
Problem Solving
In the most basic terms, an AI's problem-solving ability is based
on the application and manipulation of data, as when solving for
an unknown x.
Alternatively, in more advanced applications, problem-solving
techniques in the context of AI can include the development of
efficient algorithms and performing root cause analysis with the goal
of discovering a desirable solution.
AI implements heuristics when solving problems by devising a
solution using trial and error techniques. Specific examples of
problem-solving in AI would include the use of predictive
technology in the area of online shopping.
When a shopper is looking for a product and doesn't know the exact
name of the product, AI can assist in dramatically reducing the
possibilities.
Perception
Perception is when different sense organs, whether real or
artificial, scan the environment. For example, AI scans the
environment through sense components such as temperature
sensors and cameras.
Autonomous driving is an example of how AI implements
perception: autonomous vehicles are able to perceive and
comprehend the environment around them, including traffic
lights, road lines, and weather conditions.
Other examples include a GPS system or smart speakers that
respond to human queries. After capturing elements of the
surrounding environment, a perceiver will analyze the different
objects, extract their features, and analyze the relationships
among them.
Processing Language
AI processes language in something as seemingly simple as
spell check and autocorrect.
Computer programs use neural networks to scan large
bodies of text for misspelled words and language
irregularities.
Another way AI uses language processing is when it weeds
out spam in email systems.
For example, spam filters flag specific messages as
spam when they see certain words or combinations of words.
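The keyword idea can be sketched in its simplest form (the trigger words and the two-hit cutoff are illustrative assumptions; real filters learn word weights from data):

```python
# Hedged sketch: flag a message as spam when it contains enough
# known trigger words. Both the word list and the threshold are
# illustrative.
SPAM_WORDS = {"winner", "free", "prize"}

def is_spam(message):
    words = set(message.lower().split())
    return len(words & SPAM_WORDS) >= 2   # two hits is our arbitrary cutoff

print(is_spam("You are a winner claim your free prize"))  # True
print(is_spam("Meeting moved to 3pm"))                    # False
```

A learned filter would replace the fixed set and threshold with per-word weights estimated from labeled mail.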
AI ARCHITECTURE
Artificial Intelligence (AI) architecture refers to the
structure and design of AI systems, which includes the
components, layers, and interactions that enable the AI
system to perform tasks, make decisions, and adapt to its
environment. AI architectures can vary significantly
based on the specific application and the AI approach
being used.
AI ARCHITECTURE FOR SELF DRIVING CARS
Thank You