AI Solutions

The document provides an overview of various concepts in artificial intelligence, including learning agents, search strategies, PEAS descriptions, quantifiers, categories of AI, and the workings of reinforcement learning. It also discusses the characteristics of medical diagnosis systems, goal-based agents, and compares propositional logic with first-order logic. Additionally, the document illustrates the application of AI in robotics and outlines properties of task environments.

ARTIFICIAL INTELLIGENCE

5 Marks Questions:
1.__Explain the learning agent with a suitable block diagram

A learning agent is a type of intelligent agent that can improve its performance over time by
learning from its experiences. Learning agents are at the heart of many AI systems, from
self-driving cars to recommendation engines. Here's a breakdown of what makes up a learning
agent:

1. Basic Structure of a Learning Agent


A learning agent typically has four main components:
1. Learning Element
o Responsible for making improvements by learning from experiences.
o Uses algorithms like supervised learning, reinforcement learning, or
unsupervised learning.
o It adjusts internal parameters based on feedback.
2. Performance Element
o Responsible for selecting external actions.
o Uses the knowledge acquired from the learning element to act in the
environment.
o For example, in a chess-playing AI, this would be the part that makes moves
during the game.
3. Critic
o Provides feedback to the learning element.
o Compares the agent’s actions to some standard or goal and signals how well
the agent is doing.
o This can be a reward/punishment system in reinforcement learning.
4. Problem Generator
o Suggests exploratory actions to help the agent learn more.
o Encourages the agent to try new things that might lead to better learning
outcomes in the long run.
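The four components can be wired together as a minimal agent loop. The class below is an illustrative toy sketch, with a made-up percept, outcome, and threshold update, not a standard architecture:

```python
# Illustrative sketch of the four learning-agent components in one loop.
# The percept, outcome, and threshold update are made-up placeholders.
class LearningAgent:
    def __init__(self):
        self.threshold = 0.5                # internal parameter the learner tunes

    def performance_element(self, percept):
        """Select an external action from the current percept."""
        return "act" if percept > self.threshold else "wait"

    def critic(self, action, outcome):
        """Compare the outcome against the goal; return a feedback signal."""
        return 1.0 if outcome == "success" else -1.0

    def learning_element(self, feedback):
        """Adjust internal parameters based on the critic's feedback."""
        self.threshold -= 0.1 * feedback    # success lowers the bar to act again

    def problem_generator(self):
        """Suggest an exploratory action to gather new experience."""
        return "explore"

agent = LearningAgent()
action = agent.performance_element(percept=0.7)
feedback = agent.critic(action, outcome="success")
agent.learning_element(feedback)
print(action, round(agent.threshold, 2))  # act 0.4
```

The performance element acts, the critic scores the result, and the learning element adjusts the parameter the performance element relies on, which is exactly the feedback cycle of the block diagram.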
2.__Difference between Uninformed Search and Informed Search

Uninformed Search                            | Informed Search
Does not use heuristics or extra knowledge   | Uses heuristics to guide the search
Searches blindly through the problem space   | Searches intelligently based on guidance
Usually slower and less efficient            | Usually faster and more efficient
Suitable when no extra info is available     | Suitable when heuristics are available
Explores more nodes                          | Explores fewer nodes
Costly in terms of time and memory           | More optimized in terms of resources
Examples: BFS, DFS, Uniform Cost Search      | Examples: A*, Greedy Best-First Search

3.__Give PEAS and state space description for “Automobile Driver Agent”

Performance Measures:
- Reach destination safely and quickly
- Follow traffic rules
- Minimize fuel consumption
- Ensure passenger comfort

Environment:
- Roads and highways
- Other vehicles (cars, bikes, trucks)
- Traffic signals and signs
- Pedestrians, cyclists
- Weather and road conditions

Actuators:
- Steering wheel
- Accelerator (throttle)
- Brakes
- Turn signals
- Gear shift

Sensors:
- Cameras (for lane detection, traffic signs)
- LIDAR/RADAR (for detecting other vehicles)
- GPS (for location and navigation)
- Speedometer
- Proximity sensors

The state space of an Automobile Driver Agent is the set of all possible situations or
configurations it can encounter. Each state includes:

 Location of the vehicle (coordinates on the map)


 Current speed and acceleration

 Lane position

 Distance to surrounding objects (vehicles, pedestrians, obstacles)

 Traffic light state (red/yellow/green)

 Road conditions (wet, dry, icy)

 Navigation goal (destination coordinates)

 Weather conditions (clear, rain, fog)

4.__Explain different quantifiers with examples.


Quantifiers are used in logic to state how many elements of a particular set or domain
satisfy a certain property or condition. Instead of writing separate statements for every
element, we use quantifiers to make generalized statements.

There are two primary types of quantifiers:

1. Universal Quantifier ( ∀ )

➤ Meaning:

"For all" or "For every"


This quantifier asserts that a property or condition is true for every element in a specific
domain.

➤ Symbol:

∀x means “for all x”

➤ Example:

Statement: ∀x (Human(x) → Mortal(x))


Read as: “For all x, if x is a human, then x is mortal.”

Interpretation: This means that every human is mortal.


It’s a generalized statement saying that being human implies being mortal, with no
exceptions.

2. Existential Quantifier ( ∃ )

➤ Meaning:

"There exists" or "There is at least one"


This quantifier states that there is at least one element in the domain for which the condition
is true.
➤ Symbol:

∃x means “there exists an x”

➤ Example:

Statement: ∃x (Student(x) ∧ Smart(x))


Read as: “There exists some x such that x is a student and x is smart.”

Interpretation: This means at least one student is smart.


We don’t know who, and there could be more than one—but at least one exists.
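Over a finite domain, both quantifiers can be checked mechanically. The sketch below models ∀ with Python's all() and ∃ with any(); the domain and predicate sets are made up for illustration:

```python
# Universal and existential quantifiers over a finite domain, modeled
# with all() and any(). The domain and predicates are illustrative.
domain = ["socrates", "plato", "rock"]
human = {"socrates", "plato"}
mortal = {"socrates", "plato", "rock"}

# ∀x (Human(x) → Mortal(x)): the implication P → Q is (not P) or Q
forall_holds = all((x not in human) or (x in mortal) for x in domain)

# ∃x (Human(x) ∧ Mortal(x)): at least one element satisfies both
exists_holds = any((x in human) and (x in mortal) for x in domain)

print(forall_holds)  # True: every human in the domain is mortal
print(exists_holds)  # True: at least one such element exists
```

Note how the universal case is encoded as an implication (vacuously true for non-humans like "rock"), while the existential case uses a conjunction, mirroring the two example statements above.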

5.__ Describe different categories of AI


1. Based on Functionality:

 Narrow AI (Weak AI): AI designed for specific tasks (e.g., voice assistants,
recommendation systems).

 General AI (Strong AI): A theoretical AI capable of performing any intellectual task
a human can do.

 Superintelligent AI: AI that surpasses human intelligence in all aspects.

2. Based on Capabilities:
 Reactive Machines: AI that responds to specific stimuli but cannot learn from past
experiences (e.g., Deep Blue).
 Limited Memory AI: AI that can use past data to make better decisions (e.g., self-
driving cars).

 Theory of Mind AI: AI that understands emotions, beliefs, and intentions (currently
under development).

 Self-Aware AI: Hypothetical AI with consciousness and self-awareness.

3. Based on Techniques:

 Symbolic AI (Classical AI): Uses explicit rules and logic for decision-making (e.g.,
expert systems).

 Machine Learning (ML): AI that learns from data to improve performance (e.g.,
recommendation algorithms).

 Deep Learning: A subset of ML using neural networks with many layers for complex
tasks (e.g., image recognition).

 Natural Language Processing (NLP): AI that understands and generates human
language (e.g., chatbots).

 Computer Vision: AI that interprets and makes decisions based on visual data (e.g.,
facial recognition).
4. Based on Application Area:

 Healthcare AI: AI used for disease diagnosis, personalized treatment, and medical
imaging.

 Finance AI: AI for fraud detection, risk management, and algorithmic trading.

 Retail AI: AI for inventory management, customer recommendations, and sales
forecasting.

 Autonomous Vehicles: AI systems for self-driving cars and drones.

 Entertainment and Media AI: AI for content recommendations, gaming, and
creative media generation.

5. Based on Learning Paradigms:

 Supervised Learning: AI learns from labeled data to make predictions.

 Unsupervised Learning: AI identifies patterns from unlabeled data.


 Semi-Supervised Learning: A mix of supervised and unsupervised learning using
both labeled and unlabeled data.
 Reinforcement Learning: AI learns through interactions with the environment and
feedback (rewards or penalties).

6.__Describe the characteristics of a medical diagnosis system using the PEAS properties

Performance Measure:
- Diagnostic accuracy
- Timely results
- Patient safety
- Cost efficiency

Environment:
- Patients
- Medical data & records
- Healthcare professionals
- Clinics/Hospitals

Actuators:
- Diagnostic reports
- Alerts for critical conditions
- Test recommendations
- Treatment suggestions

Sensors:
- Medical devices (e.g., ECG, BP monitor)
- Imaging systems
- Electronic health records
- Symptom input by patient or doctor
7.__Explain Goal based agent with diagram.

A goal-based agent is an intelligent system that acts to achieve a specific objective or goal.
Unlike simple reflex agents that respond only to current conditions, goal-based agents
consider the future consequences of their actions and make decisions that bring them closer
to a desired goal state.

Key Features:

 Goal-Oriented Behavior: The agent is provided with one or more goals, and it
evaluates actions based on whether they help achieve those goals.

 Decision-Making: It uses search and planning techniques to choose the best sequence
of actions that lead to the goal.

 Flexibility: It can adapt its behavior if the environment changes or if it encounters
obstacles.

Components:

1. Perception: The agent perceives its environment through sensors.

2. Knowledge Base: Stores information about the world and how actions affect it.
3. Goal Information: Specifies the desired outcome or target state.

4. Search/Planning Module: Determines the actions needed to reach the goal from the
current state.
5. Action Execution: Executes the chosen actions to move toward the goal.
8.__Compare and contrast propositional logic and first order logic

Propositional Logic                          | First Order Logic
Works with simple, indivisible statements or propositions. | Works with predicates, objects, variables, and functions.
Cannot express relationships between objects. | Can express complex relationships and properties of objects.
Does not use variables or quantifiers. | Uses variables and supports quantifiers like ∀ (for all) and ∃ (there exists).
Less expressive; suitable for simple facts and rule-based systems. | More expressive; suitable for complex reasoning and real-world knowledge representation.
Limited inference capabilities due to lack of structure. | Rich inference capabilities using logic and quantification.
Easier to implement and understand for basic logic problems. | More complex but powerful for AI, NLP, and knowledge-based systems.
Example: “It is raining” represented as P. | Example: “All humans are mortal” → ∀x(Human(x) → Mortal(x)).

9.__ Explain the concept of Conditional Order Planning


 Conditional Order Planning, also known as Contingent Planning or Plan with
Branching, is a type of planning where the planner takes into account uncertainty in
the environment or in the effects of actions.

 Unlike classical planning (which assumes a fully observable, deterministic world),
conditional planning handles incomplete knowledge and non-deterministic outcomes
by planning for multiple possible scenarios.

Core Idea

 The plan is not a single sequence of actions but a tree-like structure that includes
branches for different conditions.

 At certain points, the agent may perform sensing actions (to gather information) and
then choose a branch of the plan depending on the outcome.

 For example, if a robot doesn’t know whether a door is open or closed, it can plan:
→ If the door is open, go through it.
→ If the door is closed, first open it, then go through.

10.__Explain working of reinforcement learning


 Reinforcement Learning is a type of machine learning where an agent learns by
interacting with an environment to achieve a goal.

 Instead of being given labeled data or direct instructions, the agent learns through
trial and error, receiving rewards or penalties based on its actions.
Core Components of Reinforcement Learning

1. Agent: The learner or decision-maker.

2. Environment: Everything the agent interacts with.

3. State (S): A representation of the current situation of the environment.

4. Action (A): A set of possible moves the agent can make.

5. Reward (R): A numerical feedback signal indicating the value of the action taken.

6. Policy (π): A strategy the agent uses to decide which action to take in a given state.

7. Value Function (V): Measures how good a state (or state-action pair) is in terms of
expected future rewards.
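The components above fit together in a trial-and-error loop: the policy picks an action, the environment returns a reward and next state, and the value estimates are updated. Below is a minimal tabular Q-learning sketch; the corridor environment and all constants are illustrative assumptions, not from the original text:

```python
import random

# Minimal tabular Q-learning sketch on a toy corridor of states 0..4:
# actions move left (-1) or right (+1); reaching state 4 gives reward 1.
# The environment and constants are illustrative assumptions.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: return (next_state, reward) for an action."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
for _ in range(300):                                  # episodes of trial and error
    s = 0
    while s != GOAL:
        # Policy π: explore with probability EPSILON, else act greedily
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Update: move Q(s,a) toward r + γ · max over a' of Q(s',a')
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, moving right (toward the reward) has the higher value
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL)))  # True
```

The dictionary Q plays the role of the value function, the epsilon-greedy rule is the policy, and the update line is the reward-driven learning step; together they show how the agent improves without any labeled data.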

11.__Describe four categories of Artificial Intelligence


1. Reactive Machines

 Reactive Machines are the most basic form of AI. These systems can only react to
current situations based on pre-programmed rules.

 They do not store past experiences or use memory to influence future decisions.

 Example: IBM’s Deep Blue, the chess-playing computer, which analyzed possible
moves and counter-moves but had no memory or learning capability.

2. Limited Memory

 Limited Memory AI can learn from historical data and make decisions using past
experiences for a short period of time.

 Most modern AI systems fall into this category, including self-driving cars, which
observe and remember the speed of nearby vehicles, lane markings, and recent actions
to make decisions.

 These systems use techniques like machine learning and deep learning, and are
commonly used in image recognition, fraud detection, and recommendation
systems.
3. Theory of Mind

 Theory of Mind AI is still theoretical and refers to systems that can understand
human emotions, beliefs, intentions, and thought processes.
 This type of AI would be capable of social interaction and adapting based on what it
knows about other agents (including humans).
 Such AI would be essential for emotionally intelligent robots or advanced personal
assistants that can understand moods and respond appropriately.
4. Self-Aware AI

 Self-Aware AI is the most advanced and futuristic category. It refers to machines
that possess consciousness, self-awareness, and the ability to understand their own
existence.

 This type of AI would have its own desires, emotions, and understanding of the
world, much like humans.

 It currently does not exist, and its development raises deep philosophical and
ethical questions about the nature of consciousness and control over AI.

12.__Illustrate the application of AI in Robotics


1. Perception and Sensing

 AI enables robots to perceive their environment using various sensors like
cameras, LiDAR, ultrasonic, and infrared.

 For example, warehouse robots use AI-based vision systems to identify and pick
products from shelves with high precision.

2. Navigation and Path Planning

 AI is used in autonomous navigation, allowing robots to move from one location
to another without human intervention.

 Self-driving cars are an excellent example, where AI helps the vehicle understand
roads, signals, other vehicles, and pedestrians.

3. Decision Making and Problem Solving

 AI empowers robots to make intelligent decisions based on the current scenario,
often using reinforcement learning, rule-based systems, or probabilistic reasoning.
 For example, a service robot may decide whether to deliver a message, recharge
its battery, or avoid a crowded hallway based on real-time conditions.
4. Human-Robot Interaction (HRI)

 AI helps robots understand natural language, gestures, and even emotions,
enabling more natural interaction with humans.
 Robots like Pepper or Sophia use NLP (Natural Language Processing), speech
recognition, and emotion analysis to engage in meaningful conversations.
5. Learning and Adaptation

 AI enables robots to learn from experience, adapt to new tasks, and improve
performance over time.
 In manufacturing, collaborative robots (cobots) can learn to perform tasks by
watching a human perform them once or twice.

6. Automation and Efficiency in Industries

 AI-powered robots are used for automated assembly, welding, packaging, and
material handling in industries.

 For example, in automobile manufacturing, robots weld and assemble parts with
extreme precision.

10 Marks Questions:
1.__Explain various properties of task environment with suitable example
A task environment refers to everything an intelligent agent interacts with in order to
complete a task or achieve a goal. It includes the problem to be solved, the environment in
which the agent operates, and all the external factors that can influence the agent's behavior.

1. Fully or Partially Observable


This property describes how much of the environment the agent can perceive at any
given time. In a fully observable environment, the agent has complete access to all
relevant information needed to make a decision. On the other hand, in a partially
observable environment, the agent has limited visibility or access to information.
Example: A vacuum agent with only a local dirt sensor cannot tell whether there is
dirt in other squares.

2. Deterministic vs. Stochastic.
If the next state of the environment is completely determined by the current state and
the action executed by the agent, then we say the environment is deterministic;
otherwise it is stochastic.

3. Episodic vs. Sequential.

In an episodic task environment, the agent's experience is divided into atomic episodes;
each episode consists of the agent perceiving and then performing a single action.
Crucially, the next episode does not depend on the actions taken in previous episodes.
In an episodic environment, the choice of action in each episode depends only on the
episode itself. Many classification tasks are episodic.

Example: Chess and taxi driving are sequential; in both cases, short-term actions can
have long-term consequences.
4. Static vs. Dynamic.

A static environment remains unchanged while the agent is deciding what to do. The
environment only changes in response to the agent’s actions. Crossword puzzles are
static because the puzzle doesn’t change while a person is thinking about the next
word. A dynamic environment, on the other hand, can change on its own,
independent of the agent's actions.

For example, a self-driving car operates in a dynamic environment where other cars,
traffic signals, and pedestrians can change the situation at any time.

5. Discrete vs. Continuous.

The discrete/continuous distinction can be applied to the state of the environment, to
the way time is handled, and to the percepts and actions of the agent.

Example: A discrete-state environment such as a chess game has a finite number of
distinct states; chess also has a discrete set of percepts and actions.

6. Single Agent vs. Multi-Agent

In a single-agent environment, only one agent is working to achieve its goals. This
agent has complete control over its actions and decisions, and it doesn’t have to
interact with other agents to achieve its objectives.

Vacuum Cleaner Robot: A robot vacuum cleaner works alone to clean a room. It
navigates the space, detects dirt, and activates its cleaning mechanism without
interacting with any other agents.

2.__What is a Game-Playing Algorithm? Draw a game tree for the Tic-Tac-Toe
problem.
A game-playing algorithm is a type of algorithm used in AI to make decisions in games,
usually by evaluating potential future moves and selecting the best one. These algorithms
are typically used in environments where multiple players (agents) are involved, and the
game consists of a sequence of actions with well-defined rules.
The main goal of a game-playing algorithm is to maximize the player's chances of
winning while minimizing the chances of losing. The strategy depends on the type of
game, the information available, and the nature of the players (cooperative or
competitive).

Common Game Playing Algorithms:

1. Minimax Algorithm:
This is one of the most commonly used algorithms for two-player, zero-sum games
(like Tic-Tac-Toe, Chess, etc.). The Minimax algorithm evaluates moves by
assuming that the opponent will also play optimally to minimize the player's chances
of winning. It works by recursively simulating all possible moves, calculating their
outcomes, and choosing the move that leads to the best possible result.

o Maximizer: The AI player aims to maximize its score (or minimize the
opponent's).

o Minimizer: The opponent tries to minimize the AI player's score.

2. Alpha-Beta Pruning:
Alpha-beta pruning is an optimization technique for the minimax algorithm that
eliminates branches of the game tree that don’t need to be explored because they
cannot influence the final decision. This speeds up the search process.

3. Heuristic Evaluation:
In more complex games, like Chess, heuristics are used to evaluate non-terminal
states of the game, where a full evaluation is not possible within a reasonable time.
These heuristics give an approximation of the game state.
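As a minimal sketch of the Minimax algorithm described above, here it is applied to an explicit game tree given as nested lists; the tree shape and leaf utilities are illustrative, not tied to a specific game:

```python
# Generic minimax over an explicit game tree: inner nodes are lists of
# children, leaves are terminal utilities. Illustrative values only.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):    # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    # MAX picks the best child value; MIN picks the worst (for MAX)
    return max(values) if maximizing else min(values)

# Depth-2 tree: MAX moves first, then MIN chooses within each branch.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # 3
```

MIN secures values 3, 2, and 2 in the three branches, so an optimally playing MAX can guarantee at most 3, which is the value returned.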
3.__Illustrate Forward chaining and Backward chaining with a suitable example.
Forward Chaining (Data-Driven Reasoning)

1. Starts from known facts:


Forward chaining begins with a set of given facts or observations. These are the truths
the system already knows without any inference.

2. Applies inference rules:


The system scans all the rules and looks for the ones where the conditions (IF part)
match the current known facts. If a rule is applicable, its conclusion (THEN part) is
added to the knowledge base.

3. Repeats until goal is reached:


This process continues repeatedly, adding new facts as they are inferred, until the goal
is reached or no further inference can be made.

4. Example:
Suppose we have the following rules:

o Rule 1: If it rains, then the ground gets wet.

o Rule 2: If the ground is wet, then the grass is slippery.


Given the fact: "It rains."
The system applies Rule 1 to infer: "The ground gets wet", and then Rule 2 to
conclude: "The grass is slippery."
The system has successfully reached the goal using forward chaining.
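The rain example above can be sketched as a data-driven loop; the fact and rule names are illustrative string encodings:

```python
# Forward chaining over IF-THEN rules, using the rain example.
# Rules are (premises, conclusion) pairs; names are illustrative.
rules = [
    ({"it_rains"}, "ground_wet"),
    ({"ground_wet"}, "grass_slippery"),
]
facts = {"it_rains"}                # the known starting fact

changed = True
while changed:                      # repeat until no rule adds a new fact
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule: add the THEN part
            changed = True

print("grass_slippery" in facts)  # True
```

The loop fires Rule 1 first (adding "ground_wet"), then Rule 2, exactly mirroring the inference chain in the example.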

Backward Chaining (Goal-Driven Reasoning)

1. Starts from the goal:


In backward chaining, we begin with a goal or query that we want to prove or
disprove. This technique works in reverse compared to forward chaining.

2. Searches for supporting rules:


The system looks for rules where the conclusion matches the goal. It then checks
whether the conditions of those rules can be satisfied with existing facts.

3. Proves conditions recursively:


If the conditions of the rule aren’t already known, they become sub-goals, and the
system tries to prove them using other rules or known facts. This continues until
either the original goal is proven or no further inferences can be made.

4. Example:
Suppose the goal is: "Is the grass slippery?"

o The system finds Rule 2: If the ground is wet, then the grass is slippery.

o Now it tries to prove: "The ground is wet."


o It finds Rule 1: If it rains, then the ground gets wet.

o Now it tries to prove: "It rains" — which is a known fact.


The goal "grass is slippery" is proven by working backward from the goal
to the facts.
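The same rules can be queried goal-first; again the string encodings are illustrative:

```python
# Backward chaining: prove a goal by finding a rule whose conclusion
# matches it and recursively proving that rule's premises.
rules = [
    ({"it_rains"}, "ground_wet"),
    ({"ground_wet"}, "grass_slippery"),
]
facts = {"it_rains"}

def prove(goal):
    if goal in facts:                       # base case: a known fact
        return True
    for premises, conclusion in rules:
        # each premise becomes a sub-goal that must also be proven
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False

print(prove("grass_slippery"))  # True: reached via ground_wet, then it_rains
```

Note the direction of search is reversed from forward chaining: the recursion walks from "grass_slippery" back through "ground_wet" to the known fact "it_rains".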

4.__Explain Hill Climbing Algorithm and problems that occur in the hill
climbing algorithm. (2023, 5 Marks)
Hill Climbing is a heuristic search algorithm used in Artificial Intelligence for
mathematical optimization problems. It is an iterative algorithm that starts with an arbitrary
solution and keeps improving it by making small changes, choosing the move that increases
the value (or reduces the cost) the most.

Working of Hill Climbing Algorithm

1. Start with a current state:


Begin with an initial solution (state), which could be randomly selected or provided.

2. Evaluate neighbors:
From the current state, look at neighboring states (i.e., states that differ slightly from
the current one).

3. Move to better state:


If a neighbor has a better value (higher or lower based on the goal), move to that state.

4. Repeat the process:


Continue until there is no neighboring state that is better than the current one. The
algorithm stops here, considering it a solution.

5. Goal:
The goal is to maximize or minimize the objective function (called the evaluation or
fitness function).

Example Scenario

Imagine you're on a hill in a foggy environment and want to reach the highest point. You can
only see your immediate surroundings. You take a step in the direction where the land rises
most steeply. You repeat this process until you reach a point where no neighboring step leads
to a higher elevation.

Problems in Hill Climbing Algorithm

Although hill climbing is simple and efficient in many cases, it has several limitations due to
its greedy nature:

1. Local Maximum
 Problem: The algorithm may reach a peak that is higher than its neighbors but
lower than the global maximum.
 Effect: It gets stuck there, falsely considering it as the best possible solution.

2. Plateau

 Problem: A flat area where neighboring states have the same value as the current
state.

 Effect: The algorithm cannot decide in which direction to move and may wander
aimlessly or halt.

3. Ridges

 Problem: The optimal path may lie along a narrow ridge or diagonal path that
requires a sequence of moves, not just the best local move.

 Effect: The algorithm may fail to find the ridge path and get stuck.
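Both the greedy loop and the local-maximum trap can be seen in a small sketch; the objective function, step size, and starting points below are illustrative assumptions:

```python
# Steepest-ascent hill climbing on a 1-D objective with two peaks:
# a lower local maximum near x ≈ -1.35 and the global one near x ≈ 1.45.
# The function, step size, and start points are illustrative.
def f(x):
    return -(x ** 4) + 4 * (x ** 2) + x

def hill_climb(x, step=0.01):
    while True:
        best = max([x - step, x + step], key=f)   # best neighboring state
        if f(best) <= f(x):       # no neighbor improves: stop (maybe a local max)
            return x
        x = best

left = hill_climb(-2.0)    # climbs the left slope, gets trapped on the lower peak
right = hill_climb(2.0)    # climbs the right slope, reaches the higher peak
print(f(left) < f(right))  # True: the left run stopped at a local maximum
```

Starting point decides the outcome: the greedy rule never steps downhill, so the run started at -2.0 can never cross the valley to reach the global maximum, which is exactly the local-maximum problem described above.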

5.__What do you mean by Resolution? Also discuss the steps in Resolution.
Resolution is a rule of inference used in propositional logic and first-order predicate logic
to derive conclusions by refuting the negation of a query. It is a fundamental technique used
in automated theorem proving and logic-based AI systems.

The resolution method is based on contradiction: to prove a statement Q, we assume ¬Q
(not Q), add it to our knowledge base, and try to derive a contradiction. If a
contradiction is found, then the original statement Q must be true.
It is a sound and complete method for propositional logic — meaning it will only derive
valid conclusions and can derive all conclusions that are logically entailed.

Steps Involved in the Resolution Process

1. Convert all statements to clause form (CNF)

All the statements in the knowledge base and the negated query must be converted to
Conjunctive Normal Form (CNF). CNF is a conjunction (AND) of disjunctions (OR) of
literals.
Example: (A → B) becomes (¬A ∨ B)

2. Negate the query

To use proof by contradiction, negate the query you want to prove and add it to the
knowledge base.
Example: To prove Q, add ¬Q to the KB.

3. Apply resolution rule


Use the Resolution Rule repeatedly on pairs of clauses that contain complementary literals
(one positive and one negative) to derive new clauses.
For example, from A ∨ B and ¬B ∨ C, we can resolve on B to get A ∨ C.

4. Continue until:

 You derive an empty clause (⊥), which represents a contradiction — meaning the
original query is proven true.
 Or, no new information can be derived — meaning the query cannot be proven with
the given knowledge base.
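The four steps can be sketched for propositional clauses. The encoding below (frozensets of string literals, with "~" marking negation) is an illustrative choice; the KB encodes A → B as ¬A ∨ B plus the fact A, and we prove B by refuting ¬B:

```python
from itertools import combinations

# Propositional resolution by refutation. Clauses are frozensets of
# literals; "~p" is the negation of "p". Encoding is illustrative.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses on complementary literals."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def resolution_proves(kb, query):
    clauses = set(kb) | {frozenset({negate(query)})}   # step 2: add ¬Q
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):        # step 3: resolve pairs
            for r in resolve(c1, c2):
                if not r:            # empty clause ⊥: contradiction found
                    return True
                new.add(frozenset(r))
        if new <= clauses:           # step 4: nothing new, query unprovable
            return False
        clauses |= new

kb = [frozenset({"~A", "B"}), frozenset({"A"})]        # ¬A ∨ B, and A
print(resolution_proves(kb, "B"))  # True
```

The run resolves ¬A ∨ B with A to get B, which then clashes with the added ¬B to yield the empty clause, so B is proven by contradiction.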

6.__Explain Partial Order Planning with suitable example.


Partial-order planning (POP) is an artificial intelligence planning technique where actions in a
plan are partially ordered rather than arranged in a strict, linear sequence. Unlike total-order
planning, which fixes the complete order of actions from the beginning, POP follows the
principle of least commitment, meaning it delays decisions about the exact sequence of
actions until absolutely necessary. This makes the planning process more flexible and often
more efficient, especially in complex scenarios where many actions are independent of each
other.

For example, consider a task where the goal is to make a sandwich and pour a drink. The
available actions include spreading peanut butter, spreading jelly, putting the bread together
to make a sandwich, and pouring a drink into a cup. In a partial-order plan, the actions
“spread peanut butter” and “spread jelly” must both occur before “put bread together,”
because the sandwich requires both spreads. However, there is no required order between
spreading peanut butter and jelly—they can occur in any sequence. Additionally, the action of
pouring a drink is completely independent of making the sandwich, so it can occur at any
time in the plan, even in parallel with the sandwich-making process.

This approach to planning offers several advantages. It allows more natural handling of
parallel tasks, avoids unnecessary sequencing of unrelated actions, and scales better in
dynamic or uncertain environments. POP is particularly useful in domains where flexibility
and concurrency are important.
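The sandwich example can be sketched as a set of actions plus only the necessary ordering constraints; the action names and the simple linearization loop are illustrative:

```python
# The sandwich example as a partial-order plan: only the required
# ordering constraints are stated; any consistent total order (or
# parallel execution of unordered actions) is a valid linearization.
actions = ["spread_pb", "spread_jelly", "assemble_sandwich", "pour_drink"]
before = [("spread_pb", "assemble_sandwich"),
          ("spread_jelly", "assemble_sandwich")]   # the only constraints

def linearize(actions, before):
    """Pick one total order consistent with the partial order."""
    order, remaining = [], set(actions)
    while remaining:
        # an action is ready once all actions required before it are placed
        ready = [a for a in remaining
                 if all(x in order for x, y in before if y == a)]
        a = sorted(ready)[0]       # any ready action works; sort for determinism
        order.append(a)
        remaining.remove(a)
    return order

plan = linearize(actions, before)
print(plan.index("spread_pb") < plan.index("assemble_sandwich"))     # True
print(plan.index("spread_jelly") < plan.index("assemble_sandwich"))  # True
```

Because "pour_drink" appears in no constraint, the linearization is free to place it anywhere, which is the least-commitment flexibility the section describes.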

7.__Define Belief Network. Describe the steps of constructing a belief network with an example.
A Belief Network, also known as a Bayesian Network, is a graphical model that represents
probabilistic relationships among a set of variables. It is composed of:

 Nodes, which represent random variables.

 Directed edges, which represent conditional dependencies between the variables.


 Conditional probability tables (CPTs), which quantify the effect of the parent nodes
on a given node.

Belief networks are used in reasoning under uncertainty, diagnosis, prediction, and decision
making in AI systems.

Steps to Construct a Belief Network:

1. Identify the Variables: Determine all relevant variables in the domain. Each variable
becomes a node in the network.

2. Determine Dependencies: Identify direct dependencies between variables. If variable
A directly influences B, draw a directed edge from A to B.

3. Construct the Directed Acyclic Graph (DAG): Use the variables and dependencies
to build a graph where nodes represent variables and directed edges represent causal
or influential relationships. The graph must be acyclic.
4. Define the Conditional Probability Tables (CPTs): For each node, specify the
probability distribution given its parent nodes. If a node has no parents, specify its
prior probability.

5. Validate the Network: Ensure the structure and probabilities correctly represent the
problem domain and follow probability rules.

Example: Medical Diagnosis

Suppose we are building a belief network for diagnosing whether a person has a cold.
Step 1: Identify Variables

 Cold (Yes/No)

 Cough (Yes/No)

 Fever (Yes/No)

Step 2: Determine Dependencies

 A Cold can cause Cough and Fever, so:

o Cold → Cough
o Cold → Fever

Step 3: Construct the DAG

       Cold
      /    \
  Cough    Fever

Step 4: Define CPTs


 P(Cold) = {Yes: 0.2, No: 0.8}

 P(Cough | Cold):

o Cold = Yes: P(Cough=Yes) = 0.8, P(Cough=No) = 0.2

o Cold = No: P(Cough=Yes) = 0.1, P(Cough=No) = 0.9

 P(Fever | Cold):

o Cold = Yes: P(Fever=Yes) = 0.7, P(Fever=No) = 0.3

o Cold = No: P(Fever=Yes) = 0.05, P(Fever=No) = 0.95

Step 5: Validate

Check that all probabilities sum to 1 and the relationships match real-world knowledge.
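Given the CPTs above, the network's joint distribution factors as P(Cold) · P(Cough | Cold) · P(Fever | Cold), and queries follow by Bayes' rule. A small sketch, with Python dictionaries standing in for the CPTs:

```python
# CPTs from the cold network above, keyed by Cold's truth value.
p_cold = {True: 0.2, False: 0.8}
p_cough_given_cold = {True: 0.8, False: 0.1}   # P(Cough=Yes | Cold)
p_fever_given_cold = {True: 0.7, False: 0.05}  # P(Fever=Yes | Cold)

def joint(cold, cough, fever):
    """P(Cold, Cough, Fever) = P(Cold) · P(Cough|Cold) · P(Fever|Cold)."""
    pc = p_cough_given_cold[cold] if cough else 1 - p_cough_given_cold[cold]
    pf = p_fever_given_cold[cold] if fever else 1 - p_fever_given_cold[cold]
    return p_cold[cold] * pc * pf

# Diagnostic query: P(Cold=Yes | Cough=Yes, Fever=Yes) by Bayes' rule
num = joint(True, True, True)
den = num + joint(False, True, True)
print(round(num / den, 3))  # 0.966
```

Observing both symptoms raises the probability of a cold from the 0.2 prior to about 0.97, which is the kind of reasoning under uncertainty belief networks are built for.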

8.__ Explain different applications of AI in Healthcare, Retail and Banking.


AI in Healthcare:

1. Disease Diagnosis: AI systems can analyze medical images (like X-rays, MRIs, and
CT scans) to detect diseases such as cancer, pneumonia, and fractures with high
accuracy.

2. Predictive Analytics: Machine learning models predict patient outcomes, potential
complications, or the likelihood of disease based on historical and real-time data.

3. Personalized Treatment: AI helps customize treatment plans based on a patient’s
genetics, lifestyle, and response to past treatments.

4. Virtual Health Assistants: Chatbots and virtual nurses provide 24/7 patient support,
answer questions, remind patients to take medication, and schedule appointments.

5. Drug Discovery: AI accelerates drug development by predicting molecule
interactions and simulating how compounds behave.

AI in Retail:

1. Customer Personalization: AI analyzes browsing history, purchase patterns, and
preferences to offer personalized product recommendations and marketing.

2. Inventory Management: Predictive AI models help forecast demand and optimize
stock levels, reducing overstock or out-of-stock issues.

3. Chatbots and Virtual Shopping Assistants: Retailers use AI-powered bots to assist
customers in real time, improving the shopping experience.

4. Visual Search and Image Recognition: Customers can upload images to find similar
products using AI-driven image recognition.
5. Fraud Detection: AI monitors transactions to detect suspicious behavior, minimizing
losses from fraudulent activities.

AI in Banking:

1. Fraud Detection and Prevention: AI identifies unusual transaction patterns and flags
potentially fraudulent activities in real time.

2. Credit Scoring and Loan Approval: AI evaluates creditworthiness by analyzing a
wider range of data beyond traditional credit scores, leading to fairer lending
decisions.

3. Chatbots for Customer Service: Banks use AI-driven chatbots to handle routine
inquiries like balance checks, transaction details, and branch info.

4. Risk Management: AI helps in assessing financial risks by analyzing market trends,
customer behavior, and economic indicators.

5. Algorithmic Trading: AI systems make trading decisions in milliseconds by
analyzing vast amounts of financial data, increasing market efficiency and profit
potential.

9.__ Alpha Beta Pruning


Alpha-Beta Pruning is an optimization technique used in the Minimax Algorithm for
decision-making in two-player games like chess, tic-tac-toe, or checkers. The purpose of
alpha-beta pruning is to reduce the number of nodes that need to be evaluated in the game
tree, speeding up the decision-making process while maintaining the same optimal result as
the Minimax algorithm.

How Alpha-Beta Pruning Works:

Alpha-beta pruning works by eliminating branches in the search tree that cannot possibly
affect the final decision. It keeps track of two values during the search:

1. Alpha (α): The best value that the maximizing player can guarantee so far. It
represents the lower bound.

2. Beta (β): The best value that the minimizing player can guarantee so far. It represents
the upper bound.

 Maximizing Player (Max): Tries to maximize the score.

 Minimizing Player (Min): Tries to minimize the score.

When searching the tree, alpha and beta values are used to prune branches:
 Pruning occurs when a node’s value is worse than the current alpha or beta
value (i.e., if a node’s value cannot affect the result, there’s no need to explore it
further).

Steps of Alpha-Beta Pruning:

1. Initialization: Start with the initial values:

o Alpha is set to negative infinity (−∞).

o Beta is set to positive infinity (+∞).

2. Traverse the tree: Perform a standard minimax search but update the alpha and beta
values as you traverse.

3. Pruning Condition: During the tree traversal, at any point if:

o Alpha is greater than or equal to Beta (i.e., α ≥ β), stop exploring that branch
of the tree.

4. Propagation of Values: After evaluating the children nodes, propagate the values
upwards:

o For the maximizing player, the alpha value is updated to the maximum of the
current alpha and the value of the node.

o For the minimizing player, the beta value is updated to the minimum of the
current beta and the value of the node.
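The steps above can be sketched as a recursive function. The nested-list game tree below (integers are leaf evaluations) is a made-up example, not from the text:

```python
import math

# Alpha-beta sketch: lists are internal nodes, ints are leaf evaluations.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):              # leaf: return its static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)      # update Max's best guarantee
            if alpha >= beta:              # pruning condition (step 3: α ≥ β)
                break                      # remaining children cannot matter
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)            # update Min's best guarantee
        if alpha >= beta:
            break
    return value

tree = [[3, 5], [6, 9], [1, 2]]            # Max to move, three Min nodes below
print(alphabeta(tree, -math.inf, math.inf, True))  # 6; the [1, 2] branch is pruned early
```

The result (6) is identical to plain minimax; pruning only skips branches that cannot change it, such as the second leaf of the last Min node.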

10.__ Wumpus world Environment


1. What is the Wumpus World?

 The Wumpus World is a classic example of a knowledge-based agent environment


used to demonstrate logical reasoning and decision-making in AI.

 It is a grid-based world (typically 4×4) inhabited by the agent, a monster called the
Wumpus, deadly pits, and gold.

 The agent’s goal is to find the gold and exit safely, without falling into a pit or being
eaten by the Wumpus.

2. Elements of the Environment

 Agent: Starts at position (1,1), facing right. It can move forward, turn left/right, grab
objects, and shoot an arrow.

 Wumpus: A dangerous creature. If the agent enters a square with the Wumpus, it
dies—unless it has already killed it using an arrow.

 Pits: Deadly holes. Falling into a pit results in death.

 Gold: The object to be found and grabbed to win the game.


 Walls: Boundaries of the grid that the agent cannot pass through.

3. Percepts (Sensory Input)

 The agent perceives the environment using these limited, local clues:

o Stench: Perceived in squares adjacent to the Wumpus.

o Breeze: Perceived in squares adjacent to a pit.

o Glitter: Perceived in the square containing the gold.

o Bump: Felt when the agent hits a wall.

o Scream: Heard when the Wumpus is killed.

4. Actions Available to the Agent

 Move Forward

 Turn Left / Right

 Grab (to pick up gold)


 Shoot (to fire an arrow in the direction the agent is facing)

 Climb (to exit the cave if at the starting point)

5. Challenges in the Wumpus World

 The agent must reason under uncertainty, using inference and logic to deduce safe
moves from percepts.

 It doesn’t know the layout of the environment beforehand and has to build
knowledge step-by-step.

 Decisions must be made with partial information, making it a great testbed for
logical reasoning, planning, and decision-making in AI.
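As a small illustration of that step-by-step knowledge building (the grid layout and percept records here are hypothetical), a visited square with no breeze proves that all of its neighbors are pit-free:

```python
# Sketch of the agent's logical bookkeeping: no breeze at a visited square
# implies no pit in any adjacent square. Coordinates are (column, row).
def neighbors(x, y, size=4):
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 1 <= a <= size and 1 <= b <= size]

percepts = {(1, 1): {'breeze': False}, (2, 1): {'breeze': True}}  # observed so far

safe_from_pits = set()
for square, p in percepts.items():
    if not p['breeze']:               # no breeze => neighbors proven pit-free
        safe_from_pits.update(neighbors(*square))

print(sorted(safe_from_pits))  # [(1, 2), (2, 1)]
```

The breezy square (2,1) proves nothing definite on its own: one of its unvisited neighbors contains a pit, but the agent cannot yet tell which, which is exactly the partial-information reasoning the environment is designed to exercise.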

6. Importance in AI

 The Wumpus World is widely used to explain:

o Knowledge Representation

o First-order logic

o Inference and reasoning


o Planning and goal-based behavior

o Handling uncertainty
11.__What do you understand with informed and uninformed search
method? Explain with example.
Search methods are used in Artificial Intelligence to find solutions or paths in a problem
space. They are broadly classified into uninformed (blind) and informed (heuristic) search
methods based on whether they use additional knowledge about the goal.

1. Uninformed Search (Blind Search):


These search methods don’t use any extra information about the goal. They explore the
state space blindly using only what’s defined in the problem—like the start state, goal
state, and possible actions. Algorithms like Breadth-First Search (BFS) and Depth-
First Search (DFS) fall under this category.

Examples:

 Breadth-First Search (BFS): Explores all nodes at the current depth before going
deeper.

 Depth-First Search (DFS): Explores as deep as possible along each branch before
backtracking.

Example:

Imagine searching for a name in a phone book:

 Uninformed Search: You check each name from top to bottom without any clue —
time-consuming and inefficient.

2. Informed Search (Heuristic Search):

In contrast, informed search methods use heuristics, which are estimates or extra
knowledge that guide the search toward the goal more efficiently. This makes them faster
and smarter.

Examples:

 Greedy Best-First Search: Selects the node that appears to be closest to the goal
based on a heuristic.

 A* Search: Combines the cost to reach a node and estimated cost from that node to
the goal (f(n) = g(n) + h(n)).

Example:

In a GPS navigation system:

 Informed Search: It uses distance estimates or traffic info (heuristics) to find the
shortest or fastest path to your destination.
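As a sketch of the informed case, here is A* on a small invented graph (the edge costs and heuristic values are assumptions chosen so the heuristic is admissible):

```python
import heapq

# A* sketch: f(n) = g(n) + h(n), always expand the frontier node with lowest f.
graph = {'S': [('A', 1), ('B', 4)],
         'A': [('C', 2)],
         'B': [('C', 1)],
         'C': [('G', 3)],
         'G': []}
h = {'S': 5, 'A': 4, 'B': 3, 'C': 2, 'G': 0}   # heuristic estimates to goal G

def astar(start, goal):
    # Frontier entries are (f, g, node, path); heapq pops the lowest f first.
    frontier = [(h[start], 0, start, [start])]
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None, float('inf')

print(astar('S', 'G'))  # (['S', 'A', 'C', 'G'], 6); cheaper than S-B-C-G at cost 8
```

An uninformed search over the same graph would explore nodes in an order fixed by the data structure alone (queue for BFS, stack for DFS); the heuristic is what lets A* commit to the cheaper branch first.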
12.__What is planning in AI? Discuss partial order planning and
hierarchical planning in detail
Planning in Artificial Intelligence refers to the process where an intelligent agent determines
a sequence of actions to achieve a specific goal. It starts from a known initial state, aims to
reach a goal state, and figures out the necessary steps in between.

There are two major types of planning in AI:

1. Partial Order Planning (POP)


 Partial Order Planning is a technique where the planner does not fix the complete
order of actions from the start. Instead, it only defines the necessary order between
certain actions, allowing others to remain unordered unless required.

 In POP, plans are represented using a partial ordering of actions. This increases
flexibility and allows parallel execution where possible. Only those actions that have
logical dependencies are ordered.

 A partial order plan includes components like steps (actions), ordering constraints
(which action must come before which), causal links (action A provides a condition
needed by action B), and open preconditions (conditions that still need to be
established).
 The planning process starts with a minimal plan containing only a “start” and “finish”
action. Actions are then added incrementally to satisfy open preconditions, and threats
(actions that can undo the effects of others) are identified and resolved.

 The main advantage of POP is that it supports concurrency and produces more
flexible plans, making it ideal for multi-agent or uncertain environments.

2. Hierarchical Planning (HTN – Hierarchical Task Network)

 Hierarchical Planning focuses on breaking down high-level tasks into smaller, more
manageable subtasks until each can be executed directly by the agent. This technique
mimics how humans typically plan: starting with broad goals and refining them into
detailed steps.

 The central concept in hierarchical planning is the use of methods that describe how
to decompose a complex task. These methods help convert abstract goals into
concrete actions.

 The planner works by selecting a top-level task and applying suitable decomposition
methods. This continues recursively until all tasks are reduced to primitive actions,
which can then be performed by the agent.

 A key advantage of hierarchical planning is modularity. Once a task is decomposed
using a method, the same method can be reused in different contexts. This makes it
highly efficient for planning in large, complex domains.
 Hierarchical planning is particularly useful in structured environments like business
workflows, game strategy generation, and automated assistants, where tasks naturally
fall into subtasks.

13.__Explain the concept of Genetic Programming


 Genetic Programming is an evolutionary algorithm-based technique in Artificial
Intelligence where computer programs are evolved to solve problems automatically. It
is a specialization of Genetic Algorithms (GA), but instead of optimizing numeric
values or fixed-length strings, it evolves entire programs or expressions.

How Does Genetic Programming Work?

 The process of genetic programming mimics natural selection and biological
evolution. A population of randomly generated computer programs is created. These
programs are typically represented as tree structures, where nodes are functions
(e.g., +, -, *, if) and leaves are inputs (variables or constants).

 Each program is evaluated using a fitness function that measures how well it solves
the given problem. Based on fitness, the best-performing programs are selected to
reproduce.

 Through operations like crossover (recombination) and mutation, new programs
(offspring) are created. Crossover exchanges parts between two parent programs,
while mutation makes random changes to a program.

 Over multiple generations, the population evolves, and ideally, better and better
programs emerge. The process continues until a program with acceptable performance
is found or a stopping condition (like number of generations) is met.

Key Components of Genetic Programming

1. Initial Population: Randomly generated programs or solutions.

2. Fitness Function: Evaluates how close a program is to solving the problem.


3. Selection: Picks better-performing programs for reproduction.

4. Crossover: Swaps subtrees between two programs to create offspring.

5. Mutation: Randomly changes parts of a program to maintain diversity.

6. Termination Condition: Stops when a suitable program is found or after a fixed
number of generations.
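These components can be sketched in a toy program. This version is mutation-only (crossover is omitted for brevity), evolves expression trees toward the target x² + x, and every hyperparameter is an arbitrary illustration choice:

```python
import random

# Toy genetic-programming sketch: trees are tuples (op, left, right),
# leaves are 'x' or a constant. Mutation-only; no crossover.
random.seed(0)
FUNCS = ['+', '-', '*']

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 1.0])       # leaf: variable or constant
    return (random.choice(FUNCS), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, float):
        return t
    op, left, right = t
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def fitness(t):
    # Lower is better: squared error against the target function x*x + x.
    return sum((evaluate(t, x) - (x * x + x)) ** 2 for x in range(-3, 4))

def mutate(t):
    # Walk into the tree with some probability, else graft a fresh subtree.
    if isinstance(t, tuple) and random.random() < 0.7:
        op, left, right = t
        return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))
    return rand_tree(2)

pop = [rand_tree() for _ in range(50)]
init_best = min(pop, key=fitness)
for gen in range(30):
    pop.sort(key=fitness)                              # selection: rank by fitness
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
best = min(pop, key=fitness)
print(fitness(best) <= fitness(init_best))             # True: elitism never loses the best program
```

Because the ten fittest programs are copied unchanged into each new generation (elitism), the best fitness can only improve or stay equal across generations.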

14.__ Applications of AI
1. Healthcare
 AI is revolutionizing healthcare by enabling early disease detection, diagnosis, and
treatment planning. AI-powered systems can analyze medical images (like X-rays,
MRIs), predict patient outcomes, and personalize treatments.

 Virtual health assistants and chatbots help in patient interaction, while AI models
can also assist in drug discovery and monitoring chronic diseases like diabetes and
heart conditions.

2. Automotive Industry

 AI is the backbone of autonomous vehicles (self-driving cars). It helps in lane
detection, traffic sign recognition, obstacle avoidance, and decision-making using
sensors and computer vision.
 AI is also used in driver-assistance systems like lane-keeping, adaptive cruise
control, and automatic braking, enhancing both safety and comfort.
3. Finance

 In the financial sector, AI is used for fraud detection, credit scoring, algorithmic
trading, and risk management. Machine learning models analyze spending patterns
and detect anomalies to prevent fraud.

 AI chatbots and robo-advisors are increasingly helping users manage investments and
savings efficiently.

4. Education
 AI supports personalized learning by adapting educational content to the pace and
style of each student. Intelligent tutoring systems provide instant feedback and
customized learning paths.

 It also assists in automated grading, student performance prediction, and
administrative tasks, freeing up time for teachers.

5. E-commerce

 AI enhances online shopping by powering recommendation engines that suggest
products based on user behavior and preferences.

 It also improves customer support through chatbots, automates inventory
management, and provides visual search capabilities.

6. Agriculture

 AI is helping farmers through smart irrigation systems, crop health monitoring,
and yield prediction.
 AI drones and robots are being used to scan fields, detect diseases early, and optimize
pesticide use, leading to better crop management and productivity.

7. Manufacturing and Robotics

 In manufacturing, AI enables predictive maintenance, quality control, and process
automation.

 Industrial robots powered by AI are used for repetitive tasks such as assembly,
packaging, and welding with high precision.

8. Cybersecurity

 AI systems are employed to detect cyber threats, phishing attempts, and network
intrusions in real time.

 These systems continuously learn from new threats and adapt security protocols
dynamically to protect sensitive data.

9. Entertainment and Media

 AI personalizes content recommendations on platforms like Netflix, YouTube, and
Spotify.

 It is also used in game development, automated video editing, deepfake creation,
and music composition, expanding creativity with technology.

10. Natural Language Processing (NLP)

 AI is widely used in language translation, sentiment analysis, text summarization,
voice assistants (like Siri, Alexa), and chatbots.

 It enables smoother human-computer interaction through speech recognition and text
understanding.

15.__Short note on Simulated annealing.


 Simulated Annealing (SA) is a probabilistic optimization algorithm inspired by
the process of annealing in metallurgy, where a material is heated and then slowly
cooled to remove defects and reach a stable crystalline structure.

 In AI, it is used to find an approximate global optimum in a large search space,
especially when the problem has many local optima that can trap other algorithms
like hill climbing.

Key Concept Behind Simulated Annealing


 The idea is to explore the solution space like hill climbing, but with a twist: the
algorithm is allowed to accept worse solutions with a certain probability.

 This probability of accepting a worse solution helps the algorithm escape local optima
and is controlled by a parameter called temperature (T), which decreases over time.
 Initially, the algorithm behaves randomly (high temperature), but as the temperature
decreases, it becomes more selective and behaves like a greedy algorithm.

How It Works (Algorithm Steps)

1. Start with an initial solution and an initial temperature.

2. Repeat until the system is "frozen" (temperature is very low):

o Generate a neighboring solution by making a small change to the current one.

o Calculate the change in cost or energy (ΔE = new_cost − current_cost).

o If the new solution is better, accept it.

o If it's worse, accept it with a probability P = e^(−ΔE/T).

o Lower the temperature according to a cooling schedule (e.g., T = T × α,
where α < 1).
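The loop above can be sketched for a simple one-dimensional cost function. The start point, step size, and cooling rate below are illustrative choices, not prescribed values:

```python
import math
import random

# Simulated annealing sketch: minimize f(x) = (x - 3)^2 from a distant start.
random.seed(42)

def f(x):
    return (x - 3) ** 2

x, T, alpha = -10.0, 10.0, 0.99
while T > 1e-3:                                # "frozen" once T is very low
    candidate = x + random.uniform(-1, 1)      # neighboring solution
    dE = f(candidate) - f(x)                   # change in cost (ΔE)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = candidate                          # accept better; worse with prob. e^(-ΔE/T)
    T *= alpha                                 # cooling schedule: T = T × α
print(round(x, 2))                             # settles near the optimum x = 3
```

Early on, T is large and e^(−ΔE/T) ≈ 1, so almost any move is accepted (random exploration); by the final iterations the loop is effectively greedy hill climbing.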

16.__Explain Depth-Limited Search and Depth-First Iterative Deepening Search.
1. Depth-Limited Search (DLS)

 Depth-Limited Search is a variation of Depth-First Search (DFS) where the search
is limited to a specific depth ‘l’ in the search tree.

 This means the algorithm does not go beyond a certain level, which helps prevent
infinite loops in graphs or trees with infinite depth.

 It is useful when we know the approximate depth of the goal state in advance.

 However, if the limit is too low, the algorithm may miss the solution
(incompleteness); if the limit is too high, it behaves like DFS.

 Time Complexity: O(b^l), where b is the branching factor and l is the depth limit.
Space Complexity: O(l), since it behaves like DFS.

 Drawback: May cut off the solution if it lies just beyond the depth limit.

2. Depth-First Iterative Deepening Search (DFID or IDDFS)


 Depth-First Iterative Deepening Search combines the benefits of DFS and BFS
(Breadth-First Search).
 It performs repeated depth-limited searches, starting from depth 0 and
incrementally increasing the depth limit by 1 until the goal is found.
 At each level, it restarts the search from the root, performing DFS up to the current
depth limit.

 This method is both complete and optimal (if cost is a function of depth) and uses
less memory compared to BFS.

 Time Complexity: O(b^d), where d is the depth of the goal.
Space Complexity: O(d), much better than BFS’s O(b^d).

 Although it may seem inefficient due to repetition, the nodes at shallow depths
dominate the total number of nodes, so it’s quite efficient in practice.
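The two strategies can be sketched together on a small hand-built tree (the tree itself is invented for illustration):

```python
# DLS and IDDFS sketch on an invented tree, stored as a dict of children lists.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
        'D': [], 'E': [], 'F': ['G'], 'G': []}

def dls(node, goal, limit):
    """Depth-limited DFS: True if the goal lies within `limit` steps of node."""
    if node == goal:
        return True
    if limit == 0:
        return False        # cutoff: do not expand below the depth limit
    return any(dls(child, goal, limit - 1) for child in tree[node])

def iddfs(start, goal, max_depth=10):
    """Repeated DLS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        if dls(start, goal, limit):
            return limit     # depth at which the goal was found
    return None

print(dls('A', 'G', 2))   # False: G lies at depth 3, beyond the limit of 2
print(iddfs('A', 'G'))    # 3: found once the limit reaches the goal's depth
```

The first call shows the DLS drawback from the text (the solution sits just past the limit); iterative deepening fixes it by raising the limit until the goal appears.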

17.__Language Models in Natural Language Processing


1. What is a Language Model in NLP?

 A Language Model (LM) is a fundamental component in Natural Language
Processing that estimates the probability of a sequence of words.

 It helps machines understand, predict, and generate human language by modeling
how likely a word or phrase is to occur given the context.

 For example, given the phrase "I am going to the", a good language model will
predict the next word like "store" or "park" rather than "banana" (depending on
context).

2. Types of Language Models


a. Statistical Language Models

 These are based on probability and statistics, using techniques like:

o Unigram, Bigram, Trigram models (based on the number of words
considered)

o N-gram models: Predict the next word based on the last n-1 words.

 Example: In a bigram model, the probability of a sentence is computed as the product
of conditional probabilities of each word given the previous one.
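A minimal bigram sketch, counted over a toy corpus (the sentences are invented for illustration):

```python
from collections import Counter

# Maximum-likelihood bigram model: P(w | prev) = count(prev, w) / count(prev).
corpus = "i am going to the store . i am going to the park .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))   # adjacent word pairs

def p_bigram(w, prev):
    return bigrams[(prev, w)] / unigrams[prev]

print(p_bigram("store", "the"))   # 0.5: "the" is followed by store/park equally often
print(p_bigram("going", "am"))    # 1.0: in this corpus "am" is always followed by "going"
```

This is exactly the statistical family described above; neural models replace these raw counts with learned parameters so they can generalize to word sequences never seen in training.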

b. Neural Language Models

 Use neural networks to model language. These include:

o Feedforward Neural Networks

o Recurrent Neural Networks (RNNs) and LSTM

o Transformers (e.g., GPT, BERT)


 These models learn word representations (embeddings) and capture long-range
dependencies in text better than statistical models.

3. Applications of Language Models


 Speech recognition – Understanding spoken words.

 Machine translation – Translating text from one language to another.

 Text generation – Writing articles, stories, or even code (like GPT models).

 Autocomplete and spell correction – Predicting and correcting user input.

 Chatbots and virtual assistants – Understanding and responding to queries.

4. Key Concepts in Language Models

 Contextual understanding: Advanced models like GPT-4 use large contexts to
understand and generate coherent language.

 Tokenization: Breaking down text into tokens (words, subwords, characters) to be
processed by the model.

 Training data: Models learn from huge datasets to predict and generate language.

5. Modern Language Models (Examples)

 GPT (Generative Pre-trained Transformer) – Used for text generation and
conversation.

 BERT (Bidirectional Encoder Representations from Transformers) – Best for
understanding and classification tasks.

 T5, XLNet, RoBERTa – Other transformer-based models tailored for specific NLP
tasks.

18.__Differentiation
    Forward Chaining                                   Backward Chaining
1.  Data-driven approach                               Goal-driven approach
2.  Starts from known facts                            Starts from the goal or hypothesis
3.  Moves from facts to conclusions                    Moves from goal to supporting facts
4.  Widely used in expert systems                      Used in theorem proving and diagnosis
5.  Applies inference rules to known facts             Tries to prove the goal using rules and facts
6.  Explores all possible conclusions                  Focuses only on what is needed to prove the goal
7.  Can be inefficient due to wide search              More targeted and efficient
8.  Useful when all facts are known                    Useful when a specific goal is given
9.  Example: medical system suggesting diseases        Example: checking if a patient has a disease
10. Adds new facts to the knowledge base continuously  Works backward by breaking the goal into subgoals
11. More suited for data-collection systems            More suited for diagnostic systems
12. Continues until goal or no rule applies            Continues until facts prove or disprove the goal
13. Breadth-first in nature                            Depth-first in nature
14. Often used in production systems                   Often used in logic programming
15. Example rule: If A and B → C, and A and B are      Example rule: To prove C, check whether A and B
    known, deduce C                                    are true (C ← A ∧ B)
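The forward-chaining example rule (if A and B are known, deduce C) runs as a simple fixed-point loop; the rule representation here is my own:

```python
# Forward chaining sketch: fire every rule whose premises are all known facts,
# add its conclusion, and repeat until nothing new can be derived.
rules = [({'A', 'B'}, 'C'),    # A and B -> C
         ({'C'}, 'D')]         # C -> D
facts = {'A', 'B'}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)          # data-driven: facts produce new facts
            changed = True

print(sorted(facts))  # ['A', 'B', 'C', 'D']
```

Backward chaining would run the same rules in reverse: to prove D it would ask for C, and to prove C it would ask for A and B, touching only the rules relevant to the goal.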
