AI Solutions
5 MARKS QUESTIONS:
1.__Explain the learning agent with suitable block diagram
A learning agent is a type of intelligent agent that can improve its performance over time by learning from its experiences. Learning agents are at the heart of many AI systems, from self-driving cars to recommendation engines. A learning agent has four main components:
1. Performance Element: The part that perceives the environment and selects external actions (equivalent to a complete non-learning agent).
2. Critic: Observes the results of the agent's actions and, using a fixed performance standard, gives feedback on how well the agent is doing.
3. Learning Element: Uses the critic's feedback to make improvements to the performance element.
4. Problem Generator: Suggests exploratory actions that lead to new and informative experiences.
Block diagram (in outline):
Performance Standard → Critic → (feedback) → Learning Element → (changes) → Performance Element → (actions) → Environment
Environment → (percepts) → Performance Element and Critic
Learning Element → (learning goals) → Problem Generator → (suggestions) → Performance Element
3.__Give PEAS and state space description for “Automobile Driver Agent”
PEAS description for the Automobile Driver Agent:
Performance Measures:
- Reach destination safely and quickly
- Follow traffic rules
- Minimize fuel consumption
- Ensure passenger comfort
Environment:
- Roads and highways
- Other vehicles (cars, bikes, trucks)
- Traffic signals and signs
- Pedestrians, cyclists
- Weather and road conditions
Actuators:
- Steering wheel
- Accelerator (throttle)
- Brakes
- Turn signals
- Gear shift
Sensors:
- Cameras (for lane detection, traffic signs)
- LIDAR/RADAR (for detecting other vehicles)
- GPS (for location and navigation)
- Speedometer
- Proximity sensors
The state space of an Automobile Driver Agent is the set of all possible situations or
configurations it can encounter. Each state includes:
- Lane position
- Vehicle speed and heading
- Current location (e.g., GPS coordinates)
- Positions and speeds of nearby vehicles
- Status of traffic signals and road conditions
1. Universal Quantifier ( ∀ )
➤ Meaning: The statement is true for every object in the domain ("for all").
➤ Symbol: ∀
➤ Example: ∀x Student(x) → Studies(x), i.e., "Every student studies."
2. Existential Quantifier ( ∃ )
➤ Meaning: The statement is true for at least one object in the domain ("there exists").
➤ Symbol: ∃
➤ Example: ∃x Student(x) ∧ Plays(x, Cricket), i.e., "At least one student plays cricket."
1. Based on Capabilities:
Narrow AI (Weak AI): AI designed for specific tasks (e.g., voice assistants, recommendation systems).
General AI (Strong AI): Hypothetical AI with human-level ability across a wide range of tasks.
2. Based on Functionality:
Reactive Machines: AI that responds to specific stimuli but cannot learn from past
experiences (e.g., Deep Blue).
Limited Memory AI: AI that can use past data to make better decisions (e.g., self-driving cars).
Theory of Mind AI: AI that understands emotions, beliefs, and intentions (currently
under development).
3. Based on Techniques:
Symbolic AI (Classical AI): Uses explicit rules and logic for decision-making (e.g.,
expert systems).
Machine Learning (ML): AI that learns from data to improve performance (e.g.,
recommendation algorithms).
Deep Learning: A subset of ML using neural networks with many layers for complex
tasks (e.g., image recognition).
Computer Vision: AI that interprets and makes decisions based on visual data (e.g.,
facial recognition).
4. Based on Application Area:
Healthcare AI: AI used for disease diagnosis, personalized treatment, and medical
imaging.
Finance AI: AI for fraud detection, risk management, and algorithmic trading.
A goal-based agent is an intelligent system that acts to achieve a specific objective or goal.
Unlike simple reflex agents that respond only to current conditions, goal-based agents
consider the future consequences of their actions and make decisions that bring them closer
to a desired goal state.
Key Features:
Goal-Oriented Behavior: The agent is provided with one or more goals, and it
evaluates actions based on whether they help achieve those goals.
Decision-Making: It uses search and planning techniques to choose the best sequence
of actions that lead to the goal.
Components:
1. Perception: Observes the current state of the environment through sensors.
2. Knowledge Base: Stores information about the world and how actions affect it.
3. Goal Information: Specifies the desired outcome or target state.
4. Search/Planning Module: Determines the actions needed to reach the goal from the current state.
5. Action Execution: Executes the chosen actions to move toward the goal.
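A minimal sketch of how the search/planning module can pick an action sequence, using breadth-first search over an assumed toy state space (the "number world" problem below is purely illustrative):

    from collections import deque

    def plan(start, goal_test, actions, result):
        # Search/Planning Module: breadth-first search for an action
        # sequence that turns the current state into a goal state.
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, seq = frontier.popleft()
            if goal_test(state):
                return seq  # Action Execution would then carry out this sequence
            for a in actions(state):
                nxt = result(state, a)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, seq + [a]))
        return None

    # Toy world: states are integers, actions add 1 or double, goal is 9.
    print(plan(1, lambda s: s == 9,
               lambda s: ['+1', '*2'],
               lambda s, a: s + 1 if a == '+1' else s * 2))  # ['+1', '*2', '*2', '+1']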
8.__Compare and contrast propositional logic and first order logic
Core Idea of Conditional (Contingency) Planning
The plan is not a single sequence of actions but a tree-like structure that includes
branches for different conditions.
At certain points, the agent may perform sensing actions (to gather information) and
then choose a branch of the plan depending on the outcome.
For example, if a robot doesn’t know whether a door is open or closed, it can plan:
→ If the door is open, go through it.
→ If the door is closed, first open it, then go through.
Reinforcement Learning (RL) is a learning paradigm in which, instead of being given labeled data or direct instructions, the agent learns through trial and error, receiving rewards or penalties based on its actions.
Core Components of Reinforcement Learning
1. Agent: The learner and decision-maker that interacts with the environment.
2. Environment: Everything the agent interacts with and receives feedback from.
3. State (S): A representation of the current situation of the environment.
4. Action (A): A choice the agent can make in a given state.
5. Reward (R): A numerical feedback signal indicating the value of the action taken.
6. Policy (π): A strategy the agent uses to decide which action to take in a given state.
7. Value Function (V): Measures how good a state (or state-action pair) is in terms of expected future rewards.
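How these components fit together can be shown with the Q-learning update rule; the following is a minimal tabular sketch (the two-state "chain" environment is an assumed toy example, not a standard benchmark):

    import random

    Q = {}  # value table: maps (state, action) to estimated future reward
    alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

    def choose_action(state, actions):
        # Policy (pi): epsilon-greedy over current value estimates.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q.get((state, a), 0.0))

    def update(state, action, reward, next_state, actions):
        # Move Q(s, a) toward reward + gamma * (best estimated future value).
        best_next = max(Q.get((next_state, a), 0.0) for a in actions)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    # Toy environment: from state 's0', action 'right' earns reward 1.
    for _ in range(200):
        a = choose_action('s0', ['left', 'right'])
        reward, nxt = (1.0, 's1') if a == 'right' else (0.0, 's0')
        update('s0', a, reward, nxt, ['left', 'right'])
    print(Q[('s0', 'right')] > Q.get(('s0', 'left'), 0.0))  # True: 'right' is learned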
1. Reactive Machines
Reactive Machines are the most basic form of AI. These systems can only react to current situations based on pre-programmed rules.
They do not store past experiences or use memory to influence future decisions.
Example: IBM’s Deep Blue, the chess-playing computer, which analyzed possible
moves and counter-moves but had no memory or learning capability.
2. Limited Memory
Limited Memory AI can learn from historical data and make decisions using past
experiences for a short period of time.
Most modern AI systems fall into this category, including self-driving cars, which
observe and remember the speed of nearby vehicles, lane markings, and recent actions
to make decisions.
These systems use techniques like machine learning and deep learning, and are
commonly used in image recognition, fraud detection, and recommendation
systems.
3. Theory of Mind
Theory of Mind AI is still theoretical and refers to systems that can understand
human emotions, beliefs, intentions, and thought processes.
This type of AI would be capable of social interaction and adapting based on what it
knows about other agents (including humans).
Such AI would be essential for emotionally intelligent robots or advanced personal
assistants that can understand moods and respond appropriately.
4. Self-Aware AI
This type of AI would have its own desires, emotions, and understanding of the
world, much like humans.
It currently does not exist, and its development raises deep philosophical and
ethical questions about the nature of consciousness and control over AI.
For example, warehouse robots use AI-based vision systems to identify and pick
products from shelves with high precision.
Self-driving cars are an excellent example, where AI helps the vehicle understand
roads, signals, other vehicles, and pedestrians.
AI enables robots to learn from experience, adapt to new tasks, and improve
performance over time.
In manufacturing, collaborative robots (cobots) can learn to perform tasks by
watching a human perform them once or twice.
AI-powered robots are used for automated assembly, welding, packaging, and
material handling in industries.
For example, in automobile manufacturing, robots weld and assemble parts with
extreme precision.
10 Marks Questions:
1.__Explain various properties of task environment with suitable example
A task environment refers to everything an intelligent agent interacts with in order to
complete a task or achieve a goal. It includes the problem to be solved, the environment in
which the agent operates, and all the external factors that can influence the agent's behavior.
1. Fully Observable Vs Partially Observable.
If an agent's sensors give it access to the complete state of the environment at each point in time, the environment is fully observable; otherwise it is partially observable. Example: chess is fully observable, while taxi driving is partially observable.
2. Deterministic Vs Stochastic.
If the next state of the environment is completely determined by the current state and
the action executed by the agent then we say the environment is deterministic
otherwise it is stochastic.
3. Episodic Vs Sequential.
In an episodic task environment, the agent's experience is divided into atomic episodes; each episode consists of the agent perceiving and then performing a single action. Crucially, the next episode does not depend on the actions taken in previous episodes, so the choice of action in each episode depends only on the episode itself. Many classification tasks are episodic.
Example: Chess and taxi driving are sequential; in both cases, short-term actions can have long-term consequences.
4. Static Vs Dynamic.
A static environment remains unchanged while the agent is deciding what to do. The
environment only changes in response to the agent’s actions. Crossword puzzles are
static because the puzzle doesn’t change while a person is thinking about the next
word. A dynamic environment, on the other hand, can change on its own,
independent of the agent's actions.
For example, a self-driving car operates in a dynamic environment where other cars,
traffic signals, and pedestrians can change the situation at any time.
5. Discrete Vs Continuous.
If the environment has a limited number of distinct, clearly defined states, percepts, and actions, it is discrete (e.g., chess); if these range over continuous values, it is continuous (e.g., taxi driving, where speed and position vary smoothly).
6. Single-Agent Vs Multi-Agent.
In a single-agent environment, only one agent is working to achieve its goals. This agent has complete control over its actions and decisions, and it doesn't have to interact with other agents to achieve its objectives.
Example: A robot vacuum cleaner works alone to clean a room. It navigates the space, detects dirt, and activates its cleaning mechanism without interacting with any other agents.
2.__What is a Game Playing Algorithm? Draw a game tree for the Tic-Tac-Toe problem.
A game-playing algorithm is a type of algorithm used in AI to make decisions in games,
usually by evaluating potential future moves and selecting the best one. These algorithms
are typically used in environments where multiple players (agents) are involved, and the
game consists of a sequence of actions with well-defined rules.
The main goal of a game-playing algorithm is to maximize the player's chances of
winning while minimizing the chances of losing. The strategy depends on the type of
game, the information available, and the nature of the players (cooperative or
competitive).
1. Minimax Algorithm:
This is one of the most commonly used algorithms for two-player, zero-sum games
(like Tic-Tac-Toe, Chess, etc.). The Minimax algorithm evaluates moves by
assuming that the opponent will also play optimally to minimize the player's chances
of winning. It works by recursively simulating all possible moves, calculating their
outcomes, and choosing the move that leads to the best possible result.
o Maximizer: The AI player aims to maximize its score (or minimize the opponent's).
o Minimizer: The opponent aims to minimize the AI player's score; minimax assumes it plays optimally.
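A minimal, complete minimax sketch for Tic-Tac-Toe (the board encoding is an assumed convention: a list of 9 cells holding 'X', 'O', or ' '):

    # 'X' is the maximizer, 'O' the minimizer; both are assumed to play optimally.
    LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        w = winner(board)
        if w == 'X':
            return 1   # maximizer wins
        if w == 'O':
            return -1  # minimizer wins
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            return 0   # draw
        values = []
        for i in moves:
            board[i] = player                      # try a move
            values.append(minimax(board, 'O' if player == 'X' else 'X'))
            board[i] = ' '                         # undo it
        return max(values) if player == 'X' else min(values)

    print(minimax([' '] * 9, 'X'))  # 0: perfect play from an empty board is a draw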
2. Alpha-Beta Pruning:
Alpha-beta pruning is an optimization technique for the minimax algorithm that
eliminates branches of the game tree that don’t need to be explored because they
cannot influence the final decision. This speeds up the search process.
3. Heuristic Evaluation:
In more complex games, like Chess, heuristics are used to evaluate non-terminal
states of the game, where a full evaluation is not possible within a reasonable time.
These heuristics give an approximation of the game state.
3.__Illustrate Forward chaining and Backward chaining with suitable example. *
Forward Chaining (Data-Driven Reasoning)
Forward chaining starts from known facts and repeatedly applies inference rules to derive new facts until the goal (or no new fact) is reached.
Example:
Suppose we have the following rules and fact:
Rule 1: If it rains, then the ground is wet.
Rule 2: If the ground is wet, then the grass is slippery.
Fact: It is raining.
Forward chaining fires Rule 1 to conclude "the ground is wet," and then Rule 2 to conclude "the grass is slippery."
Backward Chaining (Goal-Driven Reasoning)
Backward chaining starts from the goal and works backwards, looking for rules whose conclusions match it and then trying to establish their premises.
Example:
Suppose the goal is: "Is the grass slippery?"
o The system finds Rule 2: If the ground is wet, then the grass is slippery.
o It then checks the subgoal "Is the ground wet?" via Rule 1, which is satisfied by the fact "It is raining," so the goal is proven.
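A tiny code sketch of the same rain example, showing the data-driven loop (a minimal illustration, not a full inference engine):

    # Rules are (set of premises, conclusion); facts is the working memory.
    rules = [({"it_rains"}, "ground_wet"),        # Rule 1
             ({"ground_wet"}, "grass_slippery")]  # Rule 2
    facts = {"it_rains"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all premises are known and the conclusion is new.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # {'it_rains', 'ground_wet', 'grass_slippery'}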
4.__Explain Hill Climbing Algorithm and the problems that occur in the hill climbing algorithm? (2023, 5 Marks)
Hill Climbing is a heuristic search algorithm used in Artificial Intelligence for
mathematical optimization problems. It is an iterative algorithm that starts with an arbitrary
solution and keeps improving it by making small changes, choosing the move that increases
the value (or reduces the cost) the most.
Steps in Hill Climbing:
1. Start: Begin with an arbitrary initial state (a candidate solution).
2. Evaluate neighbors: From the current state, look at neighboring states (i.e., states that differ slightly from the current one).
3. Move: If the best neighbor is better than the current state, move to it.
4. Repeat: Continue steps 2-3 until no neighbor is better than the current state.
5. Goal: The goal is to maximize or minimize the objective function (called the evaluation or fitness function).
Example Scenario
Imagine you're on a hill in a foggy environment and want to reach the highest point. You can
only see your immediate surroundings. You take a step in the direction where the land rises
most steeply. You repeat this process until you reach a point where no neighboring step leads
to a higher elevation.
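The same idea in code, as a minimal sketch (the one-dimensional objective function is an assumed toy example):

    def hill_climb(f, x, step=0.1, max_iters=10000):
        # Greedy local search: move to the better neighbor; stop at a peak.
        for _ in range(max_iters):
            best = max([x - step, x + step], key=f)
            if f(best) <= f(x):
                return x  # no neighbor is better: a (possibly local) maximum
            x = best
        return x

    # Maximize f(x) = -(x - 3)^2; starting at 0, the climb ends near x = 3.
    print(hill_climb(lambda x: -(x - 3) ** 2, 0.0))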
Although hill climbing is simple and efficient in many cases, it has several limitations due to
its greedy nature:
1. Local Maximum
Problem: The algorithm may reach a peak that is higher than its neighbors but
lower than the global maximum.
Effect: It gets stuck there, falsely considering it as the best possible solution.
2. Plateau
Problem: A flat area where neighboring states have the same value as the current
state.
Effect: The algorithm cannot decide in which direction to move and may wander
aimlessly or halt.
3. Ridges
Problem: The optimal path may lie along a narrow ridge or diagonal path that
requires a sequence of moves, not just the best local move.
Effect: The algorithm may fail to find the ridge path and get stuck.
5.__What do you mean by Resolution? Also discuss the steps in Resolution.
Resolution is a rule of inference used in propositional logic and first-order predicate logic
to derive conclusions by refuting the negation of a query. It is a fundamental technique used
in automated theorem proving and logic-based AI systems.
Steps in Resolution:
1. Convert to CNF: All the statements in the knowledge base and the negated query must be converted to Conjunctive Normal Form (CNF). CNF is a conjunction (AND) of disjunctions (OR) of literals.
Example: A → B becomes ¬A ∨ B
2. Negate the query: To use proof by contradiction, negate the query you want to prove and add it to the knowledge base.
Example: To prove Q, add ¬Q to the KB.
3. Apply the resolution rule: Repeatedly pick two clauses that contain complementary literals (P in one, ¬P in the other) and resolve them, producing a new clause with the remaining literals.
4. Continue until:
You derive an empty clause (⊥), which represents a contradiction — meaning the
original query is proven true.
Or, no new information can be derived — meaning the query cannot be proven with
the given knowledge base.
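A short worked refutation following these steps: suppose the KB contains A → B and A, and the query is B.
1. Convert to CNF: A → B becomes ¬A ∨ B, so the clauses are {¬A ∨ B, A}.
2. Negate the query and add it: ¬B.
3. Resolve ¬A ∨ B with A on the complementary pair A / ¬A to get B; then resolve B with ¬B.
4. This yields the empty clause ⊥, a contradiction, so B is proven.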
For example, consider a task where the goal is to make a sandwich and pour a drink. The
available actions include spreading peanut butter, spreading jelly, putting the bread together
to make a sandwich, and pouring a drink into a cup. In a partial-order plan, the actions
“spread peanut butter” and “spread jelly” must both occur before “put bread together,”
because the sandwich requires both spreads. However, there is no required order between
spreading peanut butter and jelly—they can occur in any sequence. Additionally, the action of
pouring a drink is completely independent of making the sandwich, so it can occur at any
time in the plan, even in parallel with the sandwich-making process.
This approach to planning offers several advantages. It allows more natural handling of
parallel tasks, avoids unnecessary sequencing of unrelated actions, and scales better in
dynamic or uncertain environments. POP is particularly useful in domains where flexibility
and concurrency are important.
A belief network (also called a Bayesian network) is a probabilistic graphical model that represents random variables and their dependencies as a directed acyclic graph. Belief networks are used in reasoning under uncertainty, diagnosis, prediction, and decision making in AI systems.
Steps to construct a belief network:
1. Identify the Variables: Determine all relevant variables in the domain. Each variable becomes a node in the network.
2. Identify the Dependencies: Decide which variables directly influence which others (for example, a disease influences its symptoms).
3. Construct the Directed Acyclic Graph (DAG): Use the variables and dependencies
to build a graph where nodes represent variables and directed edges represent causal
or influential relationships. The graph must be acyclic.
4. Define the Conditional Probability Tables (CPTs): For each node, specify the
probability distribution given its parent nodes. If a node has no parents, specify its
prior probability.
5. Validate the Network: Ensure the structure and probabilities correctly represent the
problem domain and follow probability rules.
Suppose we are building a belief network for diagnosing whether a person has a cold.
Step 1: Identify Variables
Cold (Yes/No)
Cough (Yes/No)
Fever (Yes/No)
Step 2: Identify Dependencies
o Cold → Cough
o Cold → Fever
Step 3: Construct the DAG
Cold
/ \
Cough Fever
Step 4: Define the CPTs
P(Cough | Cold): the probability of a cough given that the person does or does not have a cold.
P(Fever | Cold): the probability of a fever given that the person does or does not have a cold.
Step 5: Validate
Check that all probabilities sum to 1 and the relationships match real-world knowledge.
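A small numeric sketch of how the CPTs are used in inference. The probability values below are illustrative assumptions (they are not given in the notes):

    # Assumed CPT values: P(Cold) and P(Cough | Cold).
    p_cold = 0.2
    p_cough_given = {True: 0.8, False: 0.1}  # keyed by whether Cold is present

    # Marginal: P(Cough) = sum over Cold of P(Cough | Cold) * P(Cold).
    p_cough = p_cough_given[True] * p_cold + p_cough_given[False] * (1 - p_cold)

    # Diagnosis via Bayes' rule: P(Cold | Cough) = P(Cough | Cold) * P(Cold) / P(Cough).
    p_cold_given_cough = p_cough_given[True] * p_cold / p_cough
    print(round(p_cough, 3), round(p_cold_given_cough, 3))  # 0.24 0.667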
AI in Healthcare:
1. Disease Diagnosis: AI systems can analyze medical images (like X-rays, MRIs, and
CT scans) to detect diseases such as cancer, pneumonia, and fractures with high
accuracy.
4. Virtual Health Assistants: Chatbots and virtual nurses provide 24/7 patient support,
answer questions, remind patients to take medication, and schedule appointments.
AI in Retail:
3. Chatbots and Virtual Shopping Assistants: Retailers use AI-powered bots to assist
customers in real time, improving the shopping experience.
4. Visual Search and Image Recognition: Customers can upload images to find similar
products using AI-driven image recognition.
5. Fraud Detection: AI monitors transactions to detect suspicious behavior, minimizing
losses from fraudulent activities.
AI in Banking:
1. Fraud Detection and Prevention: AI identifies unusual transaction patterns and flags
potentially fraudulent activities in real time.
3. Chatbots for Customer Service: Banks use AI-driven chatbots to handle routine
inquiries like balance checks, transaction details, and branch info.
Alpha-beta pruning works by eliminating branches in the search tree that cannot possibly
affect the final decision. It keeps track of two values during the search:
1. Alpha (α): The best value that the maximizing player can guarantee so far. It
represents the lower bound.
2. Beta (β): The best value that the minimizing player can guarantee so far. It represents
the upper bound.
When searching the tree, alpha and beta values are used to prune branches:
Pruning occurs when a node’s value is worse than the current alpha or beta
value (i.e., if a node’s value cannot affect the result, there’s no need to explore it
further).
2. Traverse the tree: Perform a standard minimax search but update the alpha and beta values as you traverse.
3. Prune: Whenever alpha becomes greater than or equal to beta (i.e., α ≥ β), stop exploring that branch of the tree.
4. Propagation of Values: After evaluating the children nodes, propagate the values
upwards:
o For the maximizing player, the alpha value is updated to the maximum of the
current alpha and the value of the node.
o For the minimizing player, the beta value is updated to the minimum of the
current beta and the value of the node.
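A sketch of these rules in code, reusing the Tic-Tac-Toe board and winner() helper from the minimax sketch earlier; only the alpha/beta bookkeeping and the pruning test are new:

    def alphabeta(board, player, alpha=float('-inf'), beta=float('inf')):
        w = winner(board)  # winner() and the board encoding as in the minimax sketch
        if w == 'X':
            return 1
        if w == 'O':
            return -1
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            return 0
        if player == 'X':                     # maximizer: raises alpha
            value = float('-inf')
            for i in moves:
                board[i] = 'X'
                value = max(value, alphabeta(board, 'O', alpha, beta))
                board[i] = ' '
                alpha = max(alpha, value)
                if alpha >= beta:
                    break                     # prune: branch cannot change the result
            return value
        else:                                 # minimizer: lowers beta
            value = float('inf')
            for i in moves:
                board[i] = 'O'
                value = min(value, alphabeta(board, 'X', alpha, beta))
                board[i] = ' '
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

    print(alphabeta([' '] * 9, 'X'))  # 0, same as plain minimax but far fewer nodes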
The Wumpus World is a grid-based world (typically 4×4) inhabited by the agent, a monster called the Wumpus, deadly pits, and gold.
The agent’s goal is to find the gold and exit safely, without falling into a pit or being
eaten by the Wumpus.
Agent: Starts at position (1,1), facing right. It can move forward, turn left/right, grab
objects, and shoot an arrow.
Wumpus: A dangerous creature. If the agent enters a square with the Wumpus, it
dies—unless it has already killed it using an arrow.
The agent perceives the environment using these limited, local clues:
- Stench: in squares adjacent to the Wumpus.
- Breeze: in squares adjacent to a pit.
- Glitter: in the square containing the gold.
- Bump: when the agent walks into a wall.
- Scream: heard anywhere when the Wumpus is killed by the arrow.
Its available actions include: Move Forward, Turn Left, Turn Right, Grab (the gold), Shoot (the arrow), and Climb (out of the cave).
The agent must reason under uncertainty, using inference and logic to deduce safe
moves from percepts.
It doesn’t know the layout of the environment beforehand and has to build
knowledge step-by-step.
Decisions must be made with partial information, making it a great testbed for
logical reasoning, planning, and decision-making in AI.
6. Importance in AI
o Knowledge Representation
o First-order logic
o Handling uncertainty
11.__What do you understand by informed and uninformed search methods? Explain with example.
Search methods are used in Artificial Intelligence to find solutions or paths in a problem
space. They are broadly classified into uninformed (blind) and informed (heuristic) search
methods based on whether they use additional knowledge about the goal.
Uninformed (blind) search methods explore the problem space using only the problem definition, with no hints about which direction leads to the goal.
Examples:
Breadth-First Search (BFS): Explores all nodes at the current depth before going
deeper.
Depth-First Search (DFS): Explores as deep as possible along each branch before
backtracking.
Example:
Suppose you are looking for a particular name in a long, unsorted list.
Uninformed Search: You check each name from top to bottom without any clue, which is time-consuming and inefficient.
In contrast, informed search methods use heuristics, which are estimates or extra
knowledge that guide the search toward the goal more efficiently. This makes them faster
and smarter.
Examples:
Greedy Best-First Search: Selects the node that appears to be closest to the goal
based on a heuristic.
A* Search: Combines the cost to reach a node and estimated cost from that node to
the goal (f(n) = g(n) + h(n)).
Example:
Suppose you are using a navigation app to drive to a destination.
Informed Search: It uses distance estimates or traffic info (heuristics) to find the shortest or fastest path to your destination.
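A minimal A* sketch using f(n) = g(n) + h(n); the four-city road graph and heuristic values are assumed toy data:

    import heapq

    def a_star(start, goal, neighbors, h):
        # Priority queue ordered by f(n) = g(n) + h(n).
        frontier = [(h(start), 0, start, [start])]
        visited = set()
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt, cost in neighbors(node):
                if nxt not in visited:
                    heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
        return None

    # Assumed toy road map: edge costs and straight-line heuristic estimates.
    graph = {'A': [('B', 2), ('C', 5)], 'B': [('D', 4)], 'C': [('D', 1)], 'D': []}
    est = {'A': 4, 'B': 3, 'C': 1, 'D': 0}
    print(a_star('A', 'D', lambda n: graph[n], lambda n: est[n]))  # an optimal path of cost 6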
12.__What is planning in AI? Discuss partial order planning and
hierarchical planning in detail
Planning in Artificial Intelligence refers to the process where an intelligent agent determines
a sequence of actions to achieve a specific goal. It starts from a known initial state, aims to
reach a goal state, and figures out the necessary steps in between.
In POP, plans are represented using a partial ordering of actions. This increases
flexibility and allows parallel execution where possible. Only those actions that have
logical dependencies are ordered.
A partial order plan includes components like steps (actions), ordering constraints
(which action must come before which), causal links (action A provides a condition
needed by action B), and open preconditions (conditions that still need to be
established).
The planning process starts with a minimal plan containing only a “start” and “finish”
action. Actions are then added incrementally to satisfy open preconditions, and threats
(actions that can undo the effects of others) are identified and resolved.
The main advantage of POP is that it supports concurrency and produces more
flexible plans, making it ideal for multi-agent or uncertain environments.
Hierarchical Planning focuses on breaking down high-level tasks into smaller, more
manageable subtasks until each can be executed directly by the agent. This technique
mimics how humans typically plan: starting with broad goals and refining them into
detailed steps.
The central concept in hierarchical planning is the use of methods that describe how
to decompose a complex task. These methods help convert abstract goals into
concrete actions.
The planner works by selecting a top-level task and applying suitable decomposition
methods. This continues recursively until all tasks are reduced to primitive actions,
which can then be performed by the agent.
Each program is evaluated using a fitness function that measures how well it solves
the given problem. Based on fitness, the best-performing programs are selected to
reproduce.
Over multiple generations, the population evolves, and ideally, better and better
programs emerge. The process continues until a program with acceptable performance
is found or a stopping condition (like number of generations) is met.
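A simplified code sketch of this evolutionary loop. For brevity it evolves bit strings rather than program trees (so it is really a genetic algorithm), and the "count the 1s" fitness function is an assumed toy objective:

    import random

    def fitness(ind):
        return sum(ind)  # toy objective: maximize the number of 1 bits

    def evolve(pop_size=20, length=10, generations=30):
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]          # selection: keep the fitter half
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]          # crossover
                if random.random() < 0.1:          # occasional mutation
                    i = random.randrange(length)
                    child[i] = 1 - child[i]
                children.append(child)
            pop = parents + children               # next generation
        return max(pop, key=fitness)

    print(evolve())  # usually the all-ones string after a few generations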
14.__ Applications of AI
1. Healthcare
AI is revolutionizing healthcare by enabling early disease detection, diagnosis, and
treatment planning. AI-powered systems can analyze medical images (like X-rays,
MRIs), predict patient outcomes, and personalize treatments.
Virtual health assistants and chatbots help in patient interaction, while AI models
can also assist in drug discovery and monitoring chronic diseases like diabetes and
heart conditions.
2. Automotive Industry
In the financial sector, AI is used for fraud detection, credit scoring, algorithmic
trading, and risk management. Machine learning models analyze spending patterns
and detect anomalies to prevent fraud.
AI chatbots and robo-advisors are increasingly helping users manage investments and
savings efficiently.
4. Education
AI supports personalized learning by adapting educational content to the pace and
style of each student. Intelligent tutoring systems provide instant feedback and
customized learning paths.
5. E-commerce
6. Agriculture
Industrial robots powered by AI are used for repetitive tasks such as assembly,
packaging, and welding with high precision.
8. Cybersecurity
AI systems are employed to detect cyber threats, phishing attempts, and network
intrusions in real time.
These systems continuously learn from new threats and adapt security protocols
dynamically to protect sensitive data.
In simulated annealing, the algorithm sometimes accepts a worse solution. This probability of accepting a worse solution helps the algorithm escape local optima and is controlled by a parameter called temperature (T), which decreases over time.
Initially, the algorithm behaves randomly (high temperature), but as the temperature
decreases, it becomes more selective and behaves like a greedy algorithm.
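The standard acceptance rule behind this behavior: a move that worsens the objective by ΔE is accepted with probability P = exp(-ΔE / T). A minimal sketch:

    import math, random

    def accept(delta_e, temperature):
        # Always accept improvements; accept worse moves with probability exp(-dE/T).
        if delta_e <= 0:
            return True
        return random.random() < math.exp(-delta_e / temperature)

    # High T: almost random (accepts most worse moves); low T: nearly greedy.
    print(accept(1.0, 100.0), accept(1.0, 0.01))  # usually True, False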
Depth-Limited Search (DLS) is a variant of DFS that explores only up to a predefined depth limit l. This means the algorithm does not go beyond a certain level, which helps prevent infinite loops in graphs or trees with infinite depth.
It is useful when we know the approximate depth of the goal state in advance.
However, if the limit is too low, the algorithm may miss the solution
(incompleteness); if the limit is too high, it behaves like DFS.
Time Complexity: O(b^l), where b is the branching factor and l is the depth limit.
Space Complexity: O(l), since it behaves like DFS.
Drawback: May cut off the solution if it lies just beyond the depth limit.
Iterative Deepening DFS (IDDFS) runs depth-limited search repeatedly with limits 0, 1, 2, and so on. This method is both complete and optimal (if cost is a function of depth) and uses less memory compared to BFS.
Although it may seem inefficient due to repetition, the nodes at shallow depths
dominate the total number of nodes, so it’s quite efficient in practice.
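A minimal sketch of depth-limited search wrapped in iterative deepening (the binary "numbered tree," where node n has children 2n and 2n+1, is an assumed toy example):

    def depth_limited(node, goal, children, limit):
        # Plain DFS that refuses to expand below the depth limit.
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in children(node):
            path = depth_limited(child, goal, children, limit - 1)
            if path:
                return [node] + path
        return None

    def iterative_deepening(start, goal, children, max_depth=20):
        # Re-run depth-limited search with limits 0, 1, 2, ...
        # Shallow nodes are re-expanded, but they are cheap next to the deepest level.
        for limit in range(max_depth + 1):
            path = depth_limited(start, goal, children, limit)
            if path:
                return path
        return None

    print(iterative_deepening(1, 5, lambda n: [2 * n, 2 * n + 1]))  # [1, 2, 5]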
For example, given the phrase "I am going to the", a good language model will
predict the next word like "store" or "park" rather than "banana" (depending on
context).
o N-gram models: Predict the next word based on the last n-1 words.
Text generation – Writing articles, stories, or even code (like GPT models).
Training data: Models learn from huge datasets to predict and generate language.
T5, XLNet, RoBERTa – Other transformer-based models tailored for specific NLP
tasks.
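A minimal bigram (n = 2) sketch of this idea: the next word is predicted from the single previous word, using counts from a tiny assumed training corpus:

    from collections import Counter, defaultdict

    corpus = "i am going to the store i am going to the park".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1  # count how often nxt follows prev

    def predict(word):
        # Return the most frequent continuation seen in the training data.
        return counts[word].most_common(1)[0][0] if counts[word] else None

    print(predict("going"))  # 'to'
    print(predict("the"))    # 'store' or 'park' (tied counts)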
18.__Differentiation
Forward Chaining | Backward Chaining
1. Data-driven approach | Goal-driven approach
2. Starts from known facts | Starts from the goal or hypothesis
3. Moves from facts to conclusions | Moves from goal to supporting facts
4. Widely used in expert systems | Used in theorem proving and diagnosis
5. Applies inference rules to known facts | Tries to prove the goal using rules and facts
6. Explores all possible conclusions | Focuses only on what is needed to prove the goal
7. Can be inefficient due to wide search | More targeted and efficient
8. Useful when all facts are known | Useful when a specific goal is given
9. Example: Medical system suggesting diseases | Example: Checking if a patient has a disease
10. Adds new facts to the knowledge base continuously | Works backward by breaking the goal into subgoals
11. More suited for data collection systems | More suited for diagnostic systems
12. Continues until the goal is reached or no rule applies | Continues until facts prove or disprove the goal
13. Breadth-first in nature | Depth-first in nature
14. Often used in production systems | Often used in logic programming
15. Example rule: If A and B → C, and A, B known → deduce C | Example rule: To prove C, check if A and B are true (C ← A ∧ B)