
UNIT-1

Introduction to Artificial Intelligence (AI)

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines. These systems are designed to perform tasks such as decision-making, speech recognition, visual perception, and translation between languages.

 AI is essentially about creating machines that can “think” and “learn” like
humans.

 It can be defined as the ability of machines to perform tasks that typically require human intelligence by learning from past experiences, without being explicitly programmed for every specific task.

 This involves systems that adapt, improve, and make decisions based on
previous interactions or data, thus allowing them to generalize solutions to
new problems they have not been specifically trained for.

 In simpler terms, AI systems can adjust their behavior over time, leveraging
past experiences (data) to handle new and unforeseen scenarios, similar to
how humans learn from their experiences.
Example:

A self-driving car uses AI to navigate roads. It learns from previous driving experiences (such as recognizing obstacles or understanding traffic signals) and applies that knowledge to drive in new environments without needing explicit instructions for each situation.

Machine Learning (ML):

Machine Learning (ML) is a subset of artificial intelligence (AI) that focuses on building systems that can learn from data and improve their performance without being explicitly programmed for each task. It involves using algorithms to identify patterns, make decisions, and predict outcomes based on data. ML is categorized into three main types:

1. Supervised Learning: The algorithm learns from labeled data, meaning the
input comes with corresponding correct outputs. Examples include
classification and regression tasks.
2. Unsupervised Learning: The algorithm works with unlabeled data and tries
to find hidden patterns or structures, such as clustering or association tasks.
3. Reinforcement Learning: The system learns through trial and error,
receiving rewards or penalties based on the actions it takes in a specific
environment.

Examples:
 Netflix Recommendations: Netflix uses machine learning to analyze users'
viewing habits and suggests movies or shows based on their preferences.
 Spam Detection: Email services like Gmail use machine learning models to
filter out spam by analyzing patterns and keywords commonly found in spam
emails.

Applications:

 Fraud Detection: In banking, ML models analyze transaction data to identify unusual patterns that may indicate fraudulent activities.
 Healthcare: ML is used for predictive analytics, like predicting the likelihood
of a patient developing certain conditions based on medical history.
 Autonomous Vehicles: ML models in self-driving cars enable them to learn
from traffic data, improving their decision-making over time.

Deep Learning (DL):

Deep Learning (DL) is a specialized branch of machine learning that uses artificial
neural networks to model and solve complex problems. These networks have
multiple layers (hence "deep") that allow the system to learn hierarchical
representations of data. Deep learning is especially effective for tasks like image
recognition, natural language processing, and speech recognition.

Key components of deep learning include:

 Neural Networks: Inspired by the human brain, these are the building blocks
of deep learning, consisting of layers of interconnected nodes (neurons).
 Convolutional Neural Networks (CNNs): Primarily used for image data,
CNNs are specialized for tasks like image classification and object detection.
 Recurrent Neural Networks (RNNs): Useful for sequential data, RNNs are
often applied in tasks like language modeling or time series prediction.
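The "deep" idea of stacked layers can be sketched as a small convolutional network for 28x28 grayscale images. This assumes TensorFlow/Keras is installed; the layer sizes and input shape are illustrative, not taken from any particular system.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),            # e.g. a small grayscale image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),    # CNN layer: learns local image features
    tf.keras.layers.MaxPooling2D(),                       # downsample the feature maps
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),         # hidden ("deep") layer
    tf.keras.layers.Dense(10, activation="softmax"),      # output: probabilities for 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()                                           # prints the layer-by-layer structure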

Examples:

 Facial Recognition: Deep learning algorithms power facial recognition systems, like those used in security or unlocking smartphones.
 Voice Assistants: Deep learning enables virtual assistants like Amazon's
Alexa or Apple's Siri to understand and process spoken language accurately.

Applications:

 Autonomous Driving: Deep learning is critical in object detection and classification, enabling self-driving cars to identify pedestrians, other vehicles, and road signs.
 Healthcare: DL models are used in medical imaging to automatically detect
diseases like cancer or analyze X-rays for abnormalities.
 Natural Language Processing (NLP): In NLP, deep learning models like
GPT-4 enable more accurate language translation, chatbot responses, and text
generation.

Goals of AI:

1. Automation: Systems that Operate Autonomously

AI aims to create machines and software that perform tasks on their own, without human help. This is seen in:

 Industries: Robots in manufacturing, farming, and mining.
 Daily Life: Smart home devices, self-driving cars, and virtual assistants.

Automation increases efficiency by handling repetitive tasks, allowing humans to focus on more complex activities.

2. Improved Decision Making: Data-Driven Support

AI helps humans make better decisions by analyzing large amounts of data, identifying patterns, and predicting outcomes. Examples include:

 Healthcare: AI assists doctors with diagnoses.
 Finance & Business: AI predicts market trends or customer behavior.

AI speeds up data analysis, providing insights that help people make smarter choices.

3. Problem-Solving: Tackling Complex Tasks

AI excels at solving difficult problems like:

 Language Translation: AI-powered tools like Google Translate help break language barriers.
 Image Recognition: AI identifies objects or patterns in images, used in healthcare, security, and self-driving cars.

AI learns and improves over time, handling more challenging tasks efficiently.

Types of AI:
1. Narrow AI: Focused on specific tasks. Most AI applications today fall under
this category.
o Example: Virtual assistants like Alexa and Siri.
2. General AI: AI that can perform any intellectual task that a human can do.
This level of AI doesn't yet exist.
o Example: A theoretical AI system that could do anything a human can,
from playing chess to cooking.
3. Super AI: This would surpass human intelligence in all areas, including
problem-solving, creativity, and social intelligence. It is still hypothetical.
o Example: Advanced AI in science fiction, such as HAL 9000 in 2001:
A Space Odyssey.

Historical Development and Foundation Areas of AI

Historical Overview:

 Pre-1950s: Philosophical roots can be traced to ancient Greece, with thinkers such as Aristotle and Euclid discussing logic and reasoning.
 1950s: Alan Turing published "Computing Machinery and Intelligence,"
introducing the Turing Test, a way to measure a machine's ability to exhibit
human-like intelligence.
 1956: The term "Artificial Intelligence" was officially coined during the
Dartmouth Conference, which is considered the birth of AI as a field of
study.
 1970s-80s: AI faced its first "AI Winter," a period of reduced funding and
interest. However, during this time, expert systems were developed and
applied in industries such as healthcare.
 1990s-2000s: Machine learning began to flourish. AI made significant
strides with IBM's Deep Blue defeating world chess champion Garry
Kasparov (1997) and IBM's Watson winning Jeopardy! (2011).
 2010s-present: The rise of deep learning, large datasets, and increased
computational power has led to breakthroughs in areas like natural language
processing and computer vision.

Foundation Areas:

1. Mathematics: Probability theory and linear algebra are fundamental in designing algorithms that AI systems use to learn from data.
o Example: Bayesian networks use probabilities to model uncertainty in
reasoning.
2. Computer Science: Data structures and programming languages provide the
backbone for AI applications.
o Example: Python is one of the most widely used languages for
developing AI systems.
3. Neuroscience: AI systems like artificial neural networks are inspired by the
human brain’s structure and functioning.
o Example: Deep learning models use layers of artificial neurons to
recognize patterns in data, such as identifying faces in images.
4. Cognitive Science: The study of how humans think and learn is applied in
AI to replicate intelligent behavior.
o Example: Cognitive AI systems, like IBM’s Watson, learn and process
information in ways similar to human cognitive functions.

Tasks and Application Areas of AI

Tasks AI Systems Can Perform:

1. Perception: AI systems can perceive the world using sensors (cameras, microphones, etc.). For example, facial recognition systems analyze visual data to identify individuals.
2. Reasoning and Decision Making: AI systems can analyze data and make
decisions based on logical rules. AI used in finance, for instance, can evaluate
financial risks and recommend actions.
3. Learning: Machine learning algorithms improve their performance as they
are exposed to more data.
o Example: Spam filters learn to identify junk emails by analyzing
patterns in messages that are flagged as spam.
4. Natural Language Processing (NLP): AI can understand and generate
human language. Virtual assistants like Google Assistant use NLP to process
voice commands and provide appropriate responses.
5. Planning and Problem-Solving: AI systems can solve complex problems
and plan steps to achieve a specific goal. Self-driving cars plan routes by
considering multiple factors like traffic and road conditions.

Key Application Areas:

1. Healthcare: AI is used in diagnostics (e.g., identifying cancer in radiology images), personalized medicine, and drug discovery.
o Example: IBM Watson helps doctors by analyzing vast amounts of
medical literature to suggest treatment options.
2. Finance: AI-powered algorithms are used in fraud detection, stock trading,
and customer service chatbots.
o Example: JPMorgan’s COiN uses AI to review legal documents
quickly, a task that traditionally takes humans over 360,000 hours
annually.
3. Agriculture: AI helps in monitoring crop health using drones, optimizing
irrigation, and predicting yields.
o Example: AI-based precision agriculture systems guide farmers on
when to plant, water, and harvest crops.
4. Education: AI tools like adaptive learning platforms customize learning
experiences for students based on their strengths and weaknesses.
o Example: Platforms like Coursera use AI to recommend courses based
on a learner’s preferences and history.
5. Security: AI systems in surveillance, cyber threat detection, and fraud
prevention are becoming widespread.
o Example: AI-based systems monitor video feeds to detect unusual
activity in public spaces, improving security.
Intelligent Agents

What are Intelligent Agents?

An intelligent agent is a system that perceives its environment through sensors, processes the information, and takes action to achieve specific goals.

Structure of an Intelligent Agent:

1. Perception: The agent collects data from its environment (e.g., cameras,
sensors).
2. Reasoning/Decision-Making: The agent processes the data and makes
decisions based on predefined goals or learned behavior.
3. Action: The agent takes actions that influence the environment to achieve its
goals.

Types of Intelligent Agents:

Simple Reflex Agents: These respond to current perceptions without considering the history of the environment.

Examples:

 Thermostat: A thermostat senses the room temperature and switches the heater on or off based on a predefined rule, such as "If temperature < 20°C, turn on the heater."
 Automatic Door: An automatic door senses when someone is near (using
motion sensors) and opens immediately without considering anything else.
 Smoke Detector: It detects smoke and triggers an alarm instantly based on
the current situation, without considering other environmental factors.
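A simple reflex agent can be written in a few lines of Python. The sketch below encodes the thermostat rule from the example above; the function name and return strings are illustrative.

def thermostat_agent(temperature_c):
    # Simple reflex agent: acts only on the current percept, keeps no history
    if temperature_c < 20:            # condition-action rule: "if temperature < 20°C..."
        return "turn heater on"
    return "turn heater off"

print(thermostat_agent(18))   # -> turn heater on
print(thermostat_agent(23))   # -> turn heater off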

Model-Based Agents: These keep an internal state of the environment, considering past information to make decisions.

Examples:

 Robot Vacuum Cleaner (with memory): A robot vacuum can track which
areas of the floor have already been cleaned and navigate around obstacles
using memory of its previous actions.
 Self-Driving Car (Basic): A self-driving car may remember the positions of
nearby vehicles and obstacles and adjust its movement based on traffic flow
and road conditions.
 Smart Thermostat: A smart thermostat can remember user preferences and
predict when to adjust the temperature based on the time of day, weather
patterns, and household activity.
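What distinguishes these agents is the internal state. Here is a minimal sketch of the robot-vacuum example, where the agent remembers which squares it has already cleaned; the class and method names are illustrative.

class VacuumAgent:
    # Model-based agent: keeps an internal model (cleaned squares) of the environment
    def __init__(self):
        self.cleaned = set()

    def act(self, position, is_dirty):
        if is_dirty:
            self.cleaned.add(position)      # update the internal state
            return "suck"
        if position in self.cleaned:
            return "move to next square"    # decision uses remembered history
        return "inspect square"

agent = VacuumAgent()
print(agent.act((0, 0), True))    # -> suck
print(agent.act((0, 0), False))   # -> move to next square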

Goal-Based Agents: These act to achieve a specific goal by considering different possibilities.

Examples:

 Chess Playing AI: In a chess game, the AI evaluates possible future moves
to checkmate the opponent, selecting moves that will bring it closer to that
objective.
 Route-Finding GPS System: A GPS navigation system finds the optimal
route to a destination based on the goal of reaching a location, considering
current traffic conditions.
 Delivery Drone: A drone programmed to deliver packages has the goal of
reaching a delivery point. It plans its route based on the goal while avoiding
obstacles.
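Goal-based behaviour is usually implemented with search: the agent explores possible action sequences and keeps one that reaches the goal. A minimal breadth-first route-finding sketch in Python; the road network is invented for illustration.

from collections import deque

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["Goal"], "Goal": []}

def find_route(start, goal):
    frontier = deque([[start]])             # paths still to be explored
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                     # first path found that reaches the goal
        for next_place in roads[path[-1]]:
            frontier.append(path + [next_place])
    return None

print(find_route("A", "Goal"))   # -> ['A', 'B', 'D', 'Goal']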

Utility-Based Agents: These make decisions based on a utility function, which assigns values to different outcomes.

Examples:

 Autonomous Trading Bot: A stock trading bot doesn't just seek to buy and
sell stocks but also evaluates various factors like profit margins, risk levels,
and market conditions to maximize returns.
 Autonomous Taxi: An AI taxi not only aims to reach a destination but also
optimizes for fuel efficiency, shortest path, and customer comfort to achieve
a balance between multiple factors.
 Personalized Recommendation System: A system like Netflix that
recommends shows based on past behavior and user ratings while trying to
maximize user satisfaction and engagement.
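A utility-based agent scores each option with a utility function and picks the best one. Here is a minimal sketch of the autonomous-taxi example; the candidate routes and the weights in the utility function are made up for illustration.

routes = [
    {"name": "highway", "time_min": 20, "fuel_l": 3.0, "comfort": 0.9},
    {"name": "city",    "time_min": 35, "fuel_l": 2.0, "comfort": 0.6},
]

def utility(route):
    # Higher is better: penalise travel time and fuel use, reward comfort
    return -0.5 * route["time_min"] - 2.0 * route["fuel_l"] + 10.0 * route["comfort"]

best = max(routes, key=utility)
print(best["name"])   # -> highway (utility -7.0 vs -15.5 for the city route)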

PEAS of Intelligent Agent:

The PEAS model is a framework used to describe the components of an intelligent agent in AI. PEAS stands for:

 P: Performance Measure
 E: Environment
 A: Actuators
 S: Sensors

PEAS Components Explained

1. Performance Measure:
This defines the criteria for evaluating the success or effectiveness of an
agent's behavior. The performance measure specifies what the agent is trying
to achieve and how its success is measured.

Example:

o For a self-driving car, the performance measure could include safety, speed, fuel efficiency, and comfort.
2. Environment:
The environment is the external situation in which the agent operates. It can
be fully observable or partially observable, static or dynamic, and may
include other agents or obstacles.

Example:

o For a self-driving car, the environment includes the roads, traffic signs, pedestrians, other vehicles, and weather conditions.
3. Actuators:
Actuators are the mechanisms through which the agent takes actions to
affect the environment. These are the tools or components the agent uses to
execute decisions.
Example:

o For a self-driving car, the actuators include the steering wheel, accelerator, brake, and signal indicators.
4. Sensors:
Sensors allow the agent to perceive the environment. They collect data from
the environment that the agent uses to make decisions and interact with the
environment.

Example:

o For a self-driving car, the sensors include cameras, radar, lidar, GPS,
and speedometers.

Example: PEAS for Different Agents

1. Self-Driving Car:
o P: Performance Measure: Safety, legality, comfort, speed, fuel
efficiency.
o E: Environment: Roads, traffic, pedestrians, traffic lights, weather.
o A: Actuators: Steering, accelerator, brake, signals, horn.
o S: Sensors: Cameras, radar, GPS, lidar, speedometers.
2. Vacuum Cleaning Robot:
o P: Performance Measure: Amount of dirt cleaned, time taken, energy
efficiency.
o E: Environment: Floors, furniture, obstacles like chairs or walls.
o A: Actuators: Brushes, suction, wheels for movement.
o S: Sensors: Dirt sensors, bump sensors, cliff sensors, and cameras for
navigation.
3. Personal Assistant (like Siri):
o P: Performance Measure: Accuracy of responses, user satisfaction,
response time.
o E: Environment: User commands (spoken or typed), internet
resources, calendar, contacts.
o A: Actuators: Audio response, text display, app notifications.
o S: Sensors: Microphone (for voice), keyboard (for typed input), touch
sensors.

Computer Vision

Computer vision is the field of AI that focuses on enabling machines to interpret and
understand visual data. It deals with acquiring, processing, and analyzing images to
extract useful information.

Process:

1. Image Acquisition: Images are captured through cameras or sensors.
2. Preprocessing: This involves enhancing image quality, reducing noise, and
converting images into suitable formats (e.g., grayscale or binary).
3. Feature Extraction: Relevant features like edges, textures, shapes, or color
distributions are identified to reduce data complexity while preserving
important information.
4. Object Detection/Segmentation: The system identifies and classifies objects
within the image using algorithms like convolutional neural networks
(CNNs).
5. Recognition and Interpretation: The identified objects are labeled, and the
system makes sense of their spatial arrangement and relationships, leading to
tasks like image classification, facial recognition, or action detection.
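The first three stages of this pipeline can be sketched in a few lines of Python, assuming the opencv-python package is installed; "plant.jpg" is a hypothetical input image.

import cv2

image = cv2.imread("plant.jpg")                  # 1. image acquisition
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # 2. preprocessing: convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      #    preprocessing: reduce noise
edges = cv2.Canny(blurred, 100, 200)             # 3. feature extraction: edge features
cv2.imwrite("edges.jpg", edges)                  # save the extracted feature map

# Steps 4 and 5 (detection, recognition) would typically pass the image through a CNN.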

Functions:

 Object Recognition: Identifying specific objects in images (e.g., recognizing faces or cars).
 Image Classification: Categorizing images into predefined categories.
 Image Segmentation: Dividing an image into regions or objects for further
processing.
 Gesture Recognition: Interpreting human gestures in real-time for
interaction.
 Image Generation: Using techniques like Generative Adversarial Networks
(GANs) to generate new images.

Examples of Computer Vision

1. Facial Recognition
Application: Face unlocking on smartphones (e.g., iPhone Face ID)
Process:
o Input: Image of the user’s face.
o Steps: The system uses feature extraction to detect key facial
landmarks (e.g., distance between eyes, nose shape) and compares
them against stored templates.
o Output: If the face matches, access is granted.

Explanation: This process leverages convolutional neural networks (CNNs) to detect, analyze, and recognize facial features, ensuring accuracy in various lighting conditions or angles.

2. Object Detection in Autonomous Vehicles
Application: Self-driving cars (e.g., Tesla’s Autopilot)
Process:
o Input: Continuous video streams from cameras surrounding the
vehicle.
o Steps: The system detects objects like pedestrians, cars, traffic lights,
and obstacles using CNN-based algorithms like YOLO (You Only
Look Once).
o Output: The car navigates safely, avoiding obstacles and following
traffic signals.

Explanation: Real-time object detection allows autonomous vehicles to understand their environment and make decisions based on object positions, trajectories, and classifications.

3. Medical Image Analysis
Application: Detecting tumors in MRI or CT scans
Process:
o Input: MRI scan of a patient's brain.
o Steps: Preprocessing enhances the image, followed by feature
extraction to highlight abnormal patterns. Segmentation techniques
then isolate suspicious regions (e.g., tumors).
o Output: The system provides a classification (e.g., benign or
malignant) based on the detected features.

Explanation: Medical image analysis uses deep learning models like U-Net
for segmentation and ResNet for classification to assist radiologists in
diagnosing diseases faster and more accurately.

4. Gesture Recognition in Gaming
Application: Gaming consoles (e.g., Microsoft Kinect)
Process:
o Input: The player's movements are captured by depth-sensing
cameras.
o Steps: The system uses computer vision techniques to track the
player's body and detect specific gestures.
o Output: The gestures control the game (e.g., moving hands to shoot
arrows in a virtual environment).

Explanation: Gesture recognition relies on 3D sensors and computer vision algorithms to map real-world body movements into virtual commands in the game.

5. Optical Character Recognition (OCR)
Application: Scanning printed text into digital documents
Process:
o Input: Image of a printed document.
o Steps: Preprocessing improves contrast and removes noise. OCR
algorithms then segment the text and recognize individual characters.
o Output: The document is converted into editable text.

Explanation: OCR uses pattern recognition and text extraction techniques to digitize printed documents for easier editing, storage, or searching.
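A minimal OCR sketch, assuming the pytesseract and Pillow packages (plus the underlying Tesseract engine) are installed; "document.png" is a hypothetical scanned page.

from PIL import Image
import pytesseract

page = Image.open("document.png")            # input: image of a printed document
text = pytesseract.image_to_string(page)     # segment the text and recognise characters
print(text)                                  # output: editable, searchable text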

Natural Language Processing (NLP)

NLP is a field of AI focused on the interaction between computers and humans using
natural language. NLP enables computers to read, understand, and generate human
language in a meaningful way.

Process/Steps:

1. Text Acquisition: Text is gathered from various sources like documents, websites, or speech-to-text systems.
2. Text Preprocessing: This includes tokenization (splitting text into words),
stemming/lemmatization (normalizing word forms), and removing stopwords
(common words like “the,” “is”).
3. Parsing/Syntactic Analysis: The system constructs a parse tree or syntax
tree that represents the grammatical structure of sentences.
4. Semantic Analysis: Understanding the meaning of the text by associating
words with their meanings in context (word sense disambiguation).
5. Named Entity Recognition (NER): Identifying names, places, or specific
objects in text.
6. Sentiment Analysis: Detecting the sentiment (positive, negative, neutral)
expressed in the text.
7. Machine Translation or Text Generation: Producing translations or
generating meaningful responses.
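Steps 1 and 2 can be sketched in a few lines of Python. The example assumes the nltk package (with its stopwords corpus) is installed and uses a simple whitespace tokenizer for brevity.

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)

text = "The cats are sitting on the mats"
tokens = text.lower().split()                      # tokenization
stops = set(stopwords.words("english"))
filtered = [t for t in tokens if t not in stops]   # stopword removal
stemmer = PorterStemmer()
print([stemmer.stem(t) for t in filtered])         # stemming -> ['cat', 'sit', 'mat']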

Functions:

 Language Translation: Converting text from one language to another.
 Speech Recognition: Converting spoken language into text.
 Sentiment Analysis: Identifying the emotional tone behind a body of text.
 Text Summarization: Condensing a text to a shorter version while
preserving key information.
 Named Entity Recognition: Identifying specific names, organizations, dates,
etc., in text.

NLP PARSE TREE:

Syntax refers to the arrangement of words in a sentence and the set of rules that
govern how sentences are formed in any language. In Natural Language
Processing (NLP), syntax is critical for tasks such as parsing, sentence generation,
and grammar checking. The syntactic structure involves components like
sentences, clauses, phrases, and words that follow certain rules to form
grammatically correct sentences. Some of the syntactic categories of a natural
language are as follows:

Components of Syntax in Natural Language

1. Sentences:
A sentence is the largest unit of syntax. It typically consists of a subject,
predicate, and various complements. For example:
o Sentence (S): "The cat sat on the mat."
2. Clauses:
A clause can be a full sentence or part of a sentence, usually containing a
subject and a verb. For example:
o Main Clause: "The cat sat on the mat."
o Dependent Clause: "Although the cat was sleepy, it sat on the mat."
3. Phrases:
Phrases are groups of words that function as a single unit within a sentence.
There are different types of phrases, such as:
o Noun Phrase (NP): "The cat"
o Verb Phrase (VP): "sat on the mat"
o Prepositional Phrase (PP): "on the mat"
4. Words:
Words are the basic building blocks of sentences and phrases. They are
classified into syntactic categories or parts of speech, such as:
o Noun (N): "cat"
o Verb (V): "sat"
o Determiner (Det): "The"
o Preposition (P): "on"

Syntax Tree (Parse Tree)

A Syntax Tree (also called a Parse Tree) visually represents the syntactic
structure of a sentence by showing how words are grouped and combined to form a
grammatical sentence according to the rules of a language. The tree breaks down a
sentence into its syntactic components such as noun phrases, verb phrases, and
other parts of speech.

 "The cat sat on the mat."

 Sentence (S)
o Noun Phrase (NP): "The cat"
 Determiner (Det): "The"
 Noun (N): "cat"
o Verb Phrase (VP): "sat on the mat"
 Verb (V): "sat"
 Prepositional Phrase (PP): "on the mat"
 Preposition (P): "on"
 Noun Phrase (NP): "the mat"
 Determiner (Det): "the"
 Noun (N): "mat"

This hierarchical structure shows how each word or phrase belongs to a higher-
order syntactic category, all contributing to the complete sentence.
EX: "The cat sat on the mat"

/ \

NP VP

/\ / \

Det N V PP

| | | / \

The cat sat P NP

| /\

on Det N

| |

the mat
EX:
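The same tree can be built programmatically. The sketch below assumes the nltk package is installed and uses its bracketed tree notation.

from nltk import Tree

parse = Tree.fromstring(
    "(S (NP (Det The) (N cat))"
    " (VP (V sat) (PP (P on) (NP (Det the) (N mat)))))"
)
parse.pretty_print()   # draws the parse tree for "The cat sat on the mat"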
Examples of NLP

1. Machine Translation (Language Translation)
Input: "La computadora está encendida."
Output: "The computer is on."

Explanation: Using models like Google's Neural Machine Translation (GNMT), NLP systems translate text from one language to another. The system first parses the sentence, identifies the subject, verb, and object, and then finds the equivalent words in the target language, maintaining grammatical accuracy.

2. Named Entity Recognition (NER)
Input: "Apple is releasing a new iPhone in San Francisco next month."
Output:
Output:
o Entity 1: Apple (Organization)
o Entity 2: iPhone (Product)
o Entity 3: San Francisco (Location)
o Entity 4: Next month (Date)

Explanation: The NER system extracts key entities from a text by recognizing their category (e.g., organizations, products, locations, or dates), enabling better contextual understanding for applications like news summarization or customer support systems.
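A minimal NER sketch, assuming spaCy and its small English model (en_core_web_sm) are installed; the printed labels use spaCy's own tag set (ORG, GPE, DATE, ...).

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is releasing a new iPhone in San Francisco next month.")
for ent in doc.ents:
    print(ent.text, "->", ent.label_)   # e.g. Apple -> ORG, San Francisco -> GPE, next month -> DATE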

3. Sentiment Analysis (Emotion Detection)
Input: "The new movie was fantastic!"
Output: Positive Sentiment
Output: Positive Sentiment
Explanation: Sentiment analysis models evaluate whether the sentence's tone
is positive, negative, or neutral. This technique is widely used in customer
reviews, social media analysis, or feedback monitoring.
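A minimal sentiment-analysis sketch, assuming the textblob package is installed; the threshold used to turn the polarity score into a label is illustrative.

from textblob import TextBlob

review = TextBlob("The new movie was fantastic!")
polarity = review.sentiment.polarity      # ranges from -1 (negative) to +1 (positive)
print("Positive" if polarity > 0 else "Negative or neutral")   # -> Positive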

4. Text Summarization
Input:
Original Text: "The report explains the impact of climate change on coastal
regions, where rising sea levels have been the cause of numerous natural
disasters. Governments need to take immediate action to mitigate these
effects, including investing in renewable energy and reducing carbon
emissions."
Output:
Summary: "The report discusses climate change's impact on coastal areas
and urges immediate government action."

Explanation: Text summarization condenses long documents or articles into concise summaries, maintaining key information. Models like GPT or BERT-based models handle this task by analyzing sentence importance and generating summaries.

5. Chatbots/Virtual Assistants (Dialog Management)
Input (User): "What's the weather like in New York today?"
Output (Assistant): "Today's weather in New York is sunny with a high of 75°F."
Output (Assistant): "Today's weather in New York is sunny with a high of
75°F."

Explanation: NLP powers virtual assistants like Siri, Alexa, and Google
Assistant. These systems break down the user's query, identify key phrases
(e.g., "weather," "New York"), and provide the corresponding information
by querying relevant databases.

Application of Syntax Trees in NLP

1. Grammar Checking: NLP tools use syntax trees to check whether a sentence follows the grammar rules of a language.
2. Machine Translation: Syntax trees help models understand the structure of
sentences in the source language and translate them into grammatically
correct sentences in the target language.
3. Question Answering: Syntax trees are used to parse and analyze the
structure of both questions and answers in systems like chatbots.
4. Sentence Generation: Syntax trees help in generating sentences by
following the syntactic rules of a language.

Practice Questions on Computer Vision and NLP:

Q1) Consider an image processing system that classifies types of plants. Given an input image of a plant, create a parse tree that outlines the steps from image acquisition to plant classification, breaking down each function into subcomponents.

A:

Plant Classification
└── Image Acquisition
    └── Preprocessing (Noise Reduction, Scaling)
        └── Feature Extraction (Leaf Texture, Shape)
            └── Object Detection (Leaf Identification)
                └── Plant Type Classification (Labeling)

Q2: Given a sentence, 'The cat sat on the mat,' construct a parse tree to represent the syntactic structure, then explain how the system interprets the subject, verb, and object.

A:

Sentence: "The cat sat on the mat"

└── Sentence (S)

├── Noun Phrase (NP): The cat

│ |
│ ├── Determiner (Det): The

│ └── Noun (N): cat

└── Verb Phrase (VP): sat on the mat

├── Verb (V): sat

└── Prepositional Phrase (PP): on the mat

├── Preposition (P): on

└── Noun Phrase (NP): the mat

├── Determiner (Det): the

└── Noun (N): mat

-------------------------------------------THANK YOU-------------------------------------------------
