
Artificial Intelligence Class 10 Notes

What is Intelligence?
Humans have always created machines to simplify their lives. The purpose of machines
is to complete jobs that are either too time-consuming or too tedious for people to do
themselves. By doing our work for us, machines lighten our load and help us achieve
our objectives.

What is Artificial Intelligence?


"Artificial" refers to something man-made, something that does not occur naturally. But what
about intelligence? How do we define that?

According to researchers, intelligence is the ‘ability to perceive or infer information, and to
retain it as knowledge to be applied towards adaptive behaviors within an environment or
context.’

Let us define each term mentioned above to get a proper understanding.

Artificial Intelligence can be defined in terms of the following abilities:

1. Ability to interact with the real world
   ● To perceive, understand and act
   ● Example: Speech Recognition – understanding and synthesis
   ● Example: Image Recognition
   ● Example: Ability to take action, to have an effect
2. Reasoning and planning
   ● Modelling the external world, given input
   ● Solving new problems, planning and making decisions
   ● Ability to deal with unexpected problems and uncertainties
3. Learning and adaptation
   ● Continuous learning and adapting
   ● Our internal models are always being updated
   ● Example: a baby learning to categorise and recognise animals
How does AI make decisions?
The basis for decision-making depends on the information that is available and how we
perceive and comprehend it. Information here refers to our current knowledge, self-
awareness, intuition, and past experiences.

What is Artificial Intelligence?


Artificial intelligence is the ability of a machine to mimic human characteristics, such as
decision-making, future prediction, self-improvement, and learning.

In other words, a machine is artificially intelligent when it can do activities on its own,
including collecting data, comprehending it, analyzing it, learning from it, and improving it.

How do machines become Artificially Intelligent?

Humans become more and more intelligent with time as they gain experience during their
lives. Machines, similarly, become intelligent once they are trained with information that
helps them achieve their tasks. AI machines also keep updating their knowledge to optimise
their output.

Applications of Artificial Intelligence

AI-powered machines are all around us. They are quickly taking over our daily lives and give
us the convenience of having even the most difficult and time-consuming activities
completed at the push of a button or with the help of a sensor.

1. Google – We use Google to search the internet without realising how effectively it
always provides us with precise responses. It not only quickly returns the results of
our search, but it also advises and corrects the grammar of our written words.
2. Hey Siri – These days, we have personal assistants that respond to a single
command and perform numerous tasks. Several popular voice assistants that are an
integral component of our digital devices are Alexa, Google Assistant, Cortana, and
Siri.
3. Google Maps – Apps like Uber and Google Maps come in handy for helping us find
our way around. Consequently, it is no longer necessary to stop and ask for directions
constantly.
4. FIFA – AI has significantly improved the gaming experience for its users. Many
modern video games are supported by artificial intelligence (AI), which improves
graphics, creates new challenges for players, and more.
5. Amazon – Besides making our lives easier, AI keeps track of our habits, likes, and
dislikes. Because of this, services like Netflix, Amazon, Spotify, and YouTube,
among others, display recommendations based on our preferences.
6. Social Media – The recommendations, however, go beyond simply reflecting our
preferences; they also take into account our desire to interact with friends via social
media networks like Facebook and Instagram. Additionally, they provide us
personalised notifications about our online buying information, construct playlists
automatically based on our demands, and more. Selfies have never been more
enjoyable because of Snapchat’s amazing filters.
7. Health App – That’s not everything. Our health is also being tracked by AI. There are
many chatbots and other health apps available that continuously track their users’
physical and emotional wellbeing.
8. Humanoid – These applications range from humanoids such as Sophia, the first
humanoid robot intelligent enough to obtain citizenship, to biometric security
systems, such as the face locks we have on our phones, real-time language
translators, weather prediction, and more. If we kept adding things up, this
module would never end because the list is so long. Take some time, have a
conversation with a friend, and start noticing more and more uses of AI in your
environment!
What is not AI?

Today, because there are so many different technologies all around us, we frequently
mistake any other technology for artificial intelligence. Because of this, we must clearly
define what constitutes AI and what does not.

Washing Machine – A fully automatic washing machine can function on its own, but
choosing the washing parameters and making the necessary preparations before each wash
require human participation, making the machine an example of automation rather than AI.

Air Conditioner – An air conditioner can be turned on and off remotely over the internet,
but it still needs a human to set it up and instruct it, so it too is an example of automation
rather than AI.

AI Project Cycle

The AI Project Cycle is a step-by-step process that a company must follow in order to derive
value from an AI project and to solve the problem.
The AI Project Cycle offers us a suitable framework that can guide us in the right direction.
The AI project cycle consists of five main stages:

1) Problem Scoping
2) Data Acquisition
3) Data Exploration
4) Modelling
5) Evaluation

Starting with problem scoping, you define the objective of your AI project by identifying the
issue that you hope to address. When problem scoping, we consider several factors that
have an impact on the issue we’re trying to address to make the situation more evident.

● You need to acquire data which will become the base of your project, as it will help you
understand the parameters related to the problem you scoped.

● You go for data acquisition by collecting data from various reliable and authentic sources.
Since the data you collect would be in large quantities, you can try to represent it visually
through different types of representations like graphs, databases, flow charts, maps, etc. This
makes it easier for you to interpret the patterns which your acquired data follows.

● After exploring the patterns, you can decide upon the type of model you would build to
achieve the goal. For this, you can research online and select various models which give a
suitable output.

● You can test the selected models and figure out which is the most efficient one.

● The most efficient model is now the base of your AI project and you can develop your
algorithm around it.

● Once the modelling is complete, you now need to test your model on some newly fetched
data. The results will help you in evaluating your model and improving it.

● Finally, after evaluation, the project cycle is now complete and what you get is your AI
project.

Problem Scoping

It is a fact that we are surrounded by problems. They could be small or big, sometimes
ignored or sometimes even critical. Many times, we become so used to a problem that it
becomes a part of our life. Identifying such a problem and having a vision to solve it, is what
Problem Scoping is about.

The Sustainable Development Goals are a set of 17 objectives that the United Nations has
stated. These objectives are meant to be accomplished by the year 2030. Every UN member
state has made a commitment to doing this.

These sustainable development objectives line up with the issues that we might also see in
our immediate environment. Such issues should be sought out and addressed because
doing so would improve the lives of many people and advance the objectives of our nation.

4Ws Problem Canvas

A problem can be difficult to scope since we need to have a deeper understanding of it in
order to have a clearer picture when trying to solve it. Hence, we use the 4Ws Problem
Canvas to help us out.

The 4Ws of Problem Scoping are Who, What, Where, and Why. These four questions help us
identify and understand the problem in a better manner.
a. Who – The “Who” element helps us to understand and categorize who is directly and
indirectly affected by the problem; these people are known as stakeholders.

b. What – The “What” section aids us in analyzing and recognizing the nature of the
problem, and you may also gather evidence to establish that the problem you’ve chosen
exists under this block.

c. Where – Helps us look at the context of the problem: where does the problem arise, and what is the situation when it occurs?

d. Why – Refers to why we need to address the problem and what the advantages will be
for the stakeholders once the problem is solved.

The Problem Statement Template summarises all four Ws in one place, with space to fill in details according to your goal.
Data Acquisition

The method of collecting accurate and trustworthy data to work with is referred to as data
acquisition. Data can be acquired from a variety of sources, including websites, journals,
newspapers, and other media, such as text, video, photographs, and audio.

Dataset

A dataset is a collection of data in tabular format. It contains numbers or values that relate
to a specific subject. For example, the test scores of all students in a class form a dataset.

The dataset is divided into two parts:

a. Training dataset – The training dataset is the large portion of data used to teach a machine
learning model. Machine learning algorithms learn to make judgments or perform a task
from the training data. The major part of the dataset is used for training (usually 80%).

b. Test dataset – Data that has been set aside specifically for testing, usually of the trained
model, is known as test data. Usually the remaining 20% of the data is used as test data.
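To make the split concrete, here is a minimal sketch using scikit-learn's train_test_split; the tiny dataset of study hours and test scores is invented purely for illustration.

```python
# Minimal sketch of the usual 80/20 train/test split using
# scikit-learn. The study-hours/score data is invented.
from sklearn.model_selection import train_test_split

X = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]  # hours studied
y = [35, 40, 50, 55, 60, 68, 72, 80, 88, 95]             # test scores

# 80% of the rows go to training, 20% to testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print(len(X_train), "training samples,", len(X_test), "test samples")
# -> 8 training samples, 2 test samples
```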

Data Features

Recheck your problem statement and try to identify the data features needed to solve the
challenge. Data features refer to the type of data you want to collect. For a salary-prediction
problem, for example, the salary amount, increment percentage, increment period, bonus, etc. are data features.

There can be various ways in which you can collect data, such as surveys, web scraping, sensors, cameras, observations, and APIs.
Data Exploration

You must have noticed while collecting data that it is a complicated thing: it is full of
numbers, and in order to make sense of it, one needs to identify certain patterns in it.

For example, if you go to the library and choose a random book, you rapidly go
through its contents by turning pages and reading the description before you decide to
borrow it. This helps you determine whether or not the book is suitable for your
requirements and interests; this quick scan is data exploration.

To analyze the data, you need to visualize it in some user-friendly format so that you
can:
● Quickly get a sense of the trends, relationships and patterns contained within the data.
● Define strategy for which model to use at a later stage.
● Communicate the same to others effectively.

To visualize data, we can use various types of visual representations.

Modelling

An AI model is a program that has been trained on a set of data to recognize patterns. AI
modeling is the process of creating such algorithms, also known as models, that can be
trained to produce intelligent output. In other words, it is the process of writing program
code that makes a machine behave intelligently.

Generally, AI models can be classified as follows:

Rule Based Approach

AI modelling in which the developer sets the rules. The machine performs its task in
accordance with the rules or instructions specified by the developer.
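As a sketch of the idea, the toy classifier below follows hand-written rules only; the temperature thresholds are made up, and the program never learns from data.

```python
# Toy rule-based "model": every rule is hard-coded by the developer.
# The thresholds are invented for illustration; nothing is learned.
def classify_temperature(celsius: float) -> str:
    if celsius < 10:
        return "cold"
    elif celsius < 25:
        return "pleasant"
    else:
        return "hot"

print(classify_temperature(7))   # cold
print(classify_temperature(30))  # hot
```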

Learning Based Approach

AI modelling in which the machine learns on its own. Under the learning-based approach,
the AI model is trained on the data provided to it, after which it is able to adapt to changes
in the data.
The learning-based approach can further be divided into three parts:

1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning

Supervised Learning – In a supervised learning model, the dataset fed to the machine is
labelled. In other words, the dataset is known to the person training the machine; only then
is he/she able to label the data. Supervised learning can further be divided into two
categories: Classification, which works on discrete labelled data, and Regression.

Regression – Such models work on continuous data. For example, if you wish to predict
your next salary, then you would put in the data of your previous salary, any increments,
etc., and would train the model. Here, the data which has been fed to the machine is
continuous.
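A minimal sketch of this salary example as supervised regression, assuming scikit-learn is available; the salary figures are invented for illustration.

```python
# Supervised regression sketch: labelled, continuous data
# (years of experience -> salary). The figures are invented.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4], [5]]             # years of experience
y = [30000, 35000, 41000, 46000, 52000]   # salary labels

model = LinearRegression()
model.fit(X, y)  # learn the trend from the labelled examples

# Predict the salary for 6 years of experience.
print(round(model.predict([[6]])[0]))
```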

Unsupervised Learning – An unsupervised learning model works on an unlabelled dataset.
This means that the data fed to the machine is random, and there is a possibility that the
person training the model does not have any information regarding it.

Unsupervised learning models can be further divided into two categories:

Clustering – Refers to an unsupervised learning algorithm that can cluster unknown data
according to the patterns or trends identified in it. The patterns observed might be ones
already known to the developer, or the algorithm might even come up with some unique
patterns of its own.
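A minimal clustering sketch, assuming scikit-learn; the six unlabelled 2-D points are invented so that they form two loose groups.

```python
# Unsupervised clustering sketch with k-means: no labels are given,
# and the algorithm groups similar points on its own. Toy data.
from sklearn.cluster import KMeans

X = [[1, 2], [1, 4], [2, 3],     # one loose group of points
     [8, 8], [9, 10], [8, 9]]    # another loose group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [1 1 1 0 0 0] - two discovered clusters
```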

Dimensionality Reduction – We humans are able to visualise only up to three dimensions, but
according to many theories and algorithms, various entities exist beyond three dimensions.
For example, in Natural Language Processing, documents are often represented using
hundreds or thousands of features (dimensions); dimensionality reduction algorithms
compress such data down to fewer dimensions so that it can be processed and visualised.

Evaluation

Once a model has been created and trained, it must undergo appropriate testing in order to
determine the model’s effectiveness and performance. Thus, the model is evaluated using
Testing Data (which was set aside from the dataset generated at the Data Acquisition stage),
and its effectiveness is determined using parameters such as Accuracy, Precision, Recall,
and F1 Score.
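For reference, these parameters are conventionally defined in terms of true/false positives and negatives (TP, FP, TN, FN) counted on the test data:

```latex
\mathrm{Accuracy}  = \frac{TP + TN}{TP + TN + FP + FN} \qquad
\mathrm{Precision} = \frac{TP}{TP + FP} \qquad
\mathrm{Recall}    = \frac{TP}{TP + FN} \qquad
F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```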
Neural Networks

Neural networks are based in part on how neurons function in the human brain. The main
benefit of neural networks is their ability to automatically extract data features without the
assistance of a programmer.

Natural Language Processing

NLP (Natural Language Processing) is dedicated to making it possible for computers to
comprehend and process human languages. It is a subfield of linguistics, computer science,
information engineering, and artificial intelligence that studies how computers interact with
human (natural) languages, particularly how to train computers to handle and analyze
massive volumes of natural language data.

Applications of Natural Language Processing


Most people utilize NLP apps on a regular basis in their daily lives. Following are a few
examples of real-world uses for natural language processing:

Automatic Summarization – Automatic summarization is useful for gathering data from
social media and other online sources, as well as for summarizing the meaning of
documents and other written materials.

Sentiment Analysis – To better comprehend what internet users are saying about a
company’s goods and services, businesses use natural language processing techniques like
sentiment analysis to understand customer requirements.

Indicators of their reputation – Sentiment analysis goes beyond establishing simple
polarity to analyse sentiment in context, helping understand what is behind an expressed
view. This is very important for understanding and influencing purchasing decisions.
Text classification – Text classification enables you to classify a document and organise it
to make it easier to find the information you need or to carry out certain tasks. Spam
screening in email is one example of how text categorization is used.

Virtual Assistants – These days, digital assistants like Google Assistant, Cortana, Siri, and
Alexa play a significant role in our lives. Not only can we communicate with them, but they
can also facilitate our life.

Chatbots
A chatbot is one of the most widely used NLP applications. Many chatbots on the market
employ similar techniques. Let’s test out a few of the chatbots to see how they function.

• Mitsuku Bot*
https://www.pandorabots.com/mitsuku/

• CleverBot*
https://www.cleverbot.com/

• Jabberwacky*
http://www.jabberwacky.com/

• Haptik*
https://haptik.ai/contact-us

• Rose*
http://ec2-54-215-197-164.us-west-1.compute.amazonaws.com/speech.php

• Ochatbot*
https://www.ometrics.com/blog/list-of-fun-chatbots/

There are two types of chatbots:

1. Script-bot
2. Smart-bot

Script-bot | Smart-bot
Script bots are easy to make. | Smart-bots are flexible and powerful.
Script bots work around a script which is programmed into them. | Smart-bots work on bigger databases and other resources directly.
Script bots are mostly free and are easy to integrate into a messaging platform. | Smart-bots learn with more data.
No or little language processing skills. | Coding is required to take this up on board.
Limited functionality. | Wide functionality.

Human Language VS Computer Language


Humans need language to communicate, and we process it constantly. Our brain
continuously processes the sounds it hears around us and works to make sense of them.
Even as the teacher is delivering the lesson in the classroom, our brain continuously
processes and stores everything.

Computers, on the other hand, understand computer language. All input must be
converted to numbers before being sent to the machine, and if a single mistake is made
while typing, the machine throws an error and does not process that part. Machine
communication is extremely simple and elementary.
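As a tiny illustration of text becoming numbers, every character a computer stores is really a numeric code; the snippet below shows the Unicode code points behind a short string.

```python
# Text reaches a machine as numbers: each character has a numeric
# Unicode code point, and encoding yields raw bytes.
message = "AI"
print([ord(ch) for ch in message])    # [65, 73] - code points
print(list(message.encode("utf-8")))  # [65, 73] - UTF-8 bytes
```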

Data Processing
Data Processing is a method of manipulating data: it is the conversion of raw data into
meaningful, machine-readable information.

Since human languages are complex, we first of all need to simplify them in order to make
understanding possible. Text Normalisation helps clean up textual data in such a way that
its complexity is brought down to a level lower than that of the actual data. Let us go
through Text Normalisation in detail.

Text Normalisation

The process of converting a text into a canonical (standard) form is known as text
normalisation. For instance, the canonical form of the word “good” can be created from the
words “gooood” and “gud.” Another illustration is the reduction of terms that are nearly
identical, such as “stopwords,” “stop-words,” and “stop words,” to just “stopwords.”

Sentence Segmentation

Under sentence segmentation, the whole corpus is divided into sentences. Each sentence is
then treated as a separate piece of data, so the whole corpus is reduced to a list of sentences.

Tokenisation

After the corpus has been broken into sentences, each sentence is further divided into
tokens. A token is any word, number, or special character occurring in a sentence.
Tokenisation treats each word, number, and special character as a separate entity and
creates a token for each of them.
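A plain-Python sketch of both steps, sentence segmentation followed by tokenisation; real NLP libraries such as NLTK or spaCy handle far more edge cases, and the sample corpus here is invented.

```python
# Sentence segmentation + tokenisation sketch in plain Python.
import re

corpus = "AI is fun. Machines learn from data! Do you agree?"

# Sentence segmentation: split the corpus after ., ! or ?
sentences = re.split(r"(?<=[.!?])\s+", corpus)
print(sentences)
# ['AI is fun.', 'Machines learn from data!', 'Do you agree?']

# Tokenisation: split each sentence into word/punctuation tokens.
tokens = [re.findall(r"\w+|[^\w\s]", s) for s in sentences]
print(tokens[0])  # ['AI', 'is', 'fun', '.']
```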

Removing Stopwords, Special Characters and Numbers

In this step, the tokens which are not necessary are removed from the token list. What can
be the possible words which we might not require?

Stopwords are words that occur frequently in a corpus but add little useful meaning.
Humans use grammar to make their sentences clear and understandable to the other
person. However, grammatical terms fall under the category of stopwords because they do
not add any significance to the information that is to be communicated through the
statement. Stopwords include a, an, and, or, for, it, is, etc.
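A minimal sketch of stopword removal; the short stopword list below is hand-picked for illustration, whereas real lists (such as NLTK's) contain a few hundred entries.

```python
# Stopword removal sketch: drop tokens that carry little meaning.
stopwords = {"a", "an", "and", "or", "for", "it", "is", "the", "to"}

tokens = ["the", "weather", "is", "pleasant", "and", "sunny", "today"]
filtered = [t for t in tokens if t not in stopwords]
print(filtered)  # ['weather', 'pleasant', 'sunny', 'today']
```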

Converting text to a common case

After eliminating the stopwords, we change the text’s case throughout, preferably to lower
case. This makes sure that the machine’s case-sensitivity does not treat similar terms
differently solely because of varied case usage.

Stemming
The remaining words are boiled down to their root words in this step. In other words,
stemming is the process of stripping words of their affixes to reduce them to a base form;
the stemmed word may or may not be a meaningful word.

Lemmatization

Stemming and lemmatization are alternative techniques to one another, as they both
function to remove affixes. However, lemmatization differs in that the word resulting from
the removal of the affix (known as the lemma) is always meaningful.

(Image Source – CBSE, cbseacademic.nic.in)
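To see the contrast in practice, here is a small sketch using NLTK (assuming the package is installed and the WordNet data can be downloaded); the three sample words are chosen for illustration.

```python
# Stemming vs lemmatisation with NLTK. Stems may not be real words
# ("studies" -> "studi"), while lemmas are always meaningful.
import nltk
nltk.download("wordnet", quiet=True)  # data needed by the lemmatizer

from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["studies", "caring", "better"]:
    print(word, "->", stemmer.stem(word), "|",
          lemmatizer.lemmatize(word, pos="v"))
```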
Bag of Words

A bag-of-words is a representation of text that describes the occurrence of words within a
document. It involves two things: a vocabulary of known words, and a measure of the
presence of those known words.

Bag of Words is a Natural Language Processing model that helps extract features from text
in a form that machine learning techniques can use. From the bag of words, we gather the
occurrences of each word and create the corpus’s vocabulary.
Here is the step-by-step approach to implementing the bag of words algorithm:

1. Text Normalisation: collect data and pre-process it.
2. Create Dictionary: make a list of all the unique words occurring in the corpus (the vocabulary).
3. Create document vectors: for each document in the corpus, count how many times each word from the unique list has occurred.
4. Repeat step 3 to create document vectors for all the documents.
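Following those four steps, here is a minimal plain-Python sketch; the two tiny documents are invented for illustration.

```python
# Bag-of-words sketch following the four steps above.
docs = ["Aman and Anil are stressed",
        "Aman went to a therapist"]

# Step 1: normalise (here just lowercasing and splitting on spaces).
tokenised = [d.lower().split() for d in docs]

# Step 2: build the dictionary of unique words (the vocabulary).
vocab = sorted({w for doc in tokenised for w in doc})

# Steps 3-4: one vector per document, counting each vocabulary word.
vectors = [[doc.count(w) for w in vocab] for doc in tokenised]

print(vocab)
for v in vectors:
    print(v)
```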

Term Frequency

Term frequency is the measurement of how often a term occurs within a document. The
simplest calculation is to count the occurrences of each word. However, this value can be
adjusted based on the length of the document or the frequency of the most frequent term.

Inverse Document Frequency

A term’s inverse document frequency measures how common or rare the term is across the
corpus of documents. It is calculated by dividing the total number of documents in the
corpus by the number of documents that contain the term.
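Putting the two ideas together gives the TF-IDF score of a word in a document. The sketch below applies a logarithm to the document ratio, which is how the combined score is commonly computed (formulations vary); the three-document corpus is invented.

```python
# TF-IDF sketch: term frequency times inverse document frequency.
import math

docs = [["aman", "and", "anil", "are", "stressed"],
        ["aman", "went", "to", "a", "therapist"],
        ["anil", "went", "to", "download", "a", "health", "chatbot"]]

def tf(word, doc):
    return doc.count(word)  # occurrences of the word in one document

def idf(word, docs):
    n_containing = sum(word in d for d in docs)
    return math.log10(len(docs) / n_containing)

word = "aman"
for d in docs:
    print(d, "->", round(tf(word, d) * idf(word, docs), 3))
# "aman" appears in 2 of 3 documents, so idf = log10(3/2) ≈ 0.176
```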
