AI Unit 5 Notes
Unit No. 5
Natural Language Processing & Expert Systems, Fuzzy Logic: Planning: Overview, Components of a
planning system, Goal stack planning, Hierarchical planning and other planning techniques.
Natural Language Processing: Introduction, Syntactic processing, Semantic analysis, Discourse &
pragmatic processing. Learning: Forms of learning, Inductive learning, Learning decision trees,
explanation based learning, Learning using relevant information, Neural net learning & genetic learning.
Expert Systems: Representing and using domain knowledge, Expert system shells and knowledge
acquisition, Fuzzy sets & fuzzy logics.
Planning in AI
Introduction
Artificial intelligence (AI) has fundamentally changed several industries and has become a part of daily
life. Planning, which entails putting together a series of actions to accomplish a particular objective, is
one of the fundamental elements of AI. In this Artificial Intelligence tutorial, we will explore planning in
AI, involving applying algorithms and approaches to create the best plans possible, which can improve
the efficacy and efficiency of AI systems.
AI planning comes in different types, each suited to a particular situation. Popular types of
planning in AI include:
Classical Planning: In this style of planning, a series of actions is created to accomplish a goal in
a predetermined setting. It assumes that everything is static and predictable.
Hierarchical planning: By dividing large problems into smaller ones, hierarchical planning makes
planning more effective. A hierarchy of plans must be established, with higher-level plans
supervising the execution of lower-level plans.
Temporal Planning: This form of planning considers time constraints and interdependencies
between actions. It ensures that the plan is workable within a certain time limit by taking into
account the duration of tasks.
Components of a Planning System
Representation: The representation component describes how the planning problem is
modelled. The state space, actions, goals, and constraints must all be defined.
Search: The search component explores the state space to find a sequence of actions that
reaches the goal. A variety of search techniques, including depth-first search and A* search,
can be used to find good plans (an A* sketch follows this list).
Heuristics: Heuristics are used to guide the search and estimate the cost or benefit of
particular actions. They help identify promising paths and improve the efficiency of the
planning process.
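To make the search component concrete, below is a minimal Python sketch of A* search over a small state space. The toy graph, its edge costs, and the heuristic values are illustrative assumptions and are not taken from these notes.

import heapq

# Illustrative state space: each state lists (successor, step cost).
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}            # heuristic estimate of cost to the goal

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]   # entries are (f = g + h, g, state, path)
    visited = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in visited:
            continue
        visited.add(state)
        for nxt, cost in graph[state]:
            heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("S", "G"))    # (['S', 'A', 'B', 'G'], 4)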
Goal Stack Planning
Introduction :
Planning is the process of determining a sequence of actions that leads to a solution.
Goal Stack Planning (GSP for short) is one of the simplest planning algorithms designed to
handle problems with compound goals. It uses STRIPS as a formal language for specifying and
manipulating the world it works with.
This approach uses a stack for plan generation. The stack can contain sub-goals and actions
described using predicates. The sub-goals can then be solved one by one (although, as the
example below shows, their order can matter).
Algorithm:
Push the Goal state onto the Stack
Push the individual Predicates of the Goal State onto the Stack
Loop till the Stack is empty
  Pop an element E from the Stack
  IF E is a Predicate
    IF E is True then
      Do Nothing
    ELSE
      Push the relevant action that achieves E onto the Stack
      Push the individual predicates of the Precondition of that action onto the Stack
  ELSE IF E is an Action
    Apply the action to the current world and add it to the Plan
Explanation:
The Goal Stack Planning algorithm works with a stack. It starts by pushing the unsatisfied
goals onto the stack, then pushes the individual subgoals onto the stack, and then pops an
element off the stack. The popped element can be either a predicate describing a situation
in our world or an action that can be applied to the world under consideration. So, based on
the kind of element popped off the stack, a decision has to be made. If it is a predicate, it
is compared with the description of the current world; if it is satisfied, or is already present
in the current situation, there is nothing to do because it is already true. On the contrary,
if the predicate is not true, then we have to select and push a relevant action that satisfies
the predicate onto the stack.
After pushing the relevant action onto the stack, its preconditions also have to be pushed
onto the stack. In order to apply an operation, its precondition has to be satisfied; in other
words, the present situation of the world should be suitable for applying the operation. For
that reason, the preconditions are pushed onto the stack immediately after an action is
pushed.
Example:
Consider a blocks-world example. The initial state is the current description of our world,
the goal state is what we have to achieve, and a set of block-manipulation actions (such as
STACK, UNSTACK and PICKUP) can be applied to the various situations in the problem.
First push the compound goal, and then the individual predicates of the goal, onto the stack.
Now pop an element off the stack.
The popped element is ON(B,D), which is a predicate, and it is not true in our current world.
So the next step is to push a relevant action that can achieve the subgoal ON(B,D) onto the
stack.
Now push the preconditions of the action STACK(B,D) onto the stack.
We have to make these preconditions true in order to apply the action. Note an interesting
point about the preconditions: in general they can be pushed onto the stack in any order, but
in certain situations we have to make an exception. Here HOLDING(B) is pushed first and
CLEAR(D) is pushed next, indicating that the HOLDING subgoal is to be achieved after CLEAR.
Because we are considering a blocks world with a single-arm robot, everything we do depends
on the robotic arm: if we achieve HOLDING(B) first, we would have to undo that subgoal in
order to achieve some other subgoal. So if a compound goal contains a HOLDING(x) subgoal,
achieve it last.
Pop the stack. Note that on popping we see ON(B,C), CLEAR(B) and ARMEMPTY, which are all
true in our current world, so we do nothing.
Now pop the stack again.
This time we get an action, UNSTACK(B,C), so we apply it to the current world and add it to
the plan list.
Plan = { UNSTACK(B,C) }
Pop another element. Now it is STACK(B,D), which is an action, so apply it to the current
state and add it to the PLAN.
Pop the stack again. The popped element is the predicate ON(C,A), which is not true in our
current world, so push the relevant action onto the stack.
STACK(C,A) is now pushed onto the stack, and then the individual preconditions of that action
are pushed.
Now pop the stack. We get CLEAR(A), which is true in our current world, so do nothing.
The next element popped is HOLDING(C), which is not true, so push the relevant action onto
the stack.
In order to achieve HOLDING(C) we push the action PICKUP(C) and its individual
preconditions onto the stack.
Pop the stack again. The preconditions of PICKUP(C), including CLEAR(C), are already
satisfied in the current situation, so nothing needs to be done for them. Popping further gives
the action PICKUP(C); apply it to the world and add it to the PLAN. The next pop gives
STACK(C,A), which is also an action; apply it and add it to the PLAN.
PLAN = { UNSTACK(B,C), STACK(B,D), PICKUP(C), STACK(C,A) }
Finally, when we pop the last element we get the compound goal, all of whose subgoals are
now true, and our PLAN contains all the actions necessary to achieve the goal.
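The walk-through above can be reproduced with a small Goal Stack Planning sketch in Python. The STRIPS-style operators follow the standard blocks-world definitions, and the initial state is an assumption reconstructed from the trace (B on C, with A, C and D on the table and clear). It is a simplified sketch, not a full planner: it only handles the ON and HOLDING subgoals that occur in this example.

def STACK(x, y):
    return {"name": f"STACK({x},{y})",
            "pre": [("HOLDING", x), ("CLEAR", y)],   # CLEAR ends up on top, achieved first
            "add": [("ON", x, y), ("ARMEMPTY",)],
            "del": [("HOLDING", x), ("CLEAR", y)]}

def UNSTACK(x, y):
    return {"name": f"UNSTACK({x},{y})",
            "pre": [("ON", x, y), ("CLEAR", x), ("ARMEMPTY",)],
            "add": [("HOLDING", x), ("CLEAR", y)],
            "del": [("ON", x, y), ("ARMEMPTY",)]}

def PICKUP(x):
    return {"name": f"PICKUP({x})",
            "pre": [("ONTABLE", x), ("CLEAR", x), ("ARMEMPTY",)],
            "add": [("HOLDING", x)],
            "del": [("ONTABLE", x), ("ARMEMPTY",)]}

def choose_action(goal, state):
    """Pick an action whose add list achieves `goal` (simplified: ON and HOLDING only)."""
    if goal[0] == "ON":                       # ON(x, y) is achieved by STACK(x, y)
        return STACK(goal[1], goal[2])
    if goal[0] == "HOLDING":                  # UNSTACK if x sits on a block, else PICKUP
        for fact in state:
            if fact[0] == "ON" and fact[1] == goal[1]:
                return UNSTACK(goal[1], fact[2])
        return PICKUP(goal[1])
    raise ValueError(f"no operator for {goal}")

def goal_stack_plan(state, goals):
    state, plan = set(state), []
    stack = [tuple(goals)] + list(reversed(goals))   # compound goal, then its subgoals
    while stack:
        e = stack.pop()
        if isinstance(e, dict):                      # an action: apply it and record it
            state = (state - set(e["del"])) | set(e["add"])
            plan.append(e["name"])
        elif e and isinstance(e[0], tuple):          # compound goal: re-check its subgoals
            pending = [g for g in e if g not in state]
            if pending:
                stack.append(e)
                stack.extend(reversed(pending))
        elif e in state:                             # a predicate that already holds
            continue
        else:                                        # unsatisfied predicate: push an action
            act = choose_action(e, state)            # and then its preconditions
            stack.append(act)
            stack.extend(act["pre"])
    return plan

initial = [("ON", "B", "C"), ("ONTABLE", "A"), ("ONTABLE", "C"), ("ONTABLE", "D"),
           ("CLEAR", "A"), ("CLEAR", "B"), ("CLEAR", "D"), ("ARMEMPTY",)]
goal = [("ON", "B", "D"), ("ON", "C", "A")]
print(goal_stack_plan(initial, goal))
# ['UNSTACK(B,C)', 'STACK(B,D)', 'PICKUP(C)', 'STACK(C,A)']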
Natural Language Processing
NLP is used in a wide variety of everyday products and services. Some of the most
common technologies that use NLP are voice-activated digital assistants on
smartphones, email-scanning programs used to identify spam, and translation apps
that decipher foreign languages.
NLP benefits
Whether it’s being used to quickly translate a text from one language to another or
producing business insights by running sentiment analysis on hundreds of reviews,
NLP provides both businesses and consumers with a variety of benefits.
Unsurprisingly, then, you can expect to see more of it in the coming years. Fortune Business
Insights projects the global market for NLP to grow from $29.71 billion in 2024 to $158.04
billion in 2032 [1].
These benefits include the ability to analyze both structured and unstructured data, such as
speech, text messages, and social media posts.
Natural Language Processing (NLP) is a field within artificial intelligence that allows
computers to comprehend, analyze, and interact with human language effectively. The
process of NLP can be divided into five distinct phases: Lexical Analysis, Syntactic
Analysis, Semantic Analysis, Discourse Integration, and Pragmatic Analysis. Each
phase plays a crucial role in the overall understanding and processing of natural
language.
Tokenization
The lexical phase in Natural Language Processing (NLP) involves scanning text and
breaking it down into smaller units such as paragraphs, sentences, and words. This
process, known as tokenization, converts raw text into manageable units called tokens
or lexemes. Tokenization is essential for understanding and processing text at the
word level.
In addition to tokenization, various data cleaning and feature extraction techniques are
applied, such as stop-word removal, stemming, and lemmatization.
These steps enhance the comprehensibility of the text, making it easier to analyze and
process.
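A minimal tokenization sketch using only the Python standard library is shown below; the sample text is invented for illustration.

import re

text = "NLP breaks raw text into tokens. Tokens are the units of lexical analysis."

sentences = re.split(r"(?<=[.!?])\s+", text)                  # sentence-level units
tokens = [re.findall(r"\w+", s.lower()) for s in sentences]   # word-level tokens, lower-cased

print(tokens)
# [['nlp', 'breaks', 'raw', 'text', 'into', 'tokens'],
#  ['tokens', 'are', 'the', 'units', 'of', 'lexical', 'analysis']]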
Morphological Analysis
Morphological analysis examines the structure of words by breaking them down into
morphemes, the smallest units of meaning.
Types of Morphemes
1. Free Morphemes: Text elements that carry meaning independently and make
sense on their own. For example, "bat" is a free morpheme.
2. Bound Morphemes: Elements that must be attached to free morphemes to
convey meaning, as they cannot stand alone. For instance, the suffix "-ing" is a
bound morpheme, needing to be attached to a free morpheme like "run" to form
"running."
By identifying and analyzing morphemes, the system can interpret text correctly at the
most fundamental level, laying the groundwork for more advanced NLP applications.
Syntactic analysis, also known as parsing, is the second phase of Natural Language
Processing (NLP). This phase is essential for understanding the structure of a sentence
and assessing its grammatical correctness. It involves analyzing the relationships
between words and ensuring their logical consistency by comparing their arrangement
against standard grammatical rules.
Role of Parsing
Parsing examines the grammatical structure and relationships within a given text. It
assigns Parts-Of-Speech (POS) tags to each word, categorizing them as nouns, verbs,
adverbs, etc. This tagging is crucial for understanding how words relate to each other
syntactically and helps in avoiding ambiguity. Ambiguity arises when a text can be
interpreted in multiple ways due to words having various meanings. For example, the
word "book" can be a noun (a physical book) or a verb (the action of booking
something), depending on the sentence context.
Examples of Syntax
For example, "John eats an apple" follows English grammar, whereas "Apple an eats John"
does not. Despite using the same words, only the first sentence is grammatically correct and
makes sense. The correct arrangement of words according to grammatical rules is what makes
the sentence meaningful.
During parsing, each word in the sentence is assigned a POS tag to indicate its grammatical
category, for example "John" (noun), "eats" (verb), "an" (determiner), "apple" (noun).
Assigning POS tags correctly is crucial for understanding the sentence structure and
ensuring accurate interpretation of the text.
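As a rough illustration of POS tagging, here is a short sketch using the NLTK library; it assumes NLTK is installed and that its tokenizer and tagger resources have already been downloaded via nltk.download().

import nltk

sentence = "John eats an apple."
tokens = nltk.word_tokenize(sentence)   # ['John', 'eats', 'an', 'apple', '.']
print(nltk.pos_tag(tokens))
# e.g. [('John', 'NNP'), ('eats', 'VBZ'), ('an', 'DT'), ('apple', 'NN'), ('.', '.')]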
By analyzing and ensuring proper syntax, NLP systems can better understand and
generate human language. This analysis helps in various applications, such as
machine translation, sentiment analysis, and information retrieval, by providing a
clear structure and reducing ambiguity.
Semantic Analysis is the third phase of Natural Language Processing (NLP), focusing
on extracting the meaning from text. Unlike syntactic analysis, which deals with
grammatical structure, semantic analysis is concerned with the literal and contextual
meaning of words, phrases, and sentences.
Semantic analysis aims to understand the dictionary definitions of words and their
usage in context. It determines whether the arrangement of words in a sentence makes
logical sense. This phase helps in finding context and logic by ensuring the semantic
coherence of sentences.
Discourse Integration is the fourth phase of Natural Language Processing (NLP). This
phase deals with comprehending the relationship between the current sentence and
earlier sentences or the larger context. Discourse integration is crucial for
contextualizing text and understanding the overall message conveyed.
Discourse integration examines how words, phrases, and sentences relate to each
other within a larger context. It assesses the impact a word or sentence has on the
structure of a text and how the combination of sentences affects the overall meaning.
This phase helps in understanding implicit references and the flow of information
across sentences.
Importance of Contextualization
Anaphora Resolution: "Taylor went to the store to buy some groceries. She
realized she forgot her wallet."
o In this example, the pronoun "she" refers back to "Taylor" in the first
sentence. Understanding that "Taylor" is the antecedent of "she" is
crucial for grasping the sentence's meaning.
Pragmatic Analysis is the fifth and final phase of Natural Language Processing (NLP),
focusing on interpreting the inferred meaning of a text beyond its literal content.
Human language is often complex and layered with underlying assumptions,
implications, and intentions that go beyond straightforward interpretation. This phase
aims to grasp these deeper meanings in communication.
Pragmatic analysis goes beyond the literal meanings examined in semantic analysis,
aiming to understand what the writer or speaker truly intends to convey. In natural
language, words and phrases can carry different meanings depending on context, tone,
and the situation in which they are used.
In human communication, people often do not say exactly what they mean. For
instance, the word "Hello" can have various interpretations depending on the tone and
context in which it is spoken. It could be a simple greeting, an expression of surprise,
or even a signal of anger. Thus, understanding the intended meaning behind words
and sentences is crucial.
Inductive Learning
Introduction
Inductive learning is a type of machine learning that aims to identify patterns in data
and generalize them to new situations. The inductive learning algorithm
(ILA) creates a model based on a set of training examples that are then used to predict
new examples. Inductive learning is often used in supervised learning, where the data
is labeled, meaning the correct answer is provided for each example. Based on these
labeled examples, the model is then trained to map inputs to outputs.
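As a small illustration of inductive learning, the sketch below induces a decision tree from a handful of labelled examples, assuming scikit-learn is available; the tiny dataset is invented purely for illustration.

from sklearn.tree import DecisionTreeClassifier

# Training examples: [feature_1, feature_2] -> label
X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [0, 0, 0, 1]               # the concept to be induced is "both features are 1"

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # induce a general rule from the labelled examples

print(model.predict([[1, 1], [0, 1]]))   # generalise to new inputs -> [1 0]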
Inductive learning algorithms can be used to analyze a wide range of financial data to
predict credit risk. Large databases of past borrower data can be used to train these
algorithms so they can discover trends and pinpoint the variables that most accurately
predict credit risk.
The potential for lenders to consider various criteria when making decisions
regarding credit is one of the main advantages of inductive learning for credit risk
assessment. Conventional credit risk assessment methods frequently rely on a small
collection of unreliable or biased variables, like income and credit score. On the
other hand, inductive learning algorithms can examine a considerably wider range of
elements, including social media activity, debt-to-income ratios, and employment
histories, to produce more precise and sophisticated credit risk evaluations.
Inductive learning algorithms for credit risk assessment may also be more flexible
and adaptable in addition to being more accurate. They can be updated with
fresh information as it becomes available, enabling lenders to continuously improve
their evaluations of credit risk based on the most recent data. This is especially
critical given the current economy's rapid economic change and emerging new
financial products and services.
Disease Diagnosis
One of the key benefits of inductive learning in disease diagnosis is the ability to
analyze large datasets of medical information, including patient histories, lab
results, and imaging data. Inductive learning algorithms can find patterns and trends
by examining these big datasets that could be challenging for medical professionals to
spot on their own.
For example, huge collections of medical images, such as X-rays or MRIs, can be analyzed
using inductive learning algorithms to assist in the diagnosis of illnesses such as cancer and
heart disease. Inductive learning algorithms can learn to recognize
patterns and anomalies that may be challenging for human doctors to spot. In the long
run, patients would gain since earlier and more accurate diagnoses might arise from
this.
We can use
inductive learning to examine patient data, including medical histories, symptoms,
and lab findings to diagnose various diseases. Inductive learning algorithms can learn
to recognize patterns and trends that are suggestive of particular diseases or disorders
by examining vast datasets of patient data. This can aid medical professionals in
developing more precise diagnoses and efficient treatment methods.
Face Recognition
We can train inductive learning algorithms on large datasets of facial images to
discover patterns and pinpoint distinctive traits for every person. Inductive learning
algorithms can properly identify people with a high degree of accuracy by analyzing
face traits such as the distance between the eyes, the curve of the nose, and the
curvature of the lips.
One of the key benefits of inductive learning for face recognition is that it can adapt
to new data and changing conditions. For example, in a security or surveillance
context, inductive learning algorithms can learn to recognize individuals
under various lighting and environmental conditions, including different angles,
lighting, and facial expressions. This makes them more reliable and effective than
traditional face recognition systems, which may struggle to accurately identify
individuals under changing conditions.
We can also use inductive learning algorithms to improve the accuracy of facial
recognition systems over time. By continually training on new data and incorporating
new features into the recognition process, inductive learning algorithms can improve
their accuracy and reduce the likelihood of false positives and false negatives.
Inductive learning algorithms can also be used to improve the performance and
reliability of self-driving vehicles over time. By continually analyzing data on
driving behavior and road conditions, inductive learning algorithms can identify
patterns and trends that can be used to improve the performance and safety of self-
driving vehicles.
Explanation-Based Learning
Overview
Explanation-based learning in artificial intelligence is a branch of machine learning
that focuses on creating algorithms that learn from previously solved problems. It is a
problem-solving method that is especially helpful when dealing with complicated,
multi-faceted issues that necessitate a thorough grasp of the underlying processes.
Introduction
Since its beginning, machine learning has come a long way. While early machine-
learning algorithms depended on statistical analysis to spot patterns and forecast
outcomes, contemporary machine-learning models are intended to learn from subject
experts' explanations. Explanation-based learning in artificial intelligence has proven
to be a potent tool in its development that can handle complicated issues more
efficiently.
The problem solver analyses these inputs (the training example together with the domain
knowledge) and provides its reasoning to the generalizer. The generalizer takes general
concepts from the knowledge base as input and compares them with the problem solver's
reasoning to produce a generalized answer to the given problem.
Genetic Algorithm
Genetic algorithms (GAs) are a class of search algorithms modelled on the process of natural
evolution. Genetic algorithms are based on the principle of survival of the fittest.
The development of ANNs is a subject that has been approached with many different
techniques, and the world of evolutionary algorithms is no exception: a large body of work has
been published on the various techniques in this area, including genetic algorithms and
genetic programming (GP). As a general rule, the field of ANN generation using evolutionary
algorithms is separated into three principal areas: evolution of weights, of architectures, and
of learning rules.
Weight evolution starts from an ANN with a previously determined topology. The problem to
be solved is the training of the connection weights, attempting to minimise the network error.
Using an evolutionary algorithm, the weights can be represented as strings of binary or real
values.
In the first option, direct encoding, there is a one-to-one correspondence between the genes
and their resulting phenotypes. The most typical encoding technique uses a matrix that
represents an architecture, where each element indicates the presence or absence of a
connection between two nodes.
In these encoding schemes, GP has been used to create both the architecture and the
connection weights at the same time, either for feed-forward or recurrent ANNs, with no
limitations on their architecture. This codification scheme also permits obtaining simple
networks with a minimum number of neurons and connections, and the published results are
promising. Apart from direct encoding, there are also indirect encoding techniques, in which
only a few characteristics of the architecture are encoded in the chromosome. These
techniques have various types of representation. First, parametric representations describe
the network as a group of parameters, for example the number of nodes in each layer, the
number of connections between two layers, the number of hidden layers, and so on. Another
indirect representation type depends on grammatical rules: the network is represented by a
set of production rules that generate a matrix representing the network. With respect to the
evolution of the learning rule, there are various approaches; however, most of them are based
on how learning can alter or guide the evolution, and on the relationship between the
architecture and the connection weights.
A genetic algorithm proceeds through the following steps:
Start:
It generates a random population of n chromosomes (candidate solutions for the problem).
Fitness:
It evaluates the fitness f(x) of each chromosome x in the population.
New Population:
It generates a new population by repeating the following steps until the New
population is finished.
Selection:
It chooses two parent chromosomes from the population according to their fitness: the better
the fitness, the higher the probability of being selected.
Crossover:
With a crossover probability, it crosses over the parents to form new offspring (children). If
no crossover is performed, the offspring are exact copies of the parents.
Mutation:
With a mutation probability, it mutates the new offspring at each position in the chromosome.
Accepting:
It places the new offspring into the new population.
Replace:
It uses the newly generated population for a further run of the algorithm.
Test:
If the end condition is satisfied, then it stops and returns the best solution in the
current population.
Loop:
In this step, you need to go to the second step for fitness evaluation.
The basic principle behind genetic algorithms is that they generate and maintain a population
of individuals represented by chromosomes. A chromosome is a character string practically
equivalent to the chromosomes appearing in DNA, and these chromosomes are usually
encoded solutions to a problem. The population undergoes a process of evolution according
to rules of selection, reproduction, and mutation. Each individual in the environment
(represented by a chromosome) receives a measure of its fitness in that environment.
Reproduction selects individuals with high fitness values in the population, and through
crossover and mutation of such individuals a new population is produced in which the
individuals may be even better suited to their environment. The process of crossover involves
two chromosomes swapping chunks of data and is analogous to the process of reproduction.
Mutation introduces slight changes into a small portion of the population and is
representative of an evolutionary step.
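The steps above can be put together into a minimal genetic algorithm sketch in Python. The chromosome length, population size, rates, and the toy fitness function (counting 1-bits) are illustrative assumptions.

import random

CHROM_LEN, POP_SIZE, GENERATIONS = 20, 30, 50
CROSSOVER_RATE, MUTATION_RATE = 0.9, 0.02

def fitness(chrom):
    return sum(chrom)                         # toy objective: maximise the number of 1 bits

def select(pop):
    # Tournament selection: the fitter of two random individuals is chosen.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    if random.random() < CROSSOVER_RATE:      # single-point crossover
        point = random.randint(1, CHROM_LEN - 1)
        return p1[:point] + p2[point:]
    return p1[:]                              # otherwise copy a parent unchanged

def mutate(chrom):
    return [1 - g if random.random() < MUTATION_RATE else g for g in chrom]

# Start: random population of bit-string chromosomes.
population = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Selection, crossover, mutation; the new population replaces the old one.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(best, fitness(best))                    # usually converges towards all 1s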
Difference between traditional and genetic approach: a traditional search method works on a
single candidate solution and improves it deterministically, often using derivative or
rule-based information, whereas a genetic algorithm works on a whole population of candidate
solutions and improves them probabilistically using fitness-based selection, crossover, and
mutation.
Expert Systems
An expert system is a part of AI; the first ES was developed in the year 1970 and was one of
the first successful applications of artificial intelligence. It solves the most complex issues as
an expert would, by drawing on the knowledge stored in its knowledge base. The system helps in
decision making for complex problems using both facts and heuristics, like a human expert. It
is called an expert system because it contains the expert knowledge of a specific domain and
can solve any complex problem of that particular domain. These systems are designed for a
specific domain, such as medicine, science, etc.
The performance of an expert system is based on the expert's knowledge stored in its
knowledge base. The more knowledge stored in the KB, the more that system improves its
performance. One of the common examples of an ES is a suggestion of spelling errors
while typing in the Google search box.
An expert system works through the interaction of its main components: the user interface,
the inference engine, and the knowledge base, each of which is described below.
Note: It is important to remember that an expert system is not used to replace
the human experts; instead, it is used to assist the human in making a complex
decision. These systems do not have human capabilities of thinking and work on
the basis of the knowledge base of the particular domain.
Below are some popular examples of the Expert System:
o DENDRAL: It was an artificial intelligence project that was made as a chemical analysis
expert system. It was used in organic chemistry to detect unknown organic molecules
with the help of their mass spectra and knowledge base of chemistry.
o MYCIN: It was one of the earliest backward chaining expert systems that was designed
to find the bacteria causing infections like bacteraemia and meningitis. It was also used
for the recommendation of antibiotics and the diagnosis of blood clotting diseases.
o PXDES: It is an expert system used to determine the type and level of lung cancer. To
determine the disease, it takes an image of the upper body that looks like a shadow; this
shadow is used to identify the type and degree of harm.
o CaDeT: The CaDet expert system is a diagnostic support system that can detect cancer
at early stages.
Characteristics of Expert System
o High Performance: The expert system provides high performance for solving any type
of complex problem of a specific domain with high efficiency and accuracy.
o Understandable: It responds in a way that can be easily understandable by the user. It
can take input in human language and provides the output in the same way.
o Reliable: It is highly reliable for generating efficient and accurate output.
o Highly responsive: ES provides the result for any complex query within a very short
period of time.
1. User Interface
With the help of a user interface, the expert system interacts with the user, takes queries as
an input in a readable format, and passes it to the inference engine. After getting the
response from the inference engine, it displays the output to the user. In other words, it is
an interface that helps a non-expert user to communicate with the expert system to
find a solution.
2. Inference Engine
The inference engine applies the rules and facts stored in the knowledge base to the user's
query in order to derive a conclusion; it is the reasoning component of the expert system.
3. Knowledge Base
o The knowledge base is a type of storage that stores knowledge acquired from the
different experts of the particular domain. It is considered the big storage of knowledge.
The larger the knowledge base, the more precise the expert system will be.
o It is similar to a database that contains information and rules of a particular domain or
subject.
o One can also view the knowledge base as a collection of objects and their attributes.
For example, a lion is an object, and its attributes are that it is a mammal, it is not a
domestic animal, and so on.
Components of an Expert System Shell
■ Knowledge Base
A store of factual and heuristic knowledge. An expert system tool provides one or more
knowledge representation schemes for expressing knowledge about the application
domain. Some tools use both frames (objects) and IF-THEN rules. In PROLOG the
knowledge is represented as logical statements.
■ Reasoning Engine
Contains the inference mechanisms for manipulating the knowledge in the knowledge base to
form a line of reasoning for solving a problem.
■ Explanation Subsystem
A subsystem that explains the system's actions. The explanation can range from how the
final or intermediate solutions were arrived at to justifying the need for additional data.
■ User Interface
A means of communication with the user. The user interface is generally not considered
part of the expert system technology itself, and it was not given much attention in the past.
However, the user interface can make a critical difference in the perceived utility of an
expert system.
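As a rough sketch of how a knowledge base of IF-THEN rules and a reasoning engine fit together, here is a tiny forward-chaining example in Python; the rules and facts are invented purely for illustration and are not from any real expert system.

# Knowledge base: each rule is (set of condition facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions hold until no new facts can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)        # the rule fires and adds its conclusion
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}))
# derives 'flu_suspected' and then 'refer_to_doctor' from the initial facts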
What is Fuzzy Logic?
The 'Fuzzy' word means the things that are not clear or are vague. Sometimes, we
cannot decide in real life that the given problem or statement is either true or false. At
that time, this concept provides many values between the true and false and gives the
flexibility to find the best solution to that problem.
Fuzzy logic admits multiple logical values, which are truth values of a variable or problem
between 0 and 1. The concept was introduced by Lotfi Zadeh in 1965, based on fuzzy set
theory. It provides a range of possibilities that ordinary two-valued computer logic does not
offer, similar to the range of possibilities humans reason with.
In the Boolean system, only two possibilities (0 and 1) exist, where 1 denotes the
absolute truth value and 0 denotes the absolute false value. But in the fuzzy system,
there are multiple possibilities present between the 0 and 1, which are partially false
and partially true.
Following are the important characteristics of fuzzy logic:
1. The concept is flexible, and we can easily understand and implement it.
2. It helps in minimizing the logic created by humans.
3. It is the best method for finding solutions to problems that are suited to approximate
or uncertain reasoning.
4. It offers a whole range of truth values between completely false and completely true,
rather than only two possible values.
5. It allows users to build or create non-linear functions of arbitrary complexity.
6. In fuzzy logic, everything is a matter of degree.
7. In fuzzy logic, any logical system can be easily fuzzified.
8. It is modelled on natural language.
9. It is also used by quantitative analysts to improve the execution of their algorithms.
10. It can easily be integrated with programming.
Architecture of a Fuzzy Logic System
The architecture of a fuzzy logic system contains four main parts:
1. Rule Base
2. Fuzzification
3. Inference Engine
4. Defuzzification
1. Rule Base
The rule base is the component that stores the set of rules and the IF-THEN conditions
provided by experts for controlling the decision-making system. Many recent developments
in fuzzy theory offer effective methods for designing and tuning fuzzy controllers, and these
developments reduce the number of fuzzy rules required.
2. Fuzzification
Fuzzification is the module that converts crisp (exact) inputs, such as sensor measurements,
into fuzzy sets so that they can be processed by the rest of the system.
3. Inference Engine
The inference engine is the main component of any fuzzy logic system (FLS), because all
the information is processed here. It determines the degree to which the current fuzzy input
matches each rule and, based on this matching degree, decides which rules are to be fired for
the given input. The fired rules are then combined to develop the control actions.
4. Defuzzification
Defuzzification is the module that takes the fuzzy sets produced by the inference engine and
transforms them into a crisp value. It is the last step in the process of a fuzzy logic system.
The crisp value is the kind of value that is usable by the end application. Various techniques
exist for this step, and the user has to select the one that best reduces error.
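The four components can be seen working together in the minimal sketch below: a crisp temperature is fuzzified, two IF-THEN rules are evaluated, and a crisp fan speed is produced by defuzzification. The membership functions, rules, and output values are illustrative assumptions, not part of the notes.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp):
    # 1. Fuzzification: degree to which the crisp temperature is "cold" or "hot".
    cold = tri(temp, 0, 10, 25)
    hot  = tri(temp, 15, 30, 45)

    # 2. Rule base / inference:
    #    IF temperature is cold THEN fan is slow (speed 20)
    #    IF temperature is hot  THEN fan is fast (speed 90)
    rule_strengths = {20: cold, 90: hot}

    # 3. Defuzzification: weighted average of the rule outputs.
    total = sum(rule_strengths.values())
    if total == 0:
        return 0.0
    return sum(speed * mu for speed, mu in rule_strengths.items()) / total

print(fan_speed(22))   # a temperature that is partly cold and partly hot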
Fuzzy Set
Fuzzy logic is based on fuzzy set theory, which is a generalisation of the classical theory of
sets (i.e., crisp sets) introduced by Zadeh in 1965; classical set theory can be regarded as a
special case (subset) of fuzzy set theory.
A fuzzy set is a collection of elements whose membership values lie between 0 and 1. Fuzzy
sets are denoted or represented by the tilde (~) character. Fuzzy set theory was introduced
in 1965 by Lotfi A. Zadeh and Dieter Klaua. In a fuzzy set, partial membership is allowed, and
the theory was released as an extension of classical set theory. It is defined mathematically
as follows: a fuzzy set à is a pair (U, M), where U is the universe of discourse and M is the
membership function, which takes values in the interval [0, 1]. The universe of discourse U is
also denoted by Ω or X.
Given à and B are the two fuzzy sets, and X be the universe of discourse with the
following respective member functions:
The operations of Fuzzy set are as follows:
Example:
then,
For X1
For X2
μA∪B(X2) = max (μA(X2), μB(X2))
μA∪B(X2) = max (0.2, 0.8)
μA∪B(X2) = 0.8
For X3
For X4
Intersection
The membership of each element in the intersection is the minimum of its memberships in the two sets:
μA∩B(x) = min (μA(x), μB(x))
Example:
A∩B = {(X1, 0.3), (X2, 0.2), (X3, 0.4), (X4, 0.1)}
Complement
The membership of each element in the complement is one minus its membership in the set:
μĀ(x) = 1 - μA(x)
Example: let A = {(X1, 0.3), (X2, 0.8), (X3, 0.5), (X4, 0.1)}; then,
For X1
μĀ(X1) = 1-μA(X1)
μĀ(X1) = 1 - 0.3
μĀ(X1) = 0.7
For X2
μĀ(X2) = 1-μA(X2)
μĀ(X2) = 1 - 0.8
μĀ(X2) = 0.2
For X3
μĀ(X3) = 1-μA(X3)
μĀ(X3) = 1 - 0.5
μĀ(X3) = 0.5
For X4
μĀ(X4) = 1-μA(X4)
μĀ(X4) = 1 - 0.1
μĀ(X4) = 0.9
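The union, intersection, and complement operations can be checked with a few lines of Python. The membership values for A are those used in the complement example above; the values for B are assumed purely for illustration.

A = {"X1": 0.3, "X2": 0.8, "X3": 0.5, "X4": 0.1}
B = {"X1": 0.6, "X2": 0.2, "X3": 0.9, "X4": 0.4}   # assumed for illustration

union        = {x: max(A[x], B[x]) for x in A}     # mu_{A union B}(x) = max(mu_A, mu_B)
intersection = {x: min(A[x], B[x]) for x in A}     # mu_{A intersect B}(x) = min(mu_A, mu_B)
complement_A = {x: round(1 - A[x], 2) for x in A}  # mu_complement(x) = 1 - mu_A(x)

print(complement_A)
# {'X1': 0.7, 'X2': 0.2, 'X3': 0.5, 'X4': 0.9} -- matches the worked example above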
Classical Set Theory vs. Fuzzy Set Theory:
1. Classical set theory is a class of sets having sharp boundaries, whereas fuzzy set theory
is a class of sets having un-sharp boundaries.
2. A classical set is defined by exact boundaries, with membership values of only 0 and 1,
whereas a fuzzy set is defined by ambiguous boundaries.