AI Mod 6 Notes

Expert Systems are knowledge-based systems designed to solve specific problems using human expertise, represented as rules or data within a computer. They differ from conventional systems by separating the knowledge base from processing, allowing for easier updates and consistent performance. Key components include a knowledge base, inference engine, and user interface, with roles for domain experts, knowledge engineers, and system engineers in their development.

Uploaded by Shubham Barge

Expert Systems

Based on the sources, Expert Systems are knowledge-based systems that utilize human knowledge
to solve problems that typically require human intelligence. They are designed for specific problem
domains and embody non-algorithmic expertise, representing this expertise as data or rules within
the computer. These systems can be called upon when needed to solve problems and are often
developed using specialized software tools called shells, which come equipped with an inference
mechanism.

Here's a more detailed breakdown of Expert Systems:

Definition and Key Idea:

• Expert systems solve problems by applying specific knowledge rather than specific
techniques.

• They use human knowledge to address issues in specific problem areas, aiming to replicate
the abilities of a human expert.

Comparison with Conventional Systems:

Expert systems differ from conventional systems in several key ways:

• In conventional systems, information and processing are combined, whereas in expert systems the knowledge base is separated from the processing mechanism.

• Conventional systems may make mistakes, while ideally, expert systems do not.

• Changes in conventional systems can be tedious, but in expert systems, they are generally
easy due to the separation of knowledge and processing.

• Conventional systems typically operate only when completed, but expert systems can
operate even with a few rules.

• Data processing in conventional systems is often repetitive, whereas in expert systems, knowledge engineering is an inferential process.

• Conventional systems rely on algorithmic approaches, while expert systems often use
heuristic approaches.

Major Components:

An expert system typically consists of the following major components:

• Knowledge base: A declarative representation of the expertise, often in the form of IF-THEN rules.

• Working storage: The data specific to the problem being solved.

• Inference engine: The core code that derives recommendations from the knowledge base and the problem-specific data in the working storage.

• User interface: The code that controls the dialogue between the user and the system.

A shell is a piece of software that contains the user interface, a format for declarative knowledge in
the knowledge base, and an inference engine. A major advantage of a customized shell is that the
format of the knowledge base can be designed to facilitate the knowledge engineering process.
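The separation of knowledge base, working storage, and inference engine can be sketched in a few lines of Python. This is a hypothetical toy engine, not any particular shell; all rule and fact names are invented for illustration:

```python
# Toy expert system: knowledge base (IF-THEN rules) kept separate
# from working storage (facts) and the inference engine.
# Rule and fact names are illustrative only.

rules = [  # knowledge base: IF all conditions hold THEN add conclusion
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_doctor_visit"),
]

facts = {"has_fever", "has_rash"}  # working storage: problem-specific data

def infer(rules, facts):
    """Inference engine: fire rules repeatedly until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(rules, facts))
```

Note that the knowledge base can be edited without touching `infer()` at all, which is exactly the maintainability advantage the comparison table above describes.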

Major Roles of Individuals:

Building an expert system involves several key roles:

• Domain expert: A current expert in the field whose knowledge the system is intended to replicate.

• Knowledge engineer: An individual who establishes a dialogue with the human expert to elicit (obtain) knowledge and then encodes it in a declarative form that the expert system can use.

• User: The person who will consult with the system to get
advice that would have been provided by the expert.

• System engineer: The individual who builds the user interface, designs the declarative format of the knowledge base, and implements the inference engine. The knowledge engineer and the system engineer might be the same person, depending on the project's size.

The process of building an expert system involves the knowledge engineer eliciting knowledge from
the expert, coding it into the knowledge base, and then the expert evaluating the system and
providing feedback to the knowledge engineer.

Knowledge Representation:

Knowledge in expert systems can be represented in various forms:

• Rules (IF-THEN statements).

• Semantic Nets.

• Frames.

• Scripts.

• Object-Oriented representations.

• Other methods like KL-1, KRYPTON, and Conceptual Graphs.

Languages, Shells, and Tools:

Expert systems can be built using various tools:

• Procedural languages (e.g., C).

• More modern languages (e.g., Java).


• Expert system languages (e.g., CLIPS - C Language Integrated Production System, designed at
NASA/Johnson Space Center) which focus on ways to represent knowledge and support rule-
based, object-oriented, and procedural programming.

• Specialized software tools called shells that provide an inference mechanism and a format
for the knowledge base.

Expert System Features:

Expert systems possess several key features:

• Goal-driven reasoning (backward chaining): An inference technique that uses IF-THEN rules
to break down a goal into smaller, easier-to-prove sub-goals. The aim is to pick the best
choice from many enumerated possibilities.

• Coping with uncertainty: The ability to reason with rules and data that are not precisely
known. This is important as expert rules might be vague, and users might be unsure of their
answers. Expert systems can associate a numeric value with each piece of information to
handle uncertainty.

• Data-driven reasoning (forward chaining): An inference technique that uses IF-THEN rules to
deduce a problem solution from initial data. These systems keep track of the current state
and look for rules that move the state closer to a final solution.

• Data representation: The way in which problem-specific data is stored and accessed within
the system.

• User interface: A user-friendly portion of the code that allows for easy interaction with the
system.

• Explanations: The ability of the system to explain the reasoning process it used to reach a
recommendation by knowing which rules were used during the inference.
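Goal-driven (backward) chaining, the first feature above, can be sketched as a short recursive function. The rules here are invented for illustration, and this omits the uncertainty handling a real system would layer on top:

```python
# Minimal backward chaining: to prove a goal, find a rule that concludes
# it and recursively prove each of that rule's conditions.
# Rule contents are invented for illustration.

rules = {
    "approve_loan": [["good_credit", "stable_income"]],
    "good_credit": [["no_defaults"]],
}

known_facts = {"no_defaults", "stable_income"}

def prove(goal, rules, facts):
    if goal in facts:                        # goal is already a known fact
        return True
    for conditions in rules.get(goal, []):   # try each rule for this goal
        if all(prove(c, rules, facts) for c in conditions):
            return True                      # all sub-goals proved
    return False

print(prove("approve_loan", rules, known_facts))  # → True
```

The recursion is the "break a goal into smaller, easier-to-prove sub-goals" idea: `approve_loan` reduces to `good_credit` and `stable_income`, and `good_credit` in turn reduces to `no_defaults`.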

Considerations for Building Expert Systems:

Several factors should be considered before building an expert system:

• Can the problem be solved effectively by conventional programming?

• Is there a need and desire for an expert system?

• Is there at least one human expert willing to cooperate?

• Can the expert explain their knowledge in a way that the knowledge engineer can
understand?

• Is the problem-solving knowledge mainly heuristic and uncertain?

Early Expert Systems Examples:

Several notable early expert systems include:

• DENDRAL: Identified chemical structures from mass spectrometry data.

• MYCIN: For medical diagnosis of illness.

• DIPMETER: Used for geological data analysis for oil.


• PROSPECTOR: For geological data analysis for minerals.

• XCON/R1: For configuring DEC VAX computer systems.

• Diagnostic applications for people (medicine) and for machinery.

• Systems for tasks like playing chess, making financial planning decisions, underwriting
insurance policies, and monitoring real-time systems.

Advantages of Expert Systems:

Expert systems offer several advantages:

• Increased availability of expertise.

• Reduced cost compared to hiring multiple human experts.

• Reduced danger by allowing systems to handle hazardous tasks.

• Consistent performance.

• Ability to incorporate multiple areas of expertise.

• Increased reliability.

• Explanation capabilities.

• Fast response times.

• Steady, unemotional, and complete responses at all times.

• Can function as intelligent databases.

Disadvantages of Expert Systems:

Despite their benefits, expert systems also have disadvantages:

• Lack common sense.

• Cannot make creative responses as human experts can.

• Domain experts may not always be able to explain their logic and reasoning effectively,
leading to potential errors in the knowledge base.

• Cannot adapt easily to changing environments.

One of the major bottlenecks in developing expert systems is the knowledge engineering process, as
coding the expertise into a declarative rule format can be difficult and tedious, and minimizing the
semantic gap between the expert's understanding and the system's representation is crucial.
NLP
Natural Language Processing (NLP) is the discipline focused on building machines that can
manipulate human language or data that resembles human language, whether written, spoken, or
organized. Alternatively, it can be defined as the automatic (or semi-automatic) processing of human
language. Terms like 'Language Technology' or 'Language Engineering' are also frequently used
nowadays. NLP involves analyzing the structure and use of language to enable machines to
understand, generate, and predict human language.

NLP can be divided into two overlapping subfields:

• Natural Language Understanding (NLU): This subfield focuses on semantic analysis, or determining the intended meaning of text. It involves taking a spoken or typed sentence and working out what it means.

• Natural Language Generation (NLG): This subfield focuses on text generation by a machine. It involves taking some formal representation of what you want to say and working out a way to express it in a natural (human) language.

Types of Language Models in NLP:

Language models are machine learning models used to predict the next word in a sequence given
the previous words. They analyze and understand the structure and use of human language, enabling
machines to process and generate contextually appropriate text. Language models are categorized
into two main types:

• Pure Statistical Methods: These form the basis of traditional language models and rely on
the statistical properties of language to predict the next word in a sentence, given the
previous words. They include:

o N-grams: An n-gram is a sequence of n items (phonemes, syllables, letters, words, etc.) from a text or speech sample. N-gram models use the frequency of these sequences in training data to predict the likelihood of word sequences. For example, a bigram model predicts the next word based on the previous word, and a trigram model uses the two preceding words. While simple and computationally efficient, their ability to accurately estimate probabilities of less common sequences decreases as n increases, due to the exponential growth in possible n-grams.

o Exponential Models: These models are more flexible and powerful than n-gram
models. They predict the probability of a word based on a wide range of features,
including previous words and other contextual information, by assigning weights to
these features and combining them using an exponential function. The Maximum
Entropy Model (MaxEnt) is an example, using features like word presence, part-of-
speech tags, and syntactic patterns to predict the next word. Although more flexible,
they are more complex and computationally intensive to train and still struggle with
long-range dependencies due to reliance on fixed-length context windows.

o Skip-gram Models: These models predict the context words (surrounding words) given a target word, and are effective for capturing semantic relationships between words by optimizing the likelihood of context words appearing around a target word. Word2Vec includes two main architectures: skip-gram (predicting context from a target word) and Continuous Bag-of-Words (CBOW) (predicting a target word from its context words). Both are trained using neural networks but are conceptually simple and computationally efficient.
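The n-gram idea above can be made concrete with a bigram count model. This is a toy sketch over a made-up corpus, with no smoothing, purely for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would be trained on far more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Most likely next word given the previous word (bigram model)."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → 'cat' ("cat" follows "the" most often here)
```

The sparsity problem described above is visible even here: any bigram absent from the corpus gets probability zero, which is why larger n makes reliable estimation harder.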

• Neural Models: These models utilize neural networks like Recurrent Neural Networks
(RNNs), Transformer-based models, and Large Language Models.

o Recurrent Neural Networks (RNNs): Designed for sequential data, making them
suitable for language modeling. RNNs maintain a hidden state to capture information
about previous inputs, allowing them to consider the context of words in a
sequence.

o Transformer-based Models: These models process the entire input simultaneously, making them more efficient for parallel computation. Key components include the Self-Attention Mechanism, which lets the model weigh the importance of different words in a sequence and capture dependencies regardless of distance; the Encoder-Decoder Structure, where the encoder processes the input and the decoder generates the output; and Positional Encoding, which adds positional information so that word order is taken into account. Examples of transformer-based models include BERT, GPT-3, and T5.

o Large Language Models (LLMs): Characterized by their vast size (billions of parameters) and ability to perform a wide range of tasks with minimal fine-tuning. Training involves feeding them vast amounts of text data and optimizing their parameters with significant computational resources, typically through unsupervised pre-training followed by supervised fine-tuning. Despite remarkable performance, training requires substantial resources, and their size can make them difficult to interpret and control, raising ethical and bias concerns.

Types of Grammar in NLP:

Grammar provides the rules for forming well-structured sentences. It denotes the syntactical rules
used for conversation in natural languages. Mathematically, a grammar G can be represented as a 4-
tuple (N, T, S, P), where N is the set of non-terminal symbols, T is the set of terminal symbols, S is the
start symbol, and P is the set of production rules.
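The 4-tuple (N, T, S, P) can be written down concretely. Here is a tiny grammar expressed as plain Python data; the particular rules and words are made up for the example:

```python
# A tiny grammar G = (N, T, S, P) as plain Python data structures.
# The specific rules and vocabulary are illustrative only.

N = {"S", "NP", "VP"}                  # non-terminal symbols
T = {"the", "dog", "cat", "chased"}    # terminal symbols
S = "S"                                # start symbol
P = {                                  # production rules
    "S":  [["NP", "VP"]],
    "NP": [["the", "dog"], ["the", "cat"]],
    "VP": [["chased", "NP"]],
}
# One derivation from S:
#   S -> NP VP -> the dog VP -> the dog chased NP -> the dog chased the cat
```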

• Context Free Grammar (CFG): Consists of a set of rules expressing how symbols of the language can be grouped and ordered, along with a lexicon of words and symbols. Context-free rules can be hierarchically embedded. CFG is used to represent complex relations and can be efficiently implemented. It has four components: a finite set of non-terminals (syntactic variables), a finite set of terminals (tokens), a finite set of production rules explaining how terminals and non-terminals can be combined, and a designated start symbol (often S) from which the language's strings can be derived.
• Constituency Grammar: Also known as Phrase Structure Grammar, it is based on the constituency relation, where constituents can be any word, group of words, or phrase. It aims to organize any sentence into its constituents based on their properties, studying clause structure in terms of noun phrases (NP) and verb phrases (VP). For example, a sentence can be organized into a subject, a context, and an object.

• Dependency Grammar: This is the opposite of constituency grammar and is based on the
dependency relation. It states that words in a sentence are dependent on other words,
connected by directed links. The verb is considered the center of the clause structure, and
every other syntactic unit is connected to the verb via a directed link representing a
dependency. It organizes words according to their dependencies and is used to infer the
structure and semantic dependencies between words.

Types of Parsing in NLP:

Parsing is the process of examining the grammatical structure and relationships within a given
sentence or text. It involves analyzing text to determine the roles of words and their
interrelationships, exposing sentence structure by constructing parse trees or dependency trees.
Parsing enables machines to extract meaning and perform tasks like machine translation and
sentiment analysis. There are different types of parsing in NLP:

• Syntactic Parsing: Deals with a sentence's grammatical structure to determine parts of speech, sentence boundaries, and word relationships.

o Constituency Parsing: Builds parse trees that break down a sentence into its
constituents, displaying its hierarchical structure.

o Dependency Parsing: Depicts grammatical links between words by constructing a tree where each word depends on another, focusing on relationships such as subject-verb-object.

• Semantic Parsing: Attempts to understand the roles of words in a specific task context and
how they interact. It is used in applications like question answering and knowledge base
population.
There are also different parsing techniques:

• Top-Down Parsing: The parser attempts to create a parse tree from the root node (S) down to the leaves. It starts with the start symbol and applies production rules to expand non-terminal nodes until the leaf nodes match the input string. Backtracking occurs if a path does not lead to a match.

• Bottom-Up Parsing: This technique begins with the words of the input and attempts to build
trees upwards by applying grammar rules one at a time, aiming to reach the start symbol (S).
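Top-down parsing with backtracking can be sketched as a short recursive function over a toy grammar. The grammar and sentences below are assumptions for the example, not taken from the notes:

```python
# Recursive-descent (top-down) parsing with backtracking.
# The grammar and example sentences are illustrative only.

grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "dog"], ["the", "cat"]],
    "VP": [["barks"], ["sees", "NP"]],
}

def parse(symbol, words):
    """Yield each number of words the symbol can consume from the front."""
    if symbol not in grammar:               # terminal: must match next word
        if words and words[0] == symbol:
            yield 1
        return
    for production in grammar[symbol]:      # try each rule; failures backtrack
        consumed_options = [0]
        for part in production:
            consumed_options = [
                c + extra
                for c in consumed_options
                for extra in parse(part, words[c:])
            ]
        yield from consumed_options

def accepts(sentence):
    """True if the whole sentence derives from the start symbol S."""
    words = sentence.split()
    return any(c == len(words) for c in parse("S", words))

print(accepts("the dog sees the cat"))  # → True
print(accepts("the dog the cat"))       # → False
```

The failed attempt to match "the dog the cat" is the backtracking described above: the parser tries each VP production in turn, and when none matches the remaining words, the whole path is abandoned.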
----------------------------------------------------------SUMMARISED--------------------------------------------------------

1. Subfields

• Natural Language Understanding (NLU)

• Natural Language Generation (NLG)

2. Language Models

2.1 Pure Statistical Methods

• N-grams

• Exponential Models

o Maximum Entropy Model (MaxEnt)

• Skip-gram Models

2.2 Neural Models

• Recurrent Neural Networks (RNNs)

• Transformer-based Models

o Examples: BERT, GPT-3, T5

• Large Language Models (LLMs)

3. Grammar

• Context Free Grammar (CFG)

• Constituency Grammar (Phrase Structure Grammar)

• Dependency Grammar

4. Parsing

4.1 Types of Parsing

• Syntactic Parsing

o Constituency Parsing

o Dependency Parsing

• Semantic Parsing

4.2 Parsing Techniques

• Top-Down Parsing

• Bottom-Up Parsing
Stages in NLP
A general progression in NLP often starts with the raw input and moves towards a deeper
understanding of meaning and context, potentially culminating in the generation of language.
Considering this flow, we can outline the following stages based on the sources:

1. Speech Recognition (for spoken language): When the input to an NLP system is spoken
language, the initial stage involves converting the analog audio signal into a digital
representation, often through a frequency spectrogram. This step is mentioned as the input
to a generic NLP system.

2. Morphological Analysis: This stage focuses on the structure of words. It involves recognizing
individual words and their components, such as prefixes, suffixes, and root forms (stems or
lemmas). The goal is to identify the base form of a word and its grammatical category. For
instance, "unusually" can be broken down into "un-", "usual", and "-ly".

3. Syntactic Analysis (Parsing): Once the words are identified, syntactic analysis, or parsing,
examines the way words are used to form phrases and sentences. This stage involves
analyzing the grammatical structure of a sentence to determine parts of speech, sentence
boundaries, and the relationships between words. Parsers check if a sentence is
grammatically correct according to a defined grammar and may produce a parse tree
representing the sentence's structure. Different types of parsing, such as constituency
parsing and dependency parsing, focus on different aspects of this structure.

4. Semantic Analysis: This stage aims at understanding the meaning of words and sentences.
It goes beyond the grammatical structure to extract the logical meaning. Semantic analysis
often involves mapping words and phrases to concepts and relationships in a knowledge
base or ontology. The goal is to understand "what it means".

5. Pragmatic Analysis: Pragmatics deals with the practical usage of language and meaning in
context. It considers factors beyond the literal meaning of words and sentences, such as the
speaker's intentions, the context of the utterance (who, when, where, why), and the overall
communicative goals. For example, understanding if "you're late" is a simple statement of
fact or a criticism requires pragmatic analysis.

6. Discourse Analysis: This stage focuses on understanding a text as a whole, considering the
relationships between sentences and larger units of text. It looks at how sentences connect,
how pronouns refer to previous entities (anaphora), and how the meaning of one sentence
can influence the interpretation of others. Understanding the flow of information and the
overall coherence of a text falls under discourse analysis.

7. Natural Language Generation (NLG): While the previous stages primarily focus on
understanding, NLG is concerned with the process of generating human language. It takes a
formal representation of what needs to be communicated and determines how to express it
in a natural language like English. This could involve choosing appropriate words, phrasing,
and grammatical structures to convey the intended meaning effectively.
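As a small illustration of the morphological-analysis stage, here is a naive affix-stripping sketch applied to the "unusually" example. The affix lists are tiny and invented; real morphological analyzers use full lexicons and handle spelling changes:

```python
# Naive morphological split: peel off one known prefix and one known suffix.
# The affix lists are illustrative only; real analyzers use lexicons.

PREFIXES = ["un", "re", "dis"]
SUFFIXES = ["ly", "ness", "ing", "ed"]

def split_affixes(word):
    """Return (prefix, stem, suffix); empty string where no affix matches."""
    prefix = next((p for p in PREFIXES if word.startswith(p)), "")
    stem = word[len(prefix):]
    suffix = next((s for s in SUFFIXES if stem.endswith(s)), "")
    if suffix:
        stem = stem[:-len(suffix)]
    return prefix, stem, suffix

print(split_affixes("unusually"))  # → ('un', 'usual', 'ly')
```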

It's important to note that these stages can be overlapping and are not always strictly sequential in
modern NLP systems. Some approaches may integrate aspects of these stages or perform them
iteratively. Furthermore, the specific stages emphasized can vary depending on the task and the
approach (e.g., classical symbolic methods versus neural network-based methods).
Robotics
Robotics is the intersection of science, engineering, and technology that produces machines called
robots that replicate or substitute for human actions. It describes the field of study focused on
developing robots and automation. The term 'robot' is derived from the Czech word 'robota', which means "forced labor".

According to Russell and Norvig, a robot is “an active artificial agent whose environment is the
physical world”. The Robot Institute of America defines a robot as “a programmable, multifunction
manipulator designed to move material, parts, tools or specific devices through variable
programmed motions for the performance of a variety of tasks”. Essentially, a robot is a
programmable machine that can complete a task.

Hardware Components of a Robot:

Robots consist of several key hardware components that enable them to function:

• Control System: This includes the robot’s central processing unit, which is programmed to
tell the robot how to utilize its specific components to complete a task. This is similar to how
the human brain sends signals throughout the body.

• Sensors: Sensors provide the robot with stimuli in the form of electrical signals that are
processed by the controller, allowing the robot to interact with the outside world. Common
sensors include video cameras (eyes), photoresistors (light), and microphones (ears). These
allow the robot to capture its surroundings and process information to make decisions.

• Actuators: Actuators are made up of motors that receive signals from the control system to
carry out the necessary movements to complete an assigned task. Actuators can be made of
various materials and operated using compressed air (pneumatic), oil (hydraulic), or
electricity.

• Power Supply: Robots can be powered by AC power through a wall outlet or more
commonly via an internal battery, such as lead-acid or silver-cadmium batteries. Future
power sources may include solar, hydraulic, or nuclear power.

• End Effectors: These are the physical, typically external components that allow robots to
finish carrying out their tasks. In factories, robots often have interchangeable tools like paint
sprayers and drills, while surgical robots might have scalpels. Other robots can be equipped
with gripping claws or hands for tasks like deliveries or packing.
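The interplay of these components is often described as a sense-think-act loop: sensor readings flow to the control system, which sends commands to the actuators. A schematic sketch, with all names, readings, and thresholds invented for illustration:

```python
# Schematic sense-think-act loop: sensors feed the controller, which
# commands the actuators. All values here are made up for illustration.

def read_sensors(step):
    """Stand-in for real sensor input, e.g. a distance sensor in cm."""
    return {"distance_cm": 100 - 30 * step}

def decide(readings):
    """Control system: choose an actuator command from sensor data."""
    return "stop" if readings["distance_cm"] < 20 else "forward"

def actuate(command):
    """Stand-in for driving motors; here we just report the command."""
    return f"motors: {command}"

log = [actuate(decide(read_sensors(step))) for step in range(4)]
print(log)  # the robot drives forward until the obstacle is close
```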

Robotics Applications:

Robots are being used in a wide range of applications across various industries:

• Manufacturing: Industrial robots can assemble products, sort items, perform welds, and
paint objects. They can also maintain other machines in factories. Their ability to perform
basic and repetitive tasks with greater efficiency and accuracy makes them ideal for this
sector.

• Healthcare: Medical robots can transport supplies, perform surgical procedures, and offer
emotional support to patients.

• Companionship: Social robots can support children with learning disabilities and have
business applications like customer service in hotels.
• Home Use: Consumer robots include robot vacuum cleaners (like Roomba), lawn-cutting
robots, and personal assistants that can play music and help with household tasks.

• Search and Rescue: Robots can be used to save people in dangerous situations, such as
those stuck in floodwaters or remote areas, and to deliver supplies.

Pros of Robotics:

Utilizing robots offers several advantages:

• Increased accuracy: Robots can perform tasks with greater precision and accuracy than
humans.

• Enhanced productivity: Robots can work at a faster pace and don’t get tired, leading to
more consistent and higher-volume production.

• Improved safety: Robots can take on tasks and operate in environments unsafe for humans,
protecting workers from injuries.

• Rapid innovation: Robots equipped with sensors and cameras can collect data, allowing
teams to quickly refine processes.

• Greater cost-efficiency: Gains in productivity may make robots a more cost-effective option
for businesses compared to hiring more human workers.

Cons of Robotics:

There are also potential drawbacks to the increasing use of robots:

• Job losses: Robotic process automation may lead to unemployment for human employees,
especially those in roles that are easily automated.

• Limited creativity: Robots may not react well to unexpected situations as they lack the
same problem-solving skills as humans.

• Data security risks: Robots connected to the Internet of Things can be vulnerable to cyber
attacks, potentially exposing large amounts of data.

• Maintenance costs: Robots can be expensive to repair and maintain, and faulty equipment
can cause disruptions.

• Environmental waste: The production and disposal of robots and their components can lead
to environmental waste and pollution.

Relevance to Artificial Intelligence:

Artificial intelligence plays a crucial role in modern robotics. AI and computer vision technologies
enable robots to identify and recognize objects, understand details, and improve navigation and
avoidance. The relevance of AI is seen in:

• Effectors (Actuators): AI can control the movements and actions of robot effectors.

• Sensors: AI algorithms process data from robot sensors to understand the environment.

• Architecture: AI influences the design of robot control architectures, such as behavior-based architectures that allow robots to reason about the behavior of objects, build maps, and make decisions.
• Integration: AI is essential for integrating various inputs from sensors and coordinating
different robot functionalities.

• Hierarchy of information representation: AI helps in processing raw sensor data into meaningful cognitive and conceptual features that the robot can use.

Challenges in Robotics:

Despite significant advancements, several fundamental problems remain in robotics:

• Developing robot swarms: Enabling large groups of robots to work collaboratively.

• Improving navigation and exploration: Creating robots that can navigate complex and
unknown environments effectively.

• Developing artificial intelligence that can “learn how to learn” and use common sense to
make moral and social decisions. This involves endowing robots with more human-like
reasoning and decision-making capabilities.
