LLM & RAG

Large Language Models (LLMs) generate text from the internal, static knowledge captured in their training data, while Retrieval-Augmented Generation (RAG) enhances responses by retrieving relevant information from external sources. RAG offers improved accuracy and access to up-to-date information, but it depends on the quality of the external knowledge source. It is particularly useful in applications where high accuracy and current data are essential.


Core Distinctions:

Primary Function
- LLM: Generates human-like text based on patterns in its training data.
- RAG: Enhances an LLM's response by retrieving relevant information from an external knowledge source.

Knowledge Source
- LLM: Internal, static knowledge from its training data (with a knowledge cut-off date).
- RAG: External, dynamic knowledge from connected databases, documents, or the internet.

Process
- LLM: Takes a prompt and directly generates a response.
- RAG: Takes a prompt, retrieves relevant information, combines it with the prompt, and then has an LLM generate a response.

Key Advantage
- LLM: Fluency, creativity, and the ability to understand and generate nuanced language.
- RAG: Improved accuracy, reduced hallucinations, and the ability to provide responses based on current and specific information.

Limitation
- LLM: Can "hallucinate" or provide outdated information due to its static knowledge base.
- RAG: Dependent on the quality and accessibility of the external knowledge source.

=> RAG can be seen as an upgrade over using an LLM alone. Instead of answering directly from its
training data, the system first retrieves related information from external sources, extracts the
relevant parts, and then uses them to generate a better-grounded answer.
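The retrieve-then-generate flow described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch: the retriever is a toy keyword-overlap ranker (a real system would use a vector database and embeddings), and the generator is a stub standing in for an actual LLM API call. All names here are hypothetical, not from any specific library.

```python
from typing import List

# A tiny "external knowledge source" -- in practice this would be a
# document store or vector database.
DOCUMENTS = [
    "RAG retrieves relevant documents before generating an answer.",
    "LLMs generate text from static knowledge learned during training.",
    "A knowledge cut-off date limits what an LLM knows about recent events.",
]

def retrieve(query: str, docs: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by word overlap with the query (toy stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Stub for an LLM call; a real system would invoke a model API here."""
    return f"Answer based on: {prompt}"

def rag_answer(query: str) -> str:
    # 1. Retrieve relevant context from the external source,
    # 2. combine it with the user's question into one prompt,
    # 3. let the (stub) LLM generate a grounded response.
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(rag_answer("How does RAG use documents"))
```

The key design point is the separation of concerns: the retriever can be swapped for a real search index and the generator for a real model without changing the overall pipeline.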

- RAG should be applied in use cases that require high accuracy and real-time information.
