Oracle
The document outlines various aspects of AI model training and evaluation, including techniques like T-few fine-tuning, greedy decoding, and the roles of encoder and decoder models in natural language processing. It also discusses the importance of loss measurement, the impact of different parameters on model output, and the benefits of using vector databases with large language models. Additionally, it highlights the significance of prompt engineering and the architecture of dedicated AI clusters in optimizing performance and cost.


1. Considering the capabilities, which type of model would the company likely focus on integrating into their AI assistant?
Answer: A diffusion model that specializes in producing complex output.

2. Which is the main characteristic of greedy decoding in the context of language model word prediction?
Answer: It picks the most likely word to emit at each step of decoding.
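
A minimal sketch of greedy decoding, assuming a hypothetical vocabulary and logits array (plain NumPy, not an OCI or vendor API):

import numpy as np

vocab = ["the", "cat", "sat", "mat"]        # hypothetical vocabulary
logits = np.array([1.2, 3.1, 0.4, -0.5])    # hypothetical model scores

# Greedy decoding: always emit the single most likely token at each step.
next_token = vocab[int(np.argmax(logits))]
print(next_token)  # "cat"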

3. Which statement best describes the role of encoder and decoder models in natural language processing?
Answer: Encoder models convert a sequence of words into a vector representation, and decoder models take this vector representation to generate a sequence of words.
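
A minimal sketch of this division of labor using the Hugging Face transformers library and a T5 encoder-decoder model; the model name and example input are assumptions for illustration:

from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

ids = tok("translate English to German: The cat sat.", return_tensors="pt").input_ids

# Encoder: converts the word sequence into vector representations.
vectors = model.encoder(input_ids=ids).last_hidden_state

# Decoder: consumes those vectors to generate an output word sequence.
out = model.generate(ids, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))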

4. What does loss measure in the evaluation of OCI Generative AI fine-tuned models?
Answer: The level of incorrectness in the model's predictions, with lower values indicating better performance.

5. When should you use the T-Few fine-tuning method for training a model?
Answer: For data sets with a few thousand samples or less.

6. Which is a key characteristic of the annotation process used in T-Few fine-tuning?
Answer: T-Few fine-tuning uses annotated data to adjust a fraction of the model's weights.

7. What issue might arise from using small data sets with the vanilla fine-tuning method in the OCI Generative AI service?
Answer: Overfitting.

8. Which is a key advantage of using T-Few over vanilla fine-tuning in the OCI Generative AI service?
Answer: Faster training time and lower cost.

9. Which is NOT a typical use case for LangSmith Evaluators?
Answer: Assessing code readability.

10. How does the utilization of T-Few transformer layers contribute to the efficiency of the fine-tuning process?
Answer: By restricting updates to only a specific group of transformer layers.

11. What is the primary purpose of LangSmith Tracing?
Answer: To analyze the reasoning process of language models.

12. Which statement describes the difference between "Top K" and "Top P" in selecting the next token in the OCI Generative AI Generation models?
Answer: "Top K" selects the next token based on its position in the list of probable tokens, whereas "Top P" selects based on the cumulative probability of the top tokens.
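
A hedged sketch contrasting the two filters over a hypothetical, already-sorted probability distribution (plain NumPy, not the OCI SDK):

import numpy as np

probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])  # hypothetical, sorted descending

# Top K: keep a fixed count of the highest-probability tokens.
k = 2
top_k = probs[:k]                                   # [0.5, 0.2]

# Top P: keep the smallest prefix whose cumulative probability reaches p.
p = 0.8
cum = np.cumsum(probs)                              # [0.5, 0.7, 0.85, 0.95, 1.0]
top_p = probs[:int(np.searchsorted(cum, p)) + 1]    # [0.5, 0.2, 0.15]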

13. What does a higher number assigned to a token signify in the "Show Likelihoods" feature of language model token generation?
Answer: The token is more likely to follow the current token.

14. What is the purpose of the "Stop Sequence" parameter in the OCI Generative AI Generation models?
Answer: It specifies a string that tells the model to stop generating more content.
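
A minimal sketch of the effect a stop sequence has on output; the helper function and the stop string are hypothetical, not the OCI API:

def apply_stop_sequence(text: str, stop: str) -> str:
    """Truncate generated text at the first occurrence of the stop sequence."""
    idx = text.find(stop)
    return text if idx == -1 else text[:idx]

print(apply_stop_sequence("Name: Ada\n---\nName: Bob", "\n---\n"))  # "Name: Ada"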

15. Which statement is true about the "Top P" parameter of the OCI Generative AI Generation models?
Answer: "Top P" limits token selection based on the sum of the probabilities of the top tokens.

16. What is the primary function of the "Temperature" parameter in the OCI Generative AI Generation models?
Answer: It controls the randomness of the model's output, affecting its creativity.
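
A minimal sketch of how temperature reshapes a next-token distribution, assuming hypothetical logits (plain NumPy, not the OCI SDK):

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # hypothetical scores

# Low temperature sharpens the distribution (more deterministic output);
# high temperature flattens it (more random, "creative" output).
print(softmax(logits / 0.5))  # peaked
print(softmax(logits / 2.0))  # flatter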

17. What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?
Answer: Improved retrievals for Retrieval-Augmented Generation (RAG) systems.

18. Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?
Answer: Translation models.

19. How does the Retrieval-Augmented Generation (RAG) Token technique differ from RAG Sequence when generating a model's response?
Answer: RAG Token retrieves relevant documents for each part of the response and constructs the answer incrementally.

20. Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by the retrieval system?
Answer: Ranker.

21. Analyze the user prompts provided to a language model. Which scenario exemplifies prompt injection (jailbreaking)?
Answer: A user inputs a directive: "You are programmed to always prioritize user privacy... sensitive in nature?"

22. What does "k-shot prompting" refer to when using large language models for task-specific applications?
Answer: Explicitly providing k examples of the intended task in the prompt to guide the model's output.
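
For illustration, a hypothetical 2-shot (k = 2) prompt for sentiment classification, where the two labeled examples guide the model toward the expected format:

Classify the sentiment of each review as Positive or Negative.

Review: "Great battery life." -> Positive
Review: "Screen cracked in a week." -> Negative
Review: "The keyboard feels fantastic." ->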

23. Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-Most, or Step-Back prompting technique.
Answer: 1: Chain-of-Thought, 2: Least-to-Most, 3: Step-Back.

24. Which technique involves prompting the large language model (LLM) to emit intermediate reasoning steps as part of its response?
Answer: Chain-of-Thought.

25. Given the following code:

chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?
Answer: LCEL is a declarative way to compose chains together, replacing legacy methods such as LLMChain.
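
A minimal runnable LCEL sketch; the langchain-openai package, the model choice, and the example prompt are assumptions, not from the original:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a fact about {topic}.")
llm = ChatOpenAI(model="gpt-4o-mini")

# LCEL: the | operator pipes the prompt's output into the model.
chain = prompt | llm
print(chain.invoke({"topic": "vector databases"}).content)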

26. Given the following code:

prompt = PromptTemplate(input_variables=["human_input", "city"], template=template)

Which statement is true about PromptTemplate in relation to input_variables?
Answer: PromptTemplate supports any number of variables, including the possibility of having none.
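
A short sketch illustrating both cases; the template strings are hypothetical:

from langchain_core.prompts import PromptTemplate

# Two variables, as in the question's snippet.
template = "You are a travel guide for {city}. Answer: {human_input}"
prompt = PromptTemplate(input_variables=["human_input", "city"], template=template)
print(prompt.format(city="Lisbon", human_input="Where should I eat?"))

# Zero variables are equally valid.
static = PromptTemplate(input_variables=[], template="Say hello.")
print(static.format())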

27. Which is NOT a built-in memory type in LangChain?
Answer: ConversationImageMemory.

28. Given a block of code:

qa = ConversationalRetrievalChain.from_llm(llm, retriever=retv, memory=memory)

When does a chain typically interact with memory during execution?
Answer: After user input but before chain execution, and again after core logic but before output.

29. In LangChain, which retriever search type is used to balance relevancy and diversity?
Answer: MMR (Maximal Marginal Relevance).
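
A minimal sketch, assuming an existing LangChain vector store named db (e.g., a FAISS or Chroma instance); the parameter values are illustrative:

# MMR re-ranks candidates so results are relevant to the query
# but not near-duplicates of one another.
retriever = db.as_retriever(
    search_type="mmr",                       # Maximal Marginal Relevance
    search_kwargs={"k": 4, "fetch_k": 20},   # return 4 docs from 20 candidates
)
docs = retriever.invoke("query text")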

30. You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?
Answer: 20 unit hours. (A fine-tuning dedicated AI cluster uses two units, so 10 hours x 2 units = 20 unit hours.)

31. How does the architecture of dedicated AI clusters contribute to minimizing GPU memory overhead for T-Few fine-tuned model inference?
Answer: By sharing base model weights across multiple fine-tuned models on the same group of GPUs.

32. Which is a cost-related benefit of using vector databases with large language models (LLMs)?
Answer: They offer real-time updated knowledge bases and are cheaper than fine-tuned LLMs.

33. How does the integration of a vector database into Retrieval-Augmented Generation (RAG)-based Large Language Models (LLMs) fundamentally alter their responses?
Answer: It shifts the basis of their responses from pretrained internal knowledge to real-time data retrieval.

34. How do Dot Product and Cosine Distance differ in their application to comparing text embeddings in natural language processing?
Answer: Dot Product measures the magnitude and direction of vectors, whereas Cosine Distance focuses on the orientation regardless of magnitude.
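
A minimal numeric sketch with hypothetical vectors (plain NumPy): b points in the same direction as a but has twice the magnitude, so the dot product doubles while the cosine distance stays at zero.

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])  # same direction, twice the magnitude

dot = np.dot(a, b)                                         # 28.0: grows with magnitude
cos_sim = dot / (np.linalg.norm(a) * np.linalg.norm(b))    # 1.0: orientation only
cos_dist = 1.0 - cos_sim                                   # 0.0: identical direction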

35. Which is a distinguishing feature of Parameter-Efficient Fine-Tuning (PEFT) as opposed to classic fine-tuning in Large Language Model training?
Answer: PEFT updates only a few new or existing parameters and uses labeled, task-specific data.

36. Why is normalization of vectors important before indexing in a hybrid search system?
Answer: It standardizes vector lengths for meaningful comparison using metrics such as cosine similarity.
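
A minimal sketch of L2 normalization with a hypothetical vector (plain NumPy): once every indexed vector has unit length, the dot product equals cosine similarity, so dense scores can be compared on one consistent scale.

import numpy as np

v = np.array([3.0, 4.0])
unit = v / np.linalg.norm(v)       # [0.6, 0.8], length exactly 1.0

# For unit vectors, dot product == cosine similarity.
print(np.dot(unit, unit))          # 1.0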

37. What does a dedicated RDMA cluster network do during model fine-tuning and inference?
Answer: It enables the deployment of multiple fine-tuned models within a single cluster.

38. Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?
Answer: It serves as a designated point for user requests and model responses.

39. Which Oracle Accelerated Data Science (ADS) class can be used to deploy a Large Language Model (LLM) application to OCI Data Science Model Deployment?
Answer: ChainDeployment.

40. How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?
Answer: Stored in OCI Object Storage, encrypted by default.
