Thanks to visit codestin.com
Credit goes to www.scribd.com

0% found this document useful (0 votes)
48 views2 pages

Assignment 11

The document discusses various concepts related to AI prompt engineering, including prompt injection vulnerabilities, Retrieval Augmented Generation (RAG), and the ReAct prompting framework. It highlights the importance of temperature settings in generating outputs, the advantages of chain-of-thought prompting, and the benefits of auto-COT and meta prompting for enhancing model performance. Additionally, it provides recommendations for creative writing tasks to optimize output diversity and coherence.

Uploaded by

sureshvalmiki118
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
48 views2 pages

Assignment 11

The document discusses various concepts related to AI prompt engineering, including prompt injection vulnerabilities, Retrieval Augmented Generation (RAG), and the ReAct prompting framework. It highlights the importance of temperature settings in generating outputs, the advantages of chain-of-thought prompting, and the benefits of auto-COT and meta prompting for enhancing model performance. Additionally, it provides recommendations for creative writing tasks to optimize output diversity and coherence.

Uploaded by

sureshvalmiki118
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 2

Date of Submission: Friday 27th Sep 2024

Q1.
Prompt engineering involves designing and optimizing input prompts to guide AI models toward generating more
accurate, relevant, and coherent responses. It focuses on crafting clear, specific instructions or queries to elicit
desired outcomes from the model, improving its performance in various applications.

Q2.
Prompt injection is a security vulnerability where malicious inputs are crafted to manipulate or override an AI’s
behaviour. One of the types is Direct Prompt Injection, where harmful content is inserted into prompts, and other is
Indirect Injection, where external data sources are compromised to influence AI-generated responses.

Q3.
Retrieval Augmented Generation (RAG) includes real time retrieval of relevant information from external sources,
improving response accuracy and relevance of the model. It enables access to real-time information and enhances
contextual understanding by retrieving and integrating relevant knowledge during generation, making outputs more
reliable and data-driven.

Q4.
The ReAct prompting framework combines reasoning and acting to enhance AI decision-making. Its key components
are – generating reasoning steps to analyse tasks and perform actions, retrieving external knowledge or interacting
with environments, and updating reasoning iteratively, ensuring more accurate and context-aware responses in
complex scenarios.

Q5.
Dense retrieval in RAG systems leverages learned embeddings for better semantic understanding, enabling more
accurate matching of queries and documents. It outperforms sparse retrieval, which relies on keyword matching, by
capturing deeper contextual relationships, improving retrieval quality, especially for complex or nuanced queries.

Q6.
Word C : 0.45
Word D : 0.20
Word A : 0.15

Q7.
Word A : 0.45
Word B : 0.25 (combined probability : 0.45+0.25=0.7)

Q8.
The temperature setting controls the randomness of an LLM’s output. Higher temperatures produce more diverse
and creative responses by making the model sample from a wider range of word probabilities, while lower
temperatures lead to more focused, deterministic, and predictable outputs.

Q9.
The advantage of chain-of-thought (COT) prompting over zero-shot prompting is that COT allows for step-by-step
reasoning, improving performance on complex tasks that require multi-step solutions, whereas Zero-Shot prompting
often relies on immediate responses without intermediate reasoning, limiting accuracy on such tasks.

Q10.
Auto-COT improves over COT by automatically generating reasoning chains without human intervention, saving time
and reducing human bias. It enhances scalability for diverse tasks, enabling models to handle complex queries more
efficiently and with broader applicability compared to manually designed COT prompts.

Q11.
Meta prompting enhances model performance by instructing it on how to generate or select optimal prompts. This
increases adaptability, reduces errors, and improves efficiency across various tasks, as the model learns to self-reflect
and adjust its responses for better outcomes without external guidance.
Date of Submission: Friday 27th Sep 2024
Q12.
For the task of creative writing -
Set the temperature to higher range (e.g 0.7 to 1) in order to encourage more diverse and creative outputs by making
model’s response less deterministic.
Use moderate value of top-k value (e.g 40 to 50) to limit the number of potential next words, ensuring a balance
between creativity and coherence.

You might also like