Unit5 QB

The document outlines 13 important questions related to model compression techniques, including quantization, pruning, and knowledge distillation. It covers definitions, techniques, and comparisons between different types of knowledge distillation, as well as specific NLP tasks using large language models. The questions aim to explore both theoretical concepts and practical applications in the field of machine learning.

Here are the 13 important questions:

1. Define quantization. With the help of a neat diagram, discuss various quantization
techniques.
2. Discuss various model pruning techniques with the help of a diagram.
3. Discuss White-Box knowledge distillation.
4. Write a note on Meta knowledge distillation.
5. Describe Black-Box knowledge distillation.
6. Explain the concept of Knowledge Distillation (KD), including the roles of the 'teacher
model' and 'student model'. How is the student model typically trained in the KD
framework?
7. Differentiate between White-Box Knowledge Distillation and Black-Box Knowledge
Distillation, highlighting their assumptions and common approaches.
8. Explain Model Pruning as a model compression technique. Describe the three
primary types of pruning with examples.
9. What is model quantization, and what is its primary aim?
10. What is model quantization? Define the terms below:

    1. Non-uniform quantization
    2. Dynamic-range quantization
    3. Post-training quantization (PTQ)
    4. Quantization-aware training (QAT)

11. What is Meta Knowledge Distillation (Meta-KD), and how does it advance traditional
KD methods? What are its two main components?
12. Elaborate on the following NLP tasks using LLMs:
    a) Text generation
    b) Text summarization
13. Illustrate and discuss natural language understanding applications.
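For the quantization questions (1, 9, 10), the core idea of uniform post-training quantization can be illustrated in a few lines. This is a minimal NumPy sketch under assumed 8-bit asymmetric quantization of a weight tensor, not any particular framework's implementation; the function name `quantize_uniform` is illustrative.

```python
import numpy as np

def quantize_uniform(weights, num_bits=8):
    """Sketch of uniform post-training quantization (PTQ):
    map float weights to num_bits-wide integers, then dequantize
    to see the approximation error the model would incur."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (qmax - qmin)          # float step per integer level
    zero_point = qmin - w_min / scale                # integer offset mapping w_min -> qmin
    q = np.clip(np.round(weights / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale                  # dequantized approximation

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
w_hat = quantize_uniform(w)
```

The rounding error per weight is bounded by half the scale, which is why 8-bit PTQ usually costs little accuracy; quantization-aware training (Q10.4) instead simulates this rounding during training so the model can compensate.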
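For the pruning questions (2, 8), unstructured magnitude pruning is the simplest concrete example: zero out the fraction of weights with the smallest absolute value. A minimal sketch, assuming global (layer-wide) magnitude ranking; the helper name `magnitude_prune` is illustrative.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Sketch of unstructured magnitude pruning:
    zero the `sparsity` fraction of weights with smallest |w|."""
    k = int(weights.size * sparsity)                 # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    mask = np.abs(weights) > threshold               # keep only large-magnitude weights
    return weights * mask

w = np.array([0.1, -0.9, 0.05, 0.7, -0.2, 0.4])
pruned = magnitude_prune(w, sparsity=0.5)
```

Structured pruning (removing whole neurons, channels, or attention heads) follows the same ranking idea but scores entire groups, which trades some accuracy for hardware-friendly speedups.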
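For the knowledge-distillation questions (3-7), the standard white-box training objective mixes a hard-label cross-entropy term with a temperature-softened KL term against the teacher's logits. A minimal NumPy sketch of that loss, assuming direct access to teacher logits (the white-box setting) and illustrative hyperparameters T and alpha.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Sketch of the classic KD objective:
    alpha * CE(student, hard labels) + (1-alpha) * T^2 * KL(teacher_T || student_T)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * hard + (1 - alpha) * (T ** 2) * kl))

logits_t = np.array([[2.0, 0.5, -1.0]])
logits_s = np.array([[1.0, 0.8, -0.5]])
labels = np.array([0])
loss = distillation_loss(logits_s, logits_t, labels)
```

In black-box KD (Q5, Q7) the teacher's logits are unavailable, so the soft-label term is replaced by training on the teacher's sampled outputs; the T^2 factor keeps gradient magnitudes comparable across temperatures.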
