MLESA Week-7 Assignment Solution
1. What is the primary goal of semantic segmentation?
(A) Object detection
(B) Image classification
(C) Pixel-wise labeling of objects
(D) Depth estimation
Solution
(C) Pixel-wise labeling of objects
Semantic segmentation aims to assign a specific label to each pixel in an image, categorizing them based
on the objects or regions they belong to.
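To make the pixel-wise idea concrete, here is a minimal sketch (PyTorch assumed, with a random tensor standing in for real network output): a segmentation network emits one score per class per pixel, and taking the argmax over the class dimension yields a label map with the same spatial size as the input.

```python
import torch

# Hypothetical output of a segmentation network for a batch of 1 image:
# shape (N, C, H, W), one score per class per pixel.
num_classes = 3
logits = torch.randn(1, num_classes, 4, 4)

# Pixel-wise labeling: pick the highest-scoring class at every pixel.
label_map = logits.argmax(dim=1)  # shape (1, 4, 4), one class id per pixel
print(label_map.shape)            # torch.Size([1, 4, 4])
print(label_map[0])               # a 4x4 grid of class indices
```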
2. Which layer type is commonly used in the decoder part of an Encoder-Decoder architecture for upsampling?
(A) Fully Connected Layer
(B) Convolutional Transpose Layer
(C) Batch Normalization Layer
(D) Pooling Layer
Solution
(B) Convolutional Transpose Layer
Convolutional Transpose Layers are used in the decoder to perform upsampling, helping to reconstruct
the spatial dimensions of the input.
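A minimal PyTorch sketch of this (channel counts and sizes are illustrative): a transposed convolution with stride 2 doubles the spatial resolution of a feature map.

```python
import torch
import torch.nn as nn

# A transposed convolution with stride 2 doubles the spatial resolution,
# the usual upsampling building block in a decoder.
upsample = nn.ConvTranspose2d(in_channels=64, out_channels=32,
                              kernel_size=2, stride=2)

x = torch.randn(1, 64, 16, 16)  # a downsampled feature map
y = upsample(x)
print(y.shape)                  # torch.Size([1, 32, 32, 32])
```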
3. Which hyperparameter optimization method leverages probabilistic models to model the objective function and guide the search towards promising regions?
(A) Grid Search
(B) Random Search
(C) Bayesian Optimization
(D) Genetic Algorithm
Solution
(C) Bayesian Optimization
Bayesian Optimization uses probabilistic models to model the objective function and makes informed
decisions about where to sample next in the hyperparameter space.
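As a sketch of the idea in code, here is a toy search with Optuna, whose default TPE sampler is one sequential model-based (Bayesian-style) optimizer; the objective below is a stand-in for an actual train-and-validate run.

```python
import optuna

# Toy objective: in practice this would train a model with the sampled
# hyperparameters and return a validation loss.
def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    return (lr - 1e-3) ** 2 + (dropout - 0.2) ** 2  # stand-in for val loss

# The default TPE sampler builds probabilistic models of good vs. bad
# configurations and samples where improvement looks most likely.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```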
4. In transfer learning for image classification, what does fine-tuning refer to?
(A) Training only the last few layers of a pre-trained model
(B) Training a model from scratch
(C) Using a pre-trained model without any modifications
(D) Ensembling multiple pre-trained models
Solution
(A) Training only the last few layers of a pre-trained model
Fine-tuning involves adjusting the weights of the last few layers of a pre-trained model on a new dataset,
allowing the model to adapt to the specific task.
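A minimal sketch of this (assuming a recent torchvision and ResNet-18 as the pre-trained model; the 10-class output is illustrative): freeze the pre-trained weights, then replace and train only the final layer.

```python
import torch.nn as nn
from torchvision import models

# Load a pre-trained backbone (torchvision's ResNet-18 as an example).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every pre-trained weight ...
for param in model.parameters():
    param.requires_grad = False

# ... then replace and train only the final classification layer
# for the new task (say, 10 classes).
model.fc = nn.Linear(model.fc.in_features, 10)  # new layer trains by default
```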
5. When fine-tuning a multi-layer pre-trained model, which layers can be frozen if you want to maximize adaptation to the new task?
(A) Early layers (first few layers)
(B) Middle layers
(C) Output layer
(D) All layers should be fine-tuned.
Solution
(A) Early layers (first few layers)
Early layers learn general features applicable to many tasks, so freezing them preserves those features while the remaining layers adapt to the new task.
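The complementary sketch for this question: freeze just the early, general-purpose layers of the same pre-trained backbone and leave the rest trainable (again assuming torchvision's ResNet-18; the chosen cut-off is illustrative).

```python
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze only the early, general-purpose feature extractors
# (stem + first residual stage); everything later stays trainable.
for module in [model.conv1, model.bn1, model.layer1]:
    for param in module.parameters():
        param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(len(trainable), "parameter tensors will be fine-tuned")
```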
6. Which of the following are hyperparameters to be tuned in training a neural network?
(A) Learning Rate
(B) Batch Size
(C) Number of layers
(D) Number of epochs
(E) Activation functions
(F) Dropout rate
Solution
All of the options
(A) Learning Rate: This is a hyperparameter that determines the step size at each iteration while
moving toward a minimum of a loss function.
(B) Batch Size: It is a hyperparameter that defines the number of samples to work through before
updating the internal model parameters.
(C) Number of Layers: More layers can represent more complex functions, but they also make
the network harder to train and more prone to overfitting.
(D) Number of Epochs: The number of epochs is a hyperparameter that defines the number of
times the learning algorithm will work through the entire training dataset.
(E) Activation Functions: These are functions applied to each neuron's weighted sum of inputs plus a bias; the choice of function (e.g. ReLU, sigmoid, tanh) determines the non-linearity the network can express.
(F) Dropout Rate: This is a regularization technique where randomly selected neurons are ignored
during training, reducing overfitting by preventing complex co-adaptations on training data.
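As a small illustration of how these knobs show up in practice, here is a sketch (PyTorch assumed, all values illustrative) in which a single dictionary of hyperparameters drives both the architecture and the optimizer.

```python
import torch.nn as nn
import torch.optim as optim

# All six options gathered in one place (values are illustrative).
hparams = {
    "learning_rate": 1e-3,
    "batch_size": 64,
    "num_layers": 3,
    "num_epochs": 20,
    "activation": nn.ReLU,
    "dropout_rate": 0.3,
}

# Build a small MLP whose shape is controlled entirely by the dict.
layers = []
for _ in range(hparams["num_layers"]):
    layers += [nn.Linear(128, 128), hparams["activation"](),
               nn.Dropout(hparams["dropout_rate"])]
model = nn.Sequential(*layers, nn.Linear(128, 10))

optimizer = optim.Adam(model.parameters(), lr=hparams["learning_rate"])
# batch_size and num_epochs would drive the DataLoader and training loop.
```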
7. Your neural network seems to have high bias. What would your next steps be?
(A) Make the network deeper
(B) Add regularization
(C) Get more training data
(D) Get more testing data
Solution
(A) Make the network deeper
High bias in a neural network indicates that the model is too simple and is underfitting the training
data. It means the model does not have enough capacity to capture the underlying trend of the data. To
address high bias, you would generally need to make your model more complex. Adding more layers to
the neural network can help it learn more complex patterns and relationships in the data.
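A minimal sketch of what "make the network deeper" means in code (PyTorch assumed, layer sizes illustrative):

```python
import torch.nn as nn

# An underfitting (high-bias) model: a single linear layer.
shallow = nn.Sequential(nn.Linear(32, 10))

# Adding depth (and non-linearity) raises capacity, letting the model
# fit more complex decision boundaries.
deeper = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
```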
8. Which techniques are useful for reducing the variance of your model?
(A) Dropout
(B) L2 regularisation
(C) Data augmentation
(D) Increase the number of neurons per layer
Solution
(A) Dropout, (B) L2 regularisation, (C) Data Augmentation
High variance in a model indicates overfitting: the model performs well on the training data but poorly on
unseen data. This means the model is too complex and has learned the training data too well, including
its noise and outliers.
(A) Dropout: This technique temporarily removes a random subset of neurons at each training step,
preventing them from co-adapting too closely to the training data.
(B) L2 Regularization: Also known as weight decay, it adds a penalty on the size of the weights to
the loss function. By discouraging large weights, L2 regularization ensures that the model is simpler
and less likely to fit the noise in the training data, thus reducing variance.
(C) Data Augmentation: Data augmentation effectively increases the size and diversity of the training
data, helping the model generalize better to new, unseen data by reducing overfitting and thus
variance.
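A sketch showing all three techniques side by side (PyTorch and torchvision assumed; the specific rates and transforms are illustrative):

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import transforms

# (A) Dropout: randomly zeroes activations during training.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 10),
)

# (B) L2 regularization: weight_decay adds the squared-weight penalty.
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# (C) Data augmentation: random transforms enlarge the effective dataset.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])
```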
Refer to the U-Net paper for the next two questions [Q9-10]
9. How do the authors solve the problem of having a small dataset?
(A) Elastic deformations are performed to teach the neural network shift and rotation invariance while
augmenting the data strongly.
(B) A weighted loss function is used to teach the network appropriate boundary segmentations.
(C) They append a further dataset from a different competition to learn features.
(D) They take pretrained weights from a network trained on ImageNet for low level features.
Solution
(A) and (B)
The paper compensates for the small training set with heavy data augmentation, in particular elastic deformations of the training images that teach the network the desired invariances, and with a weighted loss that gives the separating background between touching cells a large weight.
10. Based on the referenced text, why does the U-Net architecture incorporate skip connections?
(A) They act as regularizers for backpropagation.
(B) They help preserve an output size similar to that of the input.
(C) They are required to prevent padding operations from causing exploding gradients.
(D) They reintroduce features lost during the downsampling and upsampling operations, making them available in the decoder path, while also helping to prevent vanishing gradients.
Solution
(D)
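A sketch of one decoder step with a skip connection (PyTorch assumed; unlike the original U-Net, which uses unpadded convolutions and crops the encoder map, this sketch uses padded convolutions so the shapes line up directly):

```python
import torch
import torch.nn as nn

# One decoder step of a U-Net-style network: upsample, then concatenate
# the matching encoder feature map (the skip connection) before convolving,
# so spatial detail lost during downsampling re-enters the decoder path.
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
conv = nn.Conv2d(64 + 64, 64, kernel_size=3, padding=1)

decoder_in = torch.randn(1, 128, 16, 16)    # coarse decoder features
encoder_skip = torch.randn(1, 64, 32, 32)   # saved encoder features

x = up(decoder_in)                       # (1, 64, 32, 32)
x = torch.cat([x, encoder_skip], dim=1)  # (1, 128, 32, 32): skip connection
x = conv(x)                              # (1, 64, 32, 32)
```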
Question Number   Key/Answer
1                 C
2                 B
3                 C
4                 A
5                 A
6                 A,B,C,D,E,F
7                 A
8                 A,B,C
9                 A,B
10                D