
Machine Learning Exam Solutions

Q1a

Apply the K-Nearest Neighbor (KNN) algorithm on the following data. Predict the student result for Physics = 6 marks, Chemistry = 8 marks. Consider the number of neighbors K = 3 and Euclidean distance as the distance measure.

Dataset:

| Physics (marks) | Chemistry (marks) | Result |
|-----------------|-------------------|--------|
| 4               | 3                 | Fail   |
| 6               | 7                 | Pass   |
| 7               | 8                 | Pass   |
| 5               | 5                 | Fail   |
| 8               | 8                 | Pass   |

Solution:

Compute the Euclidean distance from the query point (Physics = 6, Chemistry = 8) to each training point:

- (4, 3) Fail: sqrt((6-4)^2 + (8-3)^2) = sqrt(29) ≈ 5.39
- (6, 7) Pass: sqrt(0 + 1) = 1.00
- (7, 8) Pass: sqrt(1 + 0) = 1.00
- (5, 5) Fail: sqrt(1 + 9) = sqrt(10) ≈ 3.16
- (8, 8) Pass: sqrt(4 + 0) = 2.00

The K = 3 nearest neighbors are (6, 7), (7, 8) and (8, 8), all labeled Pass, so the majority vote predicts **Pass**.
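
A minimal NumPy sketch of the same calculation (the dataset is taken from the question; the code itself is illustrative, not part of the original solution):

```python
import numpy as np
from collections import Counter

# Training data: [Physics, Chemistry] marks and results
X = np.array([[4, 3], [6, 7], [7, 8], [5, 5], [8, 8]])
y = np.array(["Fail", "Pass", "Pass", "Fail", "Pass"])
query = np.array([6, 8])  # student to classify
K = 3

# Euclidean distance from the query to every training point
distances = np.linalg.norm(X - query, axis=1)

# Indices of the K nearest neighbours and majority vote over their labels
nearest = np.argsort(distances)[:K]
prediction = Counter(y[nearest]).most_common(1)[0][0]

for idx in nearest:
    print(f"neighbour {tuple(X[idx].tolist())} -> distance {distances[idx]:.2f}, label {y[idx]}")
print("Prediction:", prediction)  # Pass
```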

Q1b

Explain Support Vector Machine (SVM) classification algorithm with a suitable example.

Support Vector Machine (SVM) works by finding the hyperplane that maximally separates data points of different classes. Key concepts include the hyperplane, the margin, and the support vectors (the training points closest to the hyperplane, which define the margin). For non-linearly separable data, kernels (e.g., RBF) are used to transform the data into a higher-dimensional space where a linear separator exists. Example: classifying students as Pass or Fail from two exam scores, where the SVM chooses the boundary with the widest margin between the two groups.
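
A minimal scikit-learn sketch, assuming a hypothetical 2-D toy dataset, illustrating the linear and RBF-kernel cases described above:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 2-D toy data: two classes of points
X = np.array([[1, 2], [2, 3], [2, 1], [6, 7], [7, 8], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM finds the maximum-margin hyperplane between the classes
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print("Support vectors:\n", clf.support_vectors_)
print("Linear prediction for [5, 5]:", clf.predict([[5, 5]]))

# For non-linearly separable data, an RBF kernel implicitly maps points
# to a higher-dimensional space before separating them
rbf_clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("RBF prediction for [5, 5]:", rbf_clf.predict([[5, 5]]))
```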

Q2a

Explain any 4 evaluation measures of binary classification with examples.

1. Accuracy: Ratio of correct predictions to total predictions. Example: 90% if 90 of 100 predictions are correct.

2. Precision: Ratio of true positives to all predicted positives. Example: 80% if 8 of 10 positive predictions are correct.

3. Recall: Ratio of true positives to all actual positives. Example: 80% if 8 of 10 actual positives are found.

4. F1-Score: Harmonic mean of Precision and Recall. Example: ≈ 74.7% for Precision = 80%, Recall = 70%.
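
A short sketch of how these four measures could be computed with scikit-learn on hypothetical predictions (the numbers are illustrative, not the ones in the examples above):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical binary labels (1 = positive, 0 = negative)
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))   # correct / total
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1-score :", f1_score(y_true, y_pred))         # harmonic mean of the two
```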

Q2b

Explain construction of a multi-classifier using One-vs-All and One-vs-One approaches.

- One-vs-All (OvA): Train N binary classifiers, each separating one class from the remaining classes; at prediction time the class whose classifier gives the highest score is chosen.

- One-vs-One (OvO): Train N(N-1)/2 binary classifiers, one for every pair of classes; at prediction time the class that wins the most pairwise votes is chosen.
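
A minimal sketch, assuming scikit-learn's OneVsRestClassifier and OneVsOneClassifier wrappers and the Iris dataset (3 classes), showing how many binary classifiers each approach constructs:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier

X, y = load_iris(return_X_y=True)  # 3 classes

# One-vs-All: N = 3 binary classifiers, one per class vs. the rest
ova = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

# One-vs-One: N(N-1)/2 = 3 binary classifiers, one per pair of classes
ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

print("OvA binary classifiers:", len(ova.estimators_))  # 3
print("OvO binary classifiers:", len(ovo.estimators_))  # 3
```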


Q3a

Explain K-Means clustering algorithm, its advantages and disadvantages.

K-Means partitions data into K clusters: each point is assigned to its nearest centroid, the centroids are recomputed as the mean of their assigned points, and the two steps repeat until the assignments stop changing. Advantages include simplicity and efficiency on large datasets. Disadvantages include sensitivity to the initial centroids, the need to choose K in advance, and poor performance on non-spherical clusters.
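
A minimal scikit-learn sketch on hypothetical 2-D data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D data with two natural groups
X = np.array([[1, 1], [1.5, 2], [2, 1.5],
              [8, 8], [8.5, 9], [9, 8.5]])

# K-Means: assign points to the nearest centroid, recompute centroids, repeat
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print("Labels   :", kmeans.labels_)
print("Centroids:\n", kmeans.cluster_centers_)
```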

Q4a

Elaborate the need for clustering and explain how the elbow method is used to decide the value of cluster k.

Clustering groups similar data points without labels, which supports tasks such as customer segmentation and anomaly detection. The elbow method runs K-Means for a range of k values and plots the Within-Cluster Sum of Squares (WCSS) against k. The "elbow point", after which increasing k yields only a small reduction in WCSS, is chosen as the value of k.
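
A short sketch of the elbow method, assuming synthetic data with three blobs and using K-Means inertia as the WCSS:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical data: three well-separated blobs of 50 points each
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
               for c in ([0, 0], [5, 5], [0, 5])])

# WCSS (inertia_) for k = 1..8; the "elbow" in this curve suggests k
wcss = []
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wcss.append(km.inertia_)

for k, w in enumerate(wcss, start=1):
    print(f"k={k}: WCSS={w:.1f}")
# WCSS drops sharply up to k=3 and flattens afterwards -> choose k = 3
```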

Q4b

Explain Divisive Hierarchical Clustering (DHC) algorithm with example.

DHC is a top-down approach: it starts with all data points in one cluster and recursively splits clusters into smaller ones until each point forms its own cluster (or a stopping criterion is met). Example: a dataset is first split into the two most dissimilar groups, and each group is then split again based on maximum dissimilarity, producing a hierarchy of clusters.
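
A minimal illustrative sketch of the top-down splitting idea, implemented here as recursive 2-means splits (a bisecting-K-Means style approximation, not the only way to perform divisive clustering):

```python
import numpy as np
from sklearn.cluster import KMeans

def divisive_split(points, min_size=1, depth=0):
    """Recursively split a cluster in two until each cluster is a single point."""
    print("  " * depth, "cluster:", points.tolist())
    if len(points) <= min_size:
        return
    # Split the current cluster into two sub-clusters with 2-means
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
    for lbl in (0, 1):
        divisive_split(points[labels == lbl], min_size, depth + 1)

# Hypothetical data: two well-separated groups of three points
data = np.array([[1, 0], [2, 0], [3, 0], [10, 0], [11, 0], [12, 0]])
divisive_split(data)
```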


Q5a

Differentiate the Bagging and Boosting approaches in ensemble learning.

- Bagging: Reduces variance by training multiple independent models in parallel on bootstrap samples of the data and aggregating their predictions. Example: Random Forest.

- Boosting: Reduces bias by training models sequentially, each focusing on the points the previous models misclassified. Example: AdaBoost.
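
A minimal scikit-learn sketch comparing the two approaches on a hypothetical synthetic dataset (default base learners are used):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical synthetic binary classification data
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Bagging: independent models on bootstrap samples, predictions aggregated (variance reduction)
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting: models trained sequentially, each focusing on earlier mistakes (bias reduction)
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

print("Bagging CV accuracy :", cross_val_score(bagging, X, y, cv=5).mean())
print("Boosting CV accuracy:", cross_val_score(boosting, X, y, cv=5).mean())
```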

Q5c

Explain AdaBoost algorithm in detail.

AdaBoost combines many weak learners (typically decision stumps) into a strong classifier. At each round it trains a weak learner on the weighted data, increases the weights of the misclassified samples so the next learner focuses on them, and assigns the learner a vote proportional to its accuracy. The final prediction is the weighted vote of all weak learners.
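
A short NumPy/scikit-learn sketch of the AdaBoost weight-update loop on hypothetical data, using decision stumps as weak learners and binary labels in {-1, +1}:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical binary data with labels in {-1, +1}
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

n_rounds = 5
w = np.full(len(X), 1 / len(X))        # start with uniform sample weights
learners, alphas = [], []

for _ in range(n_rounds):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w[pred != y]) / np.sum(w)           # weighted training error
    alpha = 0.5 * np.log((1 - err) / (err + 1e-10))  # learner's vote weight
    w *= np.exp(-alpha * y * pred)                   # boost misclassified samples
    w /= w.sum()
    learners.append(stump)
    alphas.append(alpha)

# Final prediction: sign of the weighted vote of all weak learners
final = np.sign(sum(a * m.predict(X) for a, m in zip(alphas, learners)))
print("Training accuracy:", np.mean(final == y))
```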

Q6c

Explain Random Forest ensembles with an example.

Random Forest applies Bagging to Decision Trees and adds extra randomness: each tree is trained on a bootstrap sample, and each split considers only a random subset of the features. Example: building 100 trees on random samples and feature subsets, then combining their predictions by majority vote (classification) or averaging (regression).
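
A minimal scikit-learn sketch on a hypothetical synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical synthetic binary classification data
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Random Forest: bagged decision trees, each split considering a random
# subset of features; predictions are combined by majority vote
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
forest.fit(X, y)

print("Number of trees:", len(forest.estimators_))
print("Prediction for the first sample:", forest.predict(X[:1]))
```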

Q7a
Explain the following terms:

1. Markov Property: The future state depends only on the current state, not on the sequence of states that preceded it.

2. Bellman Equation: A recursive formula expressing the value of a state as the immediate reward plus the discounted value of its successor states.

3. Markov Reward Process: A Markov chain augmented with a reward function and a discount factor.

4. Markov Chain: A state-transition model in which the next state depends only on the current state.
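
For reference, the Bellman optimality equation and the value function of a Markov Reward Process can be written in their standard forms (assuming a reward function R, transition probabilities P, and a discount factor γ):

```latex
% Bellman optimality equation for the optimal state-value function V^*
V^*(s) = \max_{a} \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V^*(s') \Big]

% Value function of a Markov Reward Process (no actions)
V(s) = R(s) + \gamma \sum_{s'} P(s' \mid s) \, V(s')
```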

Q7b

Explain Q-Learning algorithm with example.

Q-Learning is a model-free RL algorithm. It maintains a table of Q-values Q(s, a) and updates them iteratively with the rule Q(s, a) ← Q(s, a) + α [r + γ max_a' Q(s', a') - Q(s, a)] until it converges to the optimal policy. Example: a robot navigating a grid to maximize rewards learns the shortest path to the goal.
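
A minimal sketch of tabular Q-Learning on a hypothetical 1-D corridor environment (states 0..4; reaching state 4 gives reward +1 and ends the episode):

```python
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    """Move left/right in the corridor; goal state 4 gives reward +1."""
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1

Q = np.zeros((n_states, n_actions))
for _ in range(300):                              # episodes
    s = int(rng.integers(n_states - 1))           # random non-terminal start
    for _ in range(100):                          # cap on episode length
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
        if done:
            break

print("Learned Q-table:\n", Q.round(2))
print("Greedy policy (0=left, 1=right):", np.argmax(Q, axis=1))
```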

Q8a

What is Reinforcement Learning? Explain real-time applications.

RL is a learning method where an agent interacts with an environment to maximize cumulative rewards. Applications include robotics, gaming, autonomous vehicles, and healthcare.

Q8b
Explain the following terms:

1. Supervised Learning: Learning from labeled data. Example: predicting house prices.

2. Unsupervised Learning: Discovering patterns in unlabeled data. Example: customer segmentation.

3. Reinforcement Learning: Learning via environment interactions to maximize rewards. Example: autonomous robots.
