SVM Set3

Support vectors in SVM are the closest data points to the hyperplane and determine its position. Hard-Margin SVMs apply to linearly separable data without margin violations, while Soft-Margin SVMs allow some violations for non-linearly separable data. The kernel trick enables mapping non-linear data into higher dimensions for better separation, making SVMs preferable in high-dimensional spaces or when data is not linearly separable.

1) What are support vectors in SVM?

2) What are Hard-Margin and Soft-Margin SVMs?


3) What is the kernel trick and why is it important?
4) Give some situations where you would use an SVM over a
Random Forest machine learning algorithm.
5) What is the role of C in SVM? How does it affect the
bias/variance trade-off?
6) SVM being a large margin classifier, is it influenced by
outliers?
7) Can we apply the kernel trick to logistic regression?
Why is it not used in practice then?
8) What is the difference between logistic regression and
SVM without a kernel?
9) The training examples closest to the separating hyperplane are
called:
10) Which SVM model is more suitable for non-linearly
separable data?

==========================================================================
1) Support vectors are the data points nearest to the hyperplane: the points of the data set
that, if removed, would alter the position of the dividing hyperplane.
Using these support vectors, we maximize the margin of the classifier.
For computing predictions, only the support vectors are used.
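A minimal sketch (using scikit-learn and made-up toy data) of how a fitted SVM exposes exactly these points:

```python
import numpy as np
from sklearn.svm import SVC

# Two tiny linearly separable clusters (illustrative data).
X = np.array([[1.0, 1.0], [1.5, 1.2], [3.0, 3.0], [3.5, 3.2]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only these points determine the hyperplane; removing any other
# point would leave the fitted boundary unchanged.
print(clf.support_vectors_)  # the support vectors themselves
print(clf.support_)          # their indices in X
```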
2) Hard-Margin SVMs require linearly separable training data, and no data points are allowed in
the margin areas. This type of linear classification is known as hard-margin classification.
Soft-Margin SVMs handle training data that are not linearly separable. They choose a hyperplane
that allows margin violations: some data points may lie inside the margin area or on the
incorrect side of the hyperplane.
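A small sketch of the difference in practice, assuming scikit-learn (which has no explicit hard-margin mode, but a very large C approximates one while a small C gives a soft margin):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic clusters whose tails overlap slightly.
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

hard_ish = SVC(kernel="linear", C=1e6).fit(X, y)  # ~hard margin
soft = SVC(kernel="linear", C=0.1).fit(X, y)      # soft margin

# The soft-margin model typically keeps more support vectors, because
# points that violate the margin also become support vectors.
print(len(hard_ish.support_), len(soft.support_))
```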

3) The idea is to map non-linear data into a higher-dimensional space where we can find a
hyperplane that separates the samples. The kernel trick avoids the complexity of computing
the mapping function explicitly: the kernel function directly defines the inner product in
the transformed space. Application of the kernel trick is not limited to the SVM algorithm;
any computation involving dot products (x, y) can utilize the kernel trick.
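A minimal worked example, with an explicit degree-2 feature map chosen for illustration: the polynomial kernel (x·y)² equals the dot product of the mapped vectors, so the mapping never has to be computed in general:

```python
import numpy as np

def phi(v):
    # Explicit degree-2 monomial map for 2-D input:
    # (x1^2, x2^2, sqrt(2)*x1*x2)
    x1, x2 = v
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

lhs = np.dot(x, y) ** 2       # kernel evaluated in the original space
rhs = np.dot(phi(x), phi(y))  # dot product in the mapped space

print(lhs, rhs)  # both print 121.0: identical, without forming phi
```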

4) The main reason to use an SVM instead is that the problem might not be linearly separable.
In that case, we have to use an SVM with a non-linear kernel (e.g. RBF).
Another related reason to use SVMs is when the data lives in a high-dimensional space. For
example, SVMs have been reported to work better for text classification.
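A toy sketch of that high-dimensional text setting, assuming scikit-learn and made-up documents:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Illustrative documents; TF-IDF produces a sparse high-dimensional
# feature space, where linear SVMs tend to do well.
docs = ["free money offer now", "meeting agenda attached",
        "win a free prize", "project status report"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(docs, labels)
print(model.predict(["free prize meeting"]))
```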

5) In the soft-margin formulation of SVM, C is a hyperparameter that adds a penalty for each
misclassified data point (margin violation).
A large value of C implies a small margin and a tendency to overfit the training data
(low bias, high variance).
A small value of C implies a large margin, which might lead to underfitting of the model
(high bias, low variance).
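A hedged sketch of this trade-off, assuming scikit-learn and synthetic overlapping clusters; for a linear SVM the margin width is 2/||w||, and it shrinks as C grows:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1.2, (30, 2)), rng.normal(2.5, 1.2, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

for C in [0.01, 1.0, 100.0]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2 / np.linalg.norm(clf.coef_)  # margin width = 2 / ||w||
    print(f"C={C:>6}: margin={margin:.3f}, support vectors={len(clf.support_)}")
```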


6) Yes, if C is large; otherwise not. With a large C, margin violations are heavily penalized,
so outliers can pull the decision boundary toward themselves. With a small C, violations are
tolerated and outliers have little influence.

7) Yes, the kernel trick can be applied to logistic regression, since its computation also
involves dot products of the data points. It is rarely used in practice because kernelized
logistic regression is computationally more expensive than SVM: O(N³) vs O(N²k), where k is
the number of support vectors. The classifier in SVM is designed such that it is defined only
in terms of the support vectors, whereas in logistic regression the classifier is defined over
all the points and not just the support vectors. This allows SVMs to enjoy some natural
speed-ups (in terms of efficient implementation) that are hard to achieve for logistic
regression.
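A rough sketch of what kernelized logistic regression looks like, assuming scikit-learn; the N x N kernel matrix is used as the feature matrix, which also illustrates the cost that makes this impractical at scale:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # non-linear target

K = rbf_kernel(X, X, gamma=1.0)  # N x N kernel matrix as features
klr = LogisticRegression(max_iter=1000).fit(K, y)

# Predictions need the kernel against ALL training points, not just a
# few support vectors, which is the efficiency gap described above.
X_new = rng.normal(size=(5, 2))
K_new = rbf_kernel(X_new, X, gamma=1.0)
print(klr.predict(K_new))
```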

8) Both are linear classifiers; they differ mainly in the loss function (hinge loss for the
SVM versus logistic loss) and in the implementation. The SVM is much more efficient, and good
optimization packages exist for it.
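A minimal sketch contrasting the two losses on the same signed margin m = y·f(x), with y in {-1, +1}: hinge loss is exactly zero past the margin, while logistic loss never reaches zero:

```python
import numpy as np

m = np.linspace(-2, 3, 6)          # signed margins
hinge = np.maximum(0, 1 - m)       # SVM objective term
logistic = np.log(1 + np.exp(-m))  # logistic regression term

for mi, h, l in zip(m, hinge, logistic):
    print(f"margin={mi:+.1f}  hinge={h:.3f}  logistic={l:.3f}")
```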

9)Support vectors

10)Soft margin classifier
