A support vector machine (SVM) is a supervised machine learning model that can be used for both classification and regression. In practice, SVMs are most widely used for complex classification problems such as image recognition and voice detection.
The SVM algorithm outputs an optimal hyperplane that best separates the classes. A hyperplane is a boundary that separates the data set into its classes: a line in two dimensions, a plane in three dimensions, or an (n−1)-dimensional surface in n-dimensional space.
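As a minimal sketch of this idea, here is a linear SVM fitted on a tiny hypothetical 2-D data set (the points and labels are made up for illustration); for a linear kernel, scikit-learn exposes the learned hyperplane as the weight vector `coef_` and intercept `intercept_`, so the boundary is w·x + b = 0:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D data: two clearly separated classes (hypothetical points).
X = np.array([[1, 1], [2, 1], [1, 2], [5, 5], [6, 5], [5, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

# A linear-kernel SVM learns a separating line in 2-D.
clf = SVC(kernel="linear")
clf.fit(X, y)

# The hyperplane is w . x + b = 0.
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)

# Points near each cluster fall on the corresponding side of the line.
print(clf.predict([[1.5, 1.5], [5.5, 5.5]]))
```

New points are classified simply by which side of the learned line they fall on.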
In addition to performing linear classification, SVMs can efficiently perform non-linear classification. A non-linear SVM is used when the data cannot be separated by a straight line.
With the help of kernel functions, we can implicitly map the data into a higher-dimensional space in which there is a clear division between the two classes. The kernel trick lets the SVM operate in that space without ever computing the transformed coordinates explicitly, so a boundary that is non-linear in the original space becomes linear in the transformed one.
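A quick way to see the kernel at work is scikit-learn's `make_circles` data set (two concentric rings, which no straight line can separate): a linear-kernel SVM struggles, while an RBF-kernel SVM separates the rings easily. This is only an illustrative sketch, not a benchmark:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: no straight line separates the two classes.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# Linear kernel: forced to draw a straight boundary.
linear = SVC(kernel="linear").fit(X, y)

# RBF kernel: implicitly maps the data so the rings become separable.
rbf = SVC(kernel="rbf", gamma="scale").fit(X, y)

print("linear accuracy:", linear.score(X, y))
print("rbf accuracy:", rbf.score(X, y))
```

The RBF model's training accuracy should be near 1.0 on this data, while the linear model's hovers near chance.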
Pros and Cons — SVM
- It is useful for both linearly separable (hard margin) and non-linearly separable (soft margin) data.
- It is effective in high-dimensional spaces.
- It is effective in cases where the number of dimensions is greater than the number of samples.
- It uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
- Picking the right kernel and parameters can be computationally intensive.
- It does not perform well when the data set is noisy, i.e. when the target classes overlap.
- SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation.
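On that last point, scikit-learn exposes this behaviour through the `probability=True` flag on `SVC`: it triggers an internal cross-validated calibration (Platt scaling) at fit time, which is why it is noticeably slower than a plain fit. A small sketch on the built-in iris data set:

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# probability=True runs an internal cross-validated calibration
# (Platt scaling) during fit, adding significant training cost.
clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X, y)

# Calibrated class probabilities for one sample; each row sums to 1.
proba = clf.predict_proba(X[:1])
print(proba)
```

If you only need hard class labels, leave `probability` at its default of `False` and call `predict` instead.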
SVMs are linear models that require numeric attributes. If any attributes are non-numeric, we must convert them to numeric form during the data preparation stage.
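A common way to do this conversion is one-hot encoding, which turns each category into its own 0/1 column. A minimal sketch with scikit-learn's `OneHotEncoder`, using a made-up categorical column:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Hypothetical non-numeric attribute: a single "colour" column.
colors = np.array([["red"], ["green"], ["blue"], ["green"]])

# One-hot encoding: one 0/1 column per category.
enc = OneHotEncoder()
X_numeric = enc.fit_transform(colors).toarray()

print(enc.categories_)  # categories discovered, in sorted order
print(X_numeric)        # one row per sample, one column per category
```

The resulting numeric matrix can then be fed to the SVM like any other feature matrix.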