Welcome to the 31st part of our machine learning tutorial series and the next part in our Support Vector Machine section. In this tutorial, we're going to talk about the Soft Margin Support Vector Machine.
First, there are two major reasons why the soft-margin classifier might be superior. One reason is that your data is not perfectly linearly separable, but is very close, and it makes more sense to continue using the default linear kernel. The other reason is that, even if you are using a kernel, you may wind up with significant overfitting if you insist on a hard margin. For example, consider:
Here's a case of data that isn't currently linearly separable. Assuming a hard margin (which is what we've been using in our calculations so far), we might use a kernel to achieve a decision boundary of:
Next, noting imperfections in my drawing abilities, let's draw the support vector hyperplanes, and circle the support vectors:
In this case, every single data sample for the positive class is a support vector, and only two of the negative class aren't support vectors. This signals a high chance that overfitting has happened. That's something we want to avoid since, as we move forward to classify future points, we have almost no wiggle room and are likely to misclassify new data. What if we did something like this instead:
We have a couple of errors, or violations, noted by the arrows, but this is likely to classify future featuresets better overall. What we have here is a "soft margin" classifier, which allows for some "slack" on the errors that we might get in the optimization process.
Our new optimization is the above constraint, where each slack value is greater than or equal to zero. The closer the slack is to 0, the more "hard-margin" we are; the higher the slack, the softer the margin. If the slack were 0, we'd have a typical hard-margin classifier. As you might guess, however, we'd ideally like to minimize slack. To do this, we add it to the minimization of the magnitude of vector w:
Thus, we actually want to minimize 1/2 ||w||^2 + C * (the sum of all of the slack values).
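Putting the constraint and the objective together, and just writing the slack on sample i as ξ_i, the full soft-margin problem looks like this:

```latex
\min_{w,\,b,\,\xi} \quad \frac{1}{2}\lVert w \rVert^{2} + C \sum_{i} \xi_{i}
\qquad \text{subject to} \qquad
y_{i}\,(x_{i} \cdot w + b) \ge 1 - \xi_{i}, \quad \xi_{i} \ge 0 \;\; \text{for all } i
```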
With that, we brought in yet another variable: C. C is a multiplier that controls how much we want the slack to affect the rest of the equation. The lower C is, the less important the sum of the slacks is in relation to the magnitude of vector w, and vice versa. In most cases, C defaults to 1.
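To get a feel for what C does in practice, here's a quick sketch using scikit-learn's SVC (not the classifier we've been building from scratch) on some made-up, slightly overlapping data; the data and the specific C values are just illustrative:

```python
# Illustrative sketch: how C changes the softness of the margin.
# A large C punishes slack heavily (closer to hard-margin); a small C
# tolerates more violations, which usually means more support vectors.
import numpy as np
from sklearn import svm

np.random.seed(1)
# Two slightly overlapping classes, labels +1 and -1
X = np.vstack([np.random.randn(50, 2) + [1.5, 1.5],
               np.random.randn(50, 2) - [1.5, 1.5]])
y = np.hstack([np.ones(50), -np.ones(50)])

for C in [0.01, 1, 100]:
    clf = svm.SVC(kernel='linear', C=C).fit(X, y)
    print(f"C={C:<6} support vectors: {clf.n_support_.sum()}")
```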
So there you have the Soft-Margin Support Vector Machine, and why you might want to use it. Next, we're going to show some sample code that incorporates a soft margin, kernels, and CVXOPT.
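If you want a rough preview before then, below is a minimal sketch of the soft-margin problem solved with CVXOPT for a plain linear kernel. CVXOPT's quadratic programming solver works on the dual form of the problem, where the soft margin shows up as the upper bound C on each Lagrange multiplier alpha. The toy data and variable names here are just illustrative assumptions, not the code we'll actually build:

```python
# Rough preview sketch: soft-margin SVM via CVXOPT's QP solver, linear kernel only.
import numpy as np
from cvxopt import matrix, solvers

np.random.seed(0)
# Two slightly overlapping classes, labels +1 and -1
X = np.vstack([np.random.randn(20, 2) + [1.5, 1.5],
               np.random.randn(20, 2) - [1.5, 1.5]])
y = np.hstack([np.ones(20), -np.ones(20)])

n = X.shape[0]
C = 1.0                      # soft-margin penalty (our default of 1)
K = X @ X.T                  # linear kernel (Gram matrix)

# CVXOPT solves: min 1/2 a^T P a + q^T a  s.t.  G a <= h,  A a = b
P = matrix(np.outer(y, y) * K)
q = matrix(-np.ones(n))
G = matrix(np.vstack([-np.eye(n), np.eye(n)]))   # encodes 0 <= alpha_i <= C
h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
A = matrix(y.reshape(1, -1))
b = matrix(0.0)

solvers.options['show_progress'] = False
alphas = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])

# Recover w, then get b from the "free" support vectors (0 < alpha < C),
# which sit exactly on the support vector hyperplanes
w = ((alphas * y)[:, None] * X).sum(axis=0)
free = (alphas > 1e-5) & (alphas < C - 1e-5)
bias = np.mean(y[free] - X[free] @ w)
print("w:", w, "b:", bias)
```

Notice that the only difference from a hard-margin setup here is the extra block of constraints capping each alpha at C; dropping that cap (letting alpha grow without bound) takes you back to the hard-margin case.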