Welcome to the 26th part of our machine learning tutorial series and the next part in our Support Vector Machine section. In this tutorial, we're going to be working on our SVM's optimization method: fit.
Where we left off, our code was:
import matplotlib.pyplot as plt
from matplotlib import style
import numpy as np
style.use('ggplot')

class Support_Vector_Machine:
    def __init__(self, visualization=True):
        self.visualization = visualization
        self.colors = {1:'r',-1:'b'}
        if self.visualization:
            self.fig = plt.figure()
            self.ax = self.fig.add_subplot(1,1,1)

    # train
    def fit(self, data):
        pass

    def predict(self,features):
        # sign( x.w+b )
        classification = np.sign(np.dot(np.array(features),self.w)+self.b)
        return classification

data_dict = {-1:np.array([[1,7],
                          [2,8],
                          [3,8],]),

             1:np.array([[5,1],
                         [6,-1],
                         [7,3],])}
We'll begin adding to the fit method:
    def fit(self, data):
        self.data = data
        # { ||w||: [w,b] }
        opt_dict = {}

        transforms = [[1,1],
                      [-1,1],
                      [-1,-1],
                      [1,-1]]
Note that this method first takes self (remember, that's just standard for a method), and then data. The data is the data we intend to train against / optimize with. In our case, that's going to be data_dict, which we've already created.
We set self.data to that data. Now we can reference the training data anywhere else in the class (but again, we'd have to run the fit method first with data for it to work without an error).
Next, we begin building an optimization dictionary as opt_dict, which is going to contain any optimization values. As we step down our w vector, we'll test that vector in our constraint function, finding the largest b, if any, that will satisfy the equation, and then we'll store all of that data in our optimization dictionary. The dictionary will be { ||w|| : [w,b] }. When we're all done optimizing, we'll choose the values of w and b from whichever entry in the dictionary has the lowest key value, which is ||w|| (remember, minimizing the magnitude of w is exactly our objective).
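To make that end goal concrete, here's a rough sketch (not the final code, the variable names norms and opt_choice are just illustrative) of how we might make that choice once opt_dict is populated:

# hypothetical sketch: choosing w and b after optimization is finished
norms = sorted([n for n in opt_dict])  # keys are magnitudes ||w||, smallest first
opt_choice = opt_dict[norms[0]]        # [w, b] pair with the smallest ||w||
self.w = opt_choice[0]
self.b = opt_choice[1]

Whatever form the final version takes, it needs to set self.w and self.b, since those are what the predict method uses.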
Finally, we set our transforms. We've explained that our intention there is to make sure we check every possible sign version of the w vector.
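For example, here's roughly what applying the transforms to a candidate w looks like (purely illustrative, with a made-up w):

import numpy as np

transforms = [[1,1], [-1,1], [-1,-1], [1,-1]]
w = np.array([5, 5])                # made-up candidate vector
for transformation in transforms:
    w_t = w * transformation
    print(w_t)                      # [5 5], [-5 5], [-5 -5], [5 -5]
    # every sign combination has the same magnitude ||w||, so we test them all
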
Next, we need some starting point that matches our data. To do this, we're going to first reference our training data to pick some halfway-decent starting values:
        # finding values to work with for our ranges.
        all_data = []
        for yi in self.data:
            for featureset in self.data[yi]:
                for feature in featureset:
                    all_data.append(feature)

        self.max_feature_value = max(all_data)
        self.min_feature_value = min(all_data)
        # no need to keep this memory.
        all_data = None
All we're doing here is cycling through all of our data and finding the highest and lowest feature values (with our data_dict, max_feature_value comes out to 8 and min_feature_value to -1). Now we're going to work on our step sizes:
        step_sizes = [self.max_feature_value * 0.1,
                      self.max_feature_value * 0.01,
                      # starts getting very high cost after this.
                      self.max_feature_value * 0.001]
What we're doing here is setting the sizes of the steps we want to take. For our first pass, we'll take big steps (10% of the max feature value). Once we find the minimum with these steps, we'll step down to a 1% step size to continue homing in on the minimum. Then, one more time, we step down to 0.1% for fine tuning. We could continue stepping down, depending on how precise you want to get. I will discuss towards the end of this project how you could determine within your program whether or not you should continue optimizing.
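This is just coarse-to-fine search, and it works here because the problem is convex. As a tiny standalone illustration (not SVM code, just a made-up one-dimensional convex cost):

f = lambda w: (w - 3)**2       # toy convex "cost", minimum at w = 3
w = 10.4                       # arbitrary starting point above the minimum
for step in [1.0, 0.1, 0.01]:  # each pass uses a 10x smaller step
    while f(w - step) < f(w):  # keep stepping while the cost keeps dropping
        w -= step
print(w)                       # ~3.0, refined a little further on each pass
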
Next, we're going to set some variables that will help us make steps with b (we take larger steps for b than for w, since we care far more about precision in w than in b), and keep track of the latest optimal value:
        # extremely expensive
        b_range_multiple = 5
        b_multiple = 5
        latest_optimum = self.max_feature_value*10
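We haven't written the b search yet, but to give a sense of where these values are likely headed, here's a hedged, self-contained sketch (not the final code): b would sweep a wide range scaled by b_range_multiple, using a step that is b_multiple times the w step.

import numpy as np

max_feature_value = 8        # from our data_dict above
b_range_multiple = 5
b_multiple = 5
step = max_feature_value * 0.1

# b sweeps +/- 5x the max feature value, in steps 5x coarser than the w step
for b in np.arange(-1 * (max_feature_value * b_range_multiple),
                   max_feature_value * b_range_multiple,
                   step * b_multiple):
    pass  # later: test each candidate (w, b) against the constraints
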
Now we're ready to begin stepping:
        for step in step_sizes:
            w = np.array([latest_optimum,latest_optimum])
            # we can do this because convex
            optimized = False
            while not optimized:
                pass
The idea here is to begin stepping down the vector. To begin, we set optimized to False, and we'll reset it for each major step. The optimized variable will become True once we have stepped all the way down to the base of the convex shape (our bowl).
We will pick up with the logic in the next tutorial, where we actually plug in values to the constraint problem to see if we can find values to save.
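At the heart of that logic will be the SVM constraint itself: yi*(xi . w + b) >= 1 for every sample in the training data. As a preview, a helper along these lines could do that check (the function name and structure are just a sketch, not necessarily how we'll end up writing it):

import numpy as np

def fits_constraint(w_t, b, data):
    # returns True only if every sample satisfies yi*(xi . w_t + b) >= 1
    for yi in data:              # yi is the class label: -1 or 1
        for xi in data[yi]:      # each featureset in that class
            if not yi * (np.dot(w_t, xi) + b) >= 1:
                return False     # a single violation rejects this (w_t, b)
    return True
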
Full code up to this point:
import matplotlib.pyplot as plt
from matplotlib import style
import numpy as np
style.use('ggplot')

class Support_Vector_Machine:
    def __init__(self, visualization=True):
        self.visualization = visualization
        self.colors = {1:'r',-1:'b'}
        if self.visualization:
            self.fig = plt.figure()
            self.ax = self.fig.add_subplot(1,1,1)

    # train
    def fit(self, data):
        self.data = data
        # { ||w||: [w,b] }
        opt_dict = {}

        transforms = [[1,1],
                      [-1,1],
                      [-1,-1],
                      [1,-1]]

        all_data = []
        for yi in self.data:
            for featureset in self.data[yi]:
                for feature in featureset:
                    all_data.append(feature)

        self.max_feature_value = max(all_data)
        self.min_feature_value = min(all_data)
        all_data = None

        step_sizes = [self.max_feature_value * 0.1,
                      self.max_feature_value * 0.01,
                      # point of expense:
                      self.max_feature_value * 0.001,]

        # extremely expensive
        b_range_multiple = 5
        b_multiple = 5
        latest_optimum = self.max_feature_value*10

        for step in step_sizes:
            w = np.array([latest_optimum,latest_optimum])
            # we can do this because convex
            optimized = False
            while not optimized:
                pass

    def predict(self,features):
        # sign( x.w+b )
        classification = np.sign(np.dot(np.array(features),self.w)+self.b)
        return classification

data_dict = {-1:np.array([[1,7],
                          [2,8],
                          [3,8],]),

             1:np.array([[5,1],
                         [6,-1],
                         [7,3],])}