Welcome to the 36th part of our machine learning tutorial series, and another tutorial within the topic of clustering.
In the previous tutorial, we covered how to handle non-numerical data, and here we're going to actually apply the K-Means algorithm to the Titanic dataset. K-Means is a flat-clustering algorithm, which means we need to tell the machine only one thing: how many clusters there ought to be. We're going to tell the algorithm to find two groups, and we expect it to separate survivors and non-survivors mostly into those two groups.
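To make "flat clustering" concrete before we touch the Titanic data, here is a minimal sketch on made-up toy points (the data here is purely illustrative, not from the dataset), showing that the only structural choice we hand K-Means is n_clusters:

import numpy as np
from sklearn.cluster import KMeans

# Six 2D points forming two obvious blobs (toy data)
pts = np.array([[1.0, 2.0], [1.5, 1.8], [1.0, 0.6],
                [8.0, 8.0], [9.0, 11.0], [8.5, 9.0]])

clf = KMeans(n_clusters=2)  # flat clustering: we pick the cluster count
clf.fit(pts)

print(clf.labels_)           # e.g. [0 0 0 1 1 1] (or flipped)
print(clf.cluster_centers_)  # one centroid per cluster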
Our code up to this point:
#https://pythonprogramming.net/static/downloads/machine-learning-data/titanic.xls
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
import numpy as np
from sklearn.cluster import KMeans
from sklearn import preprocessing
import pandas as pd

'''
Pclass     Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
survival   Survival (0 = No; 1 = Yes)
name       Name
sex        Sex
age        Age
sibsp      Number of Siblings/Spouses Aboard
parch      Number of Parents/Children Aboard
ticket     Ticket Number
fare       Passenger Fare (British pound)
cabin      Cabin
embarked   Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
boat       Lifeboat
body       Body Identification Number
home.dest  Home/Destination
'''

df = pd.read_excel('titanic.xls')
#print(df.head())
df.drop(['body', 'name'], axis=1, inplace=True)
# Note: the old df.convert_objects(convert_numeric=True) call was removed
# from pandas; handle_non_numerical_data below covers non-numeric columns.
df.fillna(0, inplace=True)
#print(df.head())


def handle_non_numerical_data(df):
    columns = df.columns.values
    for column in columns:
        text_digit_vals = {}

        def convert_to_int(val):
            return text_digit_vals[val]

        # Only convert columns that are not already numeric
        if df[column].dtype != np.int64 and df[column].dtype != np.float64:
            column_contents = df[column].values.tolist()
            unique_elements = set(column_contents)
            x = 0
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique] = x
                    x += 1
            df[column] = list(map(convert_to_int, df[column]))
    return df


df = handle_non_numerical_data(df)
From here, we can right away do the clustering:
X = np.array(df.drop(['survived'], axis=1).astype(float))
y = np.array(df['survived'])

clf = KMeans(n_clusters=2)
clf.fit(X)
Great, now let's see how well the clusters match the actual labels. One note I will make is that, in this case, survived is either a 0, which means non-survival, or a 1, which means survival. A clustering algorithm will find the clusters, but will then assign arbitrary labels to them in the order it finds them. Thus, the group that contains the survivors might be labeled 0 or 1, depending on a degree of randomness. Consequently, if your accuracy bounces between 30% and 70% from run to run, your model is actually 70% accurate. Let's see what we get:
correct = 0
for i in range(len(X)):
    predict_me = np.array(X[i].astype(float))
    predict_me = predict_me.reshape(-1, len(predict_me))
    prediction = clf.predict(predict_me)
    if prediction[0] == y[i]:
        correct += 1

print(correct/len(X))
0.4957983193277311
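If you would rather not eyeball the label flip described above, you can fold both possibilities into one number. A minimal sketch, reusing correct and X from the loop above:

accuracy = correct/len(X)
# Cluster ids are arbitrary, so the true accuracy is whichever
# side of 50% this run happened to land on:
print(max(accuracy, 1 - accuracy))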
Okay, so accuracy is somewhere between 49% and 51%... not very good! Remember the idea of preprocessing from a few tutorials ago, however? When we used it back then, it didn't seem to matter much, but how about here?
X = np.array(df.drop(['survived'], axis=1).astype(float))
X = preprocessing.scale(X)
y = np.array(df['survived'])

clf = KMeans(n_clusters=2)
clf.fit(X)

correct = 0
for i in range(len(X)):
    predict_me = np.array(X[i].astype(float))
    predict_me = predict_me.reshape(-1, len(predict_me))
    prediction = clf.predict(predict_me)
    if prediction[0] == y[i]:
        correct += 1

print(correct/len(X))
0.7081741787624141
Looks like preprocessing made a big difference here. Recall that preprocessing.scale standardizes each feature to zero mean and unit variance, which puts most values roughly in the -1 to +1 range and keeps large-valued columns like fare from dominating the distance calculations. I've never seen preprocessing make a large negative impact; usually it makes almost no impact at all, but here it has made a very large positive one.
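To see concretely what scale does to each column, here is a minimal sketch on made-up toy data (the numbers are illustrative, not Titanic features):

from sklearn import preprocessing
import numpy as np

data = np.array([[10.0, 200.0],
                 [20.0, 400.0],
                 [30.0, 600.0]])

scaled = preprocessing.scale(data)
print(scaled.mean(axis=0))  # ~[0. 0.]  each column centered
print(scaled.std(axis=0))   # [1. 1.]   each column unit variance
print(scaled)               # values now comparable across columns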
Curiously, I wonder how much of this is driven by whether or not the person got onto a lifeboat. I could see the machine simply separating people without a lifeboat from those with one. We can check whether that makes a big difference by adding df.drop(['boat'], axis=1, inplace=True) before we define X:
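In context, with everything else in the pipeline unchanged, the relevant lines become:

df.drop(['boat'], axis=1, inplace=True)  # remove the lifeboat column

X = np.array(df.drop(['survived'], axis=1).astype(float))
X = preprocessing.scale(X)
y = np.array(df['survived'])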
0.6844919786096256
Nothing major, but there is a slight impact. What about sex? We know this dataset splits into two obvious groups: male and female. Maybe that's mostly what the algorithm is finding? Now we also drop that column with df.drop(['sex'], axis=1, inplace=True):
0.6982429335370511
Nothing significant here either.
Full code up to this point:
#https://pythonprogramming.net/static/downloads/machine-learning-data/titanic.xls
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
import numpy as np
from sklearn.cluster import KMeans
from sklearn import preprocessing
import pandas as pd

'''
Pclass     Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
survival   Survival (0 = No; 1 = Yes)
name       Name
sex        Sex
age        Age
sibsp      Number of Siblings/Spouses Aboard
parch      Number of Parents/Children Aboard
ticket     Ticket Number
fare       Passenger Fare (British pound)
cabin      Cabin
embarked   Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
boat       Lifeboat
body       Body Identification Number
home.dest  Home/Destination
'''

df = pd.read_excel('titanic.xls')
#print(df.head())
df.drop(['body', 'name'], axis=1, inplace=True)
df.fillna(0, inplace=True)
#print(df.head())


def handle_non_numerical_data(df):
    columns = df.columns.values
    for column in columns:
        text_digit_vals = {}

        def convert_to_int(val):
            return text_digit_vals[val]

        # Only convert columns that are not already numeric
        if df[column].dtype != np.int64 and df[column].dtype != np.float64:
            column_contents = df[column].values.tolist()
            unique_elements = set(column_contents)
            x = 0
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique] = x
                    x += 1
            df[column] = list(map(convert_to_int, df[column]))
    return df


df = handle_non_numerical_data(df)
df.drop(['sex', 'boat'], axis=1, inplace=True)

X = np.array(df.drop(['survived'], axis=1).astype(float))
X = preprocessing.scale(X)
y = np.array(df['survived'])

clf = KMeans(n_clusters=2)
clf.fit(X)

correct = 0
for i in range(len(X)):
    predict_me = np.array(X[i].astype(float))
    predict_me = predict_me.reshape(-1, len(predict_me))
    prediction = clf.predict(predict_me)
    if prediction[0] == y[i]:
        correct += 1

print(correct/len(X))
This clustering algorithm appears to automatically categorize these passengers into groups that track who survived the ship's sinking and who did not. Interesting. We don't have much insight into exactly why the machine chose these particular groups, but they appear to correlate strongly with survivability.
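That said, we can at least peek at how the two clusters line up against the survived column. A minimal sketch, assuming the y array and the fitted clf from the full code above:

import pandas as pd

# Cluster id KMeans assigned to each training row (ids are arbitrary)
labels = clf.labels_

# Survival breakdown within each cluster
print(pd.crosstab(labels, y, rownames=['cluster'], colnames=['survived']))

# Centroids, one row per cluster, in the scaled feature space
print(clf.cluster_centers_)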
In the next tutorial, we're going to dive into creating our own custom K-Means algorithm from scratch.