Question:
2.1 Based on the training accuracy, do you conclude that the data are linearly separable? Why or why not?
2.2 Which feature most increases the likelihood that the class is 'Android' and which feature most increases the likelihood that the class is 'iPhone'?
2.3 Compare the initial training/test accuracies to the training/test accuracies after averaging. What happens? Why do you think averaging the weights from different iterations has this effect?
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

# Train a perceptron via SGD (loss='perceptron' gives perceptron-style updates).
classifier = SGDClassifier(loss='perceptron', max_iter=1000, tol=1.0e-12, random_state=123, eta0=100)
classifier.fit(X_train, Y_train)

print("Number of SGD iterations: %d" % classifier.n_iter_)
print("Training accuracy: %0.6f" % accuracy_score(Y_train, classifier.predict(X_train)))
print("Testing accuracy: %0.6f" % accuracy_score(Y_test, classifier.predict(X_test)))

# Sort features by learned weight: most negative first (pushing toward one
# class), most positive last (pushing toward the other).
print("\nFeature weights:")
args = np.argsort(classifier.coef_[0])
for a in args:
    print("    %s: %0.4f" % (feature_names[a], classifier.coef_[0][a]))
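For question 2.3, scikit-learn can produce the averaged variant directly: passing `average=True` to `SGDClassifier` makes it report the running average of the weight vector over all updates instead of the final iterate. The sketch below is a minimal, self-contained comparison on synthetic data (the synthetic `X`/`y` here are a stand-in assumption, since the assignment's phone-usage dataset and `feature_names` are not shown).

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

# Hypothetical linearly separable data standing in for the assignment's dataset.
rng = np.random.RandomState(123)
X = rng.randn(200, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

# Last-iterate perceptron weights (as in the code above).
plain = SGDClassifier(loss='perceptron', max_iter=1000, tol=1.0e-12,
                      random_state=123, eta0=100)
plain.fit(X_tr, y_tr)

# Averaged perceptron: average=True keeps the mean of the weight vector
# across all SGD updates, which tends to smooth out late-training jitter.
avg = SGDClassifier(loss='perceptron', max_iter=1000, tol=1.0e-12,
                    random_state=123, eta0=100, average=True)
avg.fit(X_tr, y_tr)

print("last-iterate test accuracy: %0.3f" % accuracy_score(y_te, plain.predict(X_te)))
print("averaged     test accuracy: %0.3f" % accuracy_score(y_te, avg.predict(X_te)))
```

Comparing the two printed accuracies on the real dataset is exactly the train/test comparison question 2.3 asks about.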