Question:

Step 1: use assignment 4 draft.py to draw the following six scatter plots depicting how different classifiers
perform on the iris dataset, which has 50 samples for each of its three classes (labels). These plots show each
class with a different color (red, blue, white).
[Figure: 2x3 grid of decision-boundary scatter plots produced by assignment 4 draft.py; axes labeled PCA.f0 / PCA.f1; each panel is titled with its classifier and test score, e.g. "SVC w/ Sig. 6, score: 65.2%".]
Step 2: (25 points) Compare six SVM classifiers with polynomial kernel (kernel='poly'), degrees [3,5,7], and
gamma values [.3,.5] by drawing a grid of six plots similar to the one you obtained from step 1.
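A minimal sketch of the six model/title pairs Step 2 calls for, assuming C=1 (as in assignment 4 draft.py) and assuming the same models/titles variable names so they can be dropped into the draft's plotting loop:

from sklearn import svm

degrees = [3, 5, 7]
gammas = [.3, .5]
# one SVC per (degree, gamma) combination: 3 degrees x 2 gammas = 6 models
models = tuple(
    svm.SVC(kernel='poly', degree=d, gamma=g, C=1)
    for g in gammas for d in degrees
)
titles = tuple(f"SVC poly deg={d} gamma={g}" for g in gammas for d in degrees)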
Step 3: (25 points) Compare the following six classifiers by drawing a grid of six plots similar to the one you
obtained from step 1:
linear regression
LinearSVC
GaussianNB w/ var_smoothing=2e1
GaussianNB w/ var_smoothing=1e1
GaussianNB w/ var_smoothing=1e0
GaussianNB w/ var_smoothing=1e-1
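One possible way to set up the six Step 3 models, assuming the same C=1 and max_iter=10000 used for LinearSVC in assignment 4 draft.py; the plotting loop from the draft is reused unchanged:

from sklearn.linear_model import LinearRegression
from sklearn import svm
from sklearn.naive_bayes import GaussianNB

models = (
    LinearRegression(),
    svm.LinearSVC(C=1, max_iter=10000),
    GaussianNB(var_smoothing=2e1),
    GaussianNB(var_smoothing=1e1),
    GaussianNB(var_smoothing=1e0),
    GaussianNB(var_smoothing=1e-1),
)
titles = ("Linear regression", "LinearSVC",
          "GaussianNB 2e1", "GaussianNB 1e1",
          "GaussianNB 1e0", "GaussianNB 1e-1")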
Step 4: (25 points) Compare six variations of SVM classifiers with kernels ['sigmoid','rbf'] and gamma values
[1e-1, 1e0, 1e2] by drawing a grid of six plots similar to the one you obtained from step 1.
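A sketch of the six Step 4 models, again assuming C=1 from the draft and the same models/titles names:

from sklearn import svm

kernels = ['sigmoid', 'rbf']
gammas = [1e-1, 1e0, 1e2]
# 2 kernels x 3 gamma values = 6 models
models = tuple(svm.SVC(kernel=k, gamma=g, C=1) for k in kernels for g in gammas)
titles = tuple(f"SVC {k} gamma={g}" for k in kernels for g in gammas)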
Step 5: (25 points) Compare eight variations of neural network MLP classifiers with default alpha (1e-4), solvers
['adam', 'lbfgs'], activations ['logistic', 'relu'], and hidden layers [(30,30),(10,5)] by drawing a grid of eight plots
similar to the one you obtained from step 1.
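A sketch of the eight Step 5 models; random_state=23 is carried over from the draft, while max_iter=2000 is an added assumption to help the solvers converge. An eight-plot grid needs plt.subplots(2, 4) instead of the draft's (2, 3):

from sklearn.neural_network import MLPClassifier

solvers = ['adam', 'lbfgs']
activations = ['logistic', 'relu']
layers = [(30, 30), (10, 5)]
# 2 solvers x 2 activations x 2 layer shapes = 8 models
models = tuple(
    MLPClassifier(solver=s, activation=a, hidden_layer_sizes=h,
                  alpha=1e-4, max_iter=2000, random_state=23)
    for s in solvers for a in activations for h in layers
)
titles = tuple(f"MLP {s}/{a} {h}"
               for s in solvers for a in activations for h in layers)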
assignment 4 draft.py
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.inspection import DecisionBoundaryDisplay
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LinearRegression

pca = PCA(2)

# Step 1: loading the dataset
iris = datasets.load_iris()

# Step 2: transforming the dataset features to reduce dimensions from 4D to 2D
X_transformed = pca.fit_transform(iris.data)

# Step 3: obtaining the true labels of the dataset
y = iris.target

# Step 4: splitting the dataset into a 15% testing set and an 85% training set in a random fashion
X_train, X_test, y_train, y_test = train_test_split(
    X_transformed, y, test_size=0.15, random_state=23)

# Step 5: defining six models and fitting them to our training set
models = (
    LinearRegression(),
    svm.LinearSVC(C=1, max_iter=10000),
    svm.SVC(kernel='poly', degree=3, gamma=.1, C=1),
    MLPClassifier(solver='lbfgs', alpha=.5, hidden_layer_sizes=(2, 2),
                  activation='logistic', random_state=23),
    GaussianNB(var_smoothing=1e1),
    svm.SVC(kernel="sigmoid", gamma=0.6, C=1),
)
models = (clf.fit(X_train, y_train) for clf in models)

# Step 6: drawing the plots
titles = ("Linear regression", "LinearSVC", "SVC w/ poly3.1",
          "NN layer:(2,2)", "GaussianNB", "SVC w/ Sig.6")
fig, sub = plt.subplots(2, 3)
plt.subplots_adjust(wspace=0.4, hspace=0.7)
X0, X1 = X_train[:, 0], X_train[:, 1]
for clf, title, ax in zip(models, titles, sub.flatten()):
    disp = DecisionBoundaryDisplay.from_estimator(
        clf, X_train, response_method="predict",
        cmap=plt.cm.coolwarm, alpha=0.8, ax=ax,
        xlabel='PCA.f0', ylabel='PCA.f1')
    ax.scatter(X0, X1, c=y_train, cmap=plt.cm.coolwarm, s=20, edgecolors="k")
    ax.set_xticks(())
    ax.set_yticks(())
    ax.set_title(title + '\nscore: '
                 + str(100 * round(clf.score(X_test, y_test), 3)) + '%')
plt.show()
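Since Steps 2 through 5 repeat the same fit-and-plot pattern with different model lists, one reasonable refactoring (a sketch, not part of the assignment file) is to wrap the draft's loop in a helper that takes the models, titles, grid shape, and the train/test split produced above:

import matplotlib.pyplot as plt
from sklearn.inspection import DecisionBoundaryDisplay

def plot_grid(models, titles, rows, cols, X_train, X_test, y_train, y_test):
    # one subplot per model; same layout conventions as the draft
    fig, sub = plt.subplots(rows, cols)
    plt.subplots_adjust(wspace=0.4, hspace=0.7)
    X0, X1 = X_train[:, 0], X_train[:, 1]
    for clf, title, ax in zip(models, titles, sub.flatten()):
        clf.fit(X_train, y_train)                   # fit on the training split
        DecisionBoundaryDisplay.from_estimator(     # shade the predicted regions
            clf, X_train, response_method="predict",
            cmap=plt.cm.coolwarm, alpha=0.8, ax=ax,
            xlabel='PCA.f0', ylabel='PCA.f1')
        ax.scatter(X0, X1, c=y_train, cmap=plt.cm.coolwarm, s=20, edgecolors="k")
        ax.set_xticks(())
        ax.set_yticks(())
        ax.set_title(title + '\nscore: '
                     + str(100 * round(clf.score(X_test, y_test), 3)) + '%')
    plt.show()

With this helper, each later step only builds its models and titles tuples and calls, for example, plot_grid(models, titles, 2, 3, X_train, X_test, y_train, y_test) for the six-plot grids, or rows=2, cols=4 for the eight MLP plots in Step 5.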