Question:

Code additional methods to implement into this neural network Python code to calculate and display:

- Kappa (Cohen's kappa): classification accuracy normalized by the imbalance of the classes in the data.
- ROC curve: like precision and recall, accuracy is divided into sensitivity and specificity, and models can be chosen based on how the decision threshold balances these values.
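For reference (a standard definition, not stated in the original post): Cohen's kappa compares the observed accuracy p_o with the accuracy p_e expected by chance given the class distribution,

\kappa = \frac{p_o - p_e}{1 - p_e}

so \kappa = 1 means perfect agreement and \kappa = 0 means agreement no better than chance.

Code: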

# Importing numpy to perform linear algebraic operations on the data
import numpy as np
# Importing the pandas library to perform the data preprocessing
import pandas
# Importing the Keras deep learning framework for Python
import keras
# Importing the Sequential model from Keras
from keras.models import Sequential
# Importing the types of layers that the neural network will have
from keras.layers import Dense
# Importing train_test_split, which is useful for dividing the dataset into training and testing data
from sklearn.model_selection import train_test_split
# Importing StandardScaler to perform standardisation/scaling of the data, and LabelEncoder for the labels
from sklearn.preprocessing import StandardScaler, LabelEncoder
# Importing the metrics for the performance evaluation of our deep learning model
from sklearn import metrics
from keras.utils import np_utils, normalize, to_categorical
from imblearn.over_sampling import SMOTE

dataframe = pandas.read_csv("C:/Users/train.csv", header=0, dtype=object) dataset = dataframe.values X_train = dataset[:,0:78].astype(float) y_train = dataset[:,78]

dataframe = pandas.read_csv("C:/Users/test.csv", header=0, dtype=object) dataset = dataframe.values X_test = dataset[:,0:78].astype(float) y_test = dataset[:,78]

# Encode the training labels
encoder = LabelEncoder()
encoder.fit(y_train)
encoded_Yone = encoder.transform(y_train)
# Convert integers to dummy variables (i.e. one-hot encoded)
y_train = np_utils.to_categorical(encoded_Yone)
# Encode our testing set
encoder = LabelEncoder()
encoder.fit(y_test)
encoded_Ytwo = encoder.transform(y_test)
# Convert integers to dummy variables (i.e. one-hot encoded)
y_test = np_utils.to_categorical(encoded_Ytwo)

# Creating a StandardScaler object (trial run to try to improve accuracy from the 90% baseline)
sc = StandardScaler()
# Scaling the data using the StandardScaler() object
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

neural_classifier = Sequential()
# The first argument of Dense = number of neurons in that layer
# input_dim = number of neurons in the input layer = number of input features = 78
# activation = activation function used in each layer
# The sigmoid and hyperbolic tangent activation functions cannot be used in networks with many layers
# due to the vanishing gradient problem.
# Dense is the type of layer.
# kernel_initializer: the neural network needs to start with some weights and then iteratively update
# them to better values. kernel_initializer names the statistical distribution or function used to
# initialise the weights; the library draws numbers from that distribution as starting weights.
# Input layer
neural_classifier.add(Dense(100, kernel_initializer='uniform', activation='relu', input_dim=78))
# First hidden layer
neural_classifier.add(Dense(150, kernel_initializer='uniform', activation='relu'))
# Second hidden layer
neural_classifier.add(Dense(200, kernel_initializer='uniform', activation='relu'))
# Third hidden layer
neural_classifier.add(Dense(250, kernel_initializer='uniform', activation='relu'))
# Fourth hidden layer
neural_classifier.add(Dense(300, kernel_initializer='uniform', activation='relu'))
# Fifth hidden layer
neural_classifier.add(Dense(350, kernel_initializer='uniform', activation='relu'))
# Sixth hidden layer
neural_classifier.add(Dense(400, kernel_initializer='uniform', activation='relu'))
# Seventh hidden layer
neural_classifier.add(Dense(250, kernel_initializer='uniform', activation='relu'))
# Eighth hidden layer
neural_classifier.add(Dense(300, kernel_initializer='uniform', activation='relu'))
# Output layer: 15 neurons because there are 15 classes in the dataset.
# Since this is a multiclass classification problem, we use the softmax activation function.
neural_classifier.add(Dense(15, kernel_initializer='uniform', activation='softmax'))
# The optimizer is Adam, an optimization algorithm for minimising the loss on each epoch
neural_classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# epochs = number of times we train over the full dataset
neural_classifier.fit(X_train, y_train, batch_size=32, epochs=3)

from sklearn.metrics import classification_report, confusion_matrix

# Predicted class indices for the test set
y_pred = neural_classifier.predict_classes(X_test)
print(y_pred[1])  # spot-check one prediction
# Recover integer labels from the one-hot encoded test labels
rounded_labels = np.argmax(y_test, axis=1)
print(rounded_labels[1])  # spot-check the matching true label
# Precision is a measure of the ability of a classification model to identify only the relevant
# data points, while recall is a measure of its ability to find all the relevant data points.
print(classification_report(rounded_labels, y_pred, digits=4))
matrix = confusion_matrix(rounded_labels, y_pred)
# Per-class accuracy: diagonal (correct predictions) divided by the row sums (true counts per class)
print(matrix.diagonal() / matrix.sum(axis=1))
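Below is a minimal sketch of the two requested additions. It assumes the variables defined above (neural_classifier, X_test, y_test, rounded_labels, y_pred) are in scope, uses sklearn's real cohen_kappa_score, roc_curve, and auc functions, and adds matplotlib as a dependency that the original imports do not include.

from sklearn.metrics import cohen_kappa_score

# Cohen's kappa: accuracy normalised by the agreement expected by chance given the class imbalance
kappa = cohen_kappa_score(rounded_labels, y_pred)
print("Cohen's kappa: %.4f" % kappa)

Since this is a 15-class problem, a single ROC curve is not defined; a common approach (an assumption here, not something the question specifies) is to plot one one-vs-rest ROC curve per class using the predicted class probabilities:

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# Class probabilities (shape: n_samples x 15), needed to sweep the ROC decision thresholds
y_prob = neural_classifier.predict(X_test)

plt.figure()
for i in range(y_test.shape[1]):
    # One-vs-rest ROC for class i; y_test is already one-hot encoded
    fpr, tpr, _ = roc_curve(y_test[:, i], y_prob[:, i])
    plt.plot(fpr, tpr, label='class %d (AUC = %.3f)' % (i, auc(fpr, tpr)))
plt.plot([0, 1], [0, 1], 'k--')  # chance line
plt.xlabel('False positive rate (1 - specificity)')
plt.ylabel('True positive rate (sensitivity)')
plt.title('One-vs-rest ROC curves')
plt.legend(loc='lower right', fontsize='small')
plt.show()

The sensitivity/specificity trade-off mentioned in the question is exactly what each curve traces as the decision threshold varies; the per-class AUC summarises it in a single number.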
