Question: Using the pre-built VGG16 model below (with the Chest X-Ray Images (Pneumonia) dataset from Kaggle), how can I make it trainable and fine-tune its weights?

# Set up the Neural Network
#IMAGE_SIZE = (224, 224)          # This is defined above
RGB = 3                           # Use three channels for RGB images
BW = 1                            # Use one channel for greyscale images
INPUT_SHAPE = [*IMAGE_SIZE, RGB]
#INPUT_SHAPE = [*IMAGE_SIZE, BW]
#INPUT_SHAPE = (224,224,3)
OUTPUT_SIZE = 2                   # Two output classes: NORMAL and PNEUMONIA

# ==== VGG16 ==== works with 224x224 images
# ==== uncomment one of these lines to use the VGG16 model
#pretrained_model = tf.keras.applications.VGG16(weights=None, include_top=False, input_shape=INPUT_SHAPE)       # Using random initial weights
pretrained_model = tf.keras.applications.VGG16(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)  # Using pretrained ImageNet weights

# ==== DenseNet201 ====
# ==== uncomment one of these lines to use the DenseNet201 model
#pretrained_model = tf.keras.applications.DenseNet201(weights=None, include_top=False, input_shape=INPUT_SHAPE)       # Using random initial weights
#pretrained_model = tf.keras.applications.DenseNet201(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)  # Using pretrained ImageNet weights

# ==== Xception ====
# ==== uncomment one of these lines to use the Xception model
# by default Xception expects images of size 299x299 pixels
#pretrained_model = tf.keras.applications.Xception(weights=None, include_top=False, input_shape=INPUT_SHAPE)       # Using random initial weights
#pretrained_model = tf.keras.applications.Xception(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)  # Using pretrained ImageNet weights

# ==== Some additional models you could use
#pretrained_model = tf.keras.applications.EfficientNetB4(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)
#pretrained_model = tf.keras.applications.InceptionV3(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)
#pretrained_model = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)
#pretrained_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)
#pretrained_model = tf.keras.applications.ResNet152V2(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)

# Choose whether the pretrained weights can be updated when training on the new X-ray images
pretrained_model.trainable = False   # False = transfer learning: train only the new top layers
#pretrained_model.trainable = True   # True = fine-tuning: train all layers, adjusting the pretrained weights
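A common middle ground between the two settings above, and one way to approach the fine-tuning part of the question, is to unfreeze only VGG16's last convolutional block while keeping the earlier blocks frozen. The sketch below is not part of the original code; it assumes the standard block5_* layer names used by tf.keras.applications.VGG16:

# Hedged sketch: fine-tune only the last convolutional block (block5) of VGG16,
# leaving the earlier, more generic feature-extraction blocks frozen.
pretrained_model.trainable = True
for layer in pretrained_model.layers:
    layer.trainable = layer.name.startswith('block5')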

model = Sequential()
model.add(pretrained_model)   # Start with the pretrained model defined above

# Flatten the 2D feature maps into 1D data for the final dense layers, as in a traditional neural network
#model.add(GlobalAveragePooling2D())
model.add(Flatten())
#model.add(Dense(1024, activation="relu"))   # Optional additional layers can go here
model.add(Dense(200, activation="relu"))     # Optional additional layers can go here
model.add(Dense(OUTPUT_SIZE, activation='softmax'))

print("=== Pretrained Model =========================================================================")
pretrained_model.summary()   # print the layers in the pretrained model
print("=== Final Model ==============================================================================")
model.summary()              # print the final model

# ==== Optimizer ==== We will study these options in a future unit; for now, just leave it as configured.
# Some sample optimizer settings
#RMSprop(learning_rate=0.001, rho=0.9, epsilon=None, decay=0.0)
optimizer_RMSprop = RMSprop(learning_rate=0.00001, epsilon=1e-08)
#optimizer_Adam = Adam(learning_rate=0.001)   # default learning rate
optimizer_Adam = Adam(learning_rate=0.00001)

# Compile the neural network model
model.compile(
    optimizer=optimizer_Adam,
    #optimizer=optimizer_RMSprop,
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
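To make the model trainable and fine-tune its weights, the usual recipe is a two-phase schedule: first train only the new top layers with the base frozen, then set pretrained_model.trainable = True (or unfreeze selected blocks as sketched above), recompile with a much lower learning rate, and continue training. Recompiling is required because Keras fixes each layer's trainable behaviour when compile() is called. The sketch below is a minimal outline under those assumptions, not the original author's code; train_data and val_data are placeholder names for whatever datasets or generators the rest of the notebook builds from the Kaggle chest X-ray folders.

# Hedged sketch of a two-phase transfer-learning / fine-tuning schedule.
# train_data and val_data are placeholders, not defined in the snippet above.

# Phase 1: transfer learning - base frozen, train only the new top layers.
pretrained_model.trainable = False
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_data, validation_data=val_data, epochs=10)

# Phase 2: fine-tuning - unfreeze the base and recompile with a much lower
# learning rate so the pretrained ImageNet weights are only nudged gently.
pretrained_model.trainable = True
model.compile(optimizer=Adam(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_data, validation_data=val_data, epochs=10)

Splitting training this way lets the randomly initialised Dense head settle first, so the large early gradients do not disturb the pretrained convolutional filters once they are unfrozen.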
