Question: solve this

import torch
import torch.nn as nn

class MultiClassClassificationNet(nn.Module):
    def __init__(self):
        super(MultiClassClassificationNet, self).__init__()
        # your code goes here

Solution:

import torch
import torch.nn as nn
import torch.optim as optim

# The snippet uses `device` and `train_loader` without defining them;
# they are assumed to exist elsewhere in the notebook. A typical device
# setup is:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class MultiClassClassificationNet(nn.Module):
    def __init__(self):
        super(MultiClassClassificationNet, self).__init__()
        # your code goes here
        self.fc1 = nn.Linear(32 * 32 * 3, 256)  # Input layer (32x32 RGB images)
        self.fc2 = nn.Linear(256, 128)          # Hidden layer
        self.fc3 = nn.Linear(128, 10)           # Output layer (10 classes)

    def forward(self, x):
        # your code goes here
        x = x.view(x.size(0), -1)    # Reshape/flatten the input tensor
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

# Think which loss function you need to use if you output 1 neuron and it is between 0 and 1
# https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html - this?
# https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html or this?
criterion = nn.CrossEntropyLoss()

# Training loop is given to you
net = MultiClassClassificationNet().to(device)
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(20):
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        inputs = inputs.to(device)
        labels = labels.to(device)
        labels = labels.long()
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
    print(f'Epoch {epoch + 1}, Loss: {loss.item()}')
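On the loss-function question in the comments: the network outputs raw logits for 10 classes (no softmax) and the labels are integer class indices, so nn.CrossEntropyLoss is the appropriate choice; BCELoss would only fit a single sigmoid output between 0 and 1 for binary classification. As a rough sketch of what cross-entropy computes per sample (a numerically stable negative log-softmax of the target logit; the function name here is illustrative, not a PyTorch API):

```python
import math

def cross_entropy(logits, target_index):
    """Cross-entropy for one sample: -log(softmax(logits)[target_index])."""
    m = max(logits)  # subtract the max logit for numerical stability
    exps = [math.exp(z - m) for z in logits]
    log_sum = math.log(sum(exps))
    # log-softmax of the target logit, negated
    return -((logits[target_index] - m) - log_sum)

# Logits strongly favoring class 0, with class 0 as the target -> small loss
print(round(cross_entropy([2.0, 0.5, -1.0], 0), 4))  # → 0.2413
```

nn.CrossEntropyLoss averages this quantity over the batch and applies log-softmax internally, which is why forward() should return self.fc3(x) directly rather than a softmax of it.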
