Linear SVM

Part One: Loss Function [Graded]

You will need to implement the function loss, which takes in training data xTr ($\in \mathbb{R}^{n \times d}$) and labels yTr ($\in \mathbb{R}^{n}$) with yTr[i] $\in \{-1, +1\}$, and evaluates the squared hinge loss of the classifier $(w, b)$:

$$\mathcal{L}(w, b) = \underbrace{w^\top w}_{l_2\text{-regularizer}} + C \underbrace{\sum_{i=1}^{n} \max\left(1 - y_i \left(w^\top x_i + b\right),\ 0\right)^2}_{\text{squared hinge loss}}$$

Some functions that might be useful for you (a short demonstration follows the list):

  • np.maximum(a,b): returns the element-wise maximum of a and b
  • arr.clip(min=0): returns arr with every negative entry replaced by 0
  • arr.shape: returns the tuple (m,n), where m is the row count and n is the column count
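As a quick illustration of those helpers (the array values here are made up for demonstration):

import numpy as np

arr = np.array([-2.0, 0.5, 3.0])
print(np.maximum(arr, 0.0))   # [0.  0.5 3. ] -- element-wise maximum with 0
print(arr.clip(min=0))        # [0.  0.5 3. ] -- equivalent result via clip
print(arr.shape)              # (3,)          -- shape tuple of a 1-d array
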
def loss(w, b, xTr, yTr, C):
    """
    INPUT:
    w   : d dimensional weight vector
    b   : scalar (bias)
    xTr : nxd dimensional matrix (each row is an input vector)
    yTr : n dimensional vector (each entry is a label)
    C   : scalar (constant that controls the tradeoff between l2-regularizer and hinge-loss)

    OUTPUT:
    loss : the total loss obtained with (w, b) on xTr and yTr (scalar)
    """
    loss_val = 0.0

    # YOUR CODE HERE
    raise NotImplementedError()

    return loss_val
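
For reference, below is one possible vectorized implementation. It is a minimal sketch that assumes the objective written above (l2-regularizer $w^\top w$ plus $C$ times the sum of squared hinge terms); loss_sketch is a hypothetical name and the toy data in the sanity check is made up. It is a sketch, not the graded solution.

import numpy as np

def loss_sketch(w, b, xTr, yTr, C):
    # Margin of every training point, y_i * (w^T x_i + b), in one matrix-vector product.
    margins = yTr * (xTr @ w + b)
    # Squared hinge term max(1 - margin, 0)^2, vectorized via the clip hint above.
    hinge_sq = (1 - margins).clip(min=0) ** 2
    # l2-regularizer plus the C-weighted sum of squared hinge terms.
    return w @ w + C * np.sum(hinge_sq)

# Tiny sanity check on made-up data: with w = 0 every margin is 0,
# so each point contributes max(1 - 0, 0)^2 = 1 and the loss is C * n.
w = np.zeros(2)
b = 0.0
xTr = np.array([[1.0, 0.0], [0.0, 1.0]])
yTr = np.array([1.0, -1.0])
print(loss_sketch(w, b, xTr, yTr, C=10))  # 20.0

Computing all n margins with a single xTr @ w avoids an explicit Python loop over the training points, which is the main point of the np.maximum/clip hints.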
