Asynchronous Parallel SGD vs Sequential SGD
We are comparing the training of a model with 1 parameter using asynchronous vs synchronous parallel Stochastic Gradient Descent (SGD) on a machine with 4 cores.
Assume that calculating the gradient on each core and updating the parameter vector in the shared memory takes ___ seconds for all cores, and that the cores have the following delays: ___ s, ___ s, ___ s, and ___ s. Assume that for the gradient ∇f_i of every data point i we have ___, and that all cores start the parameter update simultaneously.
Consider the parameters after running asynchronous parallel SGD for ___ seconds. Assume that in sequential SGD the order of execution is core ___, core ___, core ___, core ___.
What is the tightest upper bound on the noise in the parameter vector, relative to what would have been obtained if the same updates had been done sequentially, as in sequential SGD?
Hint: For example, if in the asynchronous parallel SGD (HOGWILD) setting only core ___ and core ___ can execute, then compare this to the parameter obtained using sequential SGD where only core ___ and core ___ execute, sequentially.
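
To make the comparison concrete, here is a minimal Python simulation sketch of the setup above. All concrete numbers in it (learning rate, compute time, per-core delays, the sequential order, and the quadratic per-data-point losses that define each ∇f_i) are hypothetical stand-ins, since the actual values are not reproduced in the question text. It models the asynchronous (HOGWILD-style) run as each core reading the shared parameter after its delay (a possibly stale snapshot) and atomically subtracting the learning rate times its gradient when it finishes, then compares the resulting parameter with the one produced by a purely sequential pass over the same updates.

def grad(w, target):
    # Gradient of the simple per-data-point loss f_i(w) = 0.5 * (w - target)^2.
    return w - target


def async_sgd(w0, lr, compute_time, delays, targets):
    # HOGWILD-style run: each core reads the shared parameter after its own delay
    # (a possibly stale snapshot) and atomically subtracts lr * gradient when it
    # finishes, compute_time seconds later. Ties resolve reads before writes.
    timeline = []
    for k, d in enumerate(delays):
        timeline.append((d, 0, k))                 # kind 0: core k reads w
        timeline.append((d + compute_time, 1, k))  # kind 1: core k writes its update
    w = w0
    reads = {}
    for _, kind, k in sorted(timeline):
        if kind == 0:
            reads[k] = w                           # snapshot of shared memory
        else:
            w -= lr * grad(reads[k], targets[k])   # update based on the stale read
    return w


def sequential_sgd(w0, lr, order, targets):
    # Sequential SGD: every update sees the freshest parameter value.
    w = w0
    for k in order:
        w -= lr * grad(w, targets[k])
    return w


if __name__ == "__main__":
    # All numbers below are hypothetical stand-ins, not the values from the problem.
    lr = 0.1                          # learning rate
    compute_time = 1.0                # seconds to compute a gradient and write the update
    delays = [0.0, 0.5, 1.5, 2.0]     # per-core start delays, in seconds
    targets = [1.0, -2.0, 3.0, 0.5]   # one data point per core, defining each gradient
    order = [0, 1, 2, 3]              # execution order assumed for sequential SGD
    w0 = 0.0

    w_async = async_sgd(w0, lr, compute_time, delays, targets)
    w_seq = sequential_sgd(w0, lr, order, targets)
    print(f"async w = {w_async:.4f}, sequential w = {w_seq:.4f}, "
          f"|noise| = {abs(w_async - w_seq):.4f}")
    # Reference quantity in the same spirit as the answer choices below:
    # the learning rate times the summed gradient magnitudes at the starting point.
    print(f"lr * sum|grad_i(w0)| = {lr * sum(abs(grad(w0, t)) for t in targets):.4f}")

The printed |noise| is the quantity the question asks to bound; the final line prints a learning-rate-times-gradient-magnitude reference, which is the form the answer choices below are expressed in.
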
Your Answer:
A. ___, where ___ is the learning rate
B. ___, where ___ is the learning rate
C. ___, where ___ is the learning rate
D. ___, where ___ is the learning rate
E. ___ s
F. ___ s
