Which of the following statement(s) is true for Gradient Descent (GD) and Stochastic Gradient Descent (SGD)?
a. In GD and SGD, you update a set of parameters in an iterative manner to minimize the error function.
b. In GD, you use a subset of training data to update a parameter in each iteration.
c. The scale of the learning rate in GD or SGD influences the speed of training, but not the final convergence.
d. In SGD, you have to run through all the samples in your training set for a single update of a parameter in each iteration.
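A minimal sketch may help when weighing the options; it contrasts a full-batch GD update (gradient over all samples per step) with an SGD update (gradient from a single sample per step) on a toy least-squares problem. The data, learning rate, and number of epochs below are illustrative assumptions, not part of the original question.

import numpy as np

# Toy linear-regression data (assumed for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

lr = 0.1
w_gd = np.zeros(3)
w_sgd = np.zeros(3)

for epoch in range(50):
    # GD: one parameter update per pass, using the gradient over ALL samples.
    grad_full = X.T @ (X @ w_gd - y) / len(y)
    w_gd -= lr * grad_full

    # SGD: one parameter update per sample, using only that sample's gradient.
    for i in rng.permutation(len(y)):
        grad_i = X[i] * (X[i] @ w_sgd - y[i])
        w_sgd -= lr * grad_i

print("GD estimate :", np.round(w_gd, 3))
print("SGD estimate:", np.round(w_sgd, 3))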
