org.apache.spark.mllib.optimization
Alias of runMiniBatchSGD with convergenceTol set to the default value of 0.001.
Run stochastic gradient descent (SGD) in parallel using mini batches.
Run stochastic gradient descent (SGD) in parallel using mini batches. In each iteration, we sample a subset (fraction miniBatchFraction) of the total data in order to compute a gradient estimate. Sampling and averaging the subgradients over this subset are performed using one standard Spark map-reduce per iteration.
Input data for SGD: an RDD of data examples, each of the form (label, [feature values]).
Gradient object used to compute the gradient of the loss function for a single data example.
Updater object that actually performs a gradient step in a given direction.
Initial step size for the first step.
Number of iterations of SGD to run.
Regularization parameter.
Fraction of the input data set to use in each iteration of SGD. Default value 1.0.
Convergence tolerance: mini batch iteration ends before numIterations when the relative difference between the current weights and the previous weights, measured in L2 norm, is less than this value. Default value 0.001. Must be between 0.0 and 1.0 inclusive.
A tuple of two elements: the first is a column matrix containing the weights for every feature, and the second is an array of the stochastic loss computed in each iteration.
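Since the parameter docs above are terse, a compact single-machine sketch may help show how miniBatchFraction, stepSize, and convergenceTol interact. This Python stand-in is illustrative, not the MLlib API: `run_mini_batch_sgd` is a hypothetical name, the `gradient` callable returning a (gradient, loss) pair stands in for the Gradient object, the `step_size / sqrt(t)` decay mirrors the schedule of MLlib's simple updater, and the `max(norm, 1.0)` guard in the convergence test is an assumption modeled on the description above.

```python
import math
import random

def run_mini_batch_sgd(data, gradient, num_iterations, step_size,
                       mini_batch_fraction=1.0, convergence_tol=0.001):
    """Single-machine sketch of mini-batch SGD (hypothetical, not MLlib).

    data: list of (label, features) examples; features is a list of floats.
    gradient: callable (weights, label, features) -> (gradient vector, loss),
        standing in for the Gradient object described above.
    Returns (weights, losses), analogous to the documented return tuple.
    """
    n = len(data[0][1])
    weights = [0.0] * n
    losses = []
    for t in range(1, num_iterations + 1):
        # Sample a fraction of the examples (Spark samples the RDD instead).
        batch = [ex for ex in data if random.random() < mini_batch_fraction]
        if not batch:
            continue
        # Average the subgradients and losses over the sampled subset
        # (performed as one map-reduce per iteration in Spark).
        avg_grad = [0.0] * n
        batch_loss = 0.0
        for label, features in batch:
            g, loss = gradient(weights, label, features)
            batch_loss += loss / len(batch)
            for j in range(n):
                avg_grad[j] += g[j] / len(batch)
        losses.append(batch_loss)
        # Gradient step with a decaying step size (assumed step_size / sqrt(t)).
        prev = list(weights)
        lr = step_size / math.sqrt(t)
        for j in range(n):
            weights[j] -= lr * avg_grad[j]
        # End early once the relative L2 difference between the current and
        # previous weights falls below convergence_tol.
        diff = math.sqrt(sum((w - p) ** 2 for w, p in zip(weights, prev)))
        prev_norm = math.sqrt(sum(p * p for p in prev))
        if diff < convergence_tol * max(prev_norm, 1.0):
            break
    return weights, losses
```

With a least-squares gradient and miniBatchFraction of 1.0, the sketch reduces to plain batch gradient descent and the loss array it returns decreases toward zero; smaller fractions trade gradient accuracy for cheaper iterations, exactly the trade-off the miniBatchFraction parameter controls.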
:: DeveloperApi :: Top-level method to run gradient descent.