aihwkit.optim.analog_sgd module
Analog-aware stochastic gradient descent optimizer.
class aihwkit.optim.analog_sgd.AnalogSGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)
Bases: torch.optim.sgd.SGD
Implements analog-aware stochastic gradient descent.
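A minimal construction sketch, not a definitive recipe: the AnalogLinear layer and the 4/2 dimensions are illustrative assumptions taken from aihwkit.nn, not part of this module.

    import torch
    from aihwkit.nn import AnalogLinear              # assumed analog layer for illustration
    from aihwkit.optim.analog_sgd import AnalogSGD

    # Model with a single analog layer (dimensions are arbitrary).
    model = AnalogLinear(4, 2)

    # Analog-aware optimizer over the model parameters.
    opt = AnalogSGD(model.parameters(), lr=0.1)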
regroup_param_groups(model)
Reorganize the parameter groups, isolating analog layers.
Update the param_groups of the optimizer, moving the parameters for each analog layer to a new, single group.
Parameters: model (torch.nn.modules.module.Module) – model for the optimizer.
Return type: None
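A hedged continuation of the sketch above, showing where regroup_param_groups would typically be called (right after the optimizer is created and before training starts):

    # Isolate the analog layers of `model` into their own parameter groups.
    opt.regroup_param_groups(model)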
set_learning_rate(learning_rate=0.1)
Update the learning rate to a new value.
Update the learning rate of the optimizer, propagating the change to the analog tiles accordingly.
Parameters: learning_rate (float) – learning rate for the optimizer.
Return type: None
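A one-line usage sketch of this method; the value 0.05 is arbitrary:

    # Change the learning rate mid-training; the new value is propagated
    # to the analog tiles as described above.
    opt.set_learning_rate(0.05)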
step(closure=None)
Performs a single analog-aware optimization step.
If a group containing analog parameters is detected, the optimization step calls the related RPU controller. For regular parameter groups, the optimization step behaves the same as torch.optim.SGD.
Parameters: closure (callable, optional) – A closure that reevaluates the model and returns the loss.
Return type: Optional[float]
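A sketch of a single training step using the objects from the earlier snippets; the input and target tensors and the MSE loss are illustrative assumptions:

    # Illustrative data and loss (not part of this module).
    x = torch.rand(3, 4)
    y = torch.rand(3, 2)
    loss_fn = torch.nn.MSELoss()

    # Standard usage: compute gradients, then let step() route analog
    # parameter groups to their RPU controllers.
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

    # Closure usage, mirroring torch.optim.SGD: step() calls the closure
    # to reevaluate the model and returns the resulting loss.
    def closure():
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        return loss

    loss = opt.step(closure)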