aihwkit.nn.modules.container module
Analog modules that contain child modules.
class aihwkit.nn.modules.container.AnalogSequential(*args)

Bases: torch.nn.modules.container.Sequential

An analog-aware sequential container.
Specialization of torch nn.Sequential with extra functionality for handling analog layers:

- correct handling of .cuda() for children modules.
- applying analog-specific functions (drift and program weights) to all its children.
Note
This class is recommended to be used in place of nn.Sequential in order to correctly propagate the actions to all the children analog layers. If using regular containers, please be aware that operations need to be applied manually to the children analog layers when needed.
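The propagation behavior described in the note can be illustrated with a minimal, dependency-free sketch. The classes below are hypothetical stand-ins (not the actual aihwkit or torch classes) that show the pattern: the container forwards an analog-specific action to every child layer, which a regular container would not do.

```python
class AnalogLayerStub:
    """Hypothetical stand-in for an analog layer (not the aihwkit class)."""

    def __init__(self):
        self.programmed = False

    def program_weights(self):
        # In a real analog layer, this would write the weights to the tiles.
        self.programmed = True


class SequentialStub:
    """Conceptual sketch of how an analog-aware container propagates
    analog-specific actions to all of its children."""

    def __init__(self, *children):
        self.children = list(children)

    def program_analog_weights(self):
        # Apply the analog-specific operation to every child layer.
        for child in self.children:
            child.program_weights()


model = SequentialStub(AnalogLayerStub(), AnalogLayerStub())
model.program_analog_weights()
print(all(child.programmed for child in model.children))  # True
```

With a plain container, each analog child would have to be programmed manually in a loop; the analog-aware container does this in one call.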
cpu()

Moves all model parameters and buffers to the CPU.

- Returns: self
- Return type: Module
cuda(device=None)

Moves all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects, so it should be called before constructing the optimizer if the module will live on the GPU while being optimized.

- Parameters: device (int, optional) – if specified, all parameters will be copied to that device
- Returns: self
- Return type: Module
drift_analog_weights(t_inference=0.0)

(Program) and drift all analog inference layers of a given model.

- Parameters: t_inference (float) – assumed time of inference (in seconds)
- Raises: ModuleError – if the layer is not in evaluation mode.
- Return type: None
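The effect of drifting for an assumed inference time can be sketched without the library. The stub below is hypothetical (not the aihwkit tile class) and assumes a power-law conductance drift model, g(t) = g0 * (t / t0) ** (-nu), which is a common description of drift in phase-change memory devices; the parameter names g0, nu and t0 are illustrative.

```python
class AnalogTileStub:
    """Hypothetical stand-in for one analog tile (not the aihwkit class).

    Assumes power-law conductance drift: g(t) = g0 * (t / t0) ** (-nu),
    a common model for phase-change memory devices.
    """

    def __init__(self, g0=1.0, nu=0.05, t0=20.0):
        self.g0, self.nu, self.t0 = g0, nu, t0
        self.g = g0  # conductance right after programming

    def drift(self, t_inference):
        # Decay the programmed conductance for the assumed time (in seconds)
        # elapsed between programming and inference.
        if t_inference > self.t0:
            self.g = self.g0 * (t_inference / self.t0) ** (-self.nu)


tile = AnalogTileStub()
tile.drift(t_inference=3600.0)  # assume inference one hour after programming
print(tile.g < tile.g0)  # True: the conductance has decayed
```

This is why t_inference matters: the longer the assumed gap between programming and inference, the further the effective weights drift from their programmed values.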
program_analog_weights()

Program all analog inference layers of a given model.

- Raises: ModuleError – if the layer is not in evaluation mode.
- Return type: None
to(device=None)

Moves and/or casts the parameters, buffers and analog tiles.

Note

Please be aware that moving analog layers from GPU to CPU is currently not supported.

- Parameters: device (Optional[Union[torch.device, str, int]]) – the desired device of the parameters, buffers and analog tiles in this module.
- Returns: This module in the specified device.
- Return type: Module
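The device-propagation contract of to(), including the documented restriction on GPU-to-CPU moves, can be sketched with hypothetical stubs (these are not the aihwkit classes; the names TileStub and ContainerStub are illustrative).

```python
class TileStub:
    """Hypothetical analog tile that tracks its device placement."""

    def __init__(self):
        self.device = "cpu"

    def to(self, device):
        if self.device != "cpu" and device == "cpu":
            # Mirror the documented restriction: moving analog layers
            # from GPU back to CPU is not supported.
            raise RuntimeError("moving analog tiles from GPU to CPU is not supported")
        self.device = device
        return self


class ContainerStub:
    """Sketch of to(): forwards the requested device to all analog tiles."""

    def __init__(self, *tiles):
        self.tiles = list(tiles)

    def to(self, device=None):
        if device is not None:
            for tile in self.tiles:
                tile.to(device)
        return self  # returns the module itself, matching the API contract


model = ContainerStub(TileStub()).to("cuda:0")
print(model.tiles[0].device)  # cuda:0
```

Because to() returns the module itself, the usual chained call pattern (model = Model().to(device)) works for analog containers as well; only the reverse GPU-to-CPU move is disallowed.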
training