aihwkit.nn.modules.base module
Base class for analog Modules.
class aihwkit.nn.modules.base.AnalogModuleBase(in_features, out_features, bias, realistic_read_write=False, weight_scaling_omega=0.0, mapping=None)
Bases: torch.nn.modules.module.Module
Base class for analog Modules.
Base Module for analog layers that use analog tiles. When subclassing, please note:
- the _setup_tile() method is expected to be called by the subclass constructor, and it does not only create a tile: register_analog_tile() needs to be called for each created analog tile
- this module does not call torch's Module init, as the child is likely again derived from Module
- the weight and bias Parameters are not guaranteed to be in sync with the tile weights and biases during the lifetime of the instance, for performance reasons. The canonical way of reading and writing weights is via set_weights() and get_weights(), as opposed to using the attributes directly
- the BaseTile subclass that is created is retrieved from the rpu_config.tile_class attribute
- Parameters
  in_features – input vector size (number of columns).
  out_features – output vector size (number of rows).
  bias – whether to use a bias row on the analog tile or not.
  realistic_read_write – whether to enable realistic read/write for setting initial weights and during reading of the weights.
  weight_scaling_omega – the weight value that the current max weight value will be scaled to. If zero, no weight scaling will be performed.
  mapping – configuration of the hardware architecture (e.g. tile size).
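The subclassing contract above (constructor calls _setup_tile(), and every created tile must be passed to register_analog_tile()) can be sketched in plain Python. This is a hypothetical stand-in for illustration only: DummyTile and the registry dict mimic the documented flow and are not aihwkit classes.

```python
class DummyTile:
    """Stand-in for a BaseTile subclass (hypothetical, not aihwkit)."""
    def __init__(self, out_size, in_size, bias):
        self.out_size = out_size
        self.in_size = in_size
        self.bias = bias


class SketchAnalogModule:
    """Mimics the documented flow: the constructor sets up the tile,
    then registers it so the analog optimizers can find it."""

    def __init__(self, in_features, out_features, bias=False):
        self._analog_tiles = {}
        tile = self._setup_tile(in_features, out_features, bias)
        self.register_analog_tile(tile)  # must be called for each tile

    def _setup_tile(self, in_features, out_features, bias):
        # Real subclasses would consult rpu_config.tile_class here.
        return DummyTile(out_features, in_features, bias)

    def register_analog_tile(self, tile, name=None):
        name = name or 'tile_{}'.format(len(self._analog_tiles))
        self._analog_tiles[name] = tile

    def analog_tile_count(self):
        return len(self._analog_tiles)


layer = SketchAnalogModule(4, 3, bias=True)
print(layer.analog_tile_count())  # 1
```

The key point mirrored here is that registration is a separate, explicit step after tile creation.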
- ANALOG_CTX_PREFIX: str = 'analog_ctx_'
- ANALOG_SHARED_WEIGHT_PREFIX: str = 'analog_shared_weights_'
- ANALOG_STATE_PREFIX: str = 'analog_tile_state_'
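These prefixes namespace the analog entries in the module's state dictionary. A small illustration of how such prefixed keys can be filtered (the concrete key layout shown is an assumption for illustration, not the exact aihwkit format):

```python
# Prefix constants as documented above.
ANALOG_CTX_PREFIX = 'analog_ctx_'
ANALOG_SHARED_WEIGHT_PREFIX = 'analog_shared_weights_'
ANALOG_STATE_PREFIX = 'analog_tile_state_'

# Hypothetical state-dict keys for one registered tile named 'tile_0':
keys = [ANALOG_CTX_PREFIX + 'tile_0', ANALOG_STATE_PREFIX + 'tile_0']

# Selecting only the analog tile state entries by prefix:
analog_state = [k for k in keys if k.startswith(ANALOG_STATE_PREFIX)]
print(analog_state)  # ['analog_tile_state_tile_0']
```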
analog_tile_count()
Return the number of registered tiles.
- Returns
  Number of registered tiles.
- Return type
  int
analog_tiles()
Generator to loop over all registered analog tiles of the module.
- Return type
  Generator[BaseTile, None, None]
drift_analog_weights(t_inference=0.0)
(Program) and drift the analog weights.
- Parameters
  t_inference (float) – assumed time of inference (in sec)
- Raises
  ModuleError – if the layer is not in evaluation mode.
- Return type
  None
-
extra_repr
()[source]¶ Set the extra representation of the module.
- Returns
A string with the extra representation.
- Return type
str
-
get_weights
(force_exact=False)[source]¶ Get the weight (and bias) tensors.
This uses an realistic read if the property
realistic_read_write
of the layer is set, unless it is overwritten byforce_exact
. It scales the analog weights by the digital alpha scale ifweight_scaling_omega
is positive (seeget_weights_scaled()
).Note
This is the recommended way for setting the weight/bias matrix from the analog tile, as it will correctly fetch the weights from the internal memory. Accessing
self.weight
andself.bias
might yield wrong results as they are not always in sync with the analog tile library, for performance reasons.- Parameters
force_exact (bool) – forces an exact read to the analog tiles
- Returns
weight matrix, bias vector
- Return type
tuple
- Raises
ModuleError – in case of multiple defined analog tiles in the module
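How force_exact interacts with the realistic_read_write property can be sketched as follows. This is a simplified pure-Python stand-in for the documented decision logic, not the actual aihwkit implementation:

```python
def read_weights(tile_weights, realistic_read_write, force_exact=False):
    """Sketch: choose between a realistic (hardware-like) and an exact read.

    A realistic read is used only when the layer property is set AND
    force_exact does not override it.
    """
    use_realistic = realistic_read_write and not force_exact
    if use_realistic:
        # In the real library this would trigger a hardware-realistic
        # read-out of the tile; here we just tag the mode chosen.
        return ('realistic', tile_weights)
    return ('exact', tile_weights)


mode, _ = read_weights([[0.1, 0.2]], realistic_read_write=True, force_exact=True)
print(mode)  # exact
```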
-
load_state_dict
(state_dict, strict=True, load_rpu_config=True)[source]¶ Specializes torch’s
load_state_dict
to add a flag whether to load the RPU config from the saved state.- Parameters
state_dict (OrderedDict[str, Tensor]) – see torch’s
load_state_dict
strict (bool) – see torch’s
load_state_dict
load_rpu_config (bool) –
Whether to load the saved RPU config or use the current RPU config of the model.
Caution
If
load_rpu_config=False
the RPU config can be changed from the stored model. However, the user has to make sure that the changed RPU config makes sense.For instance, changing the device type might change the expected fields in the hidden parameters and result in an error.
- Returns
see torch’s
load_state_dict
- Return type
NamedTuple
- Raises: ModuleError: in case the rpu_config class mismatches
for
load_rpu_config=False
.
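The load_rpu_config flag described above can be sketched in plain Python. The stand-in below only mirrors the documented behavior (take the stored config, or keep the model's current one after a class-compatibility check); the real method additionally performs torch's full state loading:

```python
def load_state(model_rpu_config, state_dict, load_rpu_config=True):
    """Sketch of the documented flag logic (not aihwkit code).

    With load_rpu_config=True the stored config wins; with False the
    model keeps its current config, but the classes must match.
    """
    stored = state_dict.get('rpu_config')
    if load_rpu_config:
        return stored
    # Caution (as documented): the user must ensure the current config
    # is compatible with the stored hidden parameters.
    if type(stored) is not type(model_rpu_config):
        raise ValueError('rpu_config class mismatch')
    return model_rpu_config


print(load_state('cfg_current', {'rpu_config': 'cfg_stored'}, True))  # cfg_stored
```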
named_analog_tiles()
Generator to loop over all registered analog tiles of the module, with names.
- Return type
  Generator[Tuple[str, BaseTile], None, None]
program_analog_weights()
Program the analog weights.
- Raises
  ModuleError – if the layer is not in evaluation mode.
- Return type
  None
register_analog_tile(tile, name=None)
Register the analog context of the tile.
Note
Needs to be called at the end of init to register the tile for the analog optimizers.
- Parameters
  tile (BaseTile) – tile to register
  name (Optional[str]) – optional tile name used as the parameter name
- Return type
  None
set_weights(weight, bias=None, force_exact=False)
Set the weight (and bias) values with given tensors.
This uses a realistic write if the property realistic_read_write of the layer is set, unless it is overridden by force_exact. If weight_scaling_omega is larger than 0, the weights are set in a scaled manner (assuming a digital output scale). See set_weights_scaled() for details.
Note
This is the recommended way of setting the weight/bias matrix of the analog tile, as it will correctly store the weights into the internal memory. Directly writing to self.weight and self.bias might yield wrong results, as they are not always in sync with the analog tile Parameters, for performance reasons.
- Parameters
  weight (torch.Tensor) – weight matrix
  bias (Optional[torch.Tensor]) – bias vector
  force_exact (bool) – forces an exact write to the analog tiles
- Raises
  ModuleError – in case of multiple defined analog tiles in the module
- Return type
  None
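The weight_scaling_omega behavior can be illustrated with a simplified stand-in: scale the weights so their current maximum absolute value maps to omega, and return the digital output scale alpha that undoes the scaling. The exact formula used by aihwkit's set_weights_scaled() may differ; this only mirrors the documented intent:

```python
def set_weights_scaled(weights, weight_scaling_omega):
    """Sketch: if omega > 0, scale the weights so their max absolute
    value equals omega; return (analog_weights, digital_alpha_scale)."""
    max_abs = max(abs(w) for row in weights for w in row)
    if weight_scaling_omega <= 0 or max_abs == 0:
        return weights, 1.0  # no weight scaling performed
    alpha = max_abs / weight_scaling_omega  # digital output scale
    analog = [[w / alpha for w in row] for row in weights]
    return analog, alpha


analog, alpha = set_weights_scaled([[0.5, -2.0]], weight_scaling_omega=1.0)
print(alpha)   # 2.0
print(analog)  # [[0.25, -1.0]]
```

At inference, multiplying the analog output by alpha recovers the original weight magnitudes while keeping the stored conductances within range.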
state_dict(destination=None, prefix='', keep_vars=False)
Return a dictionary containing a whole state of the module.
- Parameters
  destination (Any) –
  prefix (str) –
  keep_vars (bool) –
- Return type
  Dict
unregister_parameter(param_name)
Unregister module parameter from parameters.
- Parameters
  param_name (str) –
- Raises
  ModuleError – in case the parameter is not found
- Return type
  None
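The unregister pattern amounts to removing an entry from the module's parameter registry and erroring if it is absent. A minimal dict-based sketch (KeyError stands in for the library's ModuleError; not aihwkit code):

```python
def unregister_parameter(params, param_name):
    """Sketch: drop a parameter from the registry, erroring if absent."""
    if param_name not in params:
        # The real method raises ModuleError; KeyError used here.
        raise KeyError('parameter {!r} not found'.format(param_name))
    del params[param_name]


params = {'weight': 1, 'bias': 2}
unregister_parameter(params, 'bias')
print(sorted(params))  # ['weight']
```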