aihwkit.nn.modules.conv_mapped module

Convolution layers.

class aihwkit.nn.modules.conv_mapped.AnalogConv1dMapped(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', rpu_config=None, realistic_read_write=False, weight_scaling_omega=None)[source]

Bases: aihwkit.nn.modules.conv_mapped._AnalogConvNdMapped

1D convolution layer that maps to analog tiles.

Applies a 1D convolution over an input signal composed of several input planes, using an analog tile for its forward, backward and update passes.

The module will split the weight matrix across multiple tiles if necessary. The physical maximal tile sizes are specified with MappingParameter in the RPU configuration; see RPUConfigAlias.

Note

The tensor parameters of this layer (.weight and .bias) are not guaranteed to contain the same values as the internal weights and biases stored in the analog tile. Please use set_weights and get_weights when attempting to read or modify the weight/bias. This read/write process can simulate the (noisy and inexact) analog writing and reading of the resistive elements.

Parameters
analog_bias: bool
digital_bias: bool
dilation: Tuple[int, ...]
fold_indices: torch.Tensor
classmethod from_digital(module, rpu_config=None, realistic_read_write=False)[source]

Return an AnalogConv1dMapped layer from a torch Conv1d layer.

Parameters
  • module (Conv1d) –

  • rpu_config (Optional[RPUConfigAlias]) –

  • realistic_read_write (bool) –

Returns

an AnalogConv1dMapped layer based on the digital Conv1d module.

Return type

aihwkit.nn.modules.conv_mapped.AnalogConv1dMapped

get_tile_size(in_channels, groups, kernel_size)[source]

Calculate the tile size.

Parameters
  • in_channels (int) –

  • groups (int) –

  • kernel_size (Tuple[int, ...]) –

Return type

int
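The tile's input size is determined by the per-group channel count and the flattened kernel. A pure-Python sketch of that arithmetic (an illustrative re-implementation, not the library's code):

```python
from math import prod


def tile_input_size(in_channels, groups, kernel_size):
    # Each tile column corresponds to one element of the unrolled
    # (im2col) kernel: per-group channels times kernel elements.
    return (in_channels // groups) * prod(kernel_size)


print(tile_input_size(64, 1, (3,)))    # 64 * 3 = 192
print(tile_input_size(64, 2, (3, 3)))  # 32 * 9 = 288
```

If this value exceeds max_input_size in the MappingParameter, the layer splits the weight matrix across several tiles.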

groups: int
in_channels: int
in_features: int
input_size: float
kernel_size: Tuple[int, ...]
out_channels: int
out_features: int
output_padding: Tuple[int, ...]
padding: Tuple[int, ...]
padding_mode: str
realistic_read_write: bool
stride: Tuple[int, ...]
transposed: bool
use_bias: bool
weight_scaling_omega: float
class aihwkit.nn.modules.conv_mapped.AnalogConv2dMapped(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', rpu_config=None, realistic_read_write=False, weight_scaling_omega=None)[source]

Bases: aihwkit.nn.modules.conv_mapped._AnalogConvNdMapped

2D convolution layer that maps to analog tiles.

Applies a 2D convolution over an input signal composed of several input planes, using an analog tile for its forward, backward and update passes.

The module will split the weight matrix across multiple tiles if necessary. The physical maximal tile sizes are specified with MappingParameter in the RPU configuration; see RPUConfigAlias.

Note

The tensor parameters of this layer (.weight and .bias) are not guaranteed to contain the same values as the internal weights and biases stored in the analog tile. Please use set_weights and get_weights when attempting to read or modify the weight/bias. This read/write process can simulate the (noisy and inexact) analog writing and reading of the resistive elements.

Parameters
analog_bias: bool
digital_bias: bool
dilation: Tuple[int, ...]
fold_indices: torch.Tensor
classmethod from_digital(module, rpu_config=None, realistic_read_write=False)[source]

Return an AnalogConv2dMapped layer from a torch Conv2d layer.

Parameters
  • module (Conv2d) –

  • rpu_config (Optional[RPUConfigAlias]) –

  • realistic_read_write (bool) –

Returns

an AnalogConv2dMapped layer based on the digital Conv2d module.

Return type

aihwkit.nn.modules.conv_mapped.AnalogConv2dMapped

get_tile_size(in_channels, groups, kernel_size)[source]

Calculate the tile size.

Parameters
  • in_channels (int) –

  • groups (int) –

  • kernel_size (Tuple[int, ...]) –

Return type

int

groups: int
in_channels: int
in_features: int
input_size: float
kernel_size: Tuple[int, ...]
out_channels: int
out_features: int
output_padding: Tuple[int, ...]
padding: Tuple[int, ...]
padding_mode: str
realistic_read_write: bool
stride: Tuple[int, ...]
transposed: bool
use_bias: bool
weight_scaling_omega: float
class aihwkit.nn.modules.conv_mapped.AnalogConv3dMapped(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', rpu_config=None, realistic_read_write=False, weight_scaling_omega=None)[source]

Bases: aihwkit.nn.modules.conv_mapped._AnalogConvNdMapped

3D convolution layer that maps to analog tiles.

Applies a 3D convolution over an input signal composed of several input planes, using an analog tile for its forward, backward and update passes.

The module will split the weight matrix across multiple tiles if necessary. The physical maximal tile sizes are specified with MappingParameter in the RPU configuration; see RPUConfigAlias.

Note

The tensor parameters of this layer (.weight and .bias) are not guaranteed to contain the same values as the internal weights and biases stored in the analog tile. Please use set_weights and get_weights when attempting to read or modify the weight/bias. This read/write process can simulate the (noisy and inexact) analog writing and reading of the resistive elements.

Parameters
Raises

ModuleError – Tiling of weight matrices is done across channels only. If the number of kernel elements is larger than the maximal tile size, the mapping cannot be done.
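Because tiling splits channels but never kernel elements, a 3D kernel whose flattened element count already exceeds the tile's input limit cannot be mapped. A pure-Python sketch of that feasibility check (illustrative, not the library's code):

```python
from math import prod


def can_map(kernel_size, max_input_size):
    # A single kernel's elements must fit within one tile's input
    # dimension; only the channel dimension is split across tiles.
    return prod(kernel_size) <= max_input_size


print(can_map((3, 3, 3), 256))  # True:  27 <= 256
print(can_map((7, 7, 7), 256))  # False: 343 > 256
```

In the second case the layer would raise a ModuleError at construction rather than attempt a partial mapping.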

analog_bias: bool
digital_bias: bool
dilation: Tuple[int, ...]
fold_indices: torch.Tensor
classmethod from_digital(module, rpu_config=None, realistic_read_write=False)[source]

Return an AnalogConv3dMapped layer from a torch Conv3d layer.

Parameters
  • module (Conv3d) –

  • rpu_config (Optional[RPUConfigAlias]) –

  • realistic_read_write (bool) –

Returns

an AnalogConv3dMapped layer based on the digital Conv3d module.

Return type

aihwkit.nn.modules.conv_mapped.AnalogConv3dMapped

get_tile_size(in_channels, groups, kernel_size)[source]

Calculate the tile size.

Parameters
  • in_channels (int) –

  • groups (int) –

  • kernel_size (Tuple[int, ...]) –

Return type

int

groups: int
in_channels: int
in_features: int
input_size: float
kernel_size: Tuple[int, ...]
out_channels: int
out_features: int
output_padding: Tuple[int, ...]
padding: Tuple[int, ...]
padding_mode: str
realistic_read_write: bool
stride: Tuple[int, ...]
transposed: bool
use_bias: bool
weight_scaling_omega: float