aihwkit.simulator.rpu_base.tiles

Bindings for the simulator analog tiles.

class aihwkit.simulator.rpu_base.tiles.AnalogTile

Bases: FloatingPointTile

Analog tile.

Parameters:
  • x_size – X size of the tile.

  • d_size – D size of the tile.

get_meta_parameters(self: aihwkit.simulator.rpu_base.tiles.AnalogTile) → aihwkit.simulator.rpu_base.devices.AnalogTileParameter

Returns the current meta parameter structure.

class aihwkit.simulator.rpu_base.tiles.FloatingPointTile

Bases: pybind11_object

Floating point tile.

Parameters:
  • x_size – X size of the tile.

  • d_size – D size of the tile.
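
A minimal construction sketch (assuming the bindings accept the documented x_size and d_size arguments positionally, in that order):

    from aihwkit.simulator.rpu_base import tiles

    # x_size=4 input columns, d_size=3 output rows; weights are [d_size, x_size].
    tile = tiles.FloatingPointTile(4, 3)
    print(tile.get_x_size(), tile.get_d_size())  # -> 4 3

    # AnalogTile documents the same constructor parameters.
    analog_tile = tiles.AnalogTile(4, 3)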

backward(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, d_input: torch.Tensor, bias: bool = False, d_trans: bool = False, x_trans: bool = False, non_blocking: bool = False) → torch.Tensor

Compute the transposed dot product (backward pass).

Compute the transposed dot product:

\(\mathbf{y} = W^T\mathbf{d}\)

where \(\mathbf{d}\) is the input and \(W\) is the current weight matrix (of size [d_size, x_size]), so that the output \(\mathbf{y}\) has size x_size.

An analog tile will have a possible non-ideal version of this backward pass.

Parameters:
  • d_input – [N, *, d_size] input \(\mathbf{d}\) torch::Tensor.

  • bias – whether to use bias.

  • d_trans – whether the d_input matrix is transposed, that is, of size [d_size, *, N].

  • x_trans – whether the x output matrix is transposed.

Returns:

[N, *, x_size (-1)] or [x_size (-1), *, N] torch::Tensor.

Return type:

torch::Tensor
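
A hedged usage sketch of the backward pass (tile sizes are hypothetical; shapes follow the signature above):

    import torch
    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)  # x_size=4, d_size=3
    d = torch.rand(5, 3)                  # [N, d_size] incoming gradient
    x_grad = tile.backward(d)             # [N, x_size] -> torch.Size([5, 4])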

backward_indexed(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, d_input: torch.Tensor, x_tensor: torch.Tensor, non_blocking: bool = False) → torch.Tensor

Compute the dot product using an index matrix (backward pass).

Caution

Internal use for convolutions only.

Parameters:
  • d_input – 4D torch::tensor in order N,C,H,W

  • x_tensor – torch::tensor with convolution dimensions

Returns:

4D (5D) torch::tensor in order N,C, (x_depth,) x_height, x_width

Return type:

x_output

clip_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, weight_clipper_params: aihwkit.simulator.rpu_base.tiles.WeightClipParameter) → None

Clips the weights for use in hardware-aware training.

Several clipping types are available, see WeightClipParameter.

Parameters:

weight_clipper_params – parameters of the clipping.

decay_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, alpha: float = 1.0) → None

Decays the weights:

W *= (1 - alpha / life_time)

An analog tile will have a possible non-ideal version of this decay.

Parameters:

alpha – decay scale

diffuse_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → None

Diffuse the weights.

Diffuse the weights:

W += diffusion_rate * Gaussian noise

An analog tile will have a possible non-ideal version of this diffusion.

drift_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, time_since_last_call: float) → None

Drift weights according to a power law:

W = W0*(delta_t/t0)^(-nu_actual)

Applies the weight drift to all unchanged weight elements (judged by reset_tol) and resets the drift for those that have changed (nu is not re-drawn, however). Each device might have a different version of this drift.

Parameters:

time_since_last_call – This is the time between the calls (delta_t), typically the time to process a mini-batch for the network.
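
As a sketch, the three weight-degradation calls can be applied between mini-batches; on a plain floating-point tile they are effectively no-ops unless life_time, diffusion, and drift are configured in the meta parameters (an assumption about the defaults):

    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)  # x_size=4, d_size=3
    tile.decay_weights(1.0)    # W *= (1 - alpha / life_time), with alpha=1.0
    tile.diffuse_weights()     # W += diffusion_rate * Gaussian noise
    tile.drift_weights(10.0)   # delta_t=10 time units since the last call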

dump_extra(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → Dict[str, List[float]]

Return additional state variables for pickling.

Returns:

dictionary of extra variables states

Return type:

state

forward(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, x_input: torch.Tensor, bias: bool = False, x_trans: bool = False, d_trans: bool = False, is_test: bool = False, non_blocking: bool = False) → torch.Tensor

Compute the dot product (forward pass).

Compute the dot product:

\(\mathbf{y} = W\mathbf{x} [+ \mathbf{b}]\)

where \(\mathbf{x}\) is the input and \(W\) is the [d_size, x_size] current weight matrix. If bias is True, it is assumed that a bias row is added to the analog tile weights. The input \(\mathbf{x}\) is then expected to be of size x_size - 1, as internally it will be expanded by a 1 to match the bias row in the tile weights.

An analog tile will have a possible non-ideal version of this forward pass.

Parameters:
  • x_input – [N, *, x_size (-1)] input \(\mathbf{x}\) torch::Tensor.

  • bias – whether to use bias.

  • x_trans – whether the x_input matrix is transposed, that is, of size [x_size (-1), *, N].

  • d_trans – whether the d output matrix is transposed.

  • is_test – whether in inference (true) or training (false) mode.

Returns:

[N, *, d_size] or [d_size, *, N] matrix.

Return type:

torch::tensor
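
A hedged usage sketch of the forward pass (with bias=True the input would instead be of size x_size - 1):

    import torch
    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)  # x_size=4, d_size=3
    x = torch.rand(5, 4)                  # [N, x_size] input
    y = tile.forward(x)                   # [N, d_size] -> torch.Size([5, 3])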

forward_indexed(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, x_input: torch.Tensor, d_tensor: torch.Tensor, is_test: bool = False, non_blocking: bool = False) → torch.Tensor

Compute the dot product using an index matrix (forward pass).

Caution

Internal use for convolutions only.

Parameters:
  • x_input – 4D or 5D torch::tensor in order N,C,(D),H,W

  • d_tensor – torch::tensor with convolution dimensions

  • is_test – whether inference (true) mode or training (false)

Returns:

4D (5D) torch::tensor in order N, C, (d_depth,) d_height, d_width

Return type:

d_output

get_brief_info(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → str
get_d_size(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → int

Return the tile output dimensions (d-size).

Returns:

the tile number of rows

Return type:

int

get_hidden_parameter_names(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → List[str]

Get the hidden parameter names of the tile.

Returns:

list of hidden parameter names.

Return type:

list

get_hidden_parameters(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → torch.Tensor

Get the hidden parameters of the tile.

Returns:

Each 2D slice tensor is of size [d_size, x_size] (in row-major order), corresponding to the parameter name.

Return type:

3D tensor
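
A round-trip sketch (the name list is assumed to be empty for a plain floating-point tile and populated for device tiles):

    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)
    names = tile.get_hidden_parameter_names()
    if names:
        hidden = tile.get_hidden_parameters()  # [len(names), d_size, x_size]
        tile.set_hidden_parameters(hidden)     # write the unmodified values back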

get_hidden_update_index(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → int

Get the current device index that is updated (in case of multiple devices per cross-point).

Returns:

index of the (unit cell) device; 0 in all other cases.

get_info(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → str
get_learning_rate(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → float

Return the tile learning rate.

Returns:

the tile learning rate.

Return type:

float

get_meta_parameters(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → aihwkit.simulator.rpu_base.devices.FloatingPointTileParameter

Returns the current meta parameter structure.

get_pulse_counters(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → torch.Tensor

Get the pulse counters if available.

Returns:

Pulse counters: pos, neg (and for each sub-device)

Return type:

3D tensor

get_shared_weights_if(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → bool

Returns whether the weights are shared.

get_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → torch.Tensor

Return the exact tile weights.

Return the tile weights by producing an exact copy.

Note

This is not hardware realistic, and is used for debug purposes only.

Returns:

the [d_size, x_size] weight matrix.

Return type:

tensor

get_x_size(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → int

Return the tile input dimensions (x-size).

Returns:

the tile number of columns (including bias if available)

Return type:

int

has_matrix_indices(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → bool

Returns whether the index matrix necessary for the *_indexed functionality has been set.

Caution

Internal use only.

Returns:

whether it was set or not.

Return type:

bool

load_extra(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, state: Dict[str, List[float]], strict: bool) → None

Load the state dictionary generated by dump_extra.

Parameters:
  • state – state dictionary generated by dump_extra.

  • strict – whether to throw a runtime error when a field is not found.
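
A round-trip sketch of the extra-state API:

    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)
    state = tile.dump_extra()     # Dict[str, List[float]] of extra state
    tile.load_extra(state, True)  # strict=True: raise if a field is missing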

modify_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, weight_modifier_params: aihwkit.simulator.rpu_base.tiles.WeightModifierParameter) → None

Modifies the weights in the forward and backward (but not update) passes for use in hardware-aware training.

Several modifier types are available, see WeightModifierParameter.

Parameters:

weight_modifier_params – parameters of the modifications.

remap_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, weight_remap_params: aihwkit.simulator.rpu_base.tiles.WeightRemapParameter, scales: torch.Tensor) → torch.Tensor

Remaps the weights for use in hardware-aware training.

Several remap types are available, see WeightRemapParameter.

Parameters:
  • weight_remap_params – parameters of the remapping.

  • scales – scales that will be used and updated during remapping

Returns:

[d_size] of scales

Return type:

torch::tensor

reset_columns(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, start_column_idx: int = 0, num_columns: int = 1, reset_prob: float = 1.0) → None

Resets the weights with device-to-device and cycle-to-cycle variability (depending on device type), typically:

W_ij = xi*reset_std + reset_bias_ij

Parameters:
  • start_column_idx – start index of the columns (0..x_size-1)

  • num_columns – how many consecutive columns to reset (with circular wrapping)

  • reset_prob – individual probability of reset.
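
For example (a sketch), resetting the first two columns unconditionally:

    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)
    tile.reset_columns(0, 2, 1.0)  # start at column 0, reset 2 columns, prob 1.0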

reset_delta_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile) → None
set_delta_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, delta_weights: torch.Tensor) → None
set_hidden_parameters(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, arg0: torch.Tensor) → None

Sets the hidden parameters of the tile.

Parameters:

tensor (3D) – Each 2D slice tensor is of size [d_size, x_size] (in row-major order) corresponding to the parameter name.

set_hidden_update_index(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, arg0: int) → None

Set the updated device index (in case multiple devices per cross-point).

Note

Only used for vector unit cells, so far. Ignored in other cases.

Parameters:

idx – index of the (unit cell) devices

set_learning_rate(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, learning_rate: float) → None

Set the tile learning rate.

Set the tile learning rate to -learning_rate. Please note that the learning rate is always taken to be negative (because of the meaning in gradient descent) and positive learning rates are not supported.

Parameters:

learning_rate – the desired learning rate.

set_matrix_indices(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, indices: torch.Tensor) → None

Sets the index vector for the *_indexed functionality.

Caution

Internal use only.

Parameters:

indices – int torch::Tensor

set_shared_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, weights: torch.Tensor) → None
set_verbosity_level(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, verbose: int) → None

Sets the verbosity level for debugging.

Parameters:

verbose – verbosity level

set_weights(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, weights: torch.Tensor) → None

Set the tile weights exactly.

Set the tile weights to the exact values of the weights parameter.

Note

This is not hardware realistic, and is used for debug purposes only.

Parameters:

weights – [d_size, x_size] weight matrix.

set_weights_uniform_random(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, min_value: float, max_value: float) → None

Sets the weights uniformly at random in the range min_value to max_value.

Parameters:
  • min_value – lower bound of uniform distribution

  • max_value – upper bound
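
A weight read/write sketch (set_weights and get_weights are debug-only exact accessors, as noted above):

    import torch
    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)        # x_size=4, d_size=3
    w = torch.zeros(3, 4)                       # [d_size, x_size]
    tile.set_weights(w)                         # exact write
    tile.set_weights_uniform_random(-0.5, 0.5)  # uniform re-initialization
    w2 = tile.get_weights()                     # exact copy, shape [3, 4]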

update(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, x_input: torch.Tensor, d_input: torch.Tensor, bias: bool, d_trans: bool = False, x_trans: bool = False, non_blocking: bool = False) → None

Compute an n-rank update.

Compute an n-rank update:

\(W \leftarrow W - \lambda \mathbf{d}\mathbf{x}^T\)

where \(\lambda\) is the learning rate and the outer product matches the [d_size, x_size] shape of \(W\).

An analog tile will have a possible non-ideal version of this update pass.

Note

The learning rate is always positive, and thus scaling is negative.

Parameters:
  • x_input – [N, *, x_size (-1)] input \(\mathbf{x}\) torch::Tensor.

  • d_input – [N, *, d_size] input \(\mathbf{d}\) torch::Tensor.

  • bias – whether to use bias.

  • x_trans – whether the x_input matrix is transposed, i.e. [x_size (-1), *, N].

  • d_trans – whether the d_input matrix is transposed, i.e. [d_size, *, N].
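
A hedged sketch of a single rank-N update step (learning rate and shapes as documented above):

    import torch
    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)  # x_size=4, d_size=3
    tile.set_learning_rate(0.1)           # lambda used by update()
    x = torch.rand(5, 4)                  # [N, x_size] activations
    d = torch.rand(5, 3)                  # [N, d_size] error signal
    tile.update(x, d, False)              # in-place update of W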

update_indexed(self: aihwkit.simulator.rpu_base.tiles.FloatingPointTile, x_input: torch.Tensor, d_input: torch.Tensor, non_blocking: bool = False) → None

Compute the dot product using an index matrix (update pass).

Caution

Internal use for convolutions only.

Parameters:
  • x_input – 4D torch::tensor input in order N,C,H,W

  • d_input – 4D torch::tensor (grad_output) in order N,C,oH,oW

class aihwkit.simulator.rpu_base.tiles.WeightClipParameter

Bases: pybind11_object

property fixed_value
property sigma
property type
class aihwkit.simulator.rpu_base.tiles.WeightClipType

Bases: pybind11_object

Members:

None

FixedValue

LayerGaussian

AverageChannelMax

AverageChannelMax = <WeightClipType.AverageChannelMax: 3>
FixedValue = <WeightClipType.FixedValue: 1>
LayerGaussian = <WeightClipType.LayerGaussian: 2>
None = <WeightClipType.None: 0>
property name
property value
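
A clipping sketch for use with clip_weights above (assuming the parameter struct is default-constructible and that fixed_value bounds the weights symmetrically, which is inferred from the property names):

    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)
    clip = tiles.WeightClipParameter()
    clip.type = tiles.WeightClipType.FixedValue
    clip.fixed_value = 0.5   # assumed: clip weights to [-0.5, 0.5]
    tile.clip_weights(clip)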
class aihwkit.simulator.rpu_base.tiles.WeightModifierParameter

Bases: pybind11_object

property assumed_wmax
property coeffs
property copy_last_column
property dorefa_clip
property enable_during_test
property g_max
property pcm_prob_at_gmax
property pcm_prob_at_random
property pcm_prob_at_reset
property pcm_t0
property pcm_t_inference
property pcm_zero_thres
property pdrop
property per_batch_sample
property rel_to_actual_wmax
property res
property std_dev
property sto_round
property type
class aihwkit.simulator.rpu_base.tiles.WeightModifierType

Bases: pybind11_object

Members:

Copy

Discretize

MultNormal

AddNormal

DiscretizeAddNormal

DoReFa

Poly

PCMNoise

ProgNoise

DropConnect

None

AddNormal = <WeightModifierType.AddNormal: 3>
Copy = <WeightModifierType.Copy: 0>
Discretize = <WeightModifierType.Discretize: 1>
DiscretizeAddNormal = <WeightModifierType.DiscretizeAddNormal: 4>
DoReFa = <WeightModifierType.DoReFa: 5>
DropConnect = <WeightModifierType.DropConnect: 8>
MultNormal = <WeightModifierType.MultNormal: 2>
None = <WeightModifierType.Copy: 0>
PCMNoise = <WeightModifierType.PCMNoise: 7>
Poly = <WeightModifierType.Poly: 6>
ProgNoise = <WeightModifierType.ProgNoise: 9>
property name
property value
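
A modifier sketch along the same lines, for use with modify_weights above (the std_dev semantics for AddNormal are an assumption based on the property name):

    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)
    mod = tiles.WeightModifierParameter()
    mod.type = tiles.WeightModifierType.AddNormal
    mod.std_dev = 0.02        # assumed: std of the additive Gaussian noise
    tile.modify_weights(mod)  # applied in forward/backward, not update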
class aihwkit.simulator.rpu_base.tiles.WeightRemapParameter

Bases: pybind11_object

property max_scale_range
property max_scale_ref
property remapped_wmax
property type
class aihwkit.simulator.rpu_base.tiles.WeightRemapType

Bases: pybind11_object

Members:

None

LayerwiseSymmetric

ChannelwiseSymmetric

ChannelwiseSymmetric = <WeightRemapType.ChannelwiseSymmetric: 2>
LayerwiseSymmetric = <WeightRemapType.LayerwiseSymmetric: 1>
None = <WeightRemapType.None: 0>
property name
property value
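
Finally, a remapping sketch for use with remap_weights above (the per-row meaning of scales is inferred from the [d_size] return shape documented there):

    import torch
    from aihwkit.simulator.rpu_base import tiles

    tile = tiles.FloatingPointTile(4, 3)            # x_size=4, d_size=3
    remap = tiles.WeightRemapParameter()
    remap.type = tiles.WeightRemapType.ChannelwiseSymmetric
    scales = torch.ones(3)                          # one scale per output row (d_size)
    new_scales = tile.remap_weights(remap, scales)  # [d_size] updated scales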