aihwkit.simulator.tiles.custom module

High level analog tiles (floating point).

class aihwkit.simulator.tiles.custom.CustomRPUConfig(tile_class=<class 'aihwkit.simulator.tiles.custom.CustomTile'>, runtime=<factory>, pre_post=<factory>, tile_array_class=<class 'aihwkit.simulator.tiles.array.TileModuleArray'>, mapping=<factory>, simulator_tile_class=<class 'aihwkit.simulator.tiles.custom.CustomSimulatorTile'>, forward=<factory>, backward=<factory>, update=<factory>)[source]

Bases: MappableRPU, PrePostProcessingRPU

Configuration for a resistive processing unit using the CustomTile.

Parameters:
backward: IOParameters

Input-output parameter setting for the backward direction.

forward: IOParameters

Input-output parameter setting for the forward direction.

simulator_tile_class

Simulator tile class implementing the analog forward / backward / update.

alias of CustomSimulatorTile

tile_array_class

Tile class used for mapped logical tile arrays.

alias of TileModuleArray

tile_class

Tile class that corresponds to this RPUConfig.

alias of CustomTile

update: CustomUpdateParameters

Parameter for the update behavior.
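As a hedged usage sketch (assuming aihwkit is installed), the configuration can be constructed with the fields documented above; all class and field names come from this page, and the remaining fields keep their factory defaults:

```python
from aihwkit.simulator.tiles.custom import (
    CustomRPUConfig,
    CustomUpdateParameters,
)

# Configure the custom tile to add Gaussian noise (std 0.01) to the
# weight gradient during the update step.
rpu_config = CustomRPUConfig(
    update=CustomUpdateParameters(gradient_noise=0.01),
)
```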

class aihwkit.simulator.tiles.custom.CustomSimulatorTile(x_size, d_size, rpu_config, bias=False)[source]

Bases: SimulatorTile, Module

Custom Simulator Tile for analog training.

To implement specialized SGD algorithms, the forward / backward / update passes are defined here explicitly, without using autograd.

When not overridden, forward and backward use the analog forward pass of the TorchSimulatorTile.

The update is computed in floating point but optionally adds noise to the gradient.

Parameters:
  • x_size (int) –

  • d_size (int) –

  • rpu_config (CustomRPUConfig) –

  • bias (bool) –

backward(d_input, bias=False, in_trans=False, out_trans=False, non_blocking=False)[source]

Backward pass.

Note

Ignores additional arguments

Raises:

TileError – in case transposed input / output or bias is requested

Parameters:
  • d_input (Tensor) –

  • bias (bool) –

  • in_trans (bool) –

  • out_trans (bool) –

  • non_blocking (bool) –

Return type:

Tensor
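A minimal plain-Python sketch of what the backward pass computes (illustrative only, not the aihwkit implementation): the input gradient is the transposed matrix-vector product of the weights with the output-side gradient.

```python
def backward_sketch(weights, d_input):
    """Sketch of the analog backward pass: returns W^T @ d_input
    for an [out_size, in_size] weight matrix (illustrative only)."""
    out_size, in_size = len(weights), len(weights[0])
    return [
        sum(weights[i][j] * d_input[i] for i in range(out_size))
        for j in range(in_size)
    ]
```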

extra_repr()[source]

Extra documentation string.

Return type:

str

forward(x_input, bias=False, in_trans=False, out_trans=False, is_test=False, non_blocking=False)[source]

General simulator tile forward.

Note

Ignores additional arguments

Raises:

TileError – in case transposed input / output or bias is requested

Parameters:
  • x_input (Tensor) –

  • bias (bool) –

  • in_trans (bool) –

  • out_trans (bool) –

  • is_test (bool) –

  • non_blocking (bool) –

Return type:

Tensor
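The forward pass is, at its core, a matrix-vector product of the tile weights with the input. A plain-Python sketch of this computation (illustrative only, not the aihwkit implementation; bias is handled by the tile periphery and raises TileError here):

```python
def forward_sketch(weights, x_input):
    """Sketch of the simulator-tile forward pass: y = W @ x for an
    [out_size, in_size] weight matrix (illustrative only)."""
    return [
        sum(w_ij * x_j for w_ij, x_j in zip(row, x_input))
        for row in weights
    ]
```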

get_brief_info()[source]

Returns a brief info string about the tile.

Return type:

str

get_d_size()[source]

Returns the output size of the tile.

Return type:

int

get_learning_rate()[source]

Get the learning rate of the tile.

Returns:

the learning rate, if set.

Return type:

float | None

get_meta_parameters()[source]

Returns meta parameters.

Return type:

Any

get_weights()[source]

Get the tile weights.

Returns:

the [out_size, in_size] weight matrix. The bias is handled by the tile periphery and is not returned here.

Return type:

Tensor

get_x_size()[source]

Returns the input size of the tile.

Return type:

int

set_config(rpu_config)[source]

Update the configuration to allow on-the-fly changes.

Parameters:

rpu_config (CustomRPUConfig) – configuration to use in the next forward passes.

Return type:

None

set_learning_rate(learning_rate)[source]

Set the learning rate of the tile.

No-op for tiles that do not need a learning rate.

Parameters:

learning_rate (float | None) – the learning rate to set

Return type:

None

set_weights(weight)[source]

Set the tile weights.

Parameters:

weight (Tensor) – [out_size, in_size] weight matrix.

Return type:

None

set_weights_uniform_random(bmin, bmax)[source]

Sets the weights to uniform random numbers between bmin and bmax.

Parameters:
  • bmin (float) – min value

  • bmax (float) – max value

Raises:

TileError – in case bmin >= bmax

Return type:

None
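A plain-Python sketch of this initialization, including the bound check the docs describe (a hypothetical helper, not the aihwkit method):

```python
import random


def uniform_random_weights(out_size, in_size, bmin, bmax):
    """Sketch: build an [out_size, in_size] weight matrix with entries
    drawn uniformly from [bmin, bmax); raises if bmin >= bmax."""
    if bmin >= bmax:
        raise ValueError("bmin must be smaller than bmax")
    return [
        [random.uniform(bmin, bmax) for _ in range(in_size)]
        for _ in range(out_size)
    ]
```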

update(x_input, d_input, bias=False, in_trans=False, out_trans=False, non_blocking=False)[source]

Update with gradient noise.

Note

Ignores additional arguments

Raises:

TileError – in case transposed input / output or bias is requested

Parameters:
  • x_input (Tensor) –

  • d_input (Tensor) –

  • bias (bool) –

  • in_trans (bool) –

  • out_trans (bool) –

  • non_blocking (bool) –

Return type:

Tensor
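A plain-Python sketch of the update with gradient noise (illustrative only, not the aihwkit implementation; the SGD sign convention here is an assumption): the weight gradient is the outer product of the output-side and input-side vectors, optionally perturbed per entry by zero-mean Gaussian noise before the step.

```python
import random


def update_sketch(weights, x_input, d_input, lr, gradient_noise=0.0):
    """Sketch of the noisy floating-point update: the gradient entry
    d_i * x_j gets zero-mean Gaussian noise with std `gradient_noise`
    added before the SGD step (illustrative only)."""
    for i, d_i in enumerate(d_input):
        for j, x_j in enumerate(x_input):
            grad = d_i * x_j
            if gradient_noise > 0.0:
                grad += random.gauss(0.0, gradient_noise)
            weights[i][j] -= lr * grad
    return weights
```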

class aihwkit.simulator.tiles.custom.CustomTile(out_size, in_size, rpu_config=None, bias=False, in_trans=False, out_trans=False)[source]

Bases: TileModule, TileWithPeriphery, SimulatorTileWrapper

Custom tile based on TileWithPeriphery.

Implements a tile with periphery for analog training without the RPUCuda engine.

Parameters:
  • out_size (int) –

  • in_size (int) –

  • rpu_config (CustomRPUConfig | None) –

  • bias (bool) –

  • in_trans (bool) –

  • out_trans (bool) –

forward(x_input, tensor_view=None)[source]

Torch forward function that calls the analog context forward.

Parameters:
  • x_input (Tensor) –

  • tensor_view (Tuple | None) –

Return type:

Tensor

post_update_step()[source]

Operators that need to be called once per mini-batch.

Note

This function is called by the analog optimizer.

Caution

If no analog optimizer is used, the post-update steps will not be performed.

Return type:

None

supports_indexed = False

class aihwkit.simulator.tiles.custom.CustomUpdateParameters(gradient_noise=0.0)[source]

Bases: _PrintableMixin

Custom parameters for the update.

Parameters:

gradient_noise (float) – standard deviation of the Gaussian noise added to the weight gradient

gradient_noise: float = 0.0

Adds Gaussian noise (with this std) to the weight gradient.
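A short plain-Python illustration of what this std means (not aihwkit code): noise samples drawn with std gradient_noise scatter around the noise-free gradient value with exactly that spread.

```python
import random

random.seed(0)

grad = 0.5              # a single noise-free gradient entry
gradient_noise = 0.1    # std of the added Gaussian noise

# Repeatedly perturb the gradient entry; the samples stay centered on
# the noise-free value with spread ~= gradient_noise.
noisy = [grad + random.gauss(0.0, gradient_noise) for _ in range(10000)]
mean = sum(noisy) / len(noisy)
std = (sum((g - mean) ** 2 for g in noisy) / len(noisy)) ** 0.5
```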