aihwkit.simulator.tiles.floating_point module

High level analog tiles (floating point).

class aihwkit.simulator.tiles.floating_point.CudaFloatingPointTile(*args, **kwds)

Bases: aihwkit.simulator.tiles.floating_point.FloatingPointTile

Floating point tile (CUDA).

Floating point tile that uses GPU for its operation. The instantiation is based on an existing non-cuda tile: all the source attributes are copied except for the simulator tile, which is recreated using a GPU tile.


Parameters

    source_tile – tile to be used as the source of this tile.


cuda(device=None)

Return a copy of this tile in CUDA memory.


Parameters

    device (Optional[Union[torch.device, str, int]]) –

Return type

    CudaFloatingPointTile
is_cuda = True
class aihwkit.simulator.tiles.floating_point.FloatingPointTile(*args, **kwds)

Bases: aihwkit.simulator.tiles.base.BaseTile

Floating point tile.

Implements a floating point or ideal analog tile.

A linear layer with this tile is perfectly linear; it simply uses the RPUCuda library for execution.

Forward pass:

\[\mathbf{y} = W\mathbf{x}\]

\(W\) are the weights, \(\mathbf{x}\) is the input vector, and \(\mathbf{y}\) is the output of the vector-matrix multiplication. Note that if bias is used, \(\mathbf{x}\) is concatenated with a 1 so that the last column of \(W\) holds the biases.
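The bias-as-extra-column convention above can be checked with a plain numpy sketch (illustration only, not the aihwkit API):

```python
import numpy as np

rng = np.random.default_rng(0)
out_size, in_size = 3, 4
W = rng.standard_normal((out_size, in_size))
b = rng.standard_normal(out_size)
x = rng.standard_normal(in_size)

# Plain affine forward pass: y = W x + b
y = W @ x + b

# Equivalent form: bias as the last column of an augmented weight
# matrix, with the input extended by a constant 1.
W_aug = np.hstack([W, b[:, None]])
x_aug = np.append(x, 1.0)
y_aug = W_aug @ x_aug

assert np.allclose(y, y_aug)
```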

Backward pass:

Typical backward pass with transposed weights:

\[\mathbf{d'} = W^T\mathbf{d}\]

where \(\mathbf{d}\) is the error vector and \(\mathbf{d'}\) is the output of the backward matrix-vector multiplication.
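The transposed backward pass is a single matrix-vector product; a numpy sketch (the real tile delegates this to the simulator):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))   # out_size = 3, in_size = 4
d = rng.standard_normal(3)        # error vector from the layer above

# d' = W^T d propagates the error back to the input dimension.
d_prime = W.T @ d

assert d_prime.shape == (4,)
```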

Weight update:

Usual learning rule for back-propagation:

\[w_{ij} \leftarrow w_{ij} + \lambda d_i\,x_j\]
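Element-wise, this rule is a rank-one outer-product update; a numpy sketch of the formula as stated (illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)   # input vector
d = rng.standard_normal(3)   # error vector
lam = 0.1                    # learning rate lambda

# w_ij <- w_ij + lambda * d_i * x_j, applied to all elements at once.
W_new = W + lam * np.outer(d, x)

# Spot-check a single element against the scalar rule.
i, j = 1, 2
assert np.isclose(W_new[i, j], W[i, j] + lam * d[i] * x[j])
```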


Weight decay:

\[w_{ij} \leftarrow w_{ij}(1-\alpha r_\text{decay})\]

Weight decay is only applied when the analog tile's decay operation is explicitly called.


The life_time parameter is set during initialization; alpha is a scaling factor that can be given at run-time.
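The decay rule is a simple multiplicative shrink; a numpy sketch with illustrative values for r_decay and alpha (not taken from the library defaults):

```python
import numpy as np

W = np.ones((2, 2))
r_decay = 0.01   # decay rate, fixed at initialization
alpha = 0.5      # run-time scaling factor

# w_ij <- w_ij * (1 - alpha * r_decay)
W_decayed = W * (1 - alpha * r_decay)

assert np.allclose(W_decayed, 0.995)
```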


Diffusion:

\[w_{ij} \leftarrow w_{ij} + \xi\;r_\text{diffusion}\]

Similar to decay, diffusion is only applied when explicitly called. However, the parameters of the diffusion process are set during initialization and are fixed thereafter. \(\xi\) is a standard Gaussian process.
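A numpy sketch of one diffusion step, drawing \(\xi\) element-wise from a standard normal distribution (illustrative r_diffusion value, not a library default):

```python
import numpy as np

rng = np.random.default_rng(3)
W = np.zeros((2, 2))
r_diffusion = 0.02                 # diffusion strength, fixed at initialization

# w_ij <- w_ij + xi * r_diffusion, with xi ~ N(0, 1) per element.
xi = rng.standard_normal(W.shape)
W_diffused = W + xi * r_diffusion

assert W_diffused.shape == (2, 2)
```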

Parameters

  • out_size – output vector size of the tile, i.e. the dimension of \(\mathbf{y}\) in case of \(\mathbf{y} = W\mathbf{x}\) (or equivalently the dimension of the error vector \(\mathbf{d}\) of the backward pass).

  • in_size – input vector size, i.e. the dimension of the vector \(\mathbf{x}\) in case of \(\mathbf{y} = W\mathbf{x}\).

  • rpu_config – resistive processing unit configuration.

  • bias – whether to add a bias column to the tile, i.e. \(W\) has an extra column to code the biases. Internally, the input \(\mathbf{x}\) is automatically expanded by an extra dimension that is always set to 1.
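Tying the pieces together, here is a minimal numpy sketch of the ideal tile semantics described above (forward, backward, update, and bias as an extra weight column). The IdealTile class and its method names are hypothetical illustrations; the real FloatingPointTile delegates these operations to the RPUCuda simulator:

```python
import numpy as np

class IdealTile:
    """Hypothetical numpy model of an ideal (floating point) analog tile."""

    def __init__(self, out_size, in_size, bias=False):
        cols = in_size + 1 if bias else in_size
        self.bias = bias
        self.W = np.zeros((out_size, cols))

    def forward(self, x):
        # y = W x, with x expanded by a constant 1 when bias is used.
        if self.bias:
            x = np.append(x, 1.0)
        return self.W @ x

    def backward(self, d):
        # d' = W^T d, dropping the bias component from the result.
        d_prime = self.W.T @ d
        return d_prime[:-1] if self.bias else d_prime

    def update(self, x, d, lr=0.1):
        # w_ij <- w_ij + lr * d_i * x_j
        if self.bias:
            x = np.append(x, 1.0)
        self.W += lr * np.outer(d, x)

tile = IdealTile(out_size=2, in_size=3, bias=True)
x = np.array([1.0, 2.0, 3.0])
d = np.array([0.5, -0.5])
tile.update(x, d)
y = tile.forward(x)   # -> [0.75, -0.75]
```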


cuda(device=None)

Return a copy of this tile in CUDA memory.


Parameters

    device (Optional[Union[torch.device, str, int]]) –

Return type

    CudaFloatingPointTile