aihwkit.simulator.tiles.inference module

High level analog tiles (inference).

class aihwkit.simulator.tiles.inference.CudaInferenceTile(source_tile)[source]

Bases: Generic[aihwkit.simulator.tiles.base.RPUConfigGeneric]

Analog inference tile (CUDA).

Analog inference tile that uses GPU for its operation. The instantiation is based on an existing non-cuda tile: all the source attributes are copied except for the simulator tile, which is recreated using a GPU tile.

Caution

Deprecated. Use InferenceTile(..).cuda() instead.

Parameters

source_tile – tile to be used as the source of this tile
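
A minimal sketch of the recommended replacement, assuming a default InferenceRPUConfig (the tile sizes are illustrative):

from aihwkit.simulator.tiles import InferenceTile
from aihwkit.simulator.configs import InferenceRPUConfig

tile = InferenceTile(3, 4, rpu_config=InferenceRPUConfig())
cuda_tile = tile.cuda()   # preferred over the deprecated CudaInferenceTile(tile)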

class aihwkit.simulator.tiles.inference.InferenceTile(out_size, in_size, rpu_config=None, bias=False, in_trans=False, out_trans=False, shared_weights=True)[source]

Bases: Generic[aihwkit.simulator.tiles.base.RPUConfigGeneric]

Tile used for analog inference and hardware-aware training for inference.

Parameters
  • out_size – output size

  • in_size – input size

  • rpu_config – resistive processing unit configuration.

  • bias – whether to add a bias column to the tile.

  • in_trans – Whether to assume a transposed input (batch first)

  • out_trans – Whether to assume a transposed output (batch first)

  • shared_weights – Whether to keep the weights in torch’s memory space
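
A minimal usage sketch, assuming a default InferenceRPUConfig; the sizes and random data below are purely illustrative:

import torch

from aihwkit.simulator.tiles import InferenceTile
from aihwkit.simulator.configs import InferenceRPUConfig

# 3 outputs, 4 inputs, default inference configuration.
tile = InferenceTile(3, 4, rpu_config=InferenceRPUConfig(), bias=False)

# Set weights and run a forward pass on a batch of 5 samples.
tile.set_weights(torch.rand(3, 4))
y = tile.forward(torch.rand(5, 4))   # input is [batch, in_size] since in_trans=False
print(y.shape)                       # torch.Size([5, 3])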

cuda(device=None)[source]

Return a copy of this tile in CUDA memory.

Parameters

device (Optional[Union[torch.device, str, int]]) – CUDA device

Returns

Self with the underlying C++ tile moved to CUDA memory.

Raises

CudaError – if the library has not been compiled with CUDA.

Return type

BaseTile
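
For example, reusing the tile from the sketch above and guarding against builds without GPU support (a sketch, not required usage):

import torch

if torch.cuda.is_available():
    tile = tile.cuda()   # raises CudaError if the library was compiled without CUDA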

drift_weights(t_inference=0.0)[source]

Programs and drifts the current reference weights.

The weight reference is either the current weights or those stored when initialize_drift_reference() was called; drifting then overwrites the current weights with the drifted ones.

Parameters

t_inference (float) – Assumed inference time (in seconds). Programming ends at t=0s; the remaining time is waiting time, during which the devices may drift and accumulate noise. See the noise model used for details.

Return type

None
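
A rough sketch of a repeated drift evaluation, assuming the weights were already programmed via program_weights() (documented below); the inference times are illustrative:

import torch

for t_inference in (0.0, 3600.0, 86400.0):   # 0 s, 1 hour, 1 day of waiting time
    tile.drift_weights(t_inference=t_inference)
    y = tile.forward(torch.rand(5, 4), is_test=True)   # evaluate the drifted state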

forward(x_input, is_test=False)[source]

Forward pass with drift compensation.

Note

The drift compensation scale is only applied during testing, i.e. if is_test=True.

Parameters
  • x_input (torch.Tensor) –

  • is_test (bool) –

Return type

torch.Tensor
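
For instance (a sketch; x is any input of shape [batch, in_size]):

x = torch.rand(5, 4)
y_train = tile.forward(x, is_test=False)   # no drift compensation scale applied
y_test = tile.forward(x, is_test=True)     # drift compensation scale applied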

post_update_step()[source]

Operations that need to be called once per mini-batch.

Return type

None
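
A hedged sketch of a manual mini-batch step; the direct use of update() here is only for illustration, and when the tile is driven through an analog layer and optimizer this call is typically made for you:

x = torch.rand(5, 4)       # inputs [batch, in_size]
d = torch.rand(5, 3)       # error deltas [batch, out_size]
tile.update(x, d)          # in-memory (analog) weight update
tile.post_update_step()    # once per mini-batch (e.g. weight clipping, depending on the rpu_config)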

program_weights(from_reference=True)[source]

Applies weight noise to the current tile weights and saves them for repeated drift experiments.

This method also establishes the drift coefficients for each conductance slice.

Parameters

from_reference (bool) – Whether to use weights from reference

Return type

None
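
A typical sequence for a drift experiment, continuing the sketch above (values illustrative):

tile.set_weights(torch.rand(3, 4))          # ideal target weights
tile.program_weights(from_reference=True)   # add programming noise, establish drift coefficients
tile.drift_weights(t_inference=3600.0)      # drift the programmed state for one hour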

aihwkit.simulator.tiles.inference.ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.

Parameters

size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

Keyword Arguments
  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.ones(2, 3)
tensor([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.]])

>>> torch.ones(5)
tensor([ 1.,  1.,  1.,  1.,  1.])

aihwkit.simulator.tiles.inference.zeros(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.

Parameters

size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

Keyword Arguments
  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.zeros(2, 3)
tensor([[ 0.,  0.,  0.],
        [ 0.,  0.,  0.]])

>>> torch.zeros(5)
tensor([ 0.,  0.,  0.,  0.,  0.])