aihwkit.optim.context module

Parameter context for analog tiles.

class aihwkit.optim.context.AnalogContext(analog_tile, parameter=None)[source]

Bases: torch.nn.parameter.Parameter

Context for analog optimizer.

Parameters
  • analog_tile (BaseTile) –

  • parameter (Optional[torch.nn.parameter.Parameter]) –

Return type

AnalogContext
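As a rough illustration of the pattern (a simplified sketch, not aihwkit's actual implementation; `SketchContext` and `FakeTile` are hypothetical stand-ins), a `Parameter` subclass can carry a pointer to its analog tile while remaining an ordinary parameter from the optimizer's point of view:

```python
import torch
from torch.nn import Parameter

class FakeTile:
    """Hypothetical stand-in for aihwkit's BaseTile."""

class SketchContext(Parameter):
    """Simplified sketch of the AnalogContext pattern: a Parameter
    whose tensor payload is only a placeholder, with the real state
    held by the analog tile it points to."""

    def __new__(cls, analog_tile):
        inst = super().__new__(cls, torch.empty(0), requires_grad=True)
        inst.analog_tile = analog_tile
        return inst

tile = FakeTile()
ctx = SketchContext(tile)
print(isinstance(ctx, Parameter), ctx.analog_tile is tile)  # True True
```

Because the context is a `Parameter`, it shows up in `module.parameters()` and can be handed to an optimizer, which is what lets the analog optimizer route updates to the tile.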

cpu()[source]

Move the context to CPU.

Note

This is a no-op for CPU context.

Returns

self

Return type

aihwkit.optim.context.AnalogContext

cuda(device=None)[source]

Move the context to a CUDA device.


Parameters

device (Optional[Union[torch.device, str, int]]) – the desired device of the tile.

Returns

This context in the specified device.

Return type

aihwkit.optim.context.AnalogContext

get_data()[source]

Get the data value of the underlying Tensor.

Return type

torch.Tensor

has_gradient()[source]

Return whether a gradient trace was stored.

Return type

bool
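For intuition (plain torch, not aihwkit internals): a gradient trace exists on a parameter once a backward pass has populated its ``.grad`` field, which is presumably the condition such a check inspects:

```python
import torch

# A plain Parameter has no gradient until backward() runs.
p = torch.nn.Parameter(torch.ones(3))
print(p.grad is not None)  # False: no backward pass yet
p.sum().backward()
print(p.grad is not None)  # True: a gradient is now stored
```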

reset(analog_tile=None)[source]

Reset the gradient trace and optionally set the tile pointer.

Parameters

analog_tile (Optional[BaseTile]) – if given, the new analog tile to point to.

Return type

None

set_data(data)[source]

Set the data value of the Tensor.

Parameters

data (torch.Tensor) – the data value to set.

Return type

None

to(*args, **kwargs)[source]

Move analog tiles of the current context to a device.

Note

Please be aware that moving analog tiles from GPU to CPU is currently not supported.

Caution

Tensor conversions other than moving the device to CUDA, such as changing the data type, are not supported for analog tiles and will simply be ignored.

Returns

This module in the specified device.

Parameters
  • args (Any) –

  • kwargs (Any) –

Return type

aihwkit.optim.context.AnalogContext
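The conversion filtering described above can be sketched as follows (an assumption about the behavior, not aihwkit's actual code; `move_context` is a hypothetical helper): only a move to a CUDA device is honored, while dtype and any other conversions are silently ignored.

```python
import torch

def move_context(tensor, *args, **kwargs):
    # Honor only a move to a CUDA device; silently ignore dtype
    # and any other conversion (sketch of the documented behavior).
    device = kwargs.get("device", args[0] if args else None)
    if isinstance(device, int):
        device = torch.device("cuda", device)
    elif isinstance(device, str):
        device = torch.device(device)
    if (isinstance(device, torch.device) and device.type == "cuda"
            and torch.cuda.is_available()):
        return tensor.cuda(device)
    return tensor

x = torch.zeros(2)
y = move_context(x, dtype=torch.float64)  # dtype change is ignored
print(y.dtype)  # still torch.float32
```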

aihwkit.optim.context.ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.

Parameters

size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.

Keyword Arguments
  • out (Tensor, optional) – the output tensor.

  • dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).

  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.ones(2, 3)
tensor([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.]])

>>> torch.ones(5)
tensor([ 1.,  1.,  1.,  1.,  1.])