aihwkit.nn.functions module¶
Autograd functions for aihwkit.
- class aihwkit.nn.functions.AnalogFunction(*args, **kwargs)[source]¶
Bases:
aihwkit.nn.functions.AnalogFunctionBase
Function that delegates into an RPU unit.
- static forward(ctx, analog_ctx, input_, shared_weights=None, is_test=False)[source]¶
Execute the forward pass in the analog tile.
- Parameters
ctx (Any) –
analog_ctx (aihwkit.optim.context.AnalogContext) –
input_ (torch.Tensor) –
shared_weights (Optional[torch.Tensor]) –
is_test (bool) –
- Return type
torch.Tensor
- class aihwkit.nn.functions.AnalogFunctionBase(*args, **kwargs)[source]¶
Bases:
torch.autograd.function.Function
Base function for analog functions.
- static backward(ctx, grad_output)[source]¶
Execute the backward pass in the analog tile.
- Parameters
ctx (Any) –
grad_output (torch.Tensor) –
- Return type
Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]
- static forward(ctx, analog_ctx, input_, shared_weights=None, is_test=False)[source]¶
Execute the forward pass in the analog tile.
Note: indexed versions can be used when analog_ctx.use_indexed is set to True.
- Parameters
ctx (Any) –
analog_ctx (aihwkit.optim.context.AnalogContext) –
input_ (torch.Tensor) –
shared_weights (Optional[torch.Tensor]) –
is_test (bool) –
- Return type
torch.Tensor
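The backward return type above (a 4-tuple of optional tensors) follows the general torch.autograd.Function contract: backward must return one gradient entry per forward argument, with None for non-differentiable inputs such as the context object or the is_test flag. The toy Function below is not part of aihwkit; it only sketches that contract, with a plain scale factor standing in for the analog context.

```python
import torch

# Hypothetical toy Function mirroring the AnalogFunctionBase pattern:
# forward() receives extra non-tensor arguments, and backward() returns
# one gradient slot per forward() input, using None for the ones that
# are not differentiable tensors.
class ScaledIdentity(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scale, input_, is_test=False):
        # Stash what backward() will need on the autograd context.
        ctx.scale = scale
        return input_ * scale

    @staticmethod
    def backward(ctx, grad_output):
        # One slot per forward() argument: (scale, input_, is_test).
        return None, grad_output * ctx.scale, None

x = torch.ones(3, requires_grad=True)
y = ScaledIdentity.apply(2.0, x)
y.sum().backward()
print(x.grad)  # tensor([2., 2., 2.])
```

Custom Functions are invoked through `.apply(...)` rather than by calling `forward` directly, which is how the analog layers in aihwkit route their tensors through these classes.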
- class aihwkit.nn.functions.AnalogIndexedFunction(*args, **kwargs)[source]¶
Bases:
aihwkit.nn.functions.AnalogFunctionBase
Function that delegates into an RPU unit to use the indexed forward/backward/update.
- static forward(ctx, analog_ctx, input_, shared_weights=None, is_test=False)[source]¶
Execute the forward pass in the analog tile.
- Parameters
ctx (Any) –
analog_ctx (aihwkit.optim.context.AnalogContext) –
input_ (torch.Tensor) –
shared_weights (Optional[torch.Tensor]) –
is_test (bool) –
- Return type
torch.Tensor
- aihwkit.nn.functions.empty_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor¶
Returns an uninitialized tensor with the same size as input. torch.empty_like(input) is equivalent to torch.empty(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
- Parameters
input (Tensor) – the size of input will determine the size of the output tensor.
- Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of the returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of the returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of the returned tensor. Default: torch.preserve_format.
Example:
>>> a = torch.empty((2, 3), dtype=torch.int32, device='cuda')
>>> torch.empty_like(a)
tensor([[0, 0, 0],
        [0, 0, 0]], device='cuda:0', dtype=torch.int32)
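The example above requires a CUDA device. A CPU-only variant, checking only the metadata (the returned tensor's values are uninitialized and therefore arbitrary):

```python
import torch

# empty_like inherits dtype, layout and device from its input unless
# a keyword argument overrides that attribute; values are uninitialized.
a = torch.empty((2, 3), dtype=torch.int32)
b = torch.empty_like(a)
print(b.shape, b.dtype)  # torch.Size([2, 3]) torch.int32

# Overriding a keyword replaces just that one attribute:
c = torch.empty_like(a, dtype=torch.float64)
print(c.dtype)  # torch.float64
```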