aihwkit.experiments.experiments.inferencing module
Basic inferencing Experiment.
- class aihwkit.experiments.experiments.inferencing.BasicInferencing(dataset, model, batch_size=10, loss_function=<class 'torch.nn.modules.loss.CrossEntropyLoss'>, weight_template_id='', inference_repeats=2, inference_time=86400, remap_weights=True)[source]
Bases: Experiment
Experiment for inferencing a neural network.
Experiment that represents inferencing a neural network using a basic inferencing loop.

This class contains:

- the data needed for an experiment. The recommended way of setting this data is via the arguments of the constructor. Additionally, some of the items have getters that are used by the Workers that execute the experiments and by the inferencing loop.
- the inferencing algorithm, with the main entry point being train().
Note
When executing a
BasicInferencing
in the cloud, additional constraints are applied to the data. For example, the model is restricted to sequential layers of specific types, the dataset choices are limited, etc. Please check the CloudRunner documentation.
- Parameters:
dataset (Type[Dataset]) –
model (Module) –
batch_size (int) –
loss_function (type) –
weight_template_id (str) –
inference_repeats (int) –
inference_time (int) –
remap_weights (bool) –
- get_data_loader(dataset, batch_size, max_elements=0, dataset_root='/tmp/datasets')[source]
Return DataLoaders for the selected dataset.
- Parameters:
dataset (type) – the dataset class to be used.
batch_size (int) – the batch size used for inferencing.
max_elements (int) – the maximum number of elements of the dataset to be used. If 0, the full dataset is used.
dataset_root (str) – the path to the folder where the files from the dataset are stored.
- Returns:
A tuple with the inferencing and validation loaders.
- Return type:
Tuple[DataLoader, DataLoader]
- get_dataset_arguments(dataset)[source]
Return the dataset constructor arguments for specifying a subset.
- Parameters:
dataset (type) –
- Return type:
Tuple[Dict, Dict]
- get_dataset_transform(dataset)[source]
Return the dataset transform.
- Parameters:
dataset (type) –
- Return type:
Any
- get_model(weight_template_id, device)[source]
Get a copy of the set-up model (with the weights and biases loaded) from the original experiment model.
- Parameters:
weight_template_id (str) – location/index for the file that contains the state_dicts for the model.
device (device) – the torch device used for the model.
- Returns:
a copied model with loaded weights and biases
- Return type:
Module
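The copy-then-load behaviour described above can be sketched with plain PyTorch. The `deepcopy`/`load_state_dict` pattern here is an assumption about how such a method typically works, not the verified implementation:

```python
import copy

import torch
from torch.nn import Linear, Sequential

# Sketch of the get_model pattern: deep-copy the template model, then load
# stored weights and biases into the copy so the original stays untouched.
# The deepcopy/load_state_dict approach is an assumption, not taken from
# the aihwkit source.
template = Sequential(Linear(4, 2))
stored_state = template.state_dict()  # stands in for the stored state_dict

model = copy.deepcopy(template)
model.load_state_dict(stored_state)
model = model.to(torch.device("cpu"))
```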
- inference(validation_loader, model, loss_function, inference_repeats, inference_time, device, n_inference_times=10)[source]
Run the inferencing loop.
- Parameters:
validation_loader (DataLoader) – the data loader for the validation data.
model (Module) – the neural network to be evaluated.
loss_function (_Loss) – the loss function used for inferencing.
inference_repeats (int) – the number of times to repeat the process of programming and drifting.
inference_time (int) – the time span between programming the chip and performing the inference.
device (device) – the torch device used for the model.
n_inference_times (int) – the number of log-spaced inference times at which to evaluate the model.
- Returns:
A dictionary with the metrics of the inference runs.
- Return type:
Dict
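The docstring does not spell out how the n_inference_times log-spaced points are chosen; a minimal sketch, assuming they run from 1 second up to inference_time (the helper name and the 1-second lower bound are assumptions):

```python
import math


def log_spaced_times(inference_time: int, n_inference_times: int = 10) -> list:
    # Hypothetical helper: n_inference_times points spaced evenly on a log
    # scale from 1 s up to inference_time. The 1 s lower bound is an
    # assumption, not taken from the source.
    exponent = math.log10(inference_time)
    return [
        10 ** (i * exponent / (n_inference_times - 1))
        for i in range(n_inference_times)
    ]


times = log_spaced_times(86400, 10)
```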
- inferencing_step(validation_loader, model, loss_function, t_inference_list, device)[source]
Run a single inferencing step.
- Parameters:
validation_loader (DataLoader) – the data loader for the inferencing data.
model (Module) – the neural network to be evaluated.
loss_function (_Loss) – the loss function used for inferencing.
t_inference_list (list) – list of t_inferences.
device (device) – the torch device used for the model.
- Returns:
Tuple of ndarray of inference accuracy, error and loss.
- Return type:
Tuple[ndarray, ndarray, ndarray]
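The per-time-step bookkeeping this method performs can be sketched in plain Python. The evaluate helper, the fixed predictions, and the time values below are all illustrative stand-ins for a real model evaluation:

```python
# Sketch of the inferencing_step bookkeeping: for each t_inference, compare
# predictions to labels and record accuracy and error. In the real loop the
# model's weights are drifted to t_inference before predictions are made;
# here the predictions are fixed for illustration.
def evaluate(predictions, labels):
    correct = sum(p == l for p, l in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy, 1.0 - accuracy


accuracies, errors = [], []
t_inference_list = [1.0, 3600.0, 86400.0]
for t_inference in t_inference_list:
    acc, err = evaluate([0, 1, 1, 0], [0, 1, 0, 0])
    accuracies.append(acc)
    errors.append(err)
```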