Advanced installation guide

Compilation

The build system for aihwkit is based on cmake, making use of scikit-build for generating the Python packages.

Some of the dependencies and tools are Python-based. For convenience, we suggest creating a virtual environment as a way to isolate your environment:

$ python3 -m venv aihwkit_env
$ cd aihwkit_env
$ source bin/activate
(aihwkit_env) $

Note

The following sections assume that the command line examples are executed in the activated aihwkit_env environment.
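When you are done working with the toolkit, the environment can be left using the standard venv command:

(aihwkit_env) $ deactivate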

Dependencies

For compiling aihwkit, the following dependencies are required:

Dependency                    Version   Notes
----------------------------  --------  ---------------------------------------------------------
C++11 compatible compiler
cmake                         3.18+
pybind11                      2.6.2+    Versions 2.6.0+ can be installed using pip (recommended)
scikit-build                  0.11.0+
Python 3 development headers  3.7+
BLAS implementation                     OpenBLAS or Intel MKL
PyTorch                       1.7+      The libtorch library and headers are needed [1]
OpenMP                        11.0.0+   Optional, OpenMP library and headers [2]
CUDA                          9.0+      Optional, for GPU-enabled simulator
Nvidia CUB                    1.8.0     Optional, for GPU-enabled simulator [4]
googletest                    1.10.0    Optional, for building the C++ tests [4]

Please refer to your operating system documentation for instructions on how to install the different dependencies. The following sections contain quick instructions for several operating systems:

Debian-based

On a Debian-based operating system, the following commands can be used for installing the minimal dependencies:

$ sudo apt-get install python3-dev libopenblas-dev
$ pip install cmake scikit-build torch pybind11
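A C++11 compatible compiler is also required. If one is not already present, the build-essential package provides g++ and the accompanying toolchain:

$ sudo apt-get install build-essential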

OSX

On an OSX-based system, the following commands can be used for installing the minimal dependencies (note that Xcode needs to be installed):

$ brew install openblas
$ pip install cmake scikit-build torch pybind11

Miniconda (e.g. Linux)

On a miniconda-based system, the following commands can be used for installing the minimal dependencies [3]:

$ conda install cmake openblas pybind11
$ conda install -c conda-forge scikit-build
$ conda install -c pytorch pytorch

Note

You can also install all the requirements at once via:

$ pip install -r requirements.txt
$ pip install -r requirements-dev.txt
$ pip install -r requirements-examples.txt

Note

If you are using CUDA (see below), you need a CUDA-enabled PyTorch installation. Please refer to the PyTorch website for instructions on how to install it.
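For example, a CUDA-enabled wheel can typically be installed by pointing pip at the matching CUDA index. The cu118 URL below is only an illustration; pick the one that matches your CUDA version from the PyTorch website:

$ pip install torch --index-url https://download.pytorch.org/whl/cu118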

Windows using conda (Experimental)

On a Windows-based system, the following instructions can be used for installing the dependencies:

  1. Install (regular) Miniconda, the newest CUDA driver (if available), and the MS Visual Studio 2019 community edition with the "Desktop development with C++" workload.

  2. Start anaconda powershell (miniconda) and install the following packages:

    $ conda install pybind11 scikit-build
    $ conda install pytorch -c pytorch
    $ conda install -c intel mkl mkl-devel mkl-static mkl-include
    

Using this method, please make sure that the flags -DRPU_BLAS=MKL and -G "Visual Studio 16 2019" are passed to the installation and compilation commands. In particular, use the following command instead of the default one in the Installing and compiling sub-section:

$ pip install -v aihwkit --install-option="-DUSE_CUDA=ON" --install-option="-DRPU_BLAS=MKL" --install-option="-GVisual Studio 16 2019"

Windows with OpenBLAS (Experimental)

As an alternative on a Windows-based system, compilation using OpenBLAS is also possible. We recommend installing OpenBLAS following this OpenBLAS - Visual Studio installation and usage guide. It requires installing MS Visual Studio 2019 and Miniconda.

After compiling and installing OpenBLAS, in the same Miniconda terminal, the following commands can be used for installing the minimal dependencies:

$ conda install pybind11 scikit-build
$ conda install pytorch -c pytorch

For compiling aihwkit, it is recommended to use the x64 Native Tools Command Prompt for VS 2019.

Note

If you want to use pip instead of conda, the following commands can be used:

$ pip install cmake scikit-build pybind11
$ pip install torch -f https://download.pytorch.org/whl/torch_stable.html

Installing and compiling

Once the dependencies are in place, the following commands can be used for cloning the repository and changing into it:

$ git clone https://github.com/IBM/aihwkit.git
$ cd aihwkit

You can typically install the requirements via (but see above for more OS-specific details):

$ pip install -r requirements.txt
$ pip install -r requirements-dev.txt
$ pip install -r requirements-examples.txt

Without GPU support (with OpenBLAS):

This uses the OpenBLAS library for fast numerical computations:

$ make build

Note

Note that OpenBLAS needs to be installed, e.g. with:

$ conda install openblas

Without GPU support (with MKL):

This uses the Intel MKL library instead of the OpenBLAS library:

$ make build_mkl

Note

Note that MKL needs to be installed, and the environment variable MKLROOT set if it is not in a standard folder, e.g. with:

$ conda install -c intel mkl mkl-devel mkl-static mkl-include

With GPU support:

The CUDA library needs to be set up properly so that the compiler can find it (you may need to set CUDA_HOME). Please refer to the CUDA installation instructions. This also uses MKL by default, which thus needs to be installed (see above). Then:

$ make build_cuda
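If the compiler cannot locate CUDA, exporting the usual environment variables beforehand often helps. A minimal sketch, assuming CUDA is installed under /usr/local/cuda (adjust the path to your installation):

$ export CUDA_HOME=/usr/local/cuda
$ export PATH=$CUDA_HOME/bin:$PATH
$ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH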

If you know your CUDA architecture, you can specify it directly (which typically results in a much quicker initial loading time):

$ make build_cuda flags="-DRPU_CUDA_ARCHITECTURES='60'"
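Multiple architectures can also be given as a semicolon-separated list, following the cmake CUDA_ARCHITECTURES convention. The values below are just an example; use the compute capabilities of your own GPUs:

$ make build_cuda flags="-DRPU_CUDA_ARCHITECTURES='70;80'"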

If there are any issues with the dependencies or the compilation, the output of the command will help in diagnosing the issue.

In-place installation

If you want to install the library inside the cloned directory, which is more convenient for developers (see also Development setup), simply replace build with build_inplace in the above make commands, e.g.:

$ make build_inplace_cuda

Here, you need to make sure that PYTHONPATH includes the src sub-directory of the aihwkit base directory, e.g. (when in the base directory):

$ export PYTHONPATH=`pwd`/src:$PYTHONPATH
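To quickly verify that the in-place build is picked up, you can import the compiled simulator module; for instance, the following check (which prints False for a CPU-only build) should run without an import error:

$ python -c "from aihwkit.simulator.rpu_base import cuda; print(cuda.is_compiled())"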

CUDA-enabled docker image

As an alternative to a regular install, a CUDA-enabled docker image can also be built using the CUDA.Dockerfile included in the repository.

In order to build the image, first identify the CUDA_ARCH for your GPU using nvidia-smi on your local machine:

export CUDA_ARCH=$(nvidia-smi --query-gpu=compute_cap --format=csv | sed -n '2 p' | tr -d '.')
echo $CUDA_ARCH

The image can be built via:

docker build \
--tag aihwkit:cuda \
--build-arg USERNAME=${USER} \
--build-arg USERID=$(id -u $USER) \
--build-arg GROUPID=$(id -g $USER) \
--build-arg CUDA_ARCH=${CUDA_ARCH} \
--build-arg CUDA_VER=11.7 \
--build-arg UBUNTU_VER=22.04 \
--build-arg PYTORCH_PIP_URL=https://download.pytorch.org/whl/cu116 \
--file CUDA.Dockerfile .

If building your image against a different CUDA or PyTorch version, please make sure to set the build arguments accordingly.
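Once built, the image can be run with GPU access; a minimal sketch, assuming the NVIDIA Container Toolkit is installed on the host:

docker run --gpus all -it --rm aihwkit:cuda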

Note

Please note that the instructions on this page refer to installing as an end user. If you are planning to contribute to the project, an alternative setup and tips can be found in the Development setup section, which is more tuned towards the needs of a development cycle.

[1]

This library uses PyTorch as both a build dependency and a runtime dependency. Please ensure that your torch installation includes libtorch and the development headers - they are included by default if installing torch from pip.

[2]

Support for parts of the OpenMP 4.0+ standard is used. Some compilers, such as LLVM or Clang, do not support OpenMP out of the box. If you want to add shared-memory processing support to the library using one of these compilers, you will need to install the OpenMP library on your system.

[3]

Please note that currently support for conda-based distributions is experimental, and further commands might be needed.

[4]

Both Nvidia CUB and googletest are downloaded and compiled automatically during the build process. As a result, they do not need to be installed manually.