This tutorial will get you a fresh build of PyTorch v1.0.0 on Fedora 29 with CUDA 10.0 and cuDNN 7.4.2 . You should be able to complete it in under 1 hour (compilation included).

First you will have to install CUDA 10.0 and cuDNN 7.4.2 from NVIDIA’s website; both are required by PyTorch if you want access to GPU features. I suggest installing everything in a dedicated directory, for example /opt/nvidia/cuda-10.0/, so that you have more flexibility in managing your dependencies.

Install CUDA 10.0 and cuDNN 7.4.2

Install CUDA using the visual installer with sudo ./cuda_10.0.130_410.48_linux.run, or directly from the command line with:

$ sudo ./cuda_10.0.130_410.48_linux.run --toolkitpath=/opt/nvidia/cuda-10.0/ --override --no-drm --silent --toolkit --verbose

Once CUDA is installed, the target directory should look like this:

$ ls -l /opt/nvidia/cuda-10.0/
total 72
drwxr-xr-x  3 root root 4096 Dec 18 10:44 bin
drwxr-xr-x  5 root root 4096 Dec 18 10:44 doc
drwxr-xr-x  5 root root 4096 Dec 18 10:44 extras
drwxr-xr-x  6 root root 4096 Dec 18 10:50 include
drwxr-xr-x  5 root root 4096 Dec 18 10:44 jre
drwxr-xr-x  3 root root 4096 Dec 18 10:51 lib64
drwxr-xr-x  8 root root 4096 Dec 18 10:44 libnsight
drwxr-xr-x  7 root root 4096 Dec 18 10:44 libnvvp
drwxr-xr-x  8 root root 4096 Dec 18 10:44 NsightCompute-1.0
drwxr-xr-x  2 root root 4096 Dec 18 10:44 nsightee_plugins
drwxr-xr-x  3 root root 4096 Dec 18 10:44 nvml
drwxr-xr-x  7 root root 4096 Dec 18 10:44 nvvm
drwxr-xr-x  2 root root 4096 Dec 18 10:44 pkgconfig
drwxr-xr-x 11 root root 4096 Dec 18 10:44 samples
drwxr-xr-x  3 root root 4096 Dec 18 10:44 share
drwxr-xr-x  2 root root 4096 Dec 18 10:44 src
drwxr-xr-x  2 root root 4096 Dec 18 10:44 tools
-rw-r--r--  1 root root   22 Dec 18 10:44 version.txt

Then copy the contents of the cuDNN archive into the corresponding CUDA directories:

$ ls -l /opt/nvidia/cudnn/
total 48
drwxr-xr-x 2 root root  4096 Dec 18 10:46 include
drwxr-xr-x 2 root root  4096 Dec 18 10:46 lib64
-r--r--r-- 1 root root 38963 Oct 10 18:53 NVIDIA_SLA_cuDNN_Support.txt

$ cp -r /opt/nvidia/cudnn/include/* /opt/nvidia/cuda-10.0/include
$ cp -r /opt/nvidia/cudnn/lib64/* /opt/nvidia/cuda-10.0/lib64
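As an optional sanity check (the path assumes the copy above succeeded), you can grep the cuDNN version macros from the header you just copied:

```shell
# should report 7, 4 and 2 for cuDNN 7.4.2
$ grep -E 'define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' /opt/nvidia/cuda-10.0/include/cudnn.h
```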

Export environment variables

In your ~/.bashrc file, add the following variables so that CUDA and NVIDIA’s compiler tools can be detected during the compilation of PyTorch.

# CUDA
MY_CUDA=/opt/nvidia/cuda-10.0
export CUDA_INC_PATH=$MY_CUDA/include
export CUDA_INCLUDE_DIRS=$MY_CUDA/include
export LD_LIBRARY_PATH=$MY_CUDA/lib64:$LD_LIBRARY_PATH
export PATH=$MY_CUDA/bin:$PATH
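After reloading your shell (source ~/.bashrc, or open a new terminal), the CUDA directories should take precedence on the search paths. A quick way to check, assuming the install location above:

```shell
$ source ~/.bashrc
# the CUDA bin directory should now be first on PATH
$ echo $PATH | tr ':' '\n' | head -1
/opt/nvidia/cuda-10.0/bin
# and nvcc should resolve inside it
$ which nvcc
/opt/nvidia/cuda-10.0/bin/nvcc
```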

Install Anaconda

PyTorch officially provides pre-compiled binaries to make the installation process smoother, either through pip or conda. I recommend using Miniconda, which is a lighter version of Anaconda. During the install, specify the target directory where your virtual environments (libraries, binaries) will be located.

Dedicated PyTorch virtual environment

It’s now time to install the dependencies required by PyTorch. Let’s first move to a dedicated environment for the compilation of v1.0.0.

$ conda create --name pytorch-v1.0
# or if you need a precise python version
$ conda create --name pytorch-v1.0 python=3.6
# Load the virtual env
$ source activate pytorch-v1.0

# Install dependencies
(pytorch-v1.0)$ conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing
(pytorch-v1.0)$ conda install -c pytorch magma-cuda100

CMake needs to know where these dependencies are located so it can find them during the compilation. You can find the root of your virtual env with:

(pytorch-v1.0)$ which python
~/deps/miniconda3/envs/pytorch-v1.0/bin/python

Here ~/deps/miniconda3/envs/pytorch-v1.0/ is the root directory of the pytorch-v1.0 env. Set the CMAKE_PREFIX_PATH env variable accordingly:

# update the dir with your path
(pytorch-v1.0)$ export CMAKE_PREFIX_PATH=~/deps/miniconda3/envs/pytorch-v1.0/

Download PyTorch source

(pytorch-v1.0)$ git clone --recursive https://github.com/pytorch/pytorch
(pytorch-v1.0)$ cd pytorch

# check out the v1.0.0 release (this leaves you on a detached HEAD, which is fine for building)
(pytorch-v1.0)$ git checkout remotes/origin/v1.0.0

GCC and G++

On most systems, the installed gcc and g++ versions are not supported by CUDA (CUDA 10.0 only supports Fedora 27, with gcc 7.3.1). To work around this, we will rely on two packages maintained by the great negativo17 repository.

$ sudo dnf config-manager --add-repo=https://negativo17.org/repos/fedora-nvidia.repo
# install GCC and G++
$ sudo dnf install cuda-gcc.x86_64 cuda-gcc-c++.x86_64

Build (and wait for 30 min)

Now you are ready to compile PyTorch and all its sub-projects. Don’t forget to specify which compilers you want to use:

(pytorch-v1.0)$ CC=cuda-gcc CXX=cuda-g++ python setup.py install

Enjoy

(pytorch-v1.0)$ cd
(pytorch-v1.0)$ python
Python 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 19:16:44) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.Tensor(10,10).random_(10).cuda()
>>> x.sum()
tensor(425., device='cuda:0')