Working with deep learning for a long time means installing many different packages and third-party tools, and getting them to run on the GPU can cause plenty of problems. Usually the problem is a version mismatch. That's why I thought I'd write this note. Since I use Anaconda, I will discuss how to avoid these problems in Anaconda.
Here I will use the Anaconda command prompt instead of the Anaconda GUI, so these steps work on both Linux and Windows. I haven't tested them on a Mac, but I expect they will work there as well. On Windows, open the Anaconda Prompt from the Start menu; on Linux and Mac, use the terminal.
Creating an Environment in Anaconda
conda create -n testenv python=3.7
conda activate testenv
Here testenv is the name of the environment we want to create, and python=3.7 selects Python version 3.7. Make sure you have activated the right environment before installing anything.
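As a quick sanity check (a minimal sketch; the exact output depends on your installation), you can confirm from inside the interpreter which Python the environment is using:

```python
import sys

# Inside testenv this should report Python 3.7.x
print(sys.version_info[:3])
# The interpreter path should point into the testenv environment
print(sys.executable)
```

If the version or path looks wrong, the environment was probably not activated before launching Python.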
Tensorflow with GPU installed
conda install tensorflow-gpu==1.12 cudatoolkit==10.1 cudnn=7.6 h5py
Here tensorflow-gpu==1.12 pins the TensorFlow GPU build to version 1.12, while cudatoolkit==10.1 and cudnn=7.6 pin cudatoolkit and cuDNN to versions 10.1 and 7.6 respectively.
If everything goes well, it will install successfully. Now let's check that it is installed correctly by running the following code in a Jupyter notebook. If it imports and runs without errors, TensorFlow is properly set up with the GPU.
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)
Installing PyTorch with GPU
conda install pytorch torchvision cuda90 -c pytorch
Here cuda90 indicates CUDA version 9.0.
You can also pin the version to install a specific release of PyTorch.
conda install pytorch=0.4.1 torchvision cuda90 -c pytorch
This installs PyTorch version 0.4.1.
To verify that it was installed correctly, you can continue with the code below.
import torch
torch.cuda.get_device_name(0)
torch.cuda.is_available()
torch.cuda.device_count()
If you want to see the available and used memory, try this piece of code-
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_cached(0)/1024**3, 1), 'GB')
Hopefully you've been able to install everything properly. Thanks, everyone.