CUDA stands for Compute Unified Device Architecture, and it is a parallel computing platform and programming model developed by NVIDIA. CuDNN stands for CUDA Deep Neural Network library, and it is a GPU-accelerated library of primitives for deep learning.
Using CUDA and CuDNN can significantly boost the performance and efficiency of your deep learning applications, especially if you have a powerful GPU like the Tesla T4.
In this article, we will show you how to install CUDA and CuDNN with Tesla T4 on Ubuntu 22.04. We will assume that you have a Tesla T4 GPU installed in your system, that you have basic knowledge of Linux commands and the terminal, and that you have an internet connection with about 50 GB of free disk space to download and install the required files.
Update and Upgrade Ubuntu System
Before we start installing CUDA and CuDNN, we need to make sure that our Ubuntu system is up to date and has the latest packages installed. This will ensure that we have the best compatibility and security for our system. To do this, we need to open a terminal window and run the following commands:
sudo apt update
sudo apt upgrade
The first command will update the list of available packages from the repositories, and the second command will upgrade the installed packages to their latest versions. You might need to enter your password and confirm some prompts during this process.
Verify GPU
Verify that the GPU is installed in your system using the command below.
lspci | grep -i nvidia
If the GPU is listed, you can proceed with the installation.
Output
00:04.0 3D Controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
Install NVIDIA Driver for Tesla T4
The next step is to install the NVIDIA driver for our Tesla T4 GPU. The NVIDIA driver is a software component that allows our system to communicate with the GPU and use its features. To install the NVIDIA driver, we need to first check the driver version and compatibility for our GPU. We can do this by visiting the NVIDIA Driver Downloads page on the NVIDIA website, and selecting our product type (Tesla), product series (Tesla T4), operating system (Linux 64-bit), language (English), and clicking on Search.

Once you search, you will see details similar to the ones below.
| Name | Details |
|---|---|
| Version | 535.86.10 |
| Release Date | 2023.7.31 |
| Operating System | CBL Mariner, Linux 64-bit |
| CUDA Toolkit | 12.2 |
| Language | English (US) |
| File Size | 324.78 MB |
Now you can start installing the NVIDIA driver.
First, install the dependencies needed to build the NVIDIA driver with the commands below.
sudo apt install build-essential
sudo apt install linux-headers-$(uname -r)
The NVIDIA driver package is available in the default repositories of most Ubuntu releases; if it is not available in yours, you will need to download the .deb file from NVIDIA and install it manually.
sudo apt install nvidia-driver-535
The installation process will start and ask us some questions along the way. We can follow the instructions on the screen and accept the default options or customize them as per our preference. The installation might take some time depending on our system configuration.
After the installation is complete, we need to reboot our system for the changes to take effect. We can do this by running:
sudo reboot
To verify that the driver installation was successful, we can run:
nvidia-smi
This will show us some information about our GPU, such as its name, driver version, memory usage, temperature, etc.
Install CUDA Toolkit for Ubuntu 22.04
The next step is to install the CUDA toolkit for Ubuntu 22.04. The CUDA toolkit is a collection of tools and libraries that enable us to develop and run CUDA applications on our GPU. It includes the CUDA compiler (nvcc), the CUDA runtime library (cudart), the basic linear algebra library (cuBLAS), the dense and sparse solver library (cuSOLVER), the random number generation library (cuRAND), and many more. Note that the CuDNN library is not part of the toolkit; it is installed separately, as described later in this article.
To install the CUDA toolkit, we first need to download the CUDA runfile installer for Ubuntu 22.04. We can do this by visiting the CUDA Toolkit Downloads page on the NVIDIA website, selecting our operating system (Linux), architecture (x86_64), distribution (Ubuntu), version (22.04), and installer type (runfile (local)), and clicking on Download.
You will see the command to install CUDA. We need to follow these instructions carefully and run the commands as shown. For example, at the time of writing this article, the commands are:
wget https://developer.download.nvidia.com/compute/cuda/12.2.1/local_installers/cuda_12.2.1_535.86.10_linux.run
sudo sh cuda_12.2.1_535.86.10_linux.run
These commands will download the CUDA runfile installer for Ubuntu 22.04 and run it, which will allow us to install the CUDA toolkit.
Follow the on-screen instructions:
- Accept the license agreement by typing accept.
- Unselect the Driver entry, since we already installed the driver in the previous step. Use the arrow keys to move and the space bar to select or unselect; the Driver entry should not have an X mark next to it.
- Move the arrow key down to Install and press Enter.
After the installation is complete, we need to set up the environment variables and paths for CUDA, so that our system can find and use it properly.
CUDA is now installed in the /usr/local/cuda-12.2 location. To make the path version-independent, configure it by symlinking the directory:
sudo ln -snf /usr/local/cuda-12.2 /usr/local/cuda
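The effect of `ln -snf` can be sketched in a throwaway directory (the /tmp paths below are illustrative stand-ins for /usr/local, so no sudo is needed):

```shell
# Simulate the version-specific install directory.
mkdir -p /tmp/cuda-demo/cuda-12.2
# -s: symbolic link, -n: treat an existing link as a plain file,
# -f: replace it. Re-running after a toolkit upgrade simply
# retargets the link at the new version directory.
ln -snf /tmp/cuda-demo/cuda-12.2 /tmp/cuda-demo/cuda
readlink /tmp/cuda-demo/cuda   # -> /tmp/cuda-demo/cuda-12.2
```

Because tools and build scripts can reference the stable /usr/local/cuda path, upgrading CUDA later only requires pointing the symlink at the new directory.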
To verify that the CUDA toolkit installation was successful, we can run:
nvcc --version
This will show us some information about our CUDA compiler, such as its version, release date, etc.
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Aug__26_17:16:06_PDT_2023
Cuda compilation tools, release 12.2, V12.2.1
Build cuda_12.2.r12.2/compiler.32688072_0
If we encounter any issues during or after the CUDA toolkit installation, we can refer to the CUDA Installation Guide for Linux or the CUDA Toolkit Documentation for troubleshooting tips.
Install CuDNN Library for Ubuntu 22.04
The next step is to install the CuDNN library for Ubuntu 22.04. The CuDNN library is a GPU-accelerated library of primitives for deep learning, such as convolution, pooling, activation, normalization, etc. It is designed to work with frameworks like TensorFlow, PyTorch, MXNet, etc.
To install the CuDNN library, we first need to download the CuDNN package for Ubuntu 22.04 and CUDA 12.2. We can do this by visiting the CuDNN Downloads page and downloading the latest version for Linux x86_64 (Tar).

We will need to sign in or create an NVIDIA Developer account to access the download. Once the archive has been downloaded, extract it and copy its contents into the CUDA directory. For example, at the time of writing this article, the commands are:
tar -xvf cudnn-linux-x86_64-8.9.3.28_cuda12-archive.tar.xz
cd cudnn-linux-x86_64-8.9.3.28_cuda12-archive
sudo cp include/cudnn*.h /usr/local/cuda-12.2/include
sudo cp lib/libcudnn* /usr/local/cuda-12.2/lib64
sudo chmod a+r /usr/local/cuda-12.2/include/cudnn*.h /usr/local/cuda-12.2/lib64/libcudnn*
Now you have cuDNN installed.
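On the real system you can confirm which version was copied by grepping the version macros in the cuDNN headers. The sketch below demonstrates the same check against a mock header in /tmp (the path and the #define values are illustrative; they mirror the 8.9.3 archive used in this article):

```shell
# Mock header standing in for /usr/local/cuda-12.2/include/cudnn_version.h.
cat > /tmp/cudnn_version.h <<'EOF'
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 3
EOF
# The same grep, pointed at the real header after the copy step,
# prints the installed cuDNN version.
grep CUDNN_MAJOR -A 2 /tmp/cudnn_version.h
```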
Once the installation is complete, update the environment variables by adding the following lines to ~/.bashrc:
export CUDA_HOME=/usr/local/cuda-12.2
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
export PATH=${CUDA_HOME}/bin:${PATH}
Activate the environment variables:
source ~/.bashrc
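A quick sanity check that the variables are set correctly (shown here with the exports inlined so the check works in any shell; appending the old ${LD_LIBRARY_PATH} preserves any paths that were already configured):

```shell
export CUDA_HOME=/usr/local/cuda-12.2
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
export PATH=${CUDA_HOME}/bin:${PATH}
# Both variables should now mention the CUDA prefix.
echo "$LD_LIBRARY_PATH" | grep -o '/usr/local/cuda-12.2/lib64'
echo "$PATH" | grep -o '/usr/local/cuda-12.2/bin'
```

If either grep prints nothing, the exports did not take effect — re-check ~/.bashrc and open a new terminal.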
Conclusion
In this article, we have shown you how to install CUDA and CuDNN with Tesla T4 on Ubuntu 22.04. By following these steps, you should be able to use your Tesla T4 GPU for developing and running deep learning applications on your Ubuntu system.
We hope you have found this article useful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you and help you with any issues you might encounter.
FAQs
What are the benefits of using CUDA and CuDNN for deep learning?
CUDA provides a platform and programming model for general-purpose GPU computing, while CuDNN provides highly tuned GPU implementations of common deep learning primitives such as convolution and pooling. Together they allow frameworks like TensorFlow and PyTorch to run training and inference far faster than on a CPU alone.
What are the differences between Tesla T4 and other GPUs?
Tesla T4 is a data-center GPU aimed at inference workloads that delivers up to 8.1 TFLOPS of single-precision (FP32) performance, 65 TFLOPS of mixed-precision (FP16) performance, and 130 TOPS of INT8 tensor performance.