Data Parallelism: How to Train Deep Learning Models on Multiple GPUs

GRAPHIC: "NVIDIA Deep Learning Institute at the University of Florida," with a photo of the HiPerGator supercomputer in the background.

The NVIDIA AI Technology Center at the University of Florida is offering an instructor-led, deep learning institute workshop in April: Data Parallelism: How to Train Deep Learning Models on Multiple GPUs.

Workshop Dates: April 11-12, 2024 (Thursday and Friday), from 1:00-5:00 p.m.

Registration Link: https://forms.gle/KiNxdjqxJ7AZCZFk6

The workshop will be held over two days (four hours each day) in Malachowsky Hall’s NVIDIA Auditorium. Its focus is on techniques for data-parallel deep learning training on multiple GPUs to shorten the training time required for data-intensive applications. Working with deep learning tools, frameworks, and workflows to perform neural network training, attendees will learn how to decrease model training time by distributing data to multiple GPUs, while retaining the accuracy of training on a single GPU. The full course outline may be found on this NVIDIA website page.

The course is FREE and open to the university community, but pre-registration is required, as is experience with Python. Technologies used in the workshop are PyTorch, PyTorch Distributed Data Parallel (DDP), and NCCL.
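As a taste of the technique the workshop covers, here is a minimal sketch (not part of the workshop materials) of data-parallel training with PyTorch's DistributedDataParallel. It assumes launch via `torchrun --nproc_per_node=<num_gpus> ddp_example.py`; the toy model, dataset, and hyperparameters are illustrative only. On GPUs, DDP uses the NCCL backend to all-reduce gradients; the sketch falls back to the "gloo" backend so it also runs on CPU.

```python
# Minimal PyTorch DDP sketch (illustrative, not the workshop's code).
# Launch with: torchrun --nproc_per_node=<num_gpus> ddp_example.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def train():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE; the defaults below
    # allow a plain single-process run for testing.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    rank = int(os.environ.get("RANK", 0))
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))

    use_cuda = torch.cuda.is_available()
    backend = "nccl" if use_cuda else "gloo"  # NCCL for GPUs, gloo for CPU
    dist.init_process_group(backend, rank=rank, world_size=world_size)
    device = torch.device(f"cuda:{local_rank}" if use_cuda else "cpu")

    # Toy regression data; DistributedSampler gives each rank a distinct shard,
    # which is what shortens wall-clock time as GPUs are added.
    x = torch.randn(256, 10)
    y = x.sum(dim=1, keepdim=True)
    dataset = TensorDataset(x, y)
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    model = nn.Linear(10, 1).to(device)
    model = DDP(model, device_ids=[local_rank] if use_cuda else None)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()  # DDP all-reduces gradients across ranks here
            optimizer.step()

    dist.destroy_process_group()
    return loss.item()


if __name__ == "__main__":
    train()
```

Because every rank sees a different data shard but averages gradients each step, the effective batch size grows with the number of GPUs while the model replicas stay in sync, which is how training time drops without sacrificing single-GPU accuracy.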

If you have any questions about this workshop, please email the instructor, NVIDIA Data Scientist Yunchao Yang (yunchaoyang@ufl.edu).