Course Overview
This workshop provides a comprehensive introduction to general-purpose GPU programming with CUDA. You'll learn how to write, compile, and run GPU-accelerated code, leverage CUDA core libraries to harness the power of massive parallelism provided by modern GPU accelerators, optimize memory migration between CPU and GPU, and implement your own algorithms. At the end of the workshop, you'll have access to additional resources to create your own GPU-accelerated applications.
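For a concrete sense of the style of code the workshop covers, here is a minimal sketch (illustrative, not taken from the course materials) of a CUDA kernel that is written, compiled with nvcc, and run from the command line; the kernel name and launch configuration are placeholders.

```cuda
#include <cstdio>

// A trivial kernel: each GPU thread prints its global index.
// __global__ marks a function that runs on the GPU but is launched from the CPU.
__global__ void hello_kernel()
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    printf("Hello from GPU thread %d\n", idx);
}

int main()
{
    // Launch 2 blocks of 4 threads each, then wait for the GPU to finish.
    hello_kernel<<<2, 4>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

This would be compiled and run with, for example, `nvcc hello.cu -o hello && ./hello`.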
Please note that once a booking has been confirmed, it is non-refundable: after you have confirmed your seat for an event, the booking cannot be cancelled and no refund will be issued, regardless of attendance.
Prerequisites
- Basic C++ competency, including familiarity with lambda expressions, loops, conditional statements, functions, standard algorithms, and containers.
- No previous knowledge of CUDA programming is assumed.
Course Objectives
At the conclusion of the workshop, you'll have an understanding of the fundamental concepts and techniques for accelerating C++ code with CUDA and be able to:
- Write and compile code that runs on the GPU
- Optimize memory migration between CPU and GPU
- Leverage powerful parallel algorithms that simplify adding GPU acceleration to your code
- Implement your own parallel algorithms by directly programming GPUs with CUDA kernels (a minimal sketch follows this list)
- Utilize concurrent CUDA streams to overlap memory traffic with compute
- Assess where, when, and how best to add CUDA acceleration to existing CPU-only applications
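The sketch below is an assumed example rather than an excerpt from the workshop materials; it illustrates two of the objectives above: a hand-written CUDA kernel and managed (unified) memory with an explicit prefetch to control data migration between CPU and GPU. The array size, names, and launch configuration are placeholders.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Double every element of the array in parallel; a grid-stride loop lets
// any grid size cover the whole array.
__global__ void double_elements(float *data, int n)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        data[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 20;
    float *data;

    // Managed memory is accessible from both CPU and GPU; the driver migrates
    // pages on demand, and prefetching reduces page-fault stalls.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;          // initialize on the CPU

    int device;
    cudaGetDevice(&device);
    cudaMemPrefetchAsync(data, n * sizeof(float), device); // migrate to the GPU before the kernel runs

    double_elements<<<256, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);                    // read back on the CPU; pages migrate back on access
    cudaFree(data);
    return 0;
}
```

Choosing when to rely on on-demand migration versus explicit prefetching is one of the memory-management trade-offs examined in the workshop.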