
GPU to GPU Communication with MPI and Other Topics

GPUs are powerful devices, but one of their main weaknesses is the cost of transferring data to and from the host. A feature that appeared relatively recently, GPU-to-GPU communication, removes the need to send data back through the host, provided the hardware supports it.
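As a rough sketch of what this looks like in code (assuming a GPU-aware MPI build, HIP, and exactly two ranks with one GPU each; the buffer size and tag are arbitrary choices for illustration, not taken from the workshop material), the device pointer is handed straight to MPI:

```cpp
#include <hip/hip_runtime.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Allocate send and receive buffers directly in GPU memory.
    double *device_send = nullptr;
    double *device_recv = nullptr;
    hipMalloc(&device_send, 1024 * sizeof(double));
    hipMalloc(&device_recv, 1024 * sizeof(double));

    // With a GPU-aware MPI, device pointers are passed straight to MPI calls:
    // the library moves the data GPU-to-GPU, with no explicit hipMemcpy
    // staging through host memory.
    const int partner = 1 - rank; // assumes exactly two ranks
    MPI_Sendrecv(device_send, 1024, MPI_DOUBLE, partner, 0,
                 device_recv, 1024, MPI_DOUBLE, partner, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    hipFree(device_send);
    hipFree(device_recv);
    MPI_Finalize();
    return 0;
}
```

Without GPU-aware support, the same exchange would need an explicit copy to a host buffer before the send and another copy back after the receive.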

This workshop will cover how this can be achieved in MPI programs, as well as discuss other intermediate-level MPI topics.

You can join in-person OR virtually. This is a free workshop. Please register only if you plan to attend. 

Please join at 12:45pm (AWST) to ensure you can access the system. The class starts promptly at 1:15pm (AWST). 

What will I learn in this 3-hour, hands-on workshop?

This workshop revolves around MPI, used for distributed memory programming. For the GPU portion, we will also use HIP, which is used on Pawsey’s Setonix supercomputer. Join us to learn how to:

  • achieve GPU-to-GPU communications in MPI.
  • receive messages of unknown size.
  • use one-sided operations in MPI.
  • efficiently reuse known communication patterns.
  • use miscellaneous other MPI features that are good to know.

How about a more technical look at the agenda?

  • GPU-aware MPI
  • Probe-based message receiving (MPI_Probe / MPI_Get_count / rendezvous vs. eager protocols)
  • MPI RMA (MPI_Get / MPI_Put / MPI_Accumulate / MPI_Win_fence / MPI_Win_lock / MPI_Win_unlock / MPI_Win_post / MPI_Win_start / MPI_Win_complete / MPI_Win_wait)
  • Persistent communication requests (MPI_*_init / MPI_Start / MPI_Wait)
  • Miscellaneous features
    • Pack / unpack vs derived datatypes (MPI_Type_vector / MPI_Type_create_struct / MPI_Type_commit / MPI_Type_create_resized)
    • Finding neighbours vs virtual topologies programming and neighbourhood collective operations (MPI_Cart_* / MPI_Graph_* / MPI_Neighbor_*)
    • Shared memory and hybrid programming (MPI_Init_thread, MPI_Comm_split_type).

Pre-requisites:

Meet your Trainer!

Dr. Ludovic Capelli is a teaching fellow at EPCC, the high-performance computing institute of the University of Edinburgh, UK. He is dedicated full-time to HPC education, as part of the teaching team for both the on-campus and online versions of the MSc in HPC and the MSc in HPC with Data Science at EPCC. He focusses primarily on two major HPC technologies, OpenMP and MPI: he is a member of the OpenMP language committee and the MPI Forum, as well as the course organiser for the “Advanced Message-Passing Programming” module at EPCC.

Register Here: