APPLY: International HPC Summer School 2023

21 December 2022 - 31 January 2023
12:00am - 11:59pm

Graduate students and postdoctoral scholars from institutions in Canada, Europe, Japan, Australia and the United States are invited to apply for the 12th International High Performance Computing (HPC) Summer School, to be held July 9-14, 2023, in Atlanta, Georgia, USA, hosted by the Extreme Science and Engineering Discovery Environment (XSEDE). The application deadline is 23:59 Anywhere on Earth (AoE) on January 31, 2023.

The summer school is sponsored by the Extreme Science and Engineering Discovery Environment (XSEDE), the Partnership for Advanced Computing in Europe (PRACE), the RIKEN Center for Computational Science (R-CCS), the Pawsey Supercomputing Research Centre (Pawsey), and the SciNet HPC Consortium.


In a nutshell

The summer school will familiarize the best students in computational sciences with major state-of-the-art aspects of HPC and Big Data Analytics for a variety of scientific disciplines, catalyze the formation of networks, provide advanced mentoring, facilitate international exchange and open up further career options.

Leading Canadian, European, Japanese, Australian and American computational scientists and HPC technologists will offer instruction in parallel sessions on a variety of topics such as:

  • HPC and Big Data challenges in major scientific disciplines: You will receive short, high-level introductions to a variety of science areas, with a focus on HPC-related simulation approaches and algorithmic challenges in the respective fields.
  • Shared-memory programming: Using OpenMP, you will learn how to use the multiple cores present in modern processors, as well as related issues and optimizations.
  • Distributed-memory programming: For those who already know the basics of programming with the Message Passing Interface (MPI), you will learn how to optimize performance based on the way the MPI library works internally, as well as more advanced MPI functionality.
  • GPU programming: Building on the OpenMP techniques taught earlier, you will learn how to program graphics processing units (GPUs), which are important enablers of modern scientific computing and machine learning.
  • Performance analysis and optimization on modern CPUs and GPUs: You will learn the basics of performance engineering, how to collect profiles and traces, and how to identify potential performance bottlenecks based on the collected profiles and traces.
  • Software engineering: You will learn state-of-the-art technical approaches and best practices for developing and maintaining scientific software.
  • Numerical libraries: You will learn how to take advantage of already implemented algorithms in your code.
  • Big Data analytics: You will learn how to use the powerful and popular Spark framework to analyze very large data sets, and how to integrate this analysis with machine learning techniques.
  • Deep learning: You will extend the machine learning techniques you have already learned to the leading edge with deep learning (also known as neural networks), using the standard TensorFlow framework.
  • Scientific visualization: You will learn how to use 3D visualization tools for large scientific data sets.
  • Canadian, European, Japanese, Australian and U.S. HPC-infrastructures: You will learn about resources available in your part of the world, as well as how to gain access to these resources.

The expense-paid program will benefit scholars from Canadian, European, Japanese, Australian and U.S. institutions who use advanced computing in their research. The ideal candidate will have many of the following qualities; however, this list is not meant to be a “checklist”, and applicants need not meet every criterion:

  • Familiarity with HPC: not necessarily an HPC expert, but rather a scholar who could benefit from incorporating advanced computing tools and methods into their existing computational work
  • A graduate student with a strong research plan, or a postdoctoral fellow in the early stages of their research career
  • Regular practice with, or interest in, parallel programming
  • Applicants from any research discipline are welcome, provided their research activities include computational work.