Presentation

3S in Distributed Graph Neural Networks: Sparse Communication, Sampling, and Scalability
Presenter
Description
This talk will focus on distributed-memory parallel algorithms for graph neural network (GNN) training. We will first show how sparse matrix primitives can be used to parallelize mini-batch training based on node-wise and layer-wise sampling. We will then illustrate techniques based on sparsity-aware sparse matrix times dense matrix multiplication (SpMM) algorithms that accelerate both full-graph and mini-batch sampling-based training.
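For readers unfamiliar with the connection, here is a minimal sketch (not part of the abstract, and not the speakers' implementation) of the core idea, assuming a toy random graph and scipy.sparse: a GNN layer's neighborhood aggregation is an SpMM, and node-wise mini-batch sampling amounts to restricting that SpMM to the sampled rows of the adjacency matrix.

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n_nodes, in_dim, out_dim = 1000, 64, 32

# Toy sparse adjacency matrix (CSR) standing in for the input graph.
A = sp.random(n_nodes, n_nodes, density=0.01, format="csr", random_state=0)
H = rng.standard_normal((n_nodes, in_dim))   # dense node features
W = rng.standard_normal((in_dim, out_dim))   # dense layer weights

# Full-graph layer: a dense GEMM (H @ W) followed by an SpMM (A @ .), then ReLU.
H_full = np.maximum(A @ (H @ W), 0.0)

# Node-wise mini-batch: slicing CSR rows restricts the SpMM to sampled nodes.
batch = rng.choice(n_nodes, size=128, replace=False)
H_batch = np.maximum(A[batch, :] @ (H @ W), 0.0)

print(H_full.shape, H_batch.shape)   # (1000, 32) (128, 32)

In a distributed setting, the talk's theme is that A, H, and the resulting SpMM can be partitioned across processes, so communication volume is governed by the sparsity pattern of A.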
Time
Tuesday, June 4, 17:30 - 18:00 CEST
Location
HG D 1.2
Event Type
Minisymposium
Domains
Computational Methods and Applied Mathematics