The MSRG members working on distributed deep learning focus on three topics: performance optimization, privacy-preserving and federated learning systems, and graph processing.
Our performance optimization efforts span designing efficient data preprocessing pipelines, cost-optimal geo-distributed training of large models, and bringing foundation models to embedded hardware for privacy-preserving training.
Our federated learning efforts center on the energy-optimal utilization of modern AI accelerators for billion-parameter models. We also work at the intersection of law and technology in federated learning to chart future research priorities.
A key application area for distributed learning in our group is Graph Neural Networks (GNNs). In our work on GNN systems, we focus on optimizing resource utilization for graph (pre-)processing and GNN training.
The group comprises a professor, a PhD student, an MASc student, and an undergraduate student in Toronto, Canada, as well as three PhD students in Munich, Germany.