The MSRG members working on distributed deep learning focus on three topics: performance optimization, privacy-preserving and federated learning systems, and graph processing.
At MSRG, we push the boundaries of distributed deep learning systems. Modern AI applications demand massive computational resources and sophisticated distributed infrastructure for large-scale training and inference. Our research tackles fundamental challenges in this space: resource allocation, scheduling, and system-level optimizations that maximize performance while minimizing cost. In edge computing, we develop federated learning approaches tailored to resource-constrained and embedded devices, enabling AI to run efficiently on such hardware. Our work spans the entire stack, from low-level system optimizations to high-level distributed learning algorithms, yielding significant speedups and efficiency gains in real-world deployments.
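To make the federated learning setting concrete, here is a minimal FedAvg-style sketch: each client trains locally on its own data, and a server averages the resulting models weighted by client dataset size. This is an illustrative toy (a least-squares model with synthetic data), not MSRG's system; all names and parameters are assumptions.

```python
import numpy as np

def local_update(w, data, lr=0.1, steps=5):
    """One client's local training: a few gradient steps on a toy
    least-squares objective ||Xw - y||^2, standing in for an
    on-device model."""
    X, y = data
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_data):
    """One FedAvg round: clients train locally on private data;
    the server averages models weighted by dataset size. Raw data
    never leaves the client."""
    updates, sizes = [], []
    for data in client_data:
        updates.append(local_update(w_global.copy(), data))
        sizes.append(len(data[1]))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

# Synthetic clients with heterogeneous dataset sizes.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (20, 40, 80):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
print(w)  # converges toward w_true = [2, -1]
```

Real systems add compression, client sampling, and privacy mechanisms (e.g. secure aggregation or differential privacy) on top of this averaging loop.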
Graph-based learning presents unique systems challenges due to irregular data-access patterns, complex dependencies, and massive graph sizes. Our research develops novel partitioning and sampling techniques, as well as system optimizations, that significantly accelerate these workloads and make graph learning practical at scale. We build distributed graph processing systems that efficiently handle billions of nodes and edges, and design specialized optimizations for graph neural network training and inference.
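One of the sampling techniques mentioned above can be sketched as GraphSAGE-style neighbor sampling: instead of aggregating over every neighbor, each mini-batch expands from its seed nodes by drawing at most a fixed fanout of neighbors per layer, so the computation graph stays bounded on very large graphs. This is a simplified illustration, not MSRG's implementation; the graph and function names are made up.

```python
import random

def sample_neighborhood(adj, seeds, fanouts, rng=random):
    """Layer-wise neighbor sampling for a GNN mini-batch.

    adj:     adjacency list {node: [neighbors]}
    seeds:   the mini-batch's target nodes
    fanouts: max neighbors to sample per node at each layer
    Returns one node set per layer, innermost (seeds) first.
    """
    layers = [set(seeds)]
    frontier = set(seeds)
    for fanout in fanouts:
        nxt = set()
        for u in frontier:
            nbrs = adj.get(u, [])
            # Cap the expansion at `fanout` sampled neighbors.
            nxt.update(rng.sample(nbrs, min(fanout, len(nbrs))))
        layers.append(nxt)
        frontier = nxt
    return layers

# Toy graph: a 0-1-2-3 path plus a hub node 4 connected to all.
adj = {
    0: [1, 4], 1: [0, 2, 4], 2: [1, 3, 4],
    3: [2, 4], 4: [0, 1, 2, 3],
}
layers = sample_neighborhood(adj, seeds=[0], fanouts=[2, 2])
print([sorted(layer) for layer in layers])
```

Distributed systems combine this with graph partitioning so that most sampled neighbors are local to the worker holding the seed node, reducing cross-machine traffic.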
Professor (Toronto, Canada)
PhD Student (Toronto, Canada)
PhD Student (Toronto, Canada)
PhD Student (Munich, Germany)
PhD Student (Munich, Germany)
PhD Student (Munich, Germany)
MASc Student (Toronto, Canada)
Undergraduate Student (Toronto, Canada)