UBC Theses and Dissertations
Communication-efficient algorithms for decentralized multi-task learning
Kuang, Yao
Abstract
Distributed optimization requires nodes to coordinate, yet full synchronization scales poorly. When multiple nodes collaborate through pairwise regularizers, standard methods require a number of communications proportional to the total number of regularizers per iteration. We propose randomized local coordination, in which each node independently samples one regularizer uniformly and coordinates only with nodes sharing that term. This approach exploits partial separability, where each regularizer depends on a subset of nodes. For graph-guided regularizers, where each regularizer depends on a pair of nodes, expected communication drops to exactly 2 messages per iteration regardless of the network topology. By replacing the proximal map of the sum with the proximal map of a single randomly selected regularizer, the method preserves convergence while eliminating global coordination. We show that this method achieves a convergence rate comparable to that of centralized stochastic gradient descent in various settings. Experiments validate both convergence rates and communication efficiency across synthetic and real-world datasets.
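A minimal illustrative sketch of the idea described above, under assumptions not stated in the abstract: each node i is assumed to hold a least-squares task f_i(x_i) = ½‖A_i x_i − b_i‖², and neighbouring tasks are coupled by a graph-guided penalty λ‖x_i − x_j‖₂ on each edge. The ring topology, step size, closed-form pairwise proximal map, and all variable names are placeholders chosen for illustration; the sketch also samples one edge per iteration globally rather than having each node sample independently, so it is not the thesis's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem setup (assumed, not from the thesis):
# node i holds a least-squares task f_i(x_i) = 0.5 * ||A_i x_i - b_i||^2,
# and each edge (i, j) carries a graph-guided penalty lam * ||x_i - x_j||_2.
n_nodes, dim, m = 4, 5, 20
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]           # assumed ring topology
A = [rng.standard_normal((m, dim)) for _ in range(n_nodes)]
x_true = rng.standard_normal(dim)
b = [A_i @ x_true + 0.1 * rng.standard_normal(m) for A_i in A]
lam, step = 0.1, 1e-2

def prox_pair_diff(a, c, t):
    """Proximal map of t * lam * ||u - v||_2 over the pair (u, v) at (a, c).

    The mean (a + c) / 2 is unchanged; the difference a - c is block
    soft-thresholded with threshold 2 * t * lam.
    """
    mean, diff = (a + c) / 2.0, a - c
    norm = np.linalg.norm(diff)
    scale = max(0.0, 1.0 - 2.0 * t * lam / norm) if norm > 0 else 0.0
    return mean + scale * diff / 2.0, mean - scale * diff / 2.0

x = [np.zeros(dim) for _ in range(n_nodes)]
for it in range(2000):
    # Local stochastic gradient step on each node's own loss (one sampled row).
    for i in range(n_nodes):
        j = rng.integers(m)
        x[i] -= step * A[i][j] * (A[i][j] @ x[i] - b[i][j])
    # Randomized local coordination: sample ONE pairwise regularizer and
    # apply its proximal map; only the two incident nodes exchange messages.
    i, j = edges[rng.integers(len(edges))]
    x[i], x[j] = prox_pair_diff(x[i], x[j], step)
```

In this sketch only one pair of nodes exchanges parameters per iteration, which mirrors the abstract's claim that, for pairwise graph-guided regularizers, expected communication is two messages per iteration regardless of the network topology.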
Item Metadata

Title | Communication-efficient algorithms for decentralized multi-task learning
Creator | Kuang, Yao
Publisher | University of British Columbia
Date Issued | 2025
Description | Same as the abstract above.
Language | eng
Date Available | 2025-09-05
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0450062
Degree Grantor | University of British Columbia
Graduation Date | 2025-11
Scholarly Level | Graduate
Aggregated Source Repository | DSpace