
Distributed Kernel-Based Gradient Descent Algorithms

Bibliographic Details
Published in: Constructive Approximation, 2018-04, Vol. 47 (2), pp. 249-276
Main Authors: Lin, Shao-Bo; Zhou, Ding-Xuan
Format: Article
Language: English
Description
Summary: We study the generalization ability of distributed learning equipped with a divide-and-conquer approach and gradient descent algorithm in a reproducing kernel Hilbert space (RKHS). Using special spectral features of the gradient descent algorithms and a novel integral operator approach, we provide optimal learning rates of distributed gradient descent algorithms in probability and partly conquer the saturation phenomenon in the literature, in the sense that the maximum number of local machines guaranteeing the optimal learning rates does not vary once the regularity of the regression function goes beyond a certain quantity. We also find that additional unlabeled data can help relax the restriction on the number of local machines in distributed learning.
ISSN: 0176-4276 (print); 1432-0940 (electronic)
DOI: 10.1007/s00365-017-9379-1
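
As a rough illustration of the divide-and-conquer scheme described in the summary, below is a minimal Python sketch of distributed kernel-based gradient descent: the sample is split across m simulated local machines, each runs gradient descent on the empirical least-squares risk in the RKHS of a Gaussian kernel, and the m local estimators are averaged. This is not the paper's exact algorithm or analysis; the kernel choice, step size, iteration count (the early-stopping regularizer), and all function names are illustrative assumptions.

import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    # Gram matrix of a Gaussian (RBF) kernel between the rows of A and B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def local_gradient_descent(X, y, steps=200, eta=1.0, sigma=0.5):
    # One local machine: gradient descent on the empirical least-squares
    # risk over the RKHS.  With f = sum_i alpha_i K(x_i, .), each step is
    #   alpha <- alpha - (eta / n) * (K alpha - y),
    # where the number of iterations plays the role of the regularization
    # parameter (early stopping).
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(n)
    for _ in range(steps):
        alpha -= (eta / n) * (K @ alpha - y)
    return alpha

def distributed_gd_predict(X, y, X_test, m=4, steps=200, eta=1.0, sigma=0.5):
    # Divide-and-conquer: split the sample across m local machines, run
    # kernel gradient descent on each, and average the m local estimators.
    preds = np.zeros(len(X_test))
    for Xj, yj in zip(np.array_split(X, m), np.array_split(y, m)):
        alpha = local_gradient_descent(Xj, yj, steps, eta, sigma)
        preds += gaussian_kernel(X_test, Xj, sigma) @ alpha
    return preds / m

# Toy regression: y = sin(2*pi*x) + noise, 800 samples, 4 local machines.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(800, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(800)
X_test = np.linspace(0.0, 1.0, 200)[:, None]
f_hat = distributed_gd_predict(X, y, X_test)
print("test MSE:", np.mean((f_hat - np.sin(2 * np.pi * X_test[:, 0])) ** 2))

Varying m in this sketch makes the trade-off discussed in the abstract tangible: averaging over more machines cuts per-machine cost, but each local estimator sees fewer samples, which is why the theory bounds the number of machines compatible with the optimal learning rate.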