Distributed Machine Learning and Gradient Optimization

Synopsis

This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems, so implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic optimizations and careful system implementations, the book introduces three essential techniques for designing gradient optimization algorithms that train distributed machine learning models: parallel strategies, data compression, and synchronization protocols.

Written in a tutorial style, it covers a range of topics, from fundamental background to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
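
To make the three concepts named in the synopsis concrete, the following is a minimal, hypothetical sketch (not taken from the book): it simulates a data-parallel strategy by partitioning data across workers, uses top-k sparsification as one common example of gradient compression, and applies a bulk-synchronous averaging step as the synchronization protocol. The linear-regression task, function names, and hyperparameters are all illustrative assumptions.

```python
# Illustrative sketch only: simulated workers in one process, not the book's code.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data, partitioned across simulated workers
# (the "parallel strategy": each worker holds one data shard).
n_workers, n_samples, n_features = 4, 400, 10
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.01 * rng.normal(size=n_samples)
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

def local_gradient(w, X_part, y_part):
    """Least-squares gradient computed on one worker's shard."""
    residual = X_part @ w - y_part
    return X_part.T @ residual / len(y_part)

def top_k(grad, k):
    """Keep only the k largest-magnitude entries (a simple compression scheme)."""
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse

w = np.zeros(n_features)
lr, k = 0.1, 5
for step in range(200):
    # Each worker computes and compresses its local gradient ...
    compressed = [top_k(local_gradient(w, Xp, yp), k) for Xp, yp in shards]
    # ... and a bulk-synchronous barrier averages them before the global update.
    w -= lr * np.mean(compressed, axis=0)

print("parameter error:", np.linalg.norm(w - true_w))
```

In a real deployment the averaging step would be an all-reduce or a parameter-server exchange, and asynchronous protocols relax the synchronization barrier at the cost of gradient staleness; these trade-offs are the kind of design choices the book examines.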

Book details

Edition: 1st ed. 2022
Series: Big Data Management
Authors: Jiawei Jiang, Bin Cui, Ce Zhang
ISBN: 9789811634208
Related ISBNs: 9789811634192
Publisher: Springer Singapore, Singapore
Pages: N/A
Reading age: Not specified
Includes images: Yes
Date of addition: 2022-03-18
Usage restrictions: Copyright
Copyright date: 2022
Copyright by: The Editor
Adult content: No
Language: English
Categories: Computers and Internet, Nonfiction