Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/informatique/federated-learning/descriptif_4949411
Short URL (permalink): www.lavoisier.fr/livre/notice.asp?ouvrage=4949411

Federated Learning: Theory and Practice

Language: English

Editors: Lam M. Nguyen, Trong Nghia Hoang, Pin-Yu Chen

Federated Learning: Theory and Practice provides a holistic treatment of federated learning as a distributed learning system with various forms of decentralized data and features. Part I of the book begins with a broad overview of optimization fundamentals and modeling challenges, covering various aspects of communication efficiency, theoretical convergence, and security. Part II features emerging challenges stemming from many socially driven concerns around federated learning as a future public machine learning service. Part III concludes the book with a wide array of industrial applications of federated learning, as well as ethical considerations, showcasing its immense potential for driving innovation while safeguarding sensitive data.

Federated Learning: Theory and Practice provides a comprehensive and accessible introduction to federated learning, suitable for researchers and students in academia as well as industrial practitioners who seek to leverage the latest advances in machine learning for their entrepreneurial endeavors.
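The core idea the book treats — clients train locally on their own decentralized data while a server aggregates model updates — can be illustrated with a minimal federated-averaging sketch. This example is not taken from the book; the least-squares objective, learning rate, and client data are arbitrary placeholders chosen for illustration:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    least-squares objective (an illustrative stand-in for any model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 1/(2n) * ||Xw - y||^2
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=3):
    """Each round, every client trains locally on its private data;
    the server averages the returned weights, weighted by data size."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:  # raw data never leaves the client
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return global_w

# Toy setup: three clients whose data follows the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = [(X, X @ true_w) for X in (rng.normal(size=(50, 3)) for _ in range(3))]
w = federated_averaging(clients)
```

Only weight vectors cross the client-server boundary here; the privacy, security, and communication-efficiency questions the book's Part I addresses arise precisely because even these shared updates can leak information about the local data.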
PART I: Optimization Fundamentals for Secure Federated Learning
1. Gradient Descent-Type Methods
2. Considerations on the Theory of Training Models with Differential Privacy
3. Privacy Preserving Federated Learning: Algorithms and Guarantees
4. Assessing Vulnerabilities and Securing Federated Learning
5. Adversarial Robustness in Federated Learning
6. Evaluating Gradient Inversion Attacks and Defenses

PART II: Emerging Topics
7. Personalized Federated Learning: Theory and Open Problems
8. Fairness in Federated Learning
9. Meta Federated Learning
10. Graph-Aware Federated Learning
11. Vertical Asynchronous Federated Learning: Algorithms and Theoretical Guarantees
12. Hyperparameter Tuning for Federated Learning - Systems and Practices
13. Hyper-parameter Optimization for Federated Learning
14. Federated Sequential Decision-Making: Bayesian Optimization, Reinforcement Learning and Beyond
15. Data Valuation in Federated Learning

PART III: Applications and Ethical Considerations
16. Incentives in Federated Learning
17. Introduction to Federated Quantum Machine Learning
18. Federated Quantum Natural Gradient Descent for Quantum Federated Learning
19. Mobile Computing Framework for Federated Learning
20. Federated Learning for Privacy-preserving Speech Recognition
21. Ethical Considerations and Legal Issues Relating to Federated Learning

Lam M. Nguyen is a Staff Research Scientist at IBM Research, Thomas J. Watson Research Center, working at the intersection of Optimization and Machine Learning/Deep Learning. He is also the PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Nguyen received his B.S. degree in Applied Mathematics and Computer Science from Lomonosov Moscow State University in 2008, his M.B.A. degree from McNeese State University in 2013, and his Ph.D. degree in Industrial and Systems Engineering from Lehigh University in 2018. Dr. Nguyen has extensive research experience in optimization for machine learning problems. He has published his work mainly in top AI/ML and Optimization publication venues, including ICML, NeurIPS, ICLR, AAAI, AISTATS, Journal of Machine Learning Research, and Mathematical Programming. He has served as an Action/Associate Editor for Journal of Machine Learning Research, Machine Learning, Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, and Journal of Optimization Theory and Applications, and as an Area Chair for the ICML, NeurIPS, ICLR, AAAI, CVPR, UAI, and AISTATS conferences. His current research interests include design and analysis of learning algorithms, optimization for representation learning, dynamical systems for machine learning, federated learning, reinforcement learning, time series, and trustworthy/explainable AI.


Trong Nghia Hoang: Dr. Hoang received his Ph.D. in Computer Science from the National University of Singapore (NUS) in 2015. From 2015 to 2017, he was a Research Fellow at NUS. After NUS, Dr. Hoang held another postdoctoral position at MIT (2017-2018). From 2018 to 2020, he was a Research Staff Member and Principal Investigator at the MIT-IBM Watson AI Lab in Cambridge, Massachusetts. In November 2020, Dr. Hoang joined the AWS AI Labs of Amazon in Santa Clara, California as a senior research scientist. His research interests span the broad areas of deep generative modeling with applications to (personalized) federated learning, meta learning,
  • Presents the fundamentals and a survey of key developments in the field of federated learning
  • Provides emerging, state-of-the-art topics that build on fundamentals
  • Contains industry applications
  • Gives an overview of visions of the future

Publication date:

434 pages

19 × 23.3 cm

Available from the publisher (supply time: 14 days).

121,37 €
