Canonical URL: www.lavoisier.fr/livre/informatique/domain-specific-computer-architectures-for-emerging-applications/descriptif_5076707
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=5076707

Domain-Specific Computer Architectures for Emerging Applications: Machine Learning and Neural Networks

Language: English

With the end of Moore's Law, domain-specific architecture (DSA) has become a crucial mode of implementing future computing architectures. This book discusses the system-level design methodology of DSAs and their applications, providing a unified design process that guarantees functionality, performance, energy efficiency, and real-time responsiveness for the target application.

DSA design typically starts from a domain-specific algorithm or application: it analyzes the application's characteristics, such as computation, memory access, and communication, and proposes a heterogeneous accelerator architecture suited to that particular application. This book places particular focus on accelerator hardware platforms and distributed systems for various novel applications, such as machine learning, data mining, neural networks, and graph algorithms, and also covers the RISC-V open-source instruction set. It briefly describes the system design methodology based on DSAs and presents the latest academic research results on domain-specific acceleration architectures.

Providing a cutting-edge discussion of big data and artificial intelligence scenarios in contemporary industry, along with typical DSA applications, this book appeals to industry professionals as well as academics researching the future of computing in these areas.

Preface
1 Overview of Domain-Specific Computing
2 Machine Learning Algorithms and Hardware Accelerator Customization
3 Hardware Accelerator Customization for Data Mining Recommendation Algorithms
4 Customization and Optimization of Distributed Computing Systems for Recommendation Algorithms
5 Hardware Customization for Clustering Algorithms
6 Hardware Accelerator Customization Techniques for Graph Algorithms
7 Overview of Hardware Acceleration Methods for Neural Network Algorithms
8 Customization of FPGA-Based Hardware Accelerators for Deep Belief Networks
9 FPGA-Based Hardware Accelerator Customization for Recurrent Neural Networks
10 Hardware Customization/Acceleration Techniques for Impulse Neural Networks
11 Accelerators for Big Data Genome Sequencing
12 RISC-V Open Source Instruction Set and Architecture
13 Compilation Optimization Methods in the Customization of Reconfigurable Accelerators
Index

Readership: General, Postgraduate, Professional Reference, and Professional Training

Dr. Chao Wang is a Professor at the University of Science and Technology of China and Vice Dean of its School of Software Engineering. He serves as an Associate Editor of ACM TODAES and IEEE/ACM TCBB. Dr. Wang received an ACM China Rising Star Honorable Mention, a Best IP nomination at DATE 2015, and was a Best Paper Candidate at CODES+ISSS 2018. He is a senior member of ACM, a senior member of IEEE, and a distinguished member of CCF.

Publication date:

15.6 × 23.4 cm

Forthcoming; reserve your copy now

123,79 €
