Canonical URL: www.lavoisier.fr/livre/informatique/putting-ai-in-the-critical-loop/descriptif_4949563
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=4949563

Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams

Language: English
Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams takes on the primary challenges of bidirectional trust and performance of autonomous systems, providing readers with a review of the latest literature, the science of autonomy, and a clear path towards the autonomy of human-machine teams and systems. Throughout this book, the intersecting themes of collective intelligence, bidirectional trust, and continual assurance form challenging and extraordinarily interesting threads that help lay the groundwork for the audience not only to bridge knowledge gaps, but also to advance this science and develop better solutions. The distinctly different characteristics and features of humans and machines are likely why they have the potential to work well together, overcoming each other's weaknesses through cooperation, synergy, and interdependence, which together form a "collective intelligence." Trust is bidirectional: humans need to trust AI technology, but future AI technology may also need to trust humans.
1. Introduction
2. Alternative paths to developing engineering solutions for human-machine teams
3. Risk determination versus risk perception: From hate speech, an erroneous drone attack, and military nuclear wastes to human machine autonomy
4. Appropriate Context-Dependent Artificial Trust in Human-Machine Teamwork
5. Toward a Causal Modeling Approach for Trust-Based Interventions in Human-Autonomy Teams
6. Risk Management in Human-in-the-Loop AI-Assisted Attention Aware Systems
7. Enabling Trustworthiness in Human-swarm Systems Through a Digital Twin
8. Building Trust with the Ethical Affordances of Education Technologies: A Sociotechnical Systems Perspective
9. Perceiving a Humorous Robot as a Social Partner
10. Real-Time AI: Using AI on the Tactical Edge
11. Building a Trustworthy AI Digital Twin: A Brave New World of Human Machine Teams & Autonomous Biological Internet of Things (BIoT)
12. A framework of Human Factors methods for safe, ethical, and usable Artificial Intelligence in Defence
13. A schema for harms-sensitive reasoning, and an approach to populate its ontology by human annotation
Prithviraj (Raj) Dasgupta is a computer engineer with the Distributed Intelligent Systems Section at the Naval Research Laboratory in Washington, D.C. His research interests are in the areas of machine learning, AI-based game playing, game theory, and multi-agent systems. He received his Ph.D. in 2001 from the University of California, Santa Barbara. From 2001 to 2019, he was a full professor in the computer science department at the University of Nebraska, Omaha, where he established and directed the CMANTIC Robotics Laboratory. He has authored over 150 publications in leading journals and conferences in his research area. He is a senior member of IEEE.
James Llinas is an emeritus professor at the University at Buffalo, New York. He established and directed the Center for Multisource Information Fusion at the university, the only academic systems-centered information fusion center in the United States, leading it in carrying out well-funded multidisciplinary research for over 20 years. He co-authored the first book on data fusion and has co-edited and co-authored several additional books on data and information fusion. In 1998, he helped establish the International Society for Information Fusion and served as its first President.
Tony Gillespie is a visiting professor at University College London and a fellow of the Royal Academy of Engineering. His career spans academic, industrial, and government research and research management. In recent years he has extended his work on ensuring that highly automated weapons meet legal requirements to other autonomous systems, authoring a book and academic papers on the subject. He has acted as a technical adviser to the UN and to other meetings discussing potential bans on autonomous weapon systems.
Scott Fouse had a 42-year career in aerospace R&D, mostly focused on exploring military applications of AI. He was VP of the Advanced Technology Center at Lockheed Martin Space, where he led approximately 500 scientists.
  • Assesses the latest research advances, engineering challenges, and the theoretical gaps surrounding the question of autonomy
  • Reviews the challenges of autonomy (e.g., trust, ethics, legalities), including gaps in the underlying science
  • Offers a path forward to solutions
  • Investigates the value of human trust in human-machine teams, as well as the bidirectionality of trust, including how machines learn to trust their human teammates

Publication date:

304 pages

19 x 23.3 cm

Available from the publisher (lead time: 14 days).

209,75 €
