Foundations of Trusted Autonomy. (Record no. 304916)
000 -LEADER | |
---|---|
fixed length control field | 11119nam a22004933i 4500 |
001 - CONTROL NUMBER | |
control field | EBC5579657 |
005 - DATE AND TIME OF LATEST TRANSACTION | |
control field | 20240122001129.0 |
006 - FIXED-LENGTH DATA ELEMENTS--ADDITIONAL MATERIAL CHARACTERISTICS | |
fixed length control field | m o d | |
007 - PHYSICAL DESCRIPTION FIXED FIELD--GENERAL INFORMATION | |
fixed length control field | cr cnu|||||||| |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION | |
fixed length control field | 231124s2018 xx o ||||0 eng d |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
International Standard Book Number | 9783319648163 |
Qualifying information | (electronic bk.) |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
Canceled/invalid ISBN | 9783319648156 |
035 ## - SYSTEM CONTROL NUMBER | |
System control number | (MiAaPQ)EBC5579657 |
035 ## - SYSTEM CONTROL NUMBER | |
System control number | (Au-PeEL)EBL5579657 |
035 ## - SYSTEM CONTROL NUMBER | |
System control number | (OCoLC)1020319127 |
040 ## - CATALOGING SOURCE | |
Original cataloging agency | MiAaPQ |
Language of cataloging | eng |
Description conventions | rda |
-- | pn |
Transcribing agency | MiAaPQ |
Modifying agency | MiAaPQ |
050 #4 - LIBRARY OF CONGRESS CALL NUMBER | |
Classification number | TJ212-225 |
100 1# - MAIN ENTRY--PERSONAL NAME | |
Personal name | Abbass, Hussein A. |
245 10 - TITLE STATEMENT | |
Title | Foundations of Trusted Autonomy. |
250 ## - EDITION STATEMENT | |
Edition statement | 1st ed. |
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE | |
Place of production, publication, distribution, manufacture | Cham : |
Name of producer, publisher, distributor, manufacturer | Springer International Publishing AG, |
Date of production, publication, distribution, manufacture, or copyright notice | 2018. |
264 #4 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE | |
Date of production, publication, distribution, manufacture, or copyright notice | ©2018. |
300 ## - PHYSICAL DESCRIPTION | |
Extent | 1 online resource (399 pages) |
336 ## - CONTENT TYPE | |
Content type term | text |
Content type code | txt |
Source | rdacontent |
337 ## - MEDIA TYPE | |
Media type term | computer |
Media type code | c |
Source | rdamedia |
338 ## - CARRIER TYPE | |
Carrier type term | online resource |
Carrier type code | cr |
Source | rdacarrier |
490 1# - SERIES STATEMENT | |
Series statement | Studies in Systems, Decision and Control Series ; |
Volume/sequential designation | v.117 |
505 0# - FORMATTED CONTENTS NOTE | |
Formatted contents note | Intro -- Foreword -- Preface -- Acknowledgements -- Contents -- Contributors -- 1 Foundations of Trusted Autonomy: An Introduction -- 1.1 Autonomy -- 1.2 Trust -- 1.3 Trusted Autonomy -- Autonomy -- 2 Universal Artificial Intelligence -- 2.1 Introduction -- 2.2 Background and History of AI -- 2.3 Universal Artificial Intelligence -- 2.3.1 Framework -- 2.3.2 Learning -- 2.3.3 Goal -- 2.3.4 Planning -- 2.3.5 AIXI -- Putting It All Together -- 2.4 Approximations -- 2.4.1 MC-AIXI-CTW -- 2.4.2 Feature Reinforcement Learning -- 2.4.3 Model-Free AIXI -- 2.4.4 Deep Learning -- 2.5 Fundamental Challenges -- 2.5.1 Optimality and Exploration -- 2.5.2 Asymptotically Optimal Agents -- 2.6 Predicting and Controlling Behaviour -- 2.6.1 Self-Modification -- 2.6.2 Counterfeiting Reward -- 2.6.3 Death and Self-Preservation -- 2.7 Conclusions -- References -- 3 Goal Reasoning and Trusted Autonomy -- 3.1 Introduction -- 3.2 Goal-Driven Autonomy Models -- 3.2.1 Goal-Driven Autonomy -- 3.2.2 Goal Selection -- 3.2.3 An Application for Human-Robot Teaming -- 3.3 Goal Refinement -- 3.3.1 Goal Lifecycle -- 3.3.2 Guaranteeing the Execution of Specified Behaviors -- 3.3.3 A Distributed Robotics Application -- 3.4 Future Topics -- 3.4.1 Adaptive Autonomy and Inverse Trust -- 3.4.2 Rebel Agents -- 3.5 Conclusion -- References -- 4 Social Planning for Trusted Autonomy -- 4.1 Introduction -- 4.2 Motivation and Background -- 4.2.1 Automated Planning -- 4.2.2 From Autistic Planning to Social Planning -- 4.3 Social Planning -- 4.3.1 A Formal Model for Multi-agent Epistemic Planning -- 4.3.2 Solving Multi-agent Epistemic Planning Problems -- 4.4 Social Planning for Human Robot Interaction -- 4.4.1 Search and Rescue -- 4.4.2 Collaborative Manufacturing -- 4.5 Discussion -- References -- 5 A Neuroevolutionary Approach to Adaptive Multi-agent Teams -- 5.1 Introduction. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 5.2 The Legion II Game -- 5.2.1 The Map -- 5.2.2 Units -- 5.2.3 Game Play -- 5.2.4 Scoring the Game -- 5.3 Agent Control Architectures -- 5.3.1 Barbarian Sensors and Controllers -- 5.3.2 Legion Sensors and Controllers -- 5.4 Neuroevolution With Enforced Sub-Populations (ESP) -- 5.5 Experimental Methodology -- 5.5.1 Repeatable Gameplay -- 5.5.2 Training -- 5.5.3 Testing -- 5.6 Experiments -- 5.6.1 Learning the Division of Labor -- 5.6.2 Run-Time Readaptation -- 5.7 Discussion -- 5.8 Conclusions -- References -- 6 The Blessing and Curse of Emergence in Swarm Intelligence Systems -- 6.1 Introduction -- 6.2 Emergence in Swarm Intelligence -- 6.3 The `Blessing' of Emergence -- 6.4 The `Curse' of Emergence -- 6.5 Taking Advantage of the Good While Avoiding the Bad -- 6.6 Conclusion -- References -- 7 Trusted Autonomous Game Play -- 7.1 Introduction -- 7.2 TA Game AI -- 7.3 TA Game -- 7.4 TA Game Communities -- 7.5 TA Mixed Reality Games -- 7.6 Discussion: TA Games -- References -- Trust -- 8 The Role of Trust in Human-Robot Interaction -- 8.1 Introduction -- 8.2 Conceptualization of Trust -- 8.3 Modeling Trust -- 8.4 Factors Affecting Trust -- 8.4.1 System Properties -- 8.4.2 Properties of the Operator -- 8.4.3 Environmental Factors -- 8.5 Instruments for Measuring Trust -- 8.6 Trust in Human Robot Interaction -- 8.6.1 Performance-Based Interaction: Humans Influencing Robots -- 8.6.2 Social-Based Interactions: Robots Influencing Humans -- 8.7 Conclusions and Recommendations -- References -- 9 Trustworthiness of Autonomous Systems -- 9.1 Introduction -- 9.1.1 Autonomous Systems -- 9.1.2 Trustworthiness -- 9.2 Background -- 9.3 Who or What Is Trustworthy? -- 9.4 How do We Know Who or What Is Trustworthy -- 9.4.1 Implicit Justifications of Trust -- 9.4.2 Explicit Justifications of Trust -- 9.4.3 A Cognitive Model of Trust and Competence. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 9.4.4 Trustworthiness and Risk -- 9.4.5 Summary -- 9.5 What or Who Should We Trust? -- 9.6 The Value of Trustworthy Autonomous Systems -- 9.7 Conclusion -- References -- 10 Trusted Autonomy Under Uncertainty -- 10.1 Trust and Uncertainty -- 10.1.1 What Is Trust? -- 10.1.2 Trust and Distrust in HRI -- 10.2 Trust and Uncertainty -- 10.2.1 Trust and Distrust Entail Unknowns -- 10.2.2 What Is Being Trusted -- What Is Uncertain? -- 10.2.3 Trust and Dilemmas -- 10.3 Factors Affecting Human Reactivity to Risk and Uncertainty, and Trust -- 10.3.1 Kinds of Uncertainty, Risks, Standards, and Dispositions -- 10.3.2 Presumptive and Organizational-Level Trust -- 10.3.3 Trust Repair -- 10.4 Concluding Remarks -- References -- 11 The Need for Trusted Autonomy in Military Cyber Security -- 11.1 Introduction -- 11.2 Cyber Security -- 11.3 Challenges and the Potential Application of Trusted Autonomy -- 11.4 Conclusion -- References -- 12 Reinforcing Trust in Autonomous Systems: A Quantum Cognitive Approach -- 12.1 Introduction -- 12.2 Compatible and Incompatible States -- 12.3 A Quantum Cognition Model for the Emergence of Trust -- 12.4 Conclusion -- References -- 13 Learning to Shape Errors with a Confusion Objective -- 13.1 Introduction -- 13.2 Foundations -- 13.2.1 Binomial Logistic Regression -- 13.2.2 Multinomial Logistic Regression -- 13.2.3 Multinomial Softmax Regression for Gaussian Case -- 13.3 Multinomial Softmax Regression on Confusion -- 13.4 Implementation and Results -- 13.4.1 Error Trading -- 13.4.2 Performance Using a Deep Network and Independent Data Sources -- 13.4.3 Adversarial Errors -- 13.5 Discussion -- 13.6 Conclusion -- References -- 14 Developing Robot Assistants with Communicative Cues for Safe, Fluent HRI -- 14.1 Introduction -- 14.2 CHARM - Collaborative Human-Focused Assistive Robotics for Manufacturing. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 14.2.1 The Robot Assistant, Its Task, and Its Components -- 14.2.2 CHARM Streams and Thrusts -- 14.2.3 Plugfest -- 14.3 Identifying, Modeling, and Implementing Naturalistic Communicative Cues -- 14.3.1 Phase 1: Human-Human Studies -- 14.3.2 Phase 2: Behavioral Description -- 14.3.3 Phase 3: Human-Robot Interaction Studies -- 14.4 Communicative Cue Studies -- 14.4.1 Human-Robot Handovers -- 14.4.2 Hesitation -- 14.4.3 Tap and Push -- 14.5 Current and Future Work -- References -- Trusted Autonomy -- 15 Intrinsic Motivation for Truly Autonomous Agents -- 15.1 Introduction -- 15.2 Background -- 15.2.1 Previous Work on Intrinsic Human Motivation -- 15.2.2 Previous Work on Cognitive Architectures -- 15.3 A Cognitive Architecture with Intrinsic Motivation -- 15.3.1 Overview of Clarion -- 15.3.2 The Action-Centered Subsystem -- 15.3.3 The Non-Action-Centered Subsystem -- 15.3.4 The Motivational Subsystem -- 15.3.5 The Metacognitive Subsystem -- 15.4 Some Examples of Simulations -- 15.5 Concluding Remarks -- References -- 16 Computational Motivation, Autonomy and Trustworthiness: Can We Have It All? -- 16.1 Autonomous Systems -- 16.2 Intrinsically Motivated Swarms -- 16.2.1 Crowds of Motivated Agents -- 16.2.2 Motivated Particle Swarm Optimization for Adaptive Task Allocation -- 16.2.3 Motivated Guaranteed Convergence Particle Swarm Optimization for Exploration and Task Allocation Under Communication Constraints -- 16.3 Functional Implications of Intrinsically Motivated Swarms -- 16.3.1 Motivation and Diversity -- 16.3.2 Motivation and Adaptation -- 16.3.3 Motivation and Exploration -- 16.4 Implications of Motivation on Trust -- 16.4.1 Implications for Reliability -- 16.5 Implications for Privacy and Security -- 16.5.1 Implications for Safety -- 16.6 Implications of Complexity -- 16.7 Implications for Risk -- 16.7.1 Implications for Free Will. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 16.8 Conclusion -- References -- 17 Are Autonomous-and-Creative Machines Intrinsically Untrustworthy? -- 17.1 Introduction -- 17.2 The Distressing Principle, Intuitively Put -- 17.3 The Distressing Principle, More Formally Put -- 17.3.1 The Ideal-Observer Point of View -- 17.3.2 Theory-of-Mind-Creativity -- 17.3.3 Autonomy -- 17.3.4 The Deontic Cognitive Event Calculus (DCEC) -- 17.3.5 Collaborative Situations -- Untrustworthiness -- 17.3.6 Theorem ACU -- 17.4 Computational Simulations -- 17.4.1 ShadowProver -- 17.4.2 The Simulation Proper -- 17.5 Toward the Needed Engineering -- References -- 18 Trusted Autonomous Command and Control -- 18.1 Scenario -- References -- 19 Trusted Autonomy in Training: A Future Scenario -- 19.1 Introduction -- 19.2 Scan of Changes -- 19.3 Trusted Autonomy Training System Map -- 19.4 Theory of Change -- 19.5 Narratives -- 19.5.1 The Failed Promise -- 19.5.2 Fake It Until You Break It -- 19.5.3 To Infinity, and Beyond! -- References -- 20 Future Trusted Autonomous Space Scenarios -- 20.1 Introduction -- 20.2 The Space Environment -- 20.3 Space Activity - Missions and Autonomy -- 20.4 Current State-of-the-Art of Trusted Autonomous Space Systems -- 20.5 Some Future Trusted Autonomous Space Scenarios -- 20.5.1 Autonomous Space Operations -- 20.5.2 Autonomous Space Traffic Management Systems -- 20.5.3 Autonomous Disaggregated Space Systems -- References -- 21 An Autonomy Interrogative -- 21.1 Introduction -- 21.2 Fundamental Uncertainty in Economics -- 21.2.1 Economic Agency and Autonomy -- 21.3 The Inadequacy of Bayesianism -- 21.4 Epistemic and Ontological Uncertainty -- 21.5 Black Swans and Universal Causality -- 21.6 Ontological Uncertainty and Incompleteness -- 21.6.1 Uncertainty as Non-ergodicity -- 21.7 Uncertainty and Incompleteness -- 21.8 Decision-Making Under Uncertainty -- 21.9 Barbell Strategies. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 21.10 Theory of Self. |
588 ## - SOURCE OF DESCRIPTION NOTE | |
Source of description note | Description based on publisher supplied metadata and other sources. |
590 ## - LOCAL NOTE (RLIN) | |
Local note | Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2023. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. |
655 #4 - INDEX TERM--GENRE/FORM | |
Genre/form data or focus term | Electronic books. |
700 1# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Scholz, Jason. |
700 1# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Reid, Darryn J. |
776 08 - ADDITIONAL PHYSICAL FORM ENTRY | |
Relationship information | Print version: |
Main entry heading | Abbass, Hussein A. |
Title | Foundations of Trusted Autonomy |
Place, publisher, and date of publication | Cham : Springer International Publishing AG, c2018 |
International Standard Book Number | 9783319648156 |
797 2# - LOCAL ADDED ENTRY--CORPORATE NAME (RLIN) | |
Corporate name or jurisdiction name as entry element | ProQuest (Firm) |
830 #0 - SERIES ADDED ENTRY--UNIFORM TITLE | |
Uniform title | Studies in Systems, Decision and Control Series |
856 40 - ELECTRONIC LOCATION AND ACCESS | |
Uniform Resource Identifier | https://ebookcentral.proquest.com/lib/bacm-ebooks/detail.action?docID=5579657 |
Public note | Click to View |