XxAI - Beyond Explainable AI : (Record no. 309521)
000 - LEADER | |
---|---|
fixed length control field | 11312nam a22005413i 4500 |
001 - CONTROL NUMBER | |
control field | EBC6954332 |
005 - DATE AND TIME OF LATEST TRANSACTION | |
control field | 20240122001544.0 |
006 - FIXED-LENGTH DATA ELEMENTS--ADDITIONAL MATERIAL CHARACTERISTICS | |
fixed length control field | m o d | |
007 - PHYSICAL DESCRIPTION FIXED FIELD--GENERAL INFORMATION | |
fixed length control field | cr cnu|||||||| |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION | |
fixed length control field | 231124s2022 xx o ||||0 eng d |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
International Standard Book Number | 9783031040832 |
Qualifying information | (electronic bk.) |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
Canceled/invalid ISBN | 9783031040825 |
035 ## - SYSTEM CONTROL NUMBER | |
System control number | (MiAaPQ)EBC6954332 |
035 ## - SYSTEM CONTROL NUMBER | |
System control number | (Au-PeEL)EBL6954332 |
035 ## - SYSTEM CONTROL NUMBER | |
System control number | (OCoLC)1311285955 |
040 ## - CATALOGING SOURCE | |
Original cataloging agency | MiAaPQ |
Language of cataloging | eng |
Description conventions | rda |
-- | pn |
Transcribing agency | MiAaPQ |
Modifying agency | MiAaPQ |
050 #4 - LIBRARY OF CONGRESS CALL NUMBER | |
Classification number | Q334-342 |
082 0# - DEWEY DECIMAL CLASSIFICATION NUMBER | |
Classification number | 006.31 |
100 1# - MAIN ENTRY--PERSONAL NAME | |
Personal name | Holzinger, Andreas. |
245 10 - TITLE STATEMENT | |
Title | XxAI - Beyond Explainable AI : |
Remainder of title | International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers. |
250 ## - EDITION STATEMENT | |
Edition statement | 1st ed. |
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE | |
Place of production, publication, distribution, manufacture | Cham : |
Name of producer, publisher, distributor, manufacturer | Springer International Publishing AG, |
Date of production, publication, distribution, manufacture, or copyright notice | 2022. |
264 #4 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE | |
Date of production, publication, distribution, manufacture, or copyright notice | ©2022. |
300 ## - PHYSICAL DESCRIPTION | |
Extent | 1 online resource (397 pages) |
336 ## - CONTENT TYPE | |
Content type term | text |
Content type code | txt |
Source | rdacontent |
337 ## - MEDIA TYPE | |
Media type term | computer |
Media type code | c |
Source | rdamedia |
338 ## - CARRIER TYPE | |
Carrier type term | online resource |
Carrier type code | cr |
Source | rdacarrier |
490 1# - SERIES STATEMENT | |
Series statement | Lecture Notes in Computer Science Series ; |
Volume/sequential designation | v.13200 |
505 0# - FORMATTED CONTENTS NOTE | |
Formatted contents note | Intro -- Preface -- Organization -- Contents -- Editorial -- xxAI - Beyond Explainable Artificial Intelligence -- 1 Introduction and Motivation for Explainable AI -- 2 Explainable AI: Past and Present -- 3 Book Structure -- References -- Current Methods and Challenges -- Explainable AI Methods - A Brief Overview -- 1 Introduction -- 2 Explainable AI Methods - Overview -- 2.1 LIME (Local Interpretable Model Agnostic Explanations) -- 2.2 Anchors -- 2.3 GraphLIME -- 2.4 Method: LRP (Layer-wise Relevance Propagation) -- 2.5 Deep Taylor Decomposition (DTD) -- 2.6 Prediction Difference Analysis (PDA) -- 2.7 TCAV (Testing with Concept Activation Vectors) -- 2.8 XGNN (Explainable Graph Neural Networks) -- 2.9 SHAP (Shapley Values) -- 2.10 Asymmetric Shapley Values (ASV) -- 2.11 Break-Down -- 2.12 Shapley Flow -- 2.13 Textual Explanations of Visual Models -- 2.14 Integrated Gradients -- 2.15 Causal Models -- 2.16 Meaningful Perturbations -- 2.17 EXplainable Neural-Symbolic Learning (X-NeSyL) -- 3 Conclusion and Future Outlook -- References -- General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models -- 1 Introduction -- 2 Assuming One-Fits-All Interpretability -- 3 Bad Model Generalization -- 4 Unnecessary Use of Complex Models -- 5 Ignoring Feature Dependence -- 5.1 Interpretation with Extrapolation -- 5.2 Confusing Linear Correlation with General Dependence -- 5.3 Misunderstanding Conditional Interpretation -- 6 Misleading Interpretations Due to Feature Interactions -- 6.1 Misleading Feature Effects Due to Aggregation -- 6.2 Failing to Separate Main from Interaction Effects -- 7 Ignoring Model and Approximation Uncertainty -- 8 Ignoring the Rashomon Effect -- 9 Failure to Scale to High-Dimensional Settings -- 9.1 Human-Intelligibility of High-Dimensional IML Output -- 9.2 Computational Effort. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 9.3 Ignoring Multiple Comparison Problem -- 10 Unjustified Causal Interpretation -- 11 Discussion -- References -- CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations -- 1 Introduction -- 2 Related Work -- 3 The CLEVR-X Dataset -- 3.1 The CLEVR Dataset -- 3.2 Dataset Generation -- 3.3 Dataset Analysis -- 3.4 User Study on Explanation Completeness and Relevance -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Evaluating Explanations Generated by State-of-the-Art Methods -- 4.3 Analyzing Results on CLEVR-X by Question and Answer Types -- 4.4 Influence of Using Different Numbers of Ground-Truth Explanations -- 4.5 Qualitative Explanation Generation Results -- 5 Conclusion -- References -- New Developments in Explainable AI -- A Rate-Distortion Framework for Explaining Black-Box Model Decisions -- 1 Introduction -- 2 Related Works -- 3 Rate-Distortion Explanation Framework -- 3.1 General Formulation -- 3.2 Implementation -- 4 Experiments -- 4.1 Images -- 4.2 Audio -- 4.3 Radio Maps -- 5 Conclusion -- References -- Explaining the Predictions of Unsupervised Learning Models -- 1 Introduction -- 2 A Brief Review of Explainable AI -- 2.1 Approaches to Attribution -- 2.2 Neuralization-Propagation -- 3 Kernel Density Estimation -- 3.1 Explaining Outlierness -- 3.2 Explaining Inlierness: Direct Approach -- 3.3 Explaining Inlierness: Random Features Approach -- 4 K-Means Clustering -- 4.1 Explaining Cluster Assignments -- 5 Experiments -- 5.1 Wholesale Customer Analysis -- 5.2 Image Analysis -- 6 Conclusion and Outlook -- A Attribution on CNN Activations -- A.1 Attributing Outlierness -- A.2 Attributing Inlierness -- A.3 Attributing Cluster Membership -- References -- Towards Causal Algorithmic Recourse -- 1 Introduction -- 1.1 Motivating Examples -- 1.2 Summary of Contributions and Structure of This Chapter -- 2 Preliminaries. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 2.1 XAI: Counterfactual Explanations and Algorithmic Recourse -- 2.2 Causality: Structural Causal Models, Interventions, and Counterfactuals -- 3 Causal Recourse Formulation -- 3.1 Limitations of CFE-Based Recourse -- 3.2 Recourse Through Minimal Interventions -- 3.3 Negative Result: No Recourse Guarantees for Unknown Structural Equations -- 4 Recourse Under Imperfect Causal Knowledge -- 4.1 Probabilistic Individualised Recourse -- 4.2 Probabilistic Subpopulation-Based Recourse -- 4.3 Solving the Probabilistic Recourse Optimization Problem -- 5 Experiments -- 5.1 Compared Methods -- 5.2 Metrics -- 5.3 Synthetic 3-Variable SCMs Under Different Assumptions -- 5.4 Semi-synthetic 7-Variable SCM for Loan-Approval -- 6 Discussion -- 7 Conclusion -- References -- Interpreting Generative Adversarial Networks for Interactive Image Generation -- 1 Introduction -- 2 Supervised Approach -- 3 Unsupervised Approach -- 4 Embedding-Guided Approach -- 5 Concluding Remarks -- References -- XAI and Strategy Extraction via Reward Redistribution -- 1 Introduction -- 2 Background -- 2.1 Explainability Methods -- 2.2 Reinforcement Learning -- 2.3 Credit Assignment in Reinforcement Learning -- 2.4 Methods for Credit Assignment -- 2.5 Explainability Methods for Credit Assignment -- 2.6 Credit Assignment via Reward Redistribution -- 3 Strategy Extraction via Reward Redistribution -- 3.1 Strategy Extraction with Profile Models -- 3.2 Explainable Agent Behavior via Strategy Extraction -- 4 Experiments -- 4.1 Gridworld -- 4.2 Minecraft -- 5 Limitations -- 6 Conclusion -- References -- Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis -- 1 Introduction -- 2 Background on Reinforcement Learning -- 3 Programmatic Policies -- 3.1 Traditional Interpretable Models -- 3.2 State Machine Policies -- 3.3 List Processing Programs. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 3.4 Neurosymbolic Policies -- 4 Synthesizing Programmatic Policies -- 4.1 Imitation Learning -- 4.2 Q-Guided Imitation Learning -- 4.3 Updating the DNN Policy -- 4.4 Program Synthesis for Supervised Learning -- 5 Case Studies -- 5.1 Interpretability -- 5.2 Verification -- 5.3 Robustness -- 6 Conclusions and Future Work -- References -- Interpreting and Improving Deep-Learning Models with Reality Checks -- 1 Interpretability: For What and For Whom? -- 2 Computing Interpretations for Feature Interactions and Transformations -- 2.1 Contextual Decomposition (CD) Importance Scores for General DNNs -- 2.2 Agglomerative Contextual Decomposition (ACD) -- 2.3 Transformation Importance with Applications to Cosmology (TRIM) -- 3 Using Attributions to Improve Models -- 3.1 Penalizing Explanations to Align Neural Networks with Prior Knowledge (CDEP) -- 3.2 Distilling Adaptive Wavelets from Neural Networks with Interpretations -- 4 Real-Data Problems Showcasing Interpretations -- 4.1 Molecular Partner Prediction -- 4.2 Cosmological Parameter Prediction -- 4.3 Improving Skin Cancer Classification via CDEP -- 5 Discussion -- 5.1 Building/Distilling Accurate and Interpretable Models -- 5.2 Making Interpretations Useful -- References -- Beyond the Visual Analysis of Deep Model Saliency -- 1 Introduction -- 2 Saliency-Based XAI in Vision -- 2.1 White-Box Models -- 2.2 Black-Box Models -- 3 XAI for Improved Models: Excitation Dropout -- 4 XAI for Improved Models: Domain Generalization -- 5 XAI for Improved Models: Guided Zoom -- 6 Conclusion -- References -- ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs -- 1 Introduction -- 2 Related Work -- 3 Neural Network Quantization -- 3.1 Entropy-Constrained Quantization -- 4 Explainability-Driven Quantization -- 4.1 Layer-Wise Relevance Propagation. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 4.2 eXplainability-Driven Entropy-Constrained Quantization -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 ECQx Results -- 6 Conclusion -- References -- A Whale's Tail - Finding the Right Whale in an Uncertain World -- 1 Introduction -- 2 Related Work -- 3 Humpback Whale Data -- 3.1 Image Data -- 3.2 Expert Annotations -- 4 Methods -- 4.1 Landmark-Based Identification Framework -- 4.2 Uncertainty and Sensitivity Analysis -- 5 Experiments and Results -- 5.1 Experimental Setup -- 5.2 Uncertainty and Sensitivity Analysis of the Landmarks -- 5.3 Heatmapping Results and Comparison with Whale Expert Knowledge -- 5.4 Spatial Uncertainty of Individual Landmarks -- 6 Conclusion and Outlook -- References -- Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science -- 1 Introduction -- 2 XAI Applications -- 2.1 XAI in Remote Sensing and Weather Forecasting -- 2.2 XAI in Climate Prediction -- 2.3 XAI to Extract Forced Climate Change Signals and Anthropogenic Footprint -- 3 Development of Attribution Benchmarks for Geosciences -- 3.1 Synthetic Framework -- 3.2 Assessment of XAI Methods -- 4 Conclusions -- References -- An Interdisciplinary Approach to Explainable AI -- Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond -- 1 Introduction -- 1.1 Functional Varieties of AI Explanations -- 1.2 Technical Varieties of AI Explanations -- 1.3 Roadmap of the Paper -- 2 Explainable AI Under Current Law -- 2.1 The GDPR: Rights-Enabling Transparency -- 2.2 Contract and Tort Law: Technical and Protective Transparency -- 2.3 Banking Law: More Technical and Protective Transparency -- 3 Regulatory Proposals at the EU Level: The AIA -- 3.1 AI with Limited Risk: Decision-Enabling Transparency (Art. 52 AIA)? -- 3.2 AI with High Risk: Encompassing Transparency (Art. 13 AIA)?. |
505 8# - FORMATTED CONTENTS NOTE | |
Formatted contents note | 3.3 Limitations. |
588 ## - SOURCE OF DESCRIPTION NOTE | |
Source of description note | Description based on publisher supplied metadata and other sources. |
590 ## - LOCAL NOTE (RLIN) | |
Local note | Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2023. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. |
655 #4 - INDEX TERM--GENRE/FORM | |
Genre/form data or focus term | Electronic books. |
700 1# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Goebel, Randy. |
700 1# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Fong, Ruth. |
700 1# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Moon, Taesup. |
700 1# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Müller, Klaus-Robert. |
700 1# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Samek, Wojciech. |
776 08 - ADDITIONAL PHYSICAL FORM ENTRY | |
Relationship information | Print version: |
Main entry heading | Holzinger, Andreas |
Title | XxAI - Beyond Explainable AI |
Place, publisher, and date of publication | Cham : Springer International Publishing AG, c2022 |
International Standard Book Number | 9783031040825 |
797 2# - LOCAL ADDED ENTRY--CORPORATE NAME (RLIN) | |
Corporate name or jurisdiction name as entry element | ProQuest (Firm) |
830 #0 - SERIES ADDED ENTRY--UNIFORM TITLE | |
Uniform title | Lecture Notes in Computer Science Series |
856 40 - ELECTRONIC LOCATION AND ACCESS | |
Uniform Resource Identifier | https://ebookcentral.proquest.com/lib/bacm-ebooks/detail.action?docID=6954332 |
Public note | Click to View |