Explainable AI with Python (Record no. 177998)

MARC details
000 -LEADER
fixed length control field 06871nam a22005295i 4500
001 - CONTROL NUMBER
control field 978-3-030-68640-6
003 - CONTROL NUMBER IDENTIFIER
control field DE-He213
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20240423125437.0
007 - PHYSICAL DESCRIPTION FIXED FIELD--GENERAL INFORMATION
fixed length control field cr nn 008mamaa
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 210428s2021 sz | s |||| 0|eng d
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
International Standard Book Number 9783030686406
-- 978-3-030-68640-6
024 7# - OTHER STANDARD IDENTIFIER
Standard number or code 10.1007/978-3-030-68640-6
Source of number or code doi
050 #4 - LIBRARY OF CONGRESS CALL NUMBER
Classification number Q334-342
050 #4 - LIBRARY OF CONGRESS CALL NUMBER
Classification number TA347.A78
072 #7 - SUBJECT CATEGORY CODE
Subject category code UYQ
Source bicssc
072 #7 - SUBJECT CATEGORY CODE
Subject category code COM004000
Source bisacsh
072 #7 - SUBJECT CATEGORY CODE
Subject category code UYQ
Source thema
082 04 - DEWEY DECIMAL CLASSIFICATION NUMBER
Classification number 006.3
Edition number 23
100 1# - MAIN ENTRY--PERSONAL NAME
Personal name Gianfagna, Leonida.
Relator term author.
Relator code aut
-- http://id.loc.gov/vocabulary/relators/aut
245 10 - TITLE STATEMENT
Title Explainable AI with Python
Medium [electronic resource] /
Statement of responsibility, etc by Leonida Gianfagna, Antonio Di Cecco.
250 ## - EDITION STATEMENT
Edition statement 1st ed. 2021.
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE
-- Cham :
-- Springer International Publishing :
-- Imprint: Springer,
-- 2021.
300 ## - PHYSICAL DESCRIPTION
Extent VIII, 202 p. 119 illus., 103 illus. in color.
Other physical details online resource.
336 ## - CONTENT TYPE
-- text
-- txt
-- rdacontent
337 ## - MEDIA TYPE
-- computer
-- c
-- rdamedia
338 ## - CARRIER TYPE
-- online resource
-- cr
-- rdacarrier
347 ## - DIGITAL FILE CHARACTERISTICS
-- text file
-- PDF
-- rda
505 0# - FORMATTED CONTENTS NOTE
Formatted contents note 1. The Landscape -- 1.1 Examples of what Explainable AI is -- 1.1.1 Learning Phase -- 1.1.2 Knowledge Discovery -- 1.1.3 Reliability and Robustness -- 1.1.4 What have we learnt from the 3 examples -- 1.2 Machine Learning and XAI -- 1.2.1 Machine Learning taxonomy -- 1.2.2 Common Myths -- 1.3 The need for Explainable AI -- 1.4 Explainability and Interpretability: different words to say the same thing or not? -- 1.4.1 From World to Humans -- 1.4.2 Correlation is not causation -- 1.4.3 So what is the difference between interpretability and explainability? -- 1.5 Making Machine Learning systems explainable -- 1.5.1 The XAI flow -- 1.5.2 The big picture -- 1.6 Do we really need to make Machine Learning Models explainable? -- 1.7 Summary -- 1.8 References -- 2. Explainable AI: needs, opportunities and challenges -- 2.1 Human in the loop -- 2.1.1 Centaur XAI systems -- 2.1.2 XAI evaluation from “Human in The Loop perspective” -- 2.2 How to make Machine Learning models explainable -- 2.2.1 Intrinsic Explanations -- 2.2.2 Post-Hoc Explanations -- 2.2.3 Global or Local Explainability -- 2.3 Properties of Explanations -- 2.4 Summary -- 2.5 References -- 3. Intrinsic Explainable Models -- 3.1 Loss Function -- 3.2 Linear Regression -- 3.3 Logistic Regression -- 3.4 Decision Trees -- 3.5 K-Nearest Neighbors (KNN) -- 3.6 Summary -- 3.7 References -- 4. Model-agnostic methods for XAI -- 4.1 Global Explanations: Permutation Importance and Partial Dependence Plot -- 4.1.1 Ranking features by Permutation Importance -- 4.1.2 Permutation Importance on the train set -- 4.1.3 Partial Dependence Plot -- 4.1.4 Properties of Explanations -- 4.2 Local Explanations: XAI with Shapley Additive explanations -- 4.2.1 Shapley Values: a game-theoretical approach -- 4.2.2 The first use of SHAP -- 4.2.3 Properties of Explanations -- 4.3 The road to KernelSHAP -- 4.3.1 The Shapley formula -- 4.3.2 How to calculate Shapley values -- 4.3.3 Local Linear Surrogate Models (LIME) -- 4.3.4 KernelSHAP is a unique form of LIME -- 4.4 Kernel SHAP and interactions -- 4.4.1 The New York Cab scenario -- 4.4.2 Train the Model with preliminary analysis -- 4.4.3 Making the model explainable with KernelShap -- 4.4.4 Interactions of features -- 4.5 A faster SHAP for boosted trees -- 4.5.1 Using TreeShap -- 4.5.2 Providing explanations -- 4.6 A naïve criticism of SHAP -- 4.7 Summary -- 4.8 References -- 5. Explaining Deep Learning Models -- 5.1 Agnostic Approach -- 5.1.1 Adversarial Features -- 5.1.2 Augmentations -- 5.1.3 Occlusions as augmentations -- 5.1.4 Occlusions as an Agnostic XAI Method -- 5.2 Neural Networks -- 5.2.1 The neural network structure -- 5.2.2 Why the neural network is Deep? (vs shallow) -- 5.2.3 Rectified activations (and Batch Normalization) -- 5.2.4 Saliency Maps -- 5.3 Opening Deep Networks -- 5.3.1 Different layer explanation -- 5.3.2 CAM (Class Activation Maps) and Grad-CAM -- 5.3.3 DeepShap / DeepLift -- 5.4 A critique of Saliency Methods -- 5.4.1 What the network sees -- 5.4.2 Explainability batch normalizing layer by layer -- 5.5 Unsupervised Methods -- 5.5.1 Unsupervised Dimensional Reduction -- 5.5.2 Dimensional reduction of convolutional filters -- 5.5.3 Activation Atlases: How to tell a wok from a pan -- 5.6 Summary -- 5.7 References -- 6. Making science with Machine Learning and XAI -- 6.1 Scientific method in the age of data -- 6.2 Ladder of Causation -- 6.3 Discovering physics concepts with ML and XAI -- 6.3.1 The magic of autoencoders -- 6.3.2 Discover the physics of damped pendulum with ML and XAI -- 6.3.3 Climbing the ladder of causation -- 6.4 Science in the age of ML and XAI -- 6.5 Summary -- 6.6 References -- 7. Adversarial Machine Learning and Explainability -- 7.1 Adversarial Examples (AE) crash course -- 7.1.2 Hands-on Adversarial Examples -- 7.2 Doing XAI with Adversarial Examples -- 7.3 Defending against Adversarial Attacks with XAI -- 7.4 Summary -- 7.5 References -- 8. A proposal for a sustainable model of Explainable AI -- 8.1 The XAI "fil rouge" -- 8.2 XAI and GDPR -- 8.2.1 FAST XAI -- 8.3 Conclusions -- 8.4 Summary -- 8.5 References -- Index.
520 ## - SUMMARY, ETC.
Summary, etc This book provides a full presentation of the current concepts and available techniques to make “machine learning” systems more explainable. The approaches presented can be applied to almost all current “machine learning” models: linear and logistic regression, deep learning neural networks, natural language processing and image recognition, among others. Progress in Machine Learning is increasing the use of artificial agents to perform critical tasks previously handled by humans (healthcare, legal and finance, among others). While the principles that guide the design of these agents are understood, most of the current deep-learning models are "opaque" to human understanding. Explainable AI with Python fills the current gap in the literature on this emerging topic by taking both a theoretical and a practical perspective, quickly enabling the reader to work with tools and code for Explainable AI.
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Artificial intelligence.
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Machine learning.
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Python (Computer program language).
650 14 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Artificial Intelligence.
650 24 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Machine Learning.
650 24 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Python.
700 1# - ADDED ENTRY--PERSONAL NAME
Personal name Di Cecco, Antonio.
Relator term author.
Relator code aut
-- http://id.loc.gov/vocabulary/relators/aut
710 2# - ADDED ENTRY--CORPORATE NAME
Corporate name or jurisdiction name as entry element SpringerLink (Online service)
773 0# - HOST ITEM ENTRY
Title Springer Nature eBook
776 08 - ADDITIONAL PHYSICAL FORM ENTRY
Display text Printed edition:
International Standard Book Number 9783030686390
776 08 - ADDITIONAL PHYSICAL FORM ENTRY
Display text Printed edition:
International Standard Book Number 9783030686413
856 40 - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier https://doi.org/10.1007/978-3-030-68640-6
912 ## -
-- ZDB-2-SCS
912 ## -
-- ZDB-2-SXCS
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Koha item type eBooks-CSE-Springer

