| Tag | Ind1 | Ind2 | Subfields |
|-----|------|------|-----------|
| 000 |  |  | 03741nam a22005655i 4500 |
| 001 |  |  | 978-981-19-6814-3 |
| 003 |  |  | DE-He213 |
| 005 |  |  | 20240423125019.0 |
| 007 |  |  | cr nn 008mamaa |
| 008 |  |  | 230428s2023 si \| s \|\|\|\| 0\|eng d |
| 020 |  |  | _a9789811968143 _9978-981-19-6814-3 |
| 024 | 7 |  | _a10.1007/978-981-19-6814-3 _2doi |
| 050 |  | 4 | _aQ325.5-.7 |
| 072 | 7 |  | _aUYQM _2bicssc |
| 072 | 7 |  | _aMAT029000 _2bisacsh |
| 072 | 7 |  | _aUYQM _2thema |
| 082 | 0 | 4 | _a006.31 _223 |
| 100 | 1 |  | _aHuang, Xiaowei. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut |
| 245 | 1 | 0 | _aMachine Learning Safety _h[electronic resource] / _cby Xiaowei Huang, Gaojie Jin, Wenjie Ruan. |
| 250 |  |  | _a1st ed. 2023. |
| 264 |  | 1 | _aSingapore : _bSpringer Nature Singapore : _bImprint: Springer, _c2023. |
| 300 |  |  | _aXVII, 321 p. 1 illus. _bonline resource. |
| 336 |  |  | _atext _btxt _2rdacontent |
| 337 |  |  | _acomputer _bc _2rdamedia |
| 338 |  |  | _aonline resource _bcr _2rdacarrier |
| 347 |  |  | _atext file _bPDF _2rda |
| 490 | 1 |  | _aArtificial Intelligence: Foundations, Theory, and Algorithms, _x2365-306X |
| 505 | 0 |  | _a1. Introduction -- 2. Safety of Simple Machine Learning Models -- 3. Safety of Deep Learning -- 4. Robustness Verification of Deep Learning -- 5. Enhancement to Robustness and Generalization -- 6. Probabilistic Graph Model -- A. Mathematical Foundations -- B. Competitions. |
| 520 |  |  | _aMachine learning algorithms allow computers to learn without being explicitly programmed. Their application is now spreading to highly sophisticated tasks across multiple domains, such as medical diagnostics or fully autonomous vehicles. While this development holds great potential, it also raises new safety concerns, as machine learning has many specificities that make its behaviour prediction and assessment very different from that for explicitly programmed software systems. This book addresses the main safety concerns with regard to machine learning, including its susceptibility to environmental noise and adversarial attacks. Such vulnerabilities have become a major roadblock to the deployment of machine learning in safety-critical applications. The book presents up-to-date techniques for adversarial attacks, which are used to assess the vulnerabilities of machine learning models; formal verification, which is used to determine if a trained machine learning model is free of vulnerabilities; and adversarial training, which is used to enhance the training process and reduce vulnerabilities. The book aims to improve readers’ awareness of the potential safety issues regarding machine learning models. In addition, it includes up-to-date techniques for dealing with these issues, equipping readers with not only technical knowledge but also hands-on practical skills. |
| 650 |  | 0 | _aMachine learning. |
| 650 |  | 0 | _aData protection. |
| 650 |  | 0 | _aArtificial intelligence. |
| 650 | 1 | 4 | _aMachine Learning. |
| 650 | 2 | 4 | _aData and Information Security. |
| 650 | 2 | 4 | _aArtificial Intelligence. |
| 700 | 1 |  | _aJin, Gaojie. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut |
| 700 | 1 |  | _aRuan, Wenjie. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut |
| 710 | 2 |  | _aSpringerLink (Online service) |
| 773 | 0 |  | _tSpringer Nature eBook |
| 776 | 0 | 8 | _iPrinted edition: _z9789811968136 |
| 776 | 0 | 8 | _iPrinted edition: _z9789811968150 |
| 776 | 0 | 8 | _iPrinted edition: _z9789811968167 |
| 830 |  | 0 | _aArtificial Intelligence: Foundations, Theory, and Algorithms, _x2365-306X |
| 856 | 4 | 0 | _uhttps://doi.org/10.1007/978-981-19-6814-3 |
| 912 |  |  | _aZDB-2-SCS |
| 912 |  |  | _aZDB-2-SXCS |
| 942 |  |  | _cSPRINGER |
| 999 |  |  | _c173242 _d173242 |