000 04326nam a22005775i 4500
001 978-981-19-7554-7
003 DE-He213
005 20240423125516.0
007 cr nn 008mamaa
008 230529s2023 si | s |||| 0|eng d
020 _a9789811975547
_9978-981-19-7554-7
024 7 _a10.1007/978-981-19-7554-7
_2doi
050 4 _aQ325.5-.7
072 7 _aUYQM
_2bicssc
072 7 _aMAT029000
_2bisacsh
072 7 _aUYQM
_2thema
082 0 4 _a006.31
_223
245 1 0 _aDigital Watermarking for Machine Learning Model
_h[electronic resource] :
_bTechniques, Protocols and Applications /
_cedited by Lixin Fan, Chee Seng Chan, Qiang Yang.
250 _a1st ed. 2023.
264 1 _aSingapore :
_bSpringer Nature Singapore :
_bImprint: Springer,
_c2023.
300 _aXVI, 225 p. 1 illus.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
505 0 _aPart I. Preliminary -- Chapter 1. Introduction -- Chapter 2. Ownership Verification Protocols for Deep Neural Network Watermarks -- Part II. Techniques -- Chapter 3. Model Watermarking for Image Recovery DNNs -- Chapter 4. The Robust and Harmless Model Watermarking -- Chapter 5. Protecting Intellectual Property of Machine Learning Models via Fingerprinting the Classification Boundary -- Chapter 6. Protecting Image Processing Networks via Model Watermarking -- Chapter 7. Watermarks for Deep Reinforcement Learning -- Chapter 8. Ownership Protection for Image Captioning Models -- Chapter 9. Protecting Recurrent Neural Network by Embedding Keys -- Part III. Applications -- Chapter 10. FedIPR: Ownership Verification for Federated Deep Neural Network Models -- Chapter 11. Model Auditing for Data Intellectual Property.
520 _aMachine learning (ML) models, especially large pretrained deep learning (DL) models, are of high economic value and must be properly protected with regard to intellectual property rights (IPR). Model watermarking methods embed watermarks into a target model so that, in the event the model is stolen, its owner can extract the pre-defined watermarks to assert ownership. These methods adopt frequently used techniques such as backdoor training, multi-task learning and decision boundary analysis to generate secret conditions, known only to the model owner, that constitute the model watermarks or fingerprints. They have little or no effect on model performance, which makes them applicable in a wide variety of contexts. In terms of robustness, embedded watermarks must remain reliably detectable under varying adversarial attacks that attempt to remove them. The efficacy of model watermarking is showcased in diverse applications including image classification, image generation, image captioning, natural language processing and reinforcement learning. This book covers the motivations, fundamentals, techniques and protocols for protecting ML models using watermarking. Furthermore, it showcases cutting-edge work such as model watermarking and signature and passport embedding, together with their use cases in distributed federated learning settings.
650 0 _aMachine learning.
650 0 _aData protection.
650 0 _aImage processing
_xDigital techniques.
650 0 _aComputer vision.
650 0 _aImage processing.
650 1 4 _aMachine Learning.
650 2 4 _aData and Information Security.
650 2 4 _aComputer Imaging, Vision, Pattern Recognition and Graphics.
650 2 4 _aImage Processing.
700 1 _aFan, Lixin.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
700 1 _aChan, Chee Seng.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
700 1 _aYang, Qiang.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
710 2 _aSpringerLink (Online service)
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9789811975530
776 0 8 _iPrinted edition:
_z9789811975554
776 0 8 _iPrinted edition:
_z9789811975561
856 4 0 _uhttps://doi.org/10.1007/978-981-19-7554-7
912 _aZDB-2-SCS
912 _aZDB-2-SXCS
942 _cSPRINGER
999 _c178716
_d178716