Supervised Machine Learning with Plausible Deniability. Issue 112 (January 2022)
- Record Type:
- Journal Article
- Title:
- Supervised Machine Learning with Plausible Deniability. Issue 112 (January 2022)
- Main Title:
- Supervised Machine Learning with Plausible Deniability
- Authors:
- Rass, Stefan
König, Sandra
Wachter, Jasmin
Egger, Manuel
Hobisch, Manuel
- Abstract:
- Highlights:
  - Introduces the new concepts of "deniability" and "plausible deniability" in the context of machine learning, as a notion of security in this setting
  - Formally proves conditions under which plausible deniability applies (Theorems 1 and 2)
  - Numerically evaluates the concepts, showcasing them on several machine learning models
  - Discusses, at an extended level, other machine learning models to which the results do not apply, to delineate the scope of the work
  - Derives implications for de facto standard methods of training machine learning models, culminating in recommendations on how to train models in practice so as to avoid the security concerns connected to plausible deniability

  Abstract: We study the question of how well machine learning (ML) models trained on a certain data set provide privacy for the training data or, equivalently, whether it is possible to reverse-engineer the training data from a given ML model. While this is easy to answer negatively in the most general case, it is interesting to note that the protection extends beyond non-recoverability towards plausible deniability: given an ML model f, we show that one can take a set of purely random training data and from it define a suitable "learning rule" that will produce an ML model that is exactly f. Thus, any speculation about which data has been used to train f is deniable upon the claim that any other data could have led to the same results. We corroborate our theoretical finding with practical examples and open-source implementations of how to find the learning rules for a chosen set of training data.
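The idea summarized in the abstract can be illustrated with a deliberately trivial sketch (this is not the paper's actual construction, which is governed by Theorems 1 and 2): publish a model f trained on real data, then exhibit a "learning rule" tailored to f that accepts any data set, including purely random one, yet outputs exactly f. All names below are illustrative, not from the paper.

```python
import random

def train_linear(data):
    """Ordinary least-squares fit for 1-D points; the 'real' training step."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data)
    sxy = sum((x - mx) * (y - my) for x, y in data)
    slope = sxy / sxx
    return (slope, my - slope * mx)

# The model f that is published, trained on the "real" data.
real_data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
f = train_linear(real_data)

def deniable_learning_rule(data, target=f):
    """A learning rule constructed around f: it ignores its input data
    entirely and always reproduces the published model."""
    _ = data
    return target

# Purely random "training" data still yields exactly f under this rule,
# so no observer of f alone can tell which data set was actually used.
random.seed(0)
fake_data = [(random.random(), random.random()) for _ in range(4)]
assert deniable_learning_rule(fake_data) == f
```

The point of the sketch is only that a model by itself does not pin down its training data once the learning rule is allowed to vary; the paper's contribution is characterizing when such rules exist within meaningful families of learning procedures.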
- Is Part Of:
- Computers & security. Issue 112 (2022)
- Journal:
- Computers & security
- Issue:
- Issue 112 (2022)
- Issue Display:
- Volume 112, Issue 112 (2022)
- Year:
- 2022
- Volume:
- 112
- Issue:
- 112
- Issue Sort Value:
- 2022-0112-0112-0000
- Page Start:
- Page End:
- Publication Date:
- 2022-01
- Subjects:
- Machine Learning -- Plausible Deniability -- Privacy -- Data Protection -- Artificial Intelligence
Computer security -- Periodicals
Electronic data processing departments -- Security measures -- Periodicals
005.805
- Journal URLs:
- http://www.sciencedirect.com/science/journal/01674048
  http://www.elsevier.com/journals
- DOI:
- 10.1016/j.cose.2021.102506
- Languages:
- English
- ISSNs:
- 0167-4048
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - 3394.781000
British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 20063.xml