Enhancing Security of Proof-of-Learning Against Spoofing Attacks Using Feature-Based Model Watermarking
The rapid advancement of machine learning (ML) technologies necessitates robust security frameworks to protect the integrity of ML model training processes. Proof-of-Learning (PoL) is a critical method for verifying the computational effort in training ML models, while model watermarking is a strategy for asserting model ownership. This research integrates PoL with feature-based model watermarking, embedding the watermark directly into the model’s features or parameters. This integration mitigates security risks associated with external key management and reduces computational overhead by eliminating the need for complex verification procedures. Our proposed dual-layered verification architecture embeds unique watermarks during the training phase. It records them alongside PoL proofs, enhancing security against sophisticated spoofing attacks where adversaries attempt to mimic a model’s computational trajectory and watermark. This approach addresses critical challenges, including maintaining watermark robustness and balancing security with model performance. Through a comprehensive analysis, we identify vulnerabilities in existing PoL systems and demonstrate how feature-based watermarking can enhance security. We present a secure PoL mechanism, supported by empirical validation, that significantly improves resilience to spoofing attacks. This advancement represents a crucial step towards securing ML models, paving the way for future research to protect diverse ML applications from various threats.
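The abstract describes the dual-layered mechanism only at a high level. The sketch below is a minimal, hypothetical illustration in Python/NumPy, not the authors' implementation: it assumes a feature-based watermark realized as a secret random projection of the parameters that is nudged toward a target sign pattern during a toy training loop, while each step's batch indices and parameter hash are logged as a PoL trace; verification then checks both layers. All names (`watermark_signature`, `train_with_pol_and_watermark`, the placeholder PoL check) are assumptions for illustration only.

```python
# Conceptual sketch only (not the authors' implementation): embed a
# feature-based watermark and record it alongside Proof-of-Learning (PoL) checkpoints.
import hashlib
import numpy as np

rng = np.random.default_rng(0)

def watermark_signature(params: np.ndarray, secret_proj: np.ndarray) -> np.ndarray:
    """Feature-based watermark: sign pattern of a secret projection of the parameters."""
    return np.sign(secret_proj @ params)

def train_with_pol_and_watermark(steps=50, dim=32, wm_bits=8, lam=0.1):
    """Toy training loop that embeds the watermark and logs a PoL trace."""
    params = rng.normal(size=dim)
    secret_proj = rng.normal(size=(wm_bits, dim))    # watermark key, kept by the owner
    target_bits = np.sign(rng.normal(size=wm_bits))  # desired watermark sign pattern
    proof = []                                       # PoL trace: batch indices + parameter hashes
    for t in range(steps):
        batch_idx = rng.integers(0, 1000, size=16)   # stand-in for real data batch indices
        task_grad = params                           # gradient of a toy task loss 0.5*||params||^2
        # watermark regularizer: pull the secret projection toward the target pattern
        wm_grad = lam * secret_proj.T @ (secret_proj @ params - target_bits)
        params = params - 0.05 * (task_grad + wm_grad)
        proof.append({
            "step": t,
            "batch": batch_idx.tolist(),
            "param_hash": hashlib.sha256(params.tobytes()).hexdigest(),
        })
    return params, secret_proj, target_bits, proof

def verify(params, secret_proj, target_bits, proof) -> bool:
    """Dual-layered check: PoL trace well-formedness (placeholder) AND watermark match."""
    pol_ok = all("param_hash" in rec and "batch" in rec for rec in proof)
    wm_ok = np.array_equal(watermark_signature(params, secret_proj), target_bits)
    return bool(pol_ok and wm_ok)

params, key, bits, proof = train_with_pol_and_watermark()
print("dual-layered verification passed:", verify(params, key, bits, proof))
```

In the scheme the abstract describes, the PoL side of verification would replay or spot-check the recorded training trajectory rather than merely inspect the log; the placeholder check above only marks where that step would sit.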
| Main Authors: | Ozgur Ural, Kenji Yoshigoe |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | Proof-of-learning; model watermarking; machine learning security; spoofing attack countermeasures; dual-layered verification; model authenticity |
| Online Access: | https://ieeexplore.ieee.org/document/10741282/ |
| Field | Value |
|---|---|
| author | Ozgur Ural; Kenji Yoshigoe |
| collection | DOAJ |
| description | The rapid advancement of machine learning (ML) technologies necessitates robust security frameworks to protect the integrity of ML model training processes. Proof-of-Learning (PoL) is a critical method for verifying the computational effort in training ML models, while model watermarking is a strategy for asserting model ownership. This research integrates PoL with feature-based model watermarking, embedding the watermark directly into the model’s features or parameters. This integration mitigates security risks associated with external key management and reduces computational overhead by eliminating the need for complex verification procedures. Our proposed dual-layered verification architecture embeds unique watermarks during the training phase. It records them alongside PoL proofs, enhancing security against sophisticated spoofing attacks where adversaries attempt to mimic a model’s computational trajectory and watermark. This approach addresses critical challenges, including maintaining watermark robustness and balancing security with model performance. Through a comprehensive analysis, we identify vulnerabilities in existing PoL systems and demonstrate how feature-based watermarking can enhance security. We present a secure PoL mechanism, supported by empirical validation, that significantly improves resilience to spoofing attacks. This advancement represents a crucial step towards securing ML models, paving the way for future research to protect diverse ML applications from various threats. |
| format | Article |
| id | doaj-art-e31b03075d234cd98812c6566153f07f |
| institution | OA Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2024-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| doi | 10.1109/ACCESS.2024.3489776 |
| volume | 12 |
| pages | 169567-169591 |
| author_orcid | Ozgur Ural: https://orcid.org/0000-0003-1329-4303; Kenji Yoshigoe: https://orcid.org/0000-0001-6040-4742 |
| author_affiliation | Department of Electrical Engineering and Computer Science, Embry-Riddle Aeronautical University, Daytona Beach, FL, USA (both authors) |
| title | Enhancing Security of Proof-of-Learning Against Spoofing Attacks Using Feature-Based Model Watermarking |
| topic | Proof-of-learning; model watermarking; machine learning security; spoofing attack countermeasures; dual-layered verification; model authenticity |
| url | https://ieeexplore.ieee.org/document/10741282/ |