Showing 1 - 20 of 82 results for search 'general adversarial attacks'
  1.

    G&G Attack: General and Geometry-Aware Adversarial Attack on the Point Cloud by Geng Chen, Zhiwen Zhang, Yuanxi Peng, Chunchao Li, Teng Li

    Published 2025-01-01
    “…Current methods suffer from issues such as generated point outliers and poor attack generalization. Consequently, it is not feasible to rely solely on overall or geometry-aware attacks to generate adversarial samples. …”
    Article
  2.

    Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks by Maryam Abbasi, Paulo Váz, José Silva, Pedro Martins

    Published 2025-01-01
    “…Existing detection methods face challenges with generalization across datasets and vulnerability to adversarial attacks. …”
    Article
  6.

    DOG: An Object Detection Adversarial Attack Method by Jinpeng Li, Xiaoyu Ji, Wenyuan Xu, Yushi Cheng

    Published 2025-01-01
    “…This study presents an object detection adversarial attack method (DOG) based on the dynamic optimization of a multi-scale feature grid cluster, aimed at addressing the challenges of poor transferability in white-box attacks and long generation cycles in black-box attacks within the current adversarial example generation techniques. …”
    Article
  7.

    A Survey on Adversarial Attacks for Malware Analysis by Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam, Pradip Kunwar, Bhavani Thuraisingham

    Published 2025-01-01
    “…We identify open problems and propose future research directions for developing more practical, robust, efficient, and generalized adversarial attacks on ML-based malware classifiers.…”
    Article
  8.

    Attacker Attribution in Multi-Step and Multi-Adversarial Network Attacks Using Transformer-Based Approach by Romina Torres, Ana García

    Published 2025-07-01
    “…Recent studies on network intrusion detection using deep learning primarily focus on detecting attacks or classifying attack types, but they often overlook the challenge of attributing each attack to its specific source among many potential adversaries (multi-adversary attribution). …”
    Article
  10.

    Stealthy Adversarial Attacks on Machine Learning-Based Classifiers of Wireless Signals by Wenhan Zhang, Marwan Krunz, Gregory Ditzler

    Published 2024-01-01
    “…Although highly accurate classifiers have been developed, research shows that these classifiers are, in general, vulnerable to adversarial machine learning (AML) attacks. …”
    Article
  11.

    On the Validity of Traditional Vulnerability Scoring Systems for Adversarial Attacks Against LLMs by Atmane Ayoub Mansour Bahar, Ahmad Samer Wazan

    Published 2025-01-01
    “…This research investigates the effectiveness of established vulnerability metrics, such as the Common Vulnerability Scoring System (CVSS), in evaluating attacks on Large Language Models (LLMs), with a focus on Adversarial Attacks (AAs). …”
    Article
  13.

    Randomized Purifier Based on Low Adversarial Transferability for Adversarial Defense by Sangjin Park, Yoojin Jung, Byung Cheol Song

    Published 2024-01-01
    “…Deep neural networks are generally very vulnerable to adversarial attacks. In order to defend against adversarial attacks in classifiers, Adversarial Purification (AP) was developed to neutralize adversarial perturbations using a generative model at the input stage. …”
    Article
  14.

    A CGAN-based adversarial attack method for data-driven state estimation by Qi Wang, Jing Zhang, Jianxiong Hu, Shutan Wu, Shiyi Hou, Yi Tang

    Published 2025-09-01
    “…Their sensitivity to small perturbations and limited generalization ability make them vulnerable to adversarial attacks. …”
    Article
  16.

    Ctta: a novel chain-of-thought transfer adversarial attacks framework for large language models by Xinxin Yue, Zhiyong Zhang, Junchang Jing, Weiguo Wang

    Published 2025-06-01
    “…However, this capability also introduces the potential for more covert and effective adversarial attack methods. This paper proposes a CoT Transfer Adversarial attack framework (CTTA) for general LLMs. …”
    Article
  17.

    Adversarial Drift Detection in Intrusion Detection System by Yaguan Qian, Xiaohui Guan

    Published 2015-03-01
    “…Recent intrusion detection systems based on machine learning generally assume that intrusion traffic is statistically stationary. However, this assumption does not hold when adversaries arbitrarily alter the distribution of traffic data or develop new attack techniques, which may reduce the detection rate. To overcome this adversarial drift, a novel drift detection approach based on weighted Rényi distance was proposed. Experiments on KDD Cup99 show that the weighted Rényi distance reliably detects adversarial drift and improves the intrusion detection rate by retraining the model.…”
    Article
  18.

    Adversarial sample generation algorithm for vertical federated learning by Xiaolin CHEN, Daoguang ZAN, Bingchao WU, Bei GUAN, Yongji WANG

    Published 2023-08-01
    “…To adapt to the characteristics of vertical federated learning (VFL) applications, namely high communication cost, fast model iteration, and decentralized data storage, a generalized adversarial sample generation algorithm named VFL-GASG was proposed. Specifically, an adversarial sample generation framework was constructed for the VFL architecture, and white-box adversarial attacks in VFL were implemented by extending centralized machine learning adversarial sample generation algorithms with different policies such as L-BFGS, FGSM, and C&W. By introducing a deep convolutional generative adversarial network (DCGAN), VFL-GASG addresses the universality problem in generating adversarial perturbations: hidden-layer vectors are used as local prior knowledge to train the perturbation generation model, and a series of convolution-deconvolution network layers produces finely crafted adversarial perturbations. Experiments show that VFL-GASG maintains a high attack success rate while achieving higher generation efficiency, robustness, and generalization ability than the baseline algorithms, and further verify the impact of relevant settings on adversarial attacks.…”
    Article
  19.

    TACO: Adversarial Camouflage Optimization on Trucks to Fool Object Detectors by Adonisz Dimitriu, Tamás Vilmos Michaletzky, Viktor Remeli

    Published 2025-03-01
    “…Adversarial attacks threaten the reliability of machine learning models in critical applications like autonomous vehicles and defense systems. …”
    Article
  20.

    Synthetic EMG Based on Adversarial Style Transfer Can Effectively Attack Biometric-Based Personal Identification Models by Peiqi Kang, Shuo Jiang, Peter B. Shull

    Published 2023-01-01
    “…We investigate the possibility of effectively attacking EMG-based identification models with adversarial biological input via a novel EMG signal individual-style transformer based on a generative adversarial network and tiny leaked data segments. …”
    Article