- Session 1: Deep Learning -- Day 2 (Nov.18), talks: 09:00-11:00 (5th floor Hall 1), poster session: 11:00-13:30
- Poster number: Mon06
Xing Wu (Sun Yat-sen University); Lifeng Huang (Sun Yat-sen University); Chengying Gao (Sun Yat-sen University)
Adversarial perturbation constructions have been demonstrated for object detection, but these are image-specific perturbations. Recent works have shown the existence of image-agnostic perturbations, called universal adversarial perturbations (UAPs), that can fool classifiers over a set of natural images. In this paper, we extend this kind of perturbation to attack deep proposal-based object detectors. We present a novel and effective approach called G-UAP for crafting universal adversarial perturbations that can explicitly degrade the detection accuracy of a detector on a wide range of image samples. Our method directly misleads the Region Proposal Network (RPN) of the detectors into mistaking foreground (objects) for background, without specifying an adversarial label for each target (RPN proposal), and even without considering how many objects and object-like targets are in the image. Experimental results over three state-of-the-art detectors and two datasets demonstrate the effectiveness of the proposed method and the transferability of the universal perturbations.
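To illustrate the core idea of crafting a single image-agnostic perturbation that suppresses foreground scores, here is a minimal, hypothetical sketch in NumPy. The "RPN objectness head" is replaced by a toy sigmoid-of-linear scorer (`foreground_prob` and the weight vector `w` are invented for illustration), and the perturbation is found by plain gradient descent under an l-infinity constraint; this is not the authors' G-UAP implementation or the detectors attacked in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64

# Hypothetical stand-in for an RPN objectness head: sigmoid of a linear
# score, where p > 0.5 means "foreground". Purely illustrative.
w = rng.normal(size=D)
w /= np.linalg.norm(w)

def foreground_prob(x, delta):
    return 1.0 / (1.0 + np.exp(-(x + delta) @ w))

# A small batch of toy "images", biased so the head calls them foreground.
images = rng.normal(size=(20, D)) + 1.5 * w

eps = 1.0              # l_inf budget for the shared perturbation
lr = 1.0
delta = np.zeros(D)    # one universal perturbation for the whole set

for _ in range(300):
    # Gradient of the mean foreground probability w.r.t. the shared delta:
    # for a sigmoid-of-linear head this is mean(p * (1 - p)) * w.
    p = foreground_prob(images, delta)
    grad = (p * (1.0 - p)).mean() * w
    delta -= lr * grad                 # push objects toward "background"
    delta = np.clip(delta, -eps, eps)  # project back onto the l_inf ball

before = foreground_prob(images, 0.0)
after = foreground_prob(images, delta)
print(f"mean foreground prob: {before.mean():.2f} -> {after.mean():.2f}")
```

The same perturbation `delta` is applied to every image, which is what makes it "universal": it is optimized against the whole set at once rather than per image, and the clipping step keeps it small in the l-infinity sense.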