On the Adversarial Vulnerabilities of Transfer Learning in Remote Sensing
Tao Bai, Xingjian Tian, Yonghao Xu, and Bihan Wen
The use of pretrained models from general computer vision tasks is widespread in remote sensing, significantly reducing training costs and improving performance. However, this practice also introduces vulnerabilities to downstream tasks, where publicly available pretrained models can be used as a proxy to compromise downstream models. This paper presents a novel Adversarial Neuron Manipulation method, which generates transferable perturbations by selectively manipulating single or multiple neurons in a pretrained model. Unlike existing attacks, this method eliminates the need for domain-specific information, making it more broadly applicable and efficient. By targeting multiple vulnerable neurons, the perturbations achieve superior attack performance, revealing critical vulnerabilities in deep learning models. Experiments on diverse models and remote sensing datasets validate the effectiveness of the proposed method. This low-access adversarial neuron manipulation technique highlights a significant security risk in transfer learning models, emphasizing the urgent need for more robust defenses by design when addressing safety-critical remote sensing tasks.
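The threat model in the abstract can be pictured with a short sketch. The following is a minimal PyTorch illustration, not the paper's implementation: it assumes a PGD-style optimization that suppresses the activations of chosen channels ("neurons") in a publicly available surrogate backbone, so that the resulting perturbation transfers to downstream models fine-tuned from it. The layer choice (layer3 of a ResNet-50), the squared-activation objective, and all hyperparameters are illustrative assumptions.

```python
import torch
import torchvision.models as models

# Public pretrained backbone used as the surrogate.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)

activations = {}

def hook(module, inputs, output):
    activations["feat"] = output

# Hypothetical layer choice: any intermediate layer of the surrogate could be targeted.
model.layer3.register_forward_hook(hook)

def adversarial_neuron_manipulation(x, neuron_channels, eps=8/255, alpha=1/255, steps=40):
    """Craft an L_inf-bounded perturbation that suppresses the activations of
    selected feature channels ("neurons") in the surrogate model."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        model(x + delta)
        feat = activations["feat"]                 # shape (B, C, H, W)
        # Objective: total energy of the targeted (vulnerable) neurons.
        loss = feat[:, neuron_channels].pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()     # descend: drive activations toward zero
            delta.clamp_(-eps, eps)                # stay within the L_inf budget
            delta.add_(x).clamp_(0, 1).sub_(x)     # keep x + delta a valid image
        delta.grad.zero_()
    return (x + delta).detach()

# Usage (hypothetical values): x is a batch of remote sensing images in [0, 1];
# the channel indices stand in for neurons identified as vulnerable.
# x_adv = adversarial_neuron_manipulation(x, neuron_channels=[12, 87, 301])
```

Because the objective depends only on the surrogate's internal activations, no labels, downstream classifier, or domain-specific data are required, which is consistent with the low-access setting described in the abstract.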