In recent years, deep learning models have been widely used for many purposes, including object recognition, self-driving cars, face recognition, speech recognition, sentiment analysis, and more. However, recent research has shown that these models are vulnerable to noise, which can force them to misclassify inputs. This issue has been studied in depth in the image and audio domains, but very little work has addressed it for textual data, and even fewer surveys exist that map out the different types of attack and defense techniques. In this manuscript we collect and analyze the various attack techniques and defense models, providing a more comprehensive picture of how to overcome this issue. We then point out some interesting findings across the surveyed papers, along with the challenges that must be overcome to make progress in this field.
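To make the idea concrete, here is a minimal sketch (a hypothetical illustration, not taken from the paper) of a word-level textual adversarial attack: a toy keyword-based sentiment classifier is fooled by swapping a sentiment-bearing word for a near-synonym that is outside its vocabulary, so the meaning is preserved for a human reader while the prediction flips.

```python
# Toy keyword vocabularies for a trivial sentiment classifier.
POSITIVE = {"great", "excellent", "good"}
NEGATIVE = {"terrible", "awful", "bad"}

# Hypothetical synonym table used to craft the perturbation:
# each replacement is out-of-vocabulary for the classifier.
SYNONYMS = {"great": "stellar", "terrible": "dreadful"}

def classify(text: str) -> str:
    """Toy classifier: counts known positive vs. negative keywords."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative"

def attack(text: str) -> str:
    """Word-substitution attack: swap each known sentiment keyword
    for an out-of-vocabulary synonym, leaving other words untouched."""
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

original = "the movie was great"
adversarial = attack(original)
print(classify(original), "->", classify(adversarial))  # positive -> negative
```

Real attacks surveyed in the paper operate on neural models rather than keyword lists, but the principle is the same: small, meaning-preserving edits to the input text that change the model's output.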
Original title: Adversarial Attacks and Defense on Textual Data: A Review
Original abstract: Deep learning models have been used widely for various purposes in recent years in object recognition, self-driving cars, face recognition, speech recognition, sentiment analysis and many others. However, in recent years it has been shown that these models possess weakness to noises which forces the model to misclassify. This issue has been studied profoundly in image and audio domain. Very little has been studied on this issue with respect to textual data. Even less survey on this topic has been performed to understand different types of attacks and defense techniques. In this manuscript we accumulated and analyzed different attacking techniques, various defense models on how to overcome this issue in order to provide a more comprehensive idea. Later we point out some of the interesting findings of all papers and challenges that need to be overcome in order to move forward in this field.
Original authors: Aminul Huq, Mst. Tasnim Pervin
Original URL: https://arxiv.org/abs/2005.14108
Adversarial Attacks and Defense on Textual Data: A Review (cs.CL).pdf (from the Tencent Cloud community, translated by 蔡秋纯)