The Problem of the Adversarial Examples in Deep Learning
Abstract: Adversarial examples are a central issue in deep learning security; their characteristics, generation methods, and attack modes are the key problems to be solved. This paper surveys the key technical questions surrounding adversarial examples: the concept of adversarial examples, their causes, the ways they are generated, and how and why they succeed as attacks. The discussion of the concept covers the definition of adversarial examples, the attacker's target, and the attacker's knowledge of the model. On the causes, three main viewpoints are summarized: the low-probability-region interpretation on the data manifold, the linearity interpretation, and a third position holding that the linearity interpretation has limitations, i.e., that the current conjectures are not yet convincing; further research on the causes is an important topic for future work. The main generation methods analyzed are the L-BFGS method, the fast gradient sign method (FGSM), the basic iterative method, the iterative least-likely-class method, and others; their advantages, disadvantages, and applicable scenarios are pointed out, and the differences among these methods are compared. From the viewpoint of application scenarios, attacks fall into two kinds: white-box attacks and black-box attacks. The transferability of adversarial examples is what makes such attacks possible: an attacker can cause a machine learning model to misclassify examples without direct access to the underlying model. Based on the generation methods and causes, the main defensive techniques are then elaborated, including regularization, adversarial training, defensive distillation, rejection (refusing to classify suspicious inputs), and other methods; the applicable scenarios and shortcomings of each defense are pointed out, and it is explained that none of these defenses can completely prevent attacks based on adversarial examples. The applications of adversarial examples are then discussed; so far they have mainly been used for adversarial evaluation and adversarial training. Finally, future research directions are surveyed. Many theoretical and practical problems remain before adversarial attacks can be solved thoroughly: characterizing adversarial examples, formulating a mathematical description suited to practical applications, finding a universal method for generating adversarial examples, and investigating their generation mechanism and attack modes are the key problems for future study, with the main goal of developing defense algorithms against different kinds of adversarial attacks. Combining these two lines of work to counter adversarial attacks is the main research direction going forward.
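As a concrete illustration of the generation methods surveyed above, the following is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch; the model, the epsilon budget of 0.03, and the [0, 1] input range are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # One-step gradient-sign attack: the linearity interpretation in action.
    # Take a single step of size epsilon in the direction that increases
    # the loss of the true class y. (epsilon and the input range are
    # illustrative assumptions.)
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clip back to a valid input range, assumed here to be [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

The basic iterative method mentioned above repeats this step with a smaller step size, clipping back into an epsilon-ball around the original input; the iterative least-likely-class variant instead descends the loss of the model's least-likely predicted class. Adversarial training, one of the defenses surveyed, simply mixes examples such as `fgsm_attack(model, x, y)` back into the training batches.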
© 2019, Science Press. All rights reserved.
ISSN: 0254-4164
Volume/Issue/Pages: Vol. 42, No. 8, pp. 1886-1904
Publication date: 2019-08-01
Journal tier (CAS ranking for SCI journals): none
Indexed in: EI (Engineering Index)
Journal: Jisuanji Xuebao/Chinese Journal of Computers
Corresponding author: 张思思
First authors: 左信, 刘建伟
Paper type: journal article
Citation: 张思思, 左信, 刘建伟, The Problem of the Adversarial Examples in Deep Learning, Jisuanji Xuebao/Chinese Journal of Computers, 2019, Vol. 42, No. 8, pp. 1886-1904
Title: The Problem of the Adversarial Examples in Deep Learning