Attentive multi-view deep subspace clustering net
Abstract: In this paper, we propose a novel Attentive Multi-View Deep Subspace Nets (AMVDSN), which deeply explores the underlying consistent and view-specific information from multiple views and fuses them by considering each view's dynamic contribution obtained by an attention mechanism. Unlike most multi-view subspace learning methods, which directly reconstruct data points from raw data or consider only consistency or complementarity when learning representations in deep or shallow space, our proposed method seeks a joint latent representation that explicitly considers both consensus and view-specific information among multiple views, and then performs subspace clustering on the learned joint latent representation. Besides, since different views contribute differently to representation learning, we introduce an attention mechanism to derive a dynamic weight for each view, which performs much better than previous fusion methods in the field of multi-view subspace clustering. The proposed algorithm is intuitive and can be easily optimized using Stochastic Gradient Descent (SGD) thanks to the neural network framework, which also provides strong non-linear characterization capability compared with traditional subspace clustering approaches. Experimental results on seven real-world data sets demonstrate the effectiveness of our proposed algorithm against state-of-the-art subspace learning approaches. (c) 2021 Elsevier B.V. All rights reserved.
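The two core ideas in the abstract — attention-derived dynamic weights for fusing per-view latent representations, and self-expressive subspace clustering on the fused representation — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the attention logits, and the ridge-regularized closed form for the self-expression coefficients are all assumptions made for clarity (in AMVDSN these parts are learned end-to-end inside a neural network via SGD).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attentive_fusion(latents, scores):
    """Fuse V view-specific latent matrices (n_samples x d) into a joint
    representation using softmax attention weights (one scalar per view)."""
    weights = softmax(scores)                 # dynamic view weights, sum to 1
    joint = sum(w * z for w, z in zip(weights, latents))
    return joint, weights

def self_expressive_coeffs(h, lam=0.1):
    """Self-expression on the joint representation: min ||H - C H||^2 + lam ||C||^2.
    Ridge closed form: C = H H^T (H H^T + lam I)^{-1}; the zero diagonal is
    enforced afterwards as a simplification (a learned layer would constrain it)."""
    g = h @ h.T
    c = g @ np.linalg.inv(g + lam * np.eye(g.shape[0]))
    np.fill_diagonal(c, 0.0)                  # forbid trivial self-reconstruction
    return c

# Toy example: 3 views, 10 samples, 4-dim latent codes (random stand-ins
# for the encoder outputs the paper would learn).
latents = [rng.standard_normal((10, 4)) for _ in range(3)]
scores = np.array([0.5, 1.0, -0.2])           # attention logits (learned in practice)
h, w = attentive_fusion(latents, scores)
c = self_expressive_coeffs(h)
affinity = 0.5 * (np.abs(c) + np.abs(c.T))    # symmetric affinity for spectral clustering
```

The symmetric `affinity` matrix would then be handed to spectral clustering to obtain the final cluster assignments, which is the standard last step in self-expressive subspace clustering pipelines.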
Keywords: Multi-view learning; Subspace clustering; Deep learning; Attention
ISSN: 0925-2312
Volume/Pages: Vol. 435, pp. 186-196
Publication date: 2021-05-07
Journal tier (CAS ranking for SCI journals): Tier 2
Indexing: SCIE (Science Citation Index Expanded), EI (Engineering Index)
Journal: Neurocomputing
Corresponding author: 卢润坤
First authors: 刘建伟, 左信
Article type: Journal article
Citation: 卢润坤, 刘建伟, 左信, Attentive multi-view deep subspace clustering net, Neurocomputing, 2021, Vol. 435, pp. 186-196
Title: Attentive multi-view deep subspace clustering net