Abstract:
The recent increase in asynchronous event sequence data across a diversity of fields has led researchers to pay more attention to mining knowledge from such data. In the initial research phase, researchers tended to rely on basic mathematical point process models, such as the Poisson process and the Hawkes process. In recent years, recurrent neural network (RNN)-based point process models have been proposed and have significantly improved model performance, yet they still struggle to describe long-term relations between events. To address this issue, the Transformer Hawkes process was proposed. However, it is worth noting that a transformer with a fixed stack of distinct layers fails to implement parallel processing, recursive learning, and the abstraction of locally salient properties, even though these capabilities may be very important. To make up for this shortcoming, we present the Universal Transformer Hawkes Process (UTHP), which introduces a recurrent structure into the encoding process and a convolutional neural network (CNN) into the position-wise feed-forward network. Experiments on several datasets show that the performance of our model improves on that of the state of the art.
© 2021 IEEE.
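For orientation, the following is a minimal, self-contained sketch (in PyTorch, which the record does not specify) of the two ideas the abstract names: a single encoder layer whose parameters are reused recursively over depth (the "universal" recurrence), and a 1-D convolution inserted into the position-wise feed-forward sublayer. All class names, hyperparameters, and the fixed number of recurrence steps are illustrative assumptions; the paper's actual architecture, temporal encoding, and intensity-function head are not reproduced here.

import torch
import torch.nn as nn

class ConvFeedForward(nn.Module):
    """Position-wise feed-forward sublayer with a 1-D convolution over the
    time axis, sketching the CNN insertion described in the abstract."""
    def __init__(self, d_model, d_hidden, kernel_size=3):
        super().__init__()
        # Same-length convolution over the sequence dimension (assumption).
        self.conv = nn.Conv1d(d_model, d_hidden, kernel_size,
                              padding=kernel_size // 2)
        self.proj = nn.Linear(d_hidden, d_model)

    def forward(self, x):                          # x: (batch, seq_len, d_model)
        h = self.conv(x.transpose(1, 2)).relu()    # convolve over time steps
        return self.proj(h.transpose(1, 2))        # back to (batch, seq_len, d_model)

class UniversalEncoder(nn.Module):
    """One shared encoder layer applied recursively for n_steps: the
    recurrent-in-depth ("universal") structure named in the abstract."""
    def __init__(self, d_model=64, n_heads=4, d_hidden=256, n_steps=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = ConvFeedForward(d_model, d_hidden)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.n_steps = n_steps                     # fixed depth, an assumption

    def forward(self, x, attn_mask=None):
        # Unlike a fixed stack of distinct layers, the SAME parameters
        # are reused at every depth step.
        for _ in range(self.n_steps):
            a, _ = self.attn(x, x, x, attn_mask=attn_mask)
            x = self.norm1(x + a)
            x = self.norm2(x + self.ffn(x))
        return x

# Toy usage: encode a batch of 2 event sequences of length 10.
enc = UniversalEncoder()
h = enc(torch.randn(2, 10, 64))
print(h.shape)  # torch.Size([2, 10, 64])

Reusing one layer across depth steps is what distinguishes this from a standard fixed-stack transformer encoder; the paper's recurrence may additionally use an adaptive halting mechanism to choose the number of steps, which this fixed-loop sketch omits.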
Volume, issue, pages: v 2021-July
Publication date: 2021-07-18
Indexed by: EI (Engineering Index)
Published in: Proceedings of the International Joint Conference on Neural Networks
Contributing author: 李卫民
Corresponding authors: 张鲁宁, 宋志妍, 刘泽宇
First authors: 刘建伟, 左信
Paper type: Conference paper
Citation: 张鲁宁, 刘建伟, 宋志妍, 左信, 李卫民, 刘泽宇. Universal Transformer Hawkes process. Proceedings of the International Joint Conference on Neural Networks, 2021, v 2021-July.
Title: Universal Transformer Hawkes process