{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T05:03:59Z","timestamp":1755839039631,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":32,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,10,19]],"date-time":"2020-10-19T00:00:00Z","timestamp":1603065600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,10,19]]},"DOI":"10.1145\/3340531.3412696","type":"proceedings-article","created":{"date-parts":[[2020,10,19]],"date-time":"2020-10-19T07:01:44Z","timestamp":1603090904000},"page":"2813-2820","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Deep Behavior Tracing with Multi-level Temporality Preserved Embedding"],"prefix":"10.1145","author":[{"given":"Runze","family":"Wu","sequence":"first","affiliation":[{"name":"Fuxi AI Lab, NetEase Games, Hangzhou, China"}]},{"given":"Hao","family":"Deng","sequence":"additional","affiliation":[{"name":"Fuxi AI Lab, NetEase Games, Hangzhou, China"}]},{"given":"Jianrong","family":"Tao","sequence":"additional","affiliation":[{"name":"Fuxi AI Lab, NetEase Games, Hangzhou, China"}]},{"given":"Changjie","family":"Fan","sequence":"additional","affiliation":[{"name":"Fuxi AI Lab, NetEase Games, Hangzhou, China"}]},{"given":"Qi","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"given":"Liang","family":"Chen","sequence":"additional","affiliation":[{"name":"Sun Yat-Sen University, Guangzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2020,10,19]]},"reference":[
{"key":"e_1_3_2_2_1_1","volume-title":"An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271","author":"Bai Shaojie","year":"2018","unstructured":"Shaojie Bai, J Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271 (2018)."},
{"key":"e_1_3_2_2_2_1","volume-title":"Recurrent neural networks for multivariate time series with missing values. Scientific reports","author":"Che Zhengping","year":"2018","unstructured":"Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. 2018. Recurrent neural networks for multivariate time series with missing values. Scientific reports, Vol. 8, 1 (2018), 6085."},
{"key":"e_1_3_2_2_3_1","volume-title":"Fast and accurate deep network learning by exponential linear units (elus). arXiv","author":"Clevert Djork-Arn\u00e9","year":"2015","unstructured":"Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (elus). arXiv, Vol. abs\/1511.07289 (2015)."},
{"key":"e_1_3_2_2_4_1","volume-title":"Transformer-xl: Attentive language models beyond a fixed-length context. arXiv","author":"Dai Zihang","year":"2019","unstructured":"Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv, Vol. abs\/1901.02860 (2019)."},
{"key":"e_1_3_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3109859.3109877"},
{"key":"e_1_3_2_2_6_1","volume-title":"Long short-term memory. Neural computation","author":"Hochreiter Sepp","year":"1997","unstructured":"Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, Vol. 9, 8 (1997), 1735--1780."},
{"key":"e_1_3_2_2_7_1","volume-title":"Time2Vec: Learning a Vector Representation of Time. arXiv","author":"Kazemi Seyed Mehran","year":"2019","unstructured":"Seyed Mehran Kazemi, Rishab Goel, Sepehr Eghbali, Janahan Ramanan, Jaspreet Sahota, Sanjay Thakur, Stella Wu, Cathal Smyth, Pascal Poupart, and Marcus Brubaker. 2019. Time2Vec: Learning a Vector Representation of Time. arXiv, Vol. abs\/1907.05321 (2019)."},
{"key":"e_1_3_2_2_8_1","volume-title":"Human activity recognition and pattern discovery","author":"Kim Eunju","year":"2009","unstructured":"Eunju Kim, Sumi Helal, and Diane Cook. 2009. Human activity recognition and pattern discovery. IEEE pervasive computing, Vol. 9, 1 (2009), 48--53."},
{"volume-title":"Adam: A method for stochastic optimization. arXiv","author":"Kingma Diederik P","key":"e_1_3_2_2_9_1","unstructured":"Diederik P Kingma and Jimmy Ba. [n.d.]. Adam: A method for stochastic optimization. arXiv, Vol. abs\/1412.6980 ([n.,d.])."},
{"key":"e_1_3_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3178876.3186161"},
{"key":"e_1_3_2_2_11_1","volume-title":"Time-dependent representation for neural event sequence prediction. ArXiv","author":"Li Yang","year":"2017","unstructured":"Yang Li, Nan Du, and Samy Bengio. 2017. Time-dependent representation for neural event sequence prediction. ArXiv, Vol. abs\/1708.00065 (2017)."},
{"key":"e_1_3_2_2_12_1","volume-title":"EKT: Exercise-aware Knowledge Tracing for Student Performance Prediction","author":"Liu Qi","year":"2019","unstructured":"Qi Liu, Zhenya Huang, Yu Yin, Enhong Chen, Hui Xiong, Yu Su, and Guoping Hu. 2019. EKT: Exercise-aware Knowledge Tracing for Student Performance Prediction. IEEE Transactions on Knowledge and Data Engineering (2019), 1--1."},
{"key":"e_1_3_2_2_13_1","volume-title":"Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025","author":"Luong Minh-Thang","year":"2015","unstructured":"Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015)."},
{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331221"},
{"key":"e_1_3_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.5555\/1888028.1888059"},
{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/1772690.1772773"},
{"key":"e_1_3_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2011.6126349"},
{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1016\/0304-405X(77)90041-1"},
{"key":"e_1_3_2_2_19_1","volume-title":"Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research","author":"Srivastava Nitish","year":"2014","unstructured":"Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, Vol. 15, 1 (2014), 1929--1958."},
{"key":"e_1_3_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3357830"},
{"key":"e_1_3_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330687"},
{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219925"},
{"key":"e_1_3_2_2_23_1","volume-title":"Modeling order in neural word embeddings at scale. arXiv","author":"Trask Andrew","year":"2015","unstructured":"Andrew Trask, David Gilmore, and Matthew Russell. 2015. Modeling order in neural word embeddings at scale. arXiv, Vol. abs\/1506.02338 (2015)."},
{"key":"e_1_3_2_2_24_1","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998--6008."},
{"key":"e_1_3_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00365-006-0663-2"},
{"key":"e_1_3_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3161413"},
{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2015.79"},
{"key":"e_1_3_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.5555\/3298239.3298479"},
{"key":"e_1_3_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE.2019.00120"},
{"key":"e_1_3_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11618"},
{"key":"e_1_3_2_2_31_1","doi-asserted-by":"crossref","unstructured":"Yu Zhu, Hao Li, Yikang Liao, Beidou Wang, Ziyu Guan, Haifeng Liu, and Deng Cai. 2017. What to Do Next: Modeling User Behaviors by Time-LSTM. In IJCAI. 3602--3608.","DOI":"10.24963\/ijcai.2017\/504"},
{"key":"e_1_3_2_2_32_1","doi-asserted-by":"crossref","unstructured":"Ali Zonoozi, Jung-jae Kim, Xiao-Li Li, and Gao Cong. 2018. Periodic-CRN: A Convolutional Recurrent Model for Crowd Density Prediction with Recurring Periodic Patterns. In IJCAI. 3732--3738.","DOI":"10.24963\/ijcai.2018\/519"}
],"event":{"name":"CIKM '20: The 29th ACM International Conference on Information and Knowledge Management","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGIR ACM Special Interest Group on Information Retrieval"],"location":"Virtual Event Ireland","acronym":"CIKM '20"},"container-title":["Proceedings of the 29th ACM International Conference on Information &amp; Knowledge Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3340531.3412696","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3340531.3412696","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:02:55Z","timestamp":1750197775000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3340531.3412696"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,19]]},"references-count":32,"alternative-id":["10.1145\/3340531.3412696","10.1145\/3340531"],"URL":"https:\/\/doi.org\/10.1145\/3340531.3412696","relation":{},"subject":[],"published":{"date-parts":[[2020,10,19]]},"assertion":[{"value":"2020-10-19","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}