Article type: Research Article
Authors: Zhang, Yihao [a] | Zhao, Chu [a,*] | Yuan, Meng [a] | Chen, Mian [a] | Liu, Xiaoyang [b]
Affiliations: [a] School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China | [b] School of Computer and Engineering, Chongqing University of Technology, Chongqing, China
Correspondence: [*] Corresponding author: Chu Zhao, School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China. E-mail: [email protected].
Abstract: The autoencoder network has proven to be one of the most powerful techniques for recommender systems. Current ways of utilizing autoencoders in recommender systems fall into two categories: modeling user-item interactions with an autoencoder alone, and integrating an autoencoder with other models. Most existing autoencoder-based methods assume that all features of the model’s input contribute equally to the final prediction, which can be regarded as a uniform attention weight vector; however, this hypothesis is not reliable, especially when exploring users’ interaction frequencies with different items. Moreover, when combining an autoencoder with traditional methods, the usual strategy is to leverage a linear kernel, i.e., the inner product of user and item vectors, to predict user preferences, which leads to insufficient expressive power and hurts recommendation performance in the face of data sparsity and cold-start problems. To tackle these two problems, we propose a novel hybrid deep learning model for top-n recommendation, called the attentive stacked sparse autoencoder (A-SAERec), which captures a user’s attention weight vector over items and is then combined with neural matrix factorization to improve the performance of the recommender model. Extensive experiments on four real-world datasets show that our A-SAERec algorithm achieves significant improvements over state-of-the-art algorithms.
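As a rough illustration of the architecture sketched in the abstract, the following minimal PyTorch example combines an attention-reweighted stacked autoencoder branch with a neural-matrix-factorization branch to score all items for a user. All names, layer sizes, and the additive fusion of the two branches are assumptions for illustration only, not the authors' implementation; the sparsity penalty on the code layer (e.g., an L1 or KL term added to the training loss) is omitted here.

# Illustrative sketch only; hypothetical sizes and fusion, not the paper's A-SAERec code.
import torch
import torch.nn as nn

class ASAERecSketch(nn.Module):
    def __init__(self, n_users, n_items, hidden=64, emb=32):
        super().__init__()
        # stacked autoencoder over a user's interaction vector
        # (a sparsity penalty on the code would be added to the loss during training)
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden), nn.ReLU(),
                                     nn.Linear(hidden, emb), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(emb, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_items))
        # attention over items, so input features no longer contribute equally
        self.attn = nn.Linear(n_items, n_items)
        # neural matrix factorization branch: embeddings plus an MLP kernel
        self.user_emb = nn.Embedding(n_users, emb)
        self.item_emb = nn.Embedding(n_items, emb)
        self.mlp = nn.Sequential(nn.Linear(2 * emb, emb), nn.ReLU(),
                                 nn.Linear(emb, 1))

    def forward(self, user_ids, interactions):
        # interactions: (batch, n_items) binary user-item interaction vectors
        weights = torch.softmax(self.attn(interactions), dim=-1)
        recon = self.decoder(self.encoder(weights * interactions))   # autoencoder scores
        u = self.user_emb(user_ids)                                   # (batch, emb)
        items = self.item_emb.weight                                  # (n_items, emb)
        pairs = torch.cat([u.unsqueeze(1).expand(-1, items.size(0), -1),
                           items.unsqueeze(0).expand(u.size(0), -1, -1)], dim=-1)
        mf_scores = self.mlp(pairs).squeeze(-1)                       # neural MF scores
        return recon + mf_scores                                      # assumed additive fusion

A forward pass takes a batch of user ids and their interaction vectors and returns a score for every item, from which a top-n ranking can be produced.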
Keywords: Attention mechanism, sparse autoencoder, collaborative filtering, recommender systems
DOI: 10.3233/IDA-216049
Journal: Intelligent Data Analysis, vol. 26, no. 4, pp. 841-857, 2022