
Detailed Record

Author (Chinese): 段寶鈞
Author (English): Tuan, Pao-Chun
Title (Chinese): 基於知識圖譜表示法學習增強使用者與物品交互關係於推薦系統之效能改進
Title (English): Improving Recommendation Performance via Enhanced User-Item Relations Based on Knowledge Graph Embedding
Advisor (Chinese): 蔡銘峰
Advisor (English): Tsai, Ming-Feng
Committee (Chinese): 蘇家玉, 王釧茹, 蔡銘峰
Committee (English): Su, Chia-Yu; Wang, Chuan-Ju; Tsai, Ming-Feng
Degree: Master's
Institution: National Chengchi University (國立政治大學)
Department: Department of Computer Science
Year of Publication: 2021
Academic Year of Graduation: 110
Language: Chinese
Number of Pages: 33
Keywords (Chinese): 推薦系統、知識圖譜、連線、文本資訊
Keywords (English): Recommendation system; Knowledge graph; Alignment; Textual information
DOI: http://doi.org/10.6814/NCCU202101566
Usage statistics:
  • Recommendations: 0
  • Views: 68
  • Downloads: 32
  • Favorites: 0
Abstract (Chinese):
  In recommendation systems, the knowledge graph plays an increasingly important role. However, almost no existing method considers the possibility that the knowledge graph is incomplete. Most existing methods simply align items on the user-item interaction graph with entities on the knowledge graph through titles or other simple information, without considering that an alignment may be wrong or that an item may not exist in the knowledge graph at all. This thesis therefore proposes a new idea: feeding the textual features of items and entities into a model that computes the similarity between the two sides, and deriving the alignments from that similarity.
  In addition, we find that almost all existing recommendation systems use one-to-one alignments: during training, an aligned item and entity are merged directly into a single node, and other related links on the knowledge graph are used to assist training. However, a recommendation system trained through multi-hop traversal of the knowledge graph may lose information, take too long to train, or overfit. Based on these observations, this thesis proposes extending one-to-one alignments to many-to-many alignments. Because our alignment method obtains alignments by computing the similarity between the two sides, many-to-many alignments can be derived easily. Furthermore, we replace the word part of the item-word graph in the Text-aware Preference Ranking for Recommender Systems (TPR) model with entities for training, thereby achieving many-to-many alignment.
  This thesis conducts Top-N recommendation tasks on four large real-world datasets. To examine whether the number of alignments affects recommendation performance, we also compare many-to-one and many-to-many alignments. In addition, we randomly align items with entities to confirm the validity of the proposed alignment method, and we swap in a different knowledge graph to verify that the many-to-many alignment method performs consistently under different conditions. We further verify experimentally the hypothesis that the correctness of the alignments does not affect recommendation performance. Finally, the experimental results show that, compared with user-item recommendation systems and knowledge-graph-enhanced graph neural network recommendation models, the proposed many-to-many alignment method achieves the best recommendation performance in most cases.
Abstract (English):
 The knowledge graph plays an increasingly important role in recommendation systems. However, almost no existing method considers the problem of an incomplete knowledge graph. Most previous studies use titles or other simple information to align items on the user-item interaction graph with entities on the knowledge graph, without considering that an alignment may be wrong or that an item may not exist in the knowledge graph at all. This thesis therefore proposes a new idea: using a model to compute the similarity between items and entities from their textual features, and then deriving candidate alignments to enrich the knowledge graph.
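Section 3.2.2 of the thesis uses BM25 for this text-based item-entity matching. As a minimal, self-contained sketch (the scoring function is standard Okapi BM25; the toy item and entity texts are illustrative assumptions, not data from the thesis):

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # document frequency of each query term
    df = Counter()
    for d in docs_tokens:
        for t in set(query_tokens) & set(d):
            df[t] += 1
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if df[t] == 0:
                continue  # term absent from the corpus contributes nothing
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Hypothetical toy data: an item description as the query,
# entity descriptions as the candidate documents.
item_text = "fantasy novel about a young wizard".split()
entities = ["novel about a wizard school".split(),
            "documentary about oceans".split()]
print(bm25_scores(item_text, entities))  # first entity scores higher
```

Ranking entities by such scores for each item yields the candidate alignments; the thesis further refines the BM25 top-k table with BERT (Section 3.3.1).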
 In addition, we find that almost all existing recommendation systems use one-to-one alignments between items and entities: during training, an aligned item and entity are merged directly into a single node, and other related information on the knowledge graph is used to assist training. However, a recommendation system trained through multi-hop traversal of the knowledge graph may lose information along the way, often takes too long to train, and may overfit. Based on these issues, this thesis proposes extending one-to-one alignments to many-to-many alignments. Because our alignment method computes the similarity between items and entities, many-to-many alignments are easy to obtain. Furthermore, we replace the word part of the item-word graph in the Text-aware Preference Ranking for Recommender Systems (TPR) model with entities for training, thereby achieving many-to-many alignment.
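Once pairwise similarities are available, keeping the top-k entities per item gives many-to-many edges directly. A sketch under assumptions (the `topk_alignments` helper, the matrix values, and k are illustrative, not the thesis's code):

```python
import numpy as np

def topk_alignments(sim, k=3, threshold=0.0):
    """From an item x entity similarity matrix, keep the top-k entities
    per item (above a score threshold) as item-entity alignment edges."""
    edges = []
    for i, row in enumerate(sim):
        top = np.argsort(row)[::-1][:k]  # entity indices, best first
        for e in top:
            if row[e] > threshold:
                edges.append((i, int(e), float(row[e])))
    return edges

# Hypothetical 3-item x 4-entity similarity scores
sim = np.array([[0.9, 0.1, 0.4, 0.0],
                [0.2, 0.8, 0.7, 0.1],
                [0.0, 0.0, 0.0, 0.6]])
edges = topk_alignments(sim, k=2)
```

Because one item can connect to several entities and one entity to several items (entity 2 above is linked to both items 0 and 1), the resulting edge list is many-to-many; these edges play the role of the word side of TPR's item-word graph.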
 This thesis conducts Top-N recommendation tasks on four real-world datasets, and to examine whether the number of alignments affects recommendation performance, we also compare many-to-one and many-to-many alignments. In addition, we randomly align items with entities to confirm the validity of the proposed alignment method, and we swap in a different knowledge graph to verify that the many-to-many alignment method maintains its performance under different conditions. Moreover, we experimentally verify the hypothesis that the correctness of the alignments does not affect recommendation performance. Finally, the experimental results show that our many-to-many alignment method achieves better recommendation performance than user-item recommendation systems and knowledge-graph-enhanced graph neural network recommendation models in most cases.
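Top-N recommendation tasks are typically scored with ranking metrics such as Recall@N. As a minimal sketch (the abstract does not name the exact metrics; the metric choice, user rankings, and held-out sets below are assumptions for illustration):

```python
def recall_at_n(ranked, relevant, n=10):
    """Fraction of a user's held-out relevant items that appear
    in the top-n of the ranked recommendation list."""
    hits = sum(1 for item in ranked[:n] if item in relevant)
    return hits / len(relevant)

# Hypothetical rankings for two users and their held-out test items
rankings = {"u1": [5, 2, 9, 1], "u2": [3, 7, 8, 4]}
test_items = {"u1": {2, 9}, "u2": {4, 6}}

avg = sum(recall_at_n(r, test_items[u], n=3)
          for u, r in rankings.items()) / len(rankings)
print(avg)  # u1: 2/2 hits in top-3, u2: 0/2 hits -> average 0.5
```

The same metric computed under the random-alignment baseline versus the proposed alignment is what supports the validity comparisons described above.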
Acknowledgments
Chinese Abstract
Abstract
Chapter 1  Introduction
 1.1  Preface
 1.2  Research Objectives
Chapter 2  Related Work
 2.1  Word Embeddings
 2.2  Knowledge Graphs
 2.3  Recommendation Systems
Chapter 3  Methodology
 3.1  Textual Data
 3.2  Model Learning and Alignment
  3.2.1 Alignment Types
  3.2.2 BM25
  3.2.3 Word2vec
 3.3  Model Optimization
  3.3.1 Refining the BM25 Top-k Alignment Table with BERT
  3.3.2 Pre-training Item Embeddings with BPR to Optimize Word2vec
 3.4  Many-to-Many Alignment Recommendation System
  3.4.1 TPR
  3.4.2 TPR Based on Item-Entity Alignment
Chapter 4  Experimental Results and Discussion
 4.1  Datasets
 4.2  Compared Models
 4.3  Compared Alignment Methods
 4.4  Experimental Settings and Evaluation Metrics
  4.4.1 Experimental Settings
  4.4.2 Evaluation Metrics
 4.5  Experimental Results
  4.5.1 Top-N Recommendation Task
  4.5.2 Comparison of the Number of Alignments
  4.5.3 Effectiveness of Many-to-Many Alignment
  4.5.4 Performance on Different Knowledge Graphs
  4.5.5 Effect of Alignment Correctness on the Recommendation System
Chapter 5  Conclusion
 5.1 Conclusion
 5.2 Future Work
References