Summary Recommender Systems in the Era of Large Language Models arxiv.org
16,820 words - PDF document
One Line
Large language models can improve recommender systems by addressing two challenges that deep neural networks struggle with: understanding user interests and capturing textual information.
Key Points
- Recommender systems are crucial for personalized suggestions in e-commerce and web applications.
- Deep neural networks (DNNs) have improved recommender systems, but still face limitations in understanding user interests and capturing textual information.
- Large language models (LLMs) have the potential to revolutionize recommender systems, but current models like BERT have limitations in capturing textual knowledge and generalizing to unseen recommendation tasks.
- Leveraging textual side information of users and items can enhance recommender systems, with language models like BERT serving as text encoders.
- Fine-tuning of large language models (LLMs) can be achieved through adapter structures, expanding and generalizing their capabilities.
- Prompt tuning strategies for recommender systems can be categorized into hard prompt tuning and soft prompt tuning.
- The lack of open-source LLMs for recommender systems hinders explainability and further research is needed.
- User-item interactions and collaborative knowledge can improve recommendation performance in recommender systems.
Summaries
31 word summary
Recommender systems are important for personalized suggestions. Deep neural networks have improved these systems, but struggle with understanding user interests and textual information. Large language models can help address these challenges.
38 word summary
Recommender systems play a crucial role in personalized suggestions for e-commerce and web applications. Deep neural networks (DNNs) have improved these systems, but they still struggle with understanding user interests and capturing textual information. Large language models (LLMs) can help address these limitations.
802 word summary
Recommender systems are crucial for personalized suggestions in e-commerce and web applications. Deep neural networks (DNNs) have improved these systems, but still face limitations in understanding user interests and capturing textual information. Large language models (LLMs) like ChatGPT offer a promising way to address these limitations.
Textual side information about users and items can enhance recommender systems. Deep Neural Networks (DNNs) have been widely used in recommender systems, with different architectures such as Recurrent Neural Networks (RNNs) and Graph Neural Networks (GNNs) suited to different recommendation scenarios.
Large Language Models (LLMs) have the potential to revolutionize recommender systems. However, current recommender systems based on pre-trained language models like BERT have limitations in capturing textual knowledge and generalizing to unseen recommendation tasks, and existing deep neural network approaches share these shortcomings.
Recommender systems use user-item interactions and content-based methods to improve recommendation performance. Deep learning techniques, such as NeuMF and GNN, have been effective in developing recommender systems. DeepCoNN and NARRE integrate textual knowledge using CNN-based text encoders.
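The NeuMF idea mentioned above fuses two branches: a generalized matrix factorization (GMF) branch that takes the elementwise product of user and item embeddings, and an MLP branch over their concatenation. A minimal forward-pass sketch with random, untrained weights and toy dimensions (all sizes here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 10, 20, 8

# Separate embedding tables for the GMF and MLP branches, as NeuMF uses.
U_gmf, I_gmf = rng.normal(size=(n_users, d)), rng.normal(size=(n_items, d))
U_mlp, I_mlp = rng.normal(size=(n_users, d)), rng.normal(size=(n_items, d))
W1 = rng.normal(size=(2 * d, d))   # one hidden layer stands in for the MLP tower
h = rng.normal(size=(2 * d,))      # fusion weights over [GMF ; MLP] features

def neumf_score(u, i):
    """Predicted interaction probability for user u and item i."""
    gmf = U_gmf[u] * I_gmf[i]                                        # elementwise-product branch
    mlp = np.maximum(0, np.concatenate([U_mlp[u], I_mlp[i]]) @ W1)   # MLP branch with ReLU
    z = np.concatenate([gmf, mlp])                                   # fuse both branches
    return 1.0 / (1.0 + np.exp(-(h @ z)))                            # sigmoid score

score = neumf_score(0, 0)
```

In the full model the fusion layer and both towers are trained end-to-end with a binary cross-entropy loss; this sketch only shows the data flow.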
This method uses multiple demonstrations to guide the reasoning process of large language models (LLMs) in recommender systems. Self-consistency, an extension of this method, applies a majority vote over multiple sampled answers. LLMs like ChatGPT support this style of prompting.
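The self-consistency voting step can be sketched in a few lines. Here `fake_llm` is a hypothetical stand-in for repeated sampled LLM calls; in practice each call would sample a full reasoning path at non-zero temperature and return only the final answer:

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_answer, n_samples=5):
    """Sample several reasoning paths and majority-vote on the final answers."""
    answers = [sample_answer() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for an LLM: each "sample" returns a (possibly inconsistent) answer.
fake_llm = cycle(["item_42", "item_42", "item_7", "item_42", "item_7"]).__next__

recommended = self_consistency(fake_llm, n_samples=5)  # "item_42" wins 3-2
```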
To improve recommender systems, leveraging textual side information of users and items, such as user profiles, reviews, and item descriptions, can be a promising solution. Language models like BERT can serve as text encoders that map items or users into a shared embedding space.
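The encoder-based pattern is: run each item description (or user profile) through a text encoder, pool to one fixed-size vector, and score candidates by similarity to an encoded query or user representation. The sketch below substitutes a deterministic pseudo-random table for the pre-trained token embeddings, so it only shows the pipeline shape; a real system would replace `embed_token`/`encode_text` with a BERT-style model:

```python
import numpy as np
import zlib

d = 16  # embedding width (illustrative)

def embed_token(token):
    # Hypothetical stand-in for pre-trained token embeddings: a
    # deterministic pseudo-random vector per token, seeded by CRC32.
    rng = np.random.default_rng(zlib.crc32(token.encode()))
    return rng.normal(size=d)

def encode_text(text):
    """Mean-pool token embeddings into one unit-norm vector, as a
    BERT-style encoder would produce for a description or profile."""
    vecs = np.stack([embed_token(t) for t in text.lower().split()])
    v = vecs.mean(axis=0)
    return v / np.linalg.norm(v)

items = {
    "mug": "ceramic coffee mug with handle",
    "tent": "waterproof two person camping tent",
}
item_vecs = {name: encode_text(desc) for name, desc in items.items()}
query_vec = encode_text("coffee cup")
scores = {name: float(v @ query_vec) for name, v in item_vecs.items()}
```

Because both sides live in the same embedding space, a dot product (cosine similarity, after normalization) serves as the relevance score.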
PTUM proposes two pre-training tasks, masked behavior prediction (MBP) and next K behaviors prediction (NBP), to model user behaviors in recommender systems. M6 adopts text-infilling and auto-regressive language generation objectives. P5 uses multi-mask modeling and mixes datasets for pre-training.
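Assuming MBP denotes masked behavior prediction as in PTUM, the training-example construction mirrors masked language modeling: hide one behavior in the user's sequence and treat it as the target. A minimal sketch with a made-up behavior history:

```python
import random

def make_mbp_example(behaviors, mask_token="[MASK]", seed=0):
    """Masked Behavior Prediction example: hide one behavior in the
    sequence and use it as the prediction target, analogous to MLM."""
    rng = random.Random(seed)
    pos = rng.randrange(len(behaviors))
    masked = list(behaviors)
    target = masked[pos]
    masked[pos] = mask_token
    return masked, target, pos

history = ["click:phone", "buy:case", "click:charger", "click:earbuds"]
masked_seq, target, pos = make_mbp_example(history)
```

NBP extends the same idea forward in time: given a behavior prefix, the model is trained to predict the next K behaviors rather than one masked entry.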
Fine-tuning of large language models (LLMs) can be achieved through adapter structures, which introduce a small number of extra trainable weights. These adapters are embedded into the transformer structure of LLMs and help extend and generalize their capabilities. Low-Rank Adaptation (LoRA) is another parameter-efficient technique along these lines.
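A bottleneck adapter is a small residual block inserted after a frozen transformer sublayer: down-project, nonlinearity, up-project, then add back the input. A numpy sketch with illustrative dimensions (zero-initializing the up-projection makes the adapter start as an identity function, a common trick so training begins from the unmodified model):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck = 64, 8

# The frozen sublayer's output is x; only W_down and W_up are trainable.
W_down = rng.normal(scale=0.02, size=(d_model, d_bottleneck))
W_up = np.zeros((d_bottleneck, d_model))  # zero init: adapter starts as identity

def adapter(x):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add."""
    return x + np.maximum(0, x @ W_down) @ W_up

x = rng.normal(size=(4, d_model))
y = adapter(x)
trainable = W_down.size + W_up.size  # 2 * 64 * 8 = 1024 weights per adapter
```

The point of the bottleneck: 1,024 trainable weights per adapter versus the millions in the frozen layer it wraps.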
Liu et al. propose prompting ChatGPT to summarize reviews for the recommendation setting. Few-shot prompting provides input-output examples to guide pre-trained language models on specific downstream tasks. In-Context Learning (ICL) is the ability of LLMs to learn a task from such demonstrations in the prompt, without any parameter updates.
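Few-shot prompting is just string assembly: an instruction, a handful of input-output demonstrations, then the query with the output slot left open for the model. A sketch with made-up movie-recommendation demonstrations:

```python
def few_shot_prompt(examples, query, instruction):
    """Assemble an in-context-learning prompt from input-output demonstrations."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]  # the LLM completes this slot
    return "\n".join(lines)

demos = [
    ("User watched: Alien, Blade Runner", "Recommend: Arrival"),
    ("User watched: Toy Story, Up", "Recommend: Coco"),
]
prompt = few_shot_prompt(demos, "User watched: Heat, Casino",
                         "Recommend one movie for each user.")
```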
Zero-shot In-Context Learning (ICL) can be used for conversational recommendations, where users do not provide demonstrations. Chain-of-Thought (CoT) prompting enhances the reasoning ability of large language models (LLMs) by including intermediate reasoning steps in the prompt.
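In the zero-shot variant, no demonstrations are supplied; a reasoning trigger such as "Let's think step by step." is appended, and a second prompt extracts the final answer from the generated reasoning. A sketch of the two-stage prompt construction (the question text is illustrative):

```python
def cot_prompts(question):
    """Two-stage zero-shot CoT: first elicit reasoning, then extract the answer."""
    reason_prompt = f"Q: {question}\nA: Let's think step by step."

    def extract(reasoning):
        # Second stage: append the model's reasoning, then cue the final answer.
        return f"{reason_prompt} {reasoning}\nTherefore, the answer is"

    return reason_prompt, extract

reason_prompt, extract = cot_prompts(
    "The user liked sci-fi films A and B. Should we recommend sci-fi film D?")
final_prompt = extract("The user consistently prefers sci-fi, and D is sci-fi.")
```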
Prompt tuning strategies for Recommender Systems (RecSys) can be categorized into hard prompt tuning and soft prompt tuning. Hard prompt tuning works with discrete natural-language prompts, while soft prompt tuning optimizes continuous prompt embeddings and a minimal number of extra parameters via gradient descent. Soft prompts can be learned while the LLM itself remains frozen.
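Mechanically, a soft prompt is a small trainable matrix of "virtual token" embeddings prepended to the frozen model's input embeddings; only that matrix is updated during tuning. A numpy sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_soft, seq_len = 32, 5, 10

soft_prompt = rng.normal(scale=0.5, size=(n_soft, d_model))  # trainable
token_embeds = rng.normal(size=(seq_len, d_model))           # frozen LLM input embeddings

def with_soft_prompt(embeds):
    """Prepend the learned continuous prompt vectors to the input embeddings."""
    return np.concatenate([soft_prompt, embeds], axis=0)

inputs = with_soft_prompt(token_embeds)  # shape (n_soft + seq_len, d_model)
```

During training, gradients flow back only into `soft_prompt` (here 5 x 32 = 160 parameters), leaving every LLM weight untouched.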
Works that fine-tune Large Language Models (LLMs) for recommender systems can be categorized by their fine-tuning methods. Examples include LoRA for lightweight instruction tuning and LLM-based prompt constructors that enhance graph understanding.
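LoRA freezes the pretrained weight matrix W and learns a low-rank update BA, scaled by alpha/r, added to the layer's output. A numpy sketch with illustrative dimensions (B is zero-initialized, so the model starts out identical to the pretrained one):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 64, 4, 8

W = rng.normal(size=(d, d))               # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d))   # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero init

def lora_forward(x):
    """y = x W^T + (alpha/r) * x (B A)^T; only A and B receive gradients."""
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

x = rng.normal(size=(3, d))
y = lora_forward(x)
trainable, frozen = A.size + B.size, W.size  # 512 trainable vs 4096 frozen
```

The rank r controls the trade-off: here the adapter holds 512 parameters against 4,096 frozen ones, and at inference BA can be merged into W so there is no extra latency.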
LLMs for Recommender Systems (RecSys) are being guided with prompts to improve item-side fairness, but more research is needed. The lack of open-source LLMs for RecSys makes them difficult to understand, hindering explainability.
Vertical domain-specific LLMs can save time by providing tailored recommendations. However, LLMs may struggle with long texts in Recommender Systems (RecSys). User-item interactions contain collaborative knowledge that can improve recommendations, and user and item indexing can represent users and items compactly instead of as long texts.
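The collaborative knowledge in user-item interactions is exactly what classical matrix factorization extracts: latent user and item factors fit to the observed ratings, which then predict the unobserved ones. A toy sketch of Funk-style factorization by gradient descent (all numbers here are made up for illustration):

```python
import numpy as np

# Toy interaction matrix: rows = users, cols = items, 0 = unobserved.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

rng = np.random.default_rng(0)
k, lr, reg = 2, 0.01, 0.02
P = rng.normal(scale=0.1, size=(R.shape[0], k))  # latent user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))  # latent item factors

mask = R > 0
for _ in range(2000):
    E = (R - P @ Q.T) * mask          # error on observed entries only
    P += lr * (E @ Q - reg * P)       # gradient step on user factors
    Q += lr * (E.T @ P - reg * Q)     # gradient step on item factors

pred = P @ Q.T                        # dense predictions, including the zeros
```

The zero cells of `R` never enter the loss, yet `pred` fills them in: that transfer from observed to unobserved interactions is the collaborative signal the survey argues LLM-based recommenders should also exploit.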
This excerpt is a list of references to various research papers and articles related to recommender systems and large language models. The references cover topics such as graph neural networks for social recommendations, negative sampling for recommendations, explainable recommender systems, and knowledge graph completion.
This summary provides an overview of various research papers and conference proceedings related to recommender systems and large language models. The papers cover topics such as recommendation approaches empowered by large language models, language modeling paradigm adaptations in recommender systems, and the use of large language models for various recommendation tasks.
Several papers are referenced that discuss the use of large language models in recommendation.
A list of recent research papers in the field of recommender systems is provided. The papers cover various topics such as multi-task learning, item indexing, generative retrieval, sequence representation learning, text-based collaborative filtering, transferable sequential recommenders, and zero-shot recommendation.
This document contains a list of references to papers related to recommender systems in the era of large language models. These papers cover various topics such as denoising sequence-to-sequence pre-training, parameter-efficient transfer learning, and low-rank adaptation of large language models.
Several papers related to recommender systems, language models, and AI ethics were cited in this document. One paper discussed the use of automatically generated prompts to elicit knowledge from language models. Another paper surveyed in-context learning, while another explored personalized prompts for recommendation.
This excerpt includes citations of various research papers related to recommender systems and large language models. The papers cover a range of topics, including medical advice, virtual legal assistants, financial language models, and multilingual shopping recommendation datasets.