SOTAVerified

News Recommendation

Papers

Showing 101–125 of 193 papers

Title | Status | Hype
Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation | Code | 1
Uni-FedRec: A Unified Privacy-Preserving News Recommendation Framework for Model Training and Online Serving | | 0
Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning | Code | 0
Neural News Recommendation with Collaborative News Encoding and Structural User Encoding | Code | 1
Event Prominence Extraction Combining a Knowledge-Based Syntactic Parser and a BERT Classifier for Dutch | | 0
Fastformer: Additive Attention Can Be All You Need | Code | 1
Is News Recommendation a Sequential Recommendation Task? | | 0
UNBERT: User-News Matching BERT for News Recommendation | Code | 0
Are we human, or are we users? The role of natural language processing in human-centric news recommenders that nudge users to diverse content | | 0
WG4Rec: Modeling Textual Content with Word Graph for News Recommendation | Code | 0
Personalized News Recommendation: Methods and Challenges | | 0
DebiasGAN: Eliminating Position Bias in News Recommendation with Adversarial Learning | | 0
HieRec: Hierarchical User Interest Modeling for Personalized News Recommendation | | 0
PP-Rec: News Recommendation with Personalized User Interest and Time-aware News Popularity | | 0
Personalized News Recommendation with Knowledge-aware Interactive Matching | Code | 0
DebiasedRec: Bias-aware User Modeling and Click Prediction for Personalized News Recommendation | | 0
MM-Rec: Multimodal News Recommendation | Code | 0
Empowering News Recommendation with Pre-trained Language Models | Code | 1
Two Birds with One Stone: Unified Model Learning for Both Recall and Ranking in News Recommendation | | 0
No NLP Task Should be an Island: Multi-disciplinarity for Diversity in News Recommender Systems | | 0
Implementing Evaluation Metrics Based on Theories of Democracy in News Comment Recommendation (Hackathon Report) | | 0
A News Recommender System Considering Temporal Dynamics and Diversity | | 0
Training Large-Scale News Recommenders with Pretrained Language Models in the Loop | Code | 1
Less is More: Pre-train a Strong Text Encoder for Dense Retrieval Using a Weak Decoder | Code | 1
NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application | | 0
Page 5 of 8

No leaderboard results yet.