SOTAVerified

Diversity

Diversity in data sampling is crucial across many use cases, including search and recommendation systems. Ensuring diverse samples means capturing a wide range of variations and perspectives, which leads to more robust, unbiased, and comprehensive models. In search, for instance, diversity helps avoid redundancy, ensuring that users are exposed to a broad set of relevant information rather than repeated, near-identical results.
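As an illustration of the idea, here is a minimal sketch of one common diversity-sampling technique, greedy farthest-point sampling over feature vectors. The function name `diverse_sample` and the toy data are illustrative assumptions, not code from any of the papers listed below.

```python
import math

def diverse_sample(points, k):
    """Greedy farthest-point sampling: pick k items whose pairwise
    distances are large, so the subset covers the feature space
    instead of clustering on near-duplicates."""
    if k >= len(points):
        return list(range(len(points)))
    dim = len(points[0])
    # Seed with the item farthest from the centroid of all points.
    centroid = [sum(p[d] for p in points) / len(points) for d in range(dim)]
    selected = [max(range(len(points)), key=lambda i: math.dist(points[i], centroid))]
    # min_dist[i] = distance from item i to its nearest already-selected item.
    min_dist = [math.dist(p, points[selected[0]]) for p in points]
    for _ in range(k - 1):
        nxt = max(range(len(points)), key=lambda i: min_dist[i])  # farthest from current picks
        selected.append(nxt)
        min_dist = [min(m, math.dist(p, points[nxt])) for m, p in zip(min_dist, points)]
    return selected

# Three near-duplicate points cluster at the origin; one outlier sits at (10, 10).
points = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [10.0, 10.0]]
picks = diverse_sample(points, 2)
# The outlier and one origin-cluster point are chosen, never two near-duplicates.
```

The same greedy scheme works on text or image embeddings: replace the toy 2-D points with embedding vectors, and the selected subset spreads over the semantic space rather than returning repeated similar results.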

Papers

Showing 4476–4500 of 9051 papers

| Title | Status | Hype |
| --- | --- | --- |
| Boosting Out-of-Distribution Detection with Multiple Pre-trained Models | Code | 0 |
| Recommending on graphs: a comprehensive review from a data perspective | — | 0 |
| Multi-Frequency Channel Modeling for Millimeter Wave and THz Wireless Communication via Generative Adversarial Networks | Code | 0 |
| Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good is It and How Does It Affect Transfer? | Code | 0 |
| Secure and Privacy Preserving Proxy Biometrics Identities | — | 0 |
| Multi-Metric AutoRec for High Dimensional and Sparse User Behavior Data Prediction | — | 0 |
| Efficient aggregation of face embeddings for decentralized face recognition deployments (extended version) | — | 0 |
| A Pattern Discovery Approach to Multivariate Time Series Forecasting | — | 0 |
| CausalDialogue: Modeling Utterance-level Causality in Conversations | Code | 0 |
| Data Curation Alone Can Stabilize In-context Learning | Code | 1 |
| On-the-fly Denoising for Data Augmentation in Natural Language Understanding | Code | 0 |
| DISCO: Distilling Counterfactuals with Large Language Models | Code | 1 |
| DimonGen: Diversified Generative Commonsense Reasoning for Explaining Concept Relationships | Code | 0 |
| Unleashing the Power of Visual Prompting At the Pixel Level | Code | 0 |
| Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization | — | 0 |
| Exploring Hybrid and Ensemble Models for Multiclass Prediction of Mental Health Status on Social Media | — | 0 |
| Natural Language to Code Generation in Interactive Data Science Notebooks | — | 0 |
| PVGRU: Generating Diverse and Relevant Dialogue Responses via Pseudo-Variational Mechanism | — | 0 |
| On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization | — | 0 |
| Beyond Digital "Echo Chambers": The Role of Viewpoint Diversity in Political Discussion | Code | 0 |
| DAG: Depth-Aware Guidance with Denoising Diffusion Probabilistic Models | Code | 1 |
| DuNST: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation | Code | 0 |
| Natural Language Processing in Customer Service: A Systematic Review | — | 0 |
| Controllable Text Generation via Probability Density Estimation in the Latent Space | Code | 1 |
| Experiments on Generalizability of BERTopic on Multi-Domain Short Text | — | 0 |
Page 180 of 363
