SOTAVerified

Diversity

Diversity in data sampling is crucial across many use cases, including search, recommendation systems, and beyond. Ensuring diverse samples means capturing a wide range of variations and perspectives, which leads to more robust, less biased, and more comprehensive models. In search, for instance, diversity helps avoid redundancy, exposing users to a broader set of relevant results rather than many near-duplicates.
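One standard way to trade off relevance against redundancy in a ranked list is Maximal Marginal Relevance (MMR). The sketch below is illustrative only (it is not tied to any particular paper above): it assumes precomputed relevance scores and a pairwise similarity matrix, and greedily selects items that are relevant but dissimilar to what has already been picked.

```python
def mmr(relevance, similarity, k, lam=0.5):
    """Greedy Maximal Marginal Relevance selection.

    relevance  -- list of relevance scores, one per item
    similarity -- 2D list, similarity[i][j] in [0, 1]
    k          -- number of items to select
    lam        -- trade-off: 1.0 = pure relevance, 0.0 = pure diversity
    """
    selected = []
    candidates = set(range(len(relevance)))
    while candidates and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in candidates:
            # Redundancy = similarity to the closest already-selected item.
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lam=0.5`, two near-duplicate top results are not both selected; the slightly less relevant but dissimilar item wins the second slot, which is exactly the behavior the paragraph above describes for search.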

Papers

Showing 4551–4575 of 9051 papers

Title | Status | Hype
Improving Response Diversity through Commonsense-Aware Empathetic Response Generation | | 0
Improving robustness and calibration in ensembles with diversity regularization | | 0
Activitynet 2019 Task 3: Exploring Contexts for Dense Captioning Events in Videos | | 0
Black Box to White Box: Discover Model Characteristics Based on Strategic Probing | | 0
Predicate Debiasing in Vision-Language Models Integration for Scene Graph Generation Enhancement | | 0
Topic-to-essay generation with knowledge-based content selection | | 0
Improving Sequential Determinantal Point Processes for Supervised Video Summarization | | 0
Improving speech recognition models with small samples for air traffic control systems | | 0
Improving Structural Diversity of Blackbox LLMs via Chain-of-Specification Prompting | | 0
Improving Style-Content Disentanglement in Image-to-Image Translation | | 0
Bit Error Rate Analysis of M-ARY PSK and M-ARY QAM Over Rician Fading Channel | | 0
Top-N recommendations in the presence of sparsity: An NCD-based approach | | 0
Improving the Diversity of Bootstrapped DQN by Replacing Priors With Noise | | 0
Top-nσ: Not All Logits Are You Need | | 0
Improving the Estimation of Attenuation in Q/V Band Systems with a Kalman-Based Scintillation Filter | | 0
Active Task Randomization: Learning Robust Skills via Unsupervised Generation of Diverse and Feasible Tasks | | 0
Social Diversity and Spread of Pandemic: Evidence from India | | 0
Improving the Generalization of Unseen Crowd Behaviors for Reinforcement Learning based Local Motion Planners | | 0
Improving the Naturalness and Diversity of Referring Expression Generation models using Minimum Risk Training | | 0
Improving the performance of weak supervision searches using data augmentation | | 0
Improving the Robustness of Large Language Models via Consistency Alignment | | 0
Improving the Robustness of Quantized Deep Neural Networks to White-Box Attacks using Stochastic Quantization and Information-Theoretic Ensemble Training | | 0
Topological conditions drive stability in meta-ecosystems | | 0
Improving the Transferability of Adversarial Examples by Feature Augmentation | | 0
Improving the Transferability of Adversarial Examples by Inverse Knowledge Distillation | | 0
Page 183 of 363

No leaderboard results yet.