SOTAVerified

Diversity

Diversity in data sampling is crucial across many use cases, including search, recommendation systems, and more. Ensuring diverse samples captures a wide range of variations and perspectives, which leads to more robust, less biased, and more comprehensive models. In search, for instance, diversity helps avoid redundancy, exposing users to a broader set of relevant results rather than repeated near-duplicates.
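One common way to realize this trade-off in search and recommendation is greedy diversity-aware re-ranking. As a minimal sketch (not taken from any paper on this page), the following implements Maximal Marginal Relevance (MMR)-style selection; the item ids, scores, and the `mmr_select` helper are all illustrative assumptions:

```python
# Minimal MMR-style sketch: greedily pick items that are relevant but
# dissimilar to what has already been selected. All names here are
# illustrative, not from this page.

def mmr_select(candidates, relevance, similarity, k, lam=0.7):
    """Greedily pick k items balancing relevance and diversity.

    candidates: list of item ids
    relevance:  dict id -> relevance score
    similarity: function (id, id) -> similarity in [0, 1]
    lam:        trade-off; 1.0 = pure relevance, 0.0 = pure diversity
    """
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def mmr_score(c):
            # Penalize items close to anything already chosen.
            max_sim = max((similarity(c, s) for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * max_sim
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: "a" and "b" are near-duplicates; after picking "a",
# MMR prefers the less similar "c" over the redundant "b".
rel = {"a": 0.9, "b": 0.85, "c": 0.6}
sim = lambda x, y: 0.95 if {x, y} == {"a", "b"} else 0.1
print(mmr_select(["a", "b", "c"], rel, sim, k=2))  # → ['a', 'c']
```

The `lam` parameter controls how aggressively redundancy is suppressed; pure relevance ranking is recovered at `lam=1.0`.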

Papers

Showing 2526–2550 of 9051 papers

Title | Status | Hype
----- | ------ | ----
Hyperparameter Auto-tuning in Self-Supervised Robotic Learning | Code | 0
Differentiable Instruction Optimization for Cross-Task Generalization | Code | 0
FuncGenFoil: Airfoil Generation and Editing Model in Function Space | Code | 0
FS-NCSR: Increasing Diversity of the Super-Resolution Space via Frequency Separation and Noise-Conditioned Normalizing Flow | Code | 0
Divergence Frontiers for Generative Models: Sample Complexity, Quantization Effects, and Frontier Integrals | Code | 0
Full-Stack Filters to Build Minimum Viable CNNs | Code | 0
Divergent Ensemble Networks: Enhancing Uncertainty Estimation with Shared Representations and Independent Branching | Code | 0
From structure mining to unsupervised exploration of atomic octahedral networks | Code | 0
From On-chain to Macro: Assessing the Importance of Data Source Diversity in Cryptocurrency Market Forecasting | Code | 0
From Text to Emotion: Unveiling the Emotion Annotation Capabilities of LLMs | Code | 0
From Bytes to Borsch: Fine-Tuning Gemma and Mistral for the Ukrainian Language Representation | Code | 0
DIDI: Diffusion-Guided Diversity for Offline Behavioral Generation | Code | 0
From characters to words: the turning point of BPE merges | Code | 0
An Asynchronous Updating Reinforcement Learning Framework for Task-oriented Dialog System | Code | 0
Frequency Tracking Features for Data-Efficient Deep Siren Identification | Code | 0
Black-Box Testing of Deep Neural Networks Through Test Case Diversity | Code | 0
Illuminating the Space of Beatable Lode Runner Levels Produced By Various Generative Adversarial Networks | Code | 0
From Distributional to Overton Pluralism: Investigating Large Language Model Alignment | Code | 0
Diversity inducing Information Bottleneck in Model Ensembles | Code | 0
Forming Effective Human-AI Teams: Building Machine Learning Models that Complement the Capabilities of Multiple Experts | Code | 0
Benchmark tasks for Quality-Diversity applied to Uncertain domains | Code | 0
Foundation Models at Work: Fine-Tuning for Fairness in Algorithmic Hiring | Code | 0
Benchmarking the Fairness of Image Upsampling Methods | Code | 0
Forest Parameter Prediction by Multiobjective Deep Learning of Regression Models Trained with Pseudo-Target Imputation | Code | 0
Dialogue Quality and Emotion Annotations for Customer Support Conversations | Code | 0
Page 102 of 363

No leaderboard results yet.