SOTAVerified

Diversity

Diversity in data sampling is crucial across use cases such as search and recommendation systems. Diverse samples capture a wide range of variations and perspectives, which leads to more robust, less biased, and more comprehensive models. In search, for instance, diversity helps avoid redundancy, exposing users to a broader set of relevant results rather than many near-duplicates.
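One common way to enforce this kind of diversity when ranking results is maximal marginal relevance (MMR), which greedily selects items by trading off relevance to the query against similarity to items already selected. A minimal sketch, not taken from this page (the function name, vectors, and the lambda weighting are illustrative):

```python
import numpy as np

def mmr(query_vec, doc_vecs, k=5, lam=0.7):
    """Greedily pick k documents, balancing relevance to the query
    against redundancy with already-selected documents (illustrative MMR sketch)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    relevance = [cos(query_vec, d) for d in doc_vecs]
    selected, candidates = [], list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        best, best_score = None, -np.inf
        for i in candidates:
            # Redundancy = similarity to the closest already-selected document.
            redundancy = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected),
                             default=0.0)
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a low lambda (diversity-heavy), a near-duplicate of an already-selected result loses to a less similar but still relevant one, which is exactly the redundancy-avoidance behavior described above.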

Papers

Showing 7576–7600 of 9051 papers

Title | Status | Hype
Input-gradient space particle inference for neural network ensembles | Code | 0
Enhancing Robustness of AI Offensive Code Generators via Data Augmentation | Code | 0
Enhancing Relation Extraction Using Syntactic Indicators and Sentential Contexts | Code | 0
Boosting Deep Ensemble Performance with Hierarchical Pruning | Code | 0
Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm | Code | 0
BOLD5000: A public fMRI dataset of 5000 images | Code | 0
DAOC: Stable Clustering of Large Networks | Code | 0
Iterative Graph Alignment | Code | 0
Block Flow: Learning Straight Flow on Data Blocks | Code | 0
Table Detection in the Wild: A Novel Diverse Table Detection Dataset and Method | Code | 0
INGB: Informed Nonlinear Granular Ball Oversampling Framework for Noisy Imbalanced Classification | Code | 0
SCOPE-DTI: Semi-Inductive Dataset Construction and Framework Optimization for Practical Usability Enhancement in Deep Learning-Based Drug Target Interaction Prediction | Code | 0
Jacquard: A Large Scale Dataset for Robotic Grasp Detection | Code | 0
PacGAN: The power of two samples in generative adversarial networks | Code | 0
Information-Theoretic Active Learning for Content-Based Image Retrieval | Code | 0
JALMBench: Benchmarking Jailbreak Vulnerabilities in Audio Language Models | Code | 0
SCOPE: Sign Language Contextual Processing with Embedding from LLMs | Code | 0
Enhancing Output Diversity Improves Conjugate Gradient-based Adversarial Attacks | Code | 0
Intention-based Long-Term Human Motion Anticipation | Code | 0
A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss | Code | 0
PAIR: A Novel Large Language Model-Guided Selection Strategy for Evolutionary Algorithms | Code | 0
Information-Seeking Decision Strategies Mitigate Risk in Dynamic, Uncertain Environments | Code | 0
Dank Learning: Generating Memes Using Deep Neural Networks | Code | 0
Enhancing Molecular Property Prediction via Mixture of Collaborative Experts | Code | 0
BLESS: Benchmarking Large Language Models on Sentence Simplification | Code | 0
Page 304 of 363

No leaderboard results yet.