SOTAVerified

Papers

Showing 1–25 of 735 papers

| Title | Status | Hype |
| --- | --- | --- |
| ActiveAnno3D -- An Active Learning Framework for Multi-Modal 3D Object Detection | Code | 4 |
| Large Language Models Are Human-Level Prompt Engineers | Code | 3 |
| WhisperNER: Unified Open Named Entity and Speech Recognition | Code | 3 |
| Active Generalized Category Discovery | Code | 2 |
| KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge | Code | 2 |
| β-DPO: Direct Preference Optimization with Dynamic β | Code | 2 |
| Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation | Code | 2 |
| Self-supervised Dataset Distillation: A Good Compression Is All You Need | Code | 2 |
| Dataset Distillation via Factorization | Code | 1 |
| Context-aware Attentional Pooling (CAP) for Fine-grained Visual Classification | Code | 1 |
| Controllable Abstractive Sentence Summarization with Guiding Entities | Code | 1 |
| Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response | Code | 1 |
| Coverage-based Example Selection for In-Context Learning | Code | 1 |
| A comprehensive survey on deep active learning in medical image analysis | Code | 1 |
| Dataset Factorization for Condensation | Code | 1 |
| Best Practices for Multi-Fidelity Bayesian Optimization in Materials and Molecular Research | Code | 1 |
| Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | Code | 1 |
| BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation | Code | 1 |
| Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators | Code | 1 |
| Enhanced spatio-temporal electric load forecasts using less data with active deep learning | Code | 1 |
| Active Learning for Deep Object Detection via Probabilistic Modeling | Code | 1 |
| Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks | Code | 1 |
| Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup | Code | 1 |
| Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees | Code | 1 |
| BARTScore: Evaluating Generated Text as Text Generation | Code | 1 |
Page 1 of 30
