SOTAVerified

General Knowledge

This task evaluates a model's ability to answer general-knowledge questions.

Source: BIG-bench

Papers

Showing 26–50 of 399 papers

Title | Status | Hype
Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2
F-LMM: Grounding Frozen Large Multimodal Models | Code | 2
LLM-RG4: Flexible and Factual Radiology Report Generation across Diverse Input Contexts | Code | 2
Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs | Code | 2
GeoGalactica: A Scientific Large Language Model in Geoscience | Code | 1
Generic Knowledge Boosted Pre-training For Remote Sensing Images | Code | 1
GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model | Code | 1
FuseChat-3.0: Preference Optimization Meets Heterogeneous Model Fusion | Code | 1
A Dual-Space Framework for General Knowledge Distillation of Large Language Models | Code | 1
Generative Pre-Training from Molecules | Code | 1
Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks | Code | 1
A New Learning Paradigm for Foundation Model-based Remote Sensing Change Detection | Code | 1
Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-Rank Decomposition | Code | 1
DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection | Code | 1
Bert4XMR: Cross-Market Recommendation with Bidirectional Encoder Representations from Transformer | Code | 1
DIAGen: Diverse Image Augmentation with Generative Models | Code | 1
DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation | Code | 1
HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models | Code | 1
Benchmarking Large Language Models for Persian: A Preliminary Study Focusing on ChatGPT | Code | 1
BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models | Code | 1
BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation | Code | 1
E2Map: Experience-and-Emotion Map for Self-Reflective Robot Navigation with Language Models | Code | 1
Better Question-Answering Models on a Budget | Code | 1
Aligning Medical Images with General Knowledge from Large Language Models | Code | 1
ElecBench: a Power Dispatch Evaluation Benchmark for Large Language Models | Code | 1
Page 2 of 16

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 94.3 | | Unverified
2 | Gopher-280B (few-shot, k=5) | Accuracy | 93.9 | | Unverified
3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 85.7 | | Unverified
4 | Gopher-280B (few-shot, k=5) | Accuracy | 84.8 | | Unverified
5 | Gopher-280B (few-shot, k=5) | Accuracy | 84.2 | | Unverified
6 | Gopher-280B (few-shot, k=5) | Accuracy | 84.1 | | Unverified
7 | Gopher-280B (few-shot, k=5) | Accuracy | 83.9 | | Unverified
8 | Gopher-280B (few-shot, k=5) | Accuracy | 83.3 | | Unverified
9 | Gopher-280B (few-shot, k=5) | Accuracy | 81.8 | | Unverified
10 | Gopher-280B (few-shot, k=5) | Accuracy | 81 | | Unverified
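The table above reports few-shot (k=5) accuracy, meaning each question is preceded by five exemplar question–answer pairs in the prompt. The following is a minimal sketch of how such an accuracy figure could be computed; the prompt format, the `model_answer` callable, and the toy data are all illustrative assumptions, not the actual BIG-bench evaluation harness.

```python
# Hypothetical sketch of few-shot (k=5) accuracy scoring for a general-knowledge
# QA task. `model_answer` stands in for a real LLM call; the exact-match scoring
# rule and prompt layout are assumptions for illustration.

def build_prompt(exemplars, question):
    """Prepend k exemplar Q/A pairs (few-shot, k = len(exemplars)) to a question."""
    shots = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in exemplars)
    return shots + f"Q: {question}\nA:"

def accuracy(model_answer, exemplars, eval_set):
    """Percentage of eval questions whose answer exactly matches (case-insensitive)."""
    correct = sum(
        model_answer(build_prompt(exemplars, q)).strip().lower() == a.strip().lower()
        for q, a in eval_set
    )
    return 100.0 * correct / len(eval_set)

# Toy usage: a fake "model" that looks up answers for known prompts.
exemplars = [("What is the capital of France?", "Paris")] * 5  # k = 5
known = {
    build_prompt(exemplars, "What gas do plants absorb?"): "CO2",
}
fake_model = lambda prompt: known.get(prompt, "unknown")
eval_set = [("What gas do plants absorb?", "CO2")]
print(accuracy(fake_model, exemplars, eval_set))  # → 100.0
```

A real harness would also normalize answers (articles, punctuation) or score multiple-choice log-likelihoods rather than exact string match, which is why claimed numbers can be hard to reproduce without the original evaluation code.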