SOTAVerified

General Knowledge

This task evaluates a model's ability to answer general-knowledge questions.

Source: BIG-bench

Papers

Showing 51–75 of 399 papers

Title | Status | Hype
DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation | Code | 1
E2Map: Experience-and-Emotion Map for Self-Reflective Robot Navigation with Language Models | Code | 1
Aligning Medical Images with General Knowledge from Large Language Models | Code | 1
KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation | Code | 1
Importance-based Neuron Allocation for Multilingual Neural Machine Translation | Code | 1
Automated Phrase Mining from Massive Text Corpora | Code | 1
KALA: Knowledge-Augmented Language Model Adaptation | Code | 1
Generic Knowledge Boosted Pre-training For Remote Sensing Images | Code | 1
Can Editing LLMs Inject Harm? | Code | 1
Generative Pre-Training from Molecules | Code | 1
Can LLM Generate Culturally Relevant Commonsense QA Data? Case Study in Indonesian and Sundanese | Code | 1
FuseChat-3.0: Preference Optimization Meets Heterogeneous Model Fusion | Code | 1
Knowledge Graph Contrastive Learning for Recommendation | Code | 1
Knowledge Prompt-tuning for Sequential Recommendation | Code | 1
A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1
DIAGen: Diverse Image Augmentation with Generative Models | Code | 1
HELM: Hyperbolic Large Language Models via Mixture-of-Curvature Experts | Code | 1
A General Knowledge Injection Framework for ICD Coding | Code | 1
Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-Rank Decomposition | Code | 1
How Well Do LLMs Handle Cantonese? Benchmarking Cantonese Capabilities of Large Language Models | Code | 1
HYDRA: Model Factorization Framework for Black-Box LLM Personalization | Code | 1
CurriculumLoc: Enhancing Cross-Domain Geolocalization through Multi-Stage Refinement | Code | 1
DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection | Code | 1
Health Index Estimation Through Integration of General Knowledge with Unsupervised Learning | Code | 1
GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model | Code | 1
Page 3 of 16

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 94.3 | | Unverified
2 | Gopher-280B (few-shot, k=5) | Accuracy | 93.9 | | Unverified
3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 85.7 | | Unverified
4 | Gopher-280B (few-shot, k=5) | Accuracy | 84.8 | | Unverified
5 | Gopher-280B (few-shot, k=5) | Accuracy | 84.2 | | Unverified
6 | Gopher-280B (few-shot, k=5) | Accuracy | 84.1 | | Unverified
7 | Gopher-280B (few-shot, k=5) | Accuracy | 83.9 | | Unverified
8 | Gopher-280B (few-shot, k=5) | Accuracy | 83.3 | | Unverified
9 | Gopher-280B (few-shot, k=5) | Accuracy | 81.8 | | Unverified
10 | Gopher-280B (few-shot, k=5) | Accuracy | 81 | | Unverified
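The accuracy metric reported above is conventionally the percentage of questions whose predicted answer matches the gold answer. A minimal sketch of how such a score might be computed is below; the normalization rules and the toy question/answer pairs are illustrative assumptions, not taken from this benchmark or from BIG-bench's actual scoring code.

```python
def normalize(text: str) -> str:
    """Lowercase and strip surrounding whitespace and trailing periods
    so that trivially different answer strings still compare equal."""
    return text.strip().strip(".").lower()

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Percentage of predictions that exactly match their reference
    answer after normalization."""
    assert len(predictions) == len(references) and references
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

# Toy illustration with fabricated Q/A pairs (not benchmark items):
refs  = ["Paris", "Mount Everest", "Oxygen", "1969"]
preds = ["paris", "K2", "oxygen.", "1969"]
print(accuracy(preds, refs))  # 75.0
```

In a few-shot (k=5) setup, each prediction would be generated from a prompt containing five worked question/answer examples before the target question; the scoring step itself is unchanged.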