SOTAVerified

General Knowledge

This task evaluates a model's ability to answer general-knowledge questions.

Source: BIG-bench
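As a rough illustration of how accuracy on a general-knowledge QA task might be computed, here is a minimal sketch using a normalized exact-match criterion. The normalization rule and function names are illustrative assumptions, not the benchmark's actual scoring code.

```python
def normalize(answer: str) -> str:
    """Lowercase and drop punctuation so that 'Paris.' and 'paris' compare equal."""
    return "".join(ch for ch in answer.lower() if ch.isalnum() or ch.isspace()).strip()

def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of model answers that exactly match the reference after normalization."""
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["Paris.", "mount everest", "1969"]
refs = ["paris", "Mount Everest", "1968"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 answers match
```

Real BIG-bench tasks may instead score multiple-choice log-likelihoods or use task-specific matching, so treat this as a conceptual baseline only.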

Papers

Showing 51–100 of 399 papers

Title | Status | Hype
CityBench: Evaluating the Capabilities of Large Language Models for Urban Tasks | Code | 1
RAD: A Comprehensive Dataset for Benchmarking the Robustness of Image Anomaly Detection | Code | 1
DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation | Code | 1
HYDRA: Model Factorization Framework for Black-Box LLM Personalization | Code | 1
Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks | Code | 1
CRoFT: Robust Fine-Tuning with Concurrent Optimization for OOD Generalization and Open-Set OOD Detection | Code | 1
Health Index Estimation Through Integration of General Knowledge with Unsupervised Learning | Code | 1
BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models | Code | 1
Benchmarking Large Language Models for Persian: A Preliminary Study Focusing on ChatGPT | Code | 1
Prompt Learning via Meta-Regularization | Code | 1
See Through Their Minds: Learning Transferable Neural Representation from Cross-Subject fMRI | Code | 1
MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models | Code | 1
Can LLM Generate Culturally Relevant Commonsense QA Data? Case Study in Indonesian and Sundanese | Code | 1
OMGEval: An Open Multilingual Generative Evaluation Benchmark for Large Language Models | Code | 1
Pre-training and Diagnosing Knowledge Base Completion Models | Code | 1
The Unreasonable Effectiveness of Easy Training Data for Hard Tasks | Code | 1
Generic Knowledge Boosted Pre-training For Remote Sensing Images | Code | 1
GeoGalactica: A Scientific Large Language Model in Geoscience | Code | 1
Time Travelling Pixels: Bitemporal Features Integration with Foundation Model for Remote Sensing Image Change Detection | Code | 1
VIEScore: Towards Explainable Metrics for Conditional Image Synthesis Evaluation | Code | 1
Prediction and Control in Continual Reinforcement Learning | Code | 1
A New Learning Paradigm for Foundation Model-based Remote Sensing Change Detection | Code | 1
MultiGPrompt for Multi-Task Pre-Training and Prompting on Graphs | Code | 1
CurriculumLoc: Enhancing Cross-Domain Geolocalization through Multi-Stage Refinement | Code | 1
Structured Chemistry Reasoning with Large Language Models | Code | 1
A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1
HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models | Code | 1
Overcoming Generic Knowledge Loss with Selective Parameter Update | Code | 1
DR-Tune: Improving Fine-tuning of Pretrained Visual Models by Distribution Regularization with Semantic Calibration | Code | 1
PMET: Precise Model Editing in a Transformer | Code | 1
Knowledge Prompt-tuning for Sequential Recommendation | Code | 1
Towards Task Sampler Learning for Meta-Learning | Code | 1
GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model | Code | 1
Bert4XMR: Cross-Market Recommendation with Bidirectional Encoder Representations from Transformer | Code | 1
MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation | Code | 1
Better Question-Answering Models on a Budget | Code | 1
EPVT: Environment-aware Prompt Vision Transformer for Domain Generalization in Skin Lesion Recognition | Code | 1
Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation | Code | 1
Few-Shot Class-Incremental Learning via Class-Aware Bilateral Distillation | Code | 1
PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation | Code | 1
Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model | Code | 1
Learning with Recoverable Forgetting | Code | 1
CC-Riddle: A Question Answering Dataset of Chinese Character Riddles | Code | 1
Prompt-aligned Gradient for Prompt Tuning | Code | 1
Relphormer: Relational Graph Transformer for Knowledge Graph Representations | Code | 1
Seed-Guided Topic Discovery with Out-of-Vocabulary Seeds | Code | 1
Knowledge Graph Contrastive Learning for Recommendation | Code | 1
KALA: Knowledge-Augmented Language Model Adaptation | Code | 1
BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation | Code | 1
Generative Pre-Training from Molecules | Code | 1
Page 2 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 94.3 | – | Unverified
2 | Gopher-280B (few-shot, k=5) | Accuracy | 93.9 | – | Unverified
3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 85.7 | – | Unverified
4 | Gopher-280B (few-shot, k=5) | Accuracy | 84.8 | – | Unverified
5 | Gopher-280B (few-shot, k=5) | Accuracy | 84.2 | – | Unverified
6 | Gopher-280B (few-shot, k=5) | Accuracy | 84.1 | – | Unverified
7 | Gopher-280B (few-shot, k=5) | Accuracy | 83.9 | – | Unverified
8 | Gopher-280B (few-shot, k=5) | Accuracy | 83.3 | – | Unverified
9 | Gopher-280B (few-shot, k=5) | Accuracy | 81.8 | – | Unverified
10 | Gopher-280B (few-shot, k=5) | Accuracy | 81 | – | Unverified
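The "few-shot, k=5" setting above means each test question is preceded by five worked question–answer exemplars in the prompt. A minimal sketch of how such a prompt might be assembled follows; the `Q:`/`A:` format and the function name are illustrative assumptions, not the evaluated models' actual prompt templates.

```python
def build_few_shot_prompt(exemplars: list[tuple[str, str]], question: str, k: int = 5) -> str:
    """Prepend k (question, answer) exemplars before the target question,
    mirroring the few-shot (k=5) setting reported in the results table."""
    shots = exemplars[:k]
    blocks = [f"Q: {q}\nA: {a}" for q, a in shots]
    blocks.append(f"Q: {question}\nA:")  # model continues from the trailing "A:"
    return "\n\n".join(blocks)

exemplars = [(f"Example question {i}?", f"answer {i}") for i in range(1, 8)]
prompt = build_few_shot_prompt(exemplars, "What is the capital of France?")
print(prompt.count("Q:"))  # 5 exemplars + 1 target question = 6
```

Accuracy under this setting is then the fraction of target questions the model completes correctly given the five exemplars.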