SOTAVerified

General Knowledge

This task evaluates a model's ability to answer general-knowledge questions.
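The leaderboard below reports few-shot (k=5) accuracy. A minimal sketch of that setup: build a prompt from k solved exemplars, query the model, and score exact-match accuracy. The questions and the `ask_model` stub here are hypothetical placeholders; a real harness would call an actual model API in their place.

```python
# Minimal sketch of few-shot (k=5) exact-match accuracy scoring for a
# general-knowledge QA task. All data and the model stub are illustrative.

K = 5  # number of in-context exemplars, matching the "k=5" entries below

def build_prompt(exemplars, question):
    """Concatenate K solved exemplars followed by the target question."""
    lines = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

def ask_model(prompt):
    """Stand-in for a model call; always answers 'Paris' for illustration."""
    return "Paris"

def accuracy(dataset, exemplars):
    """Exact-match accuracy over (question, gold answer) pairs."""
    correct = sum(
        ask_model(build_prompt(exemplars, q)).strip().lower() == a.lower()
        for q, a in dataset
    )
    return correct / len(dataset)

exemplars = [(f"Example question {i}?", "example answer") for i in range(K)]
dataset = [("What is the capital of France?", "Paris"),
           ("What is the capital of Italy?", "Rome")]
print(accuracy(dataset, exemplars))  # 1 of 2 exact matches -> 0.5
```

Real evaluations typically also normalize answers (articles, punctuation) before comparison; plain exact match is the simplest variant.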

Source: BIG-bench

Papers

Showing 101–125 of 399 papers

Title | Status | Hype
HELM: Hyperbolic Large Language Models via Mixture-of-Curvature Experts | Code | 1
Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? | Code | 1
Towards Task Sampler Learning for Meta-Learning | Code | 1
Seed-Guided Topic Discovery with Out-of-Vocabulary Seeds | Code | 1
Prompt Learning via Meta-Regularization | Code | 1
Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching | Code | 1
DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation | Code | 1
A New Learning Paradigm for Foundation Model-based Remote Sensing Change Detection | Code | 1
Can LVLMs Obtain a Driver's License? A Benchmark Towards Reliable AGI for Autonomous Driving | - | 0
Are LLMs Good Cryptic Crossword Solvers? | - | 0
AcademicGPT: Empowering Academic Research | - | 0
Learning Electromagnetic Metamaterial Physics With ChatGPT | - | 0
Enhancing Action Recognition from Low-Quality Skeleton Data via Part-Level Knowledge Distillation | - | 0
Enhance Graph Alignment for Large Language Models | - | 0
Advancing Retrieval-Augmented Generation for Persian: Development of Language Models, Comprehensive Benchmarks, and Best Practices for Optimization | - | 0
Enabling Autonomic Microservice Management through Self-Learning Agents | - | 0
Applying SoftTriple Loss for Supervised Language Model Fine Tuning | - | 0
AnomalyPainter: Vision-Language-Diffusion Synergy for Zero-Shot Realistic and Diverse Industrial Anomaly Synthesis | - | 0
CALM: Unleashing the Cross-Lingual Self-Aligning Ability of Language Model Question Answering | - | 0
Few Exemplar-Based General Medical Image Segmentation via Domain-Aware Selective Adaptation | - | 0
Enhancing Target-unspecific Tasks through a Features Matrix | - | 0
Efficient illumination angle self-calibration in Fourier ptychography | - | 0
Bridge-Coder: Unlocking LLMs' Potential to Overcome Language Gaps in Low-Resource Code | - | 0
Evaluating Company-specific Biases in Financial Sentiment Analysis using Large Language Models | - | 0
Bootstrapping Cognitive Agents with a Large Language Model | - | 0
Page 5 of 16

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 94.3 | - | Unverified
2 | Gopher-280B (few-shot, k=5) | Accuracy | 93.9 | - | Unverified
3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 85.7 | - | Unverified
4 | Gopher-280B (few-shot, k=5) | Accuracy | 84.8 | - | Unverified
5 | Gopher-280B (few-shot, k=5) | Accuracy | 84.2 | - | Unverified
6 | Gopher-280B (few-shot, k=5) | Accuracy | 84.1 | - | Unverified
7 | Gopher-280B (few-shot, k=5) | Accuracy | 83.9 | - | Unverified
8 | Gopher-280B (few-shot, k=5) | Accuracy | 83.3 | - | Unverified
9 | Gopher-280B (few-shot, k=5) | Accuracy | 81.8 | - | Unverified
10 | Gopher-280B (few-shot, k=5) | Accuracy | 81 | - | Unverified