SOTAVerified

General Knowledge

This task evaluates a model's ability to answer general-knowledge questions.

Source: BIG-bench

Papers

Showing 131–140 of 399 papers

| Title | Status | Hype |
|---|---|---|
| MM-Eval: A Hierarchical Benchmark for Modern Mongolian Evaluation in LLMs | Code | 0 |
| PELMS: Pre-training for Effective Low-Shot Multi-Document Summarization | Code | 0 |
| Knowledge graphs for empirical concept retrieval | Code | 0 |
| Learning to Learn Variational Semantic Memory | Code | 0 |
| DAGPrompT: Pushing the Limits of Graph Prompting with a Distribution-aware Graph Prompt Tuning Approach | Code | 0 |
| Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling | Code | 0 |
| Avoiding Copyright Infringement via Large Language Model Unlearning | Code | 0 |
| Joey NMT: A Minimalist NMT Toolkit for Novices | Code | 0 |
| ContextFlow++: Generalist-Specialist Flow-based Generative Models with Mixed-Variable Context Encoding | Code | 0 |
| GenKnowSub: Improving Modularity and Reusability of LLMs through General Knowledge Subtraction | Code | 0 |
Page 14 of 40

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 94.3 | | Unverified |
| 2 | Gopher-280B (few-shot, k=5) | Accuracy | 93.9 | | Unverified |
| 3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 85.7 | | Unverified |
| 4 | Gopher-280B (few-shot, k=5) | Accuracy | 84.8 | | Unverified |
| 5 | Gopher-280B (few-shot, k=5) | Accuracy | 84.2 | | Unverified |
| 6 | Gopher-280B (few-shot, k=5) | Accuracy | 84.1 | | Unverified |
| 7 | Gopher-280B (few-shot, k=5) | Accuracy | 83.9 | | Unverified |
| 8 | Gopher-280B (few-shot, k=5) | Accuracy | 83.3 | | Unverified |
| 9 | Gopher-280B (few-shot, k=5) | Accuracy | 81.8 | | Unverified |
| 10 | Gopher-280B (few-shot, k=5) | Accuracy | 81 | | Unverified |