SOTAVerified

General Knowledge

This task evaluates a model's ability to answer general-knowledge questions.

Source: BIG-bench

Papers

Showing 101–110 of 399 papers

| Title | Status | Hype |
|---|---|---|
| Importance-based Neuron Allocation for Multilingual Neural Machine Translation | Code | 1 |
| Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do | Code | 1 |
| Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching | Code | 1 |
| KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation | Code | 1 |
| Transformers as Soft Reasoners over Language | Code | 1 |
| Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks | Code | 1 |
| RDF2Vec: RDF Graph Embeddings and Their Applications | Code | 1 |
| Automated Phrase Mining from Massive Text Corpora | Code | 1 |
| PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning | Code | 0 |
| Reinforcement Fine-Tuning Naturally Mitigates Forgetting in Continual Post-Training | — | 0 |
Page 11 of 40

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 94.3 | — | Unverified |
| 2 | Gopher-280B (few-shot, k=5) | Accuracy | 93.9 | — | Unverified |
| 3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 85.7 | — | Unverified |
| 4 | Gopher-280B (few-shot, k=5) | Accuracy | 84.8 | — | Unverified |
| 5 | Gopher-280B (few-shot, k=5) | Accuracy | 84.2 | — | Unverified |
| 6 | Gopher-280B (few-shot, k=5) | Accuracy | 84.1 | — | Unverified |
| 7 | Gopher-280B (few-shot, k=5) | Accuracy | 83.9 | — | Unverified |
| 8 | Gopher-280B (few-shot, k=5) | Accuracy | 83.3 | — | Unverified |
| 9 | Gopher-280B (few-shot, k=5) | Accuracy | 81.8 | — | Unverified |
| 10 | Gopher-280B (few-shot, k=5) | Accuracy | 81 | — | Unverified |