SOTAVerified

General Knowledge

This task evaluates a model's ability to answer general-knowledge questions.

Source: BIG-bench
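To make the evaluation setup concrete, here is a minimal sketch of how a few-shot general-knowledge task is typically scored: build a prompt from k solved examples (the results below use k=5) and compute exact-match accuracy over the model's answers. The helper names and the data format are illustrative assumptions, not the actual BIG-bench API.

```python
def accuracy(predictions, targets):
    """Fraction of predictions that exactly match the reference answer
    (case- and whitespace-insensitive exact match)."""
    if not targets:
        return 0.0
    correct = sum(p.strip().lower() == t.strip().lower()
                  for p, t in zip(predictions, targets))
    return correct / len(targets)

def build_few_shot_prompt(examples, question, k=5):
    """Prepend k solved (question, answer) pairs to the new question,
    mirroring the few-shot (k=5) setting in the results table."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples[:k]]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```

A harness would call `build_few_shot_prompt` once per test item, query the model, and feed the collected answers to `accuracy`.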

Papers

Showing 171–180 of 399 papers

Title | Status | Hype
Eraser: Jailbreaking Defense in Large Language Models via Unlearning Harmful Knowledge | Code | 0
BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models | Code | 1
Benchmarking Large Language Models for Persian: A Preliminary Study Focusing on ChatGPT | Code | 1
Prompt Learning via Meta-Regularization | Code | 1
Juru: Legal Brazilian Large Language Model from Reputable Sources | — | 0
Are LLMs Good Cryptic Crossword Solvers? | — | 0
CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model | Code | 2
DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning | — | 0
See Through Their Minds: Learning Transferable Neural Representation from Cross-Subject fMRI | Code | 1
Deep Prompt Multi-task Network for Abuse Language Detection | — | 0
Page 18 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Chinchilla-70B (few-shot, k=5) | Accuracy | 94.3 | — | Unverified
2 | Gopher-280B (few-shot, k=5) | Accuracy | 93.9 | — | Unverified
3 | Chinchilla-70B (few-shot, k=5) | Accuracy | 85.7 | — | Unverified
4 | Gopher-280B (few-shot, k=5) | Accuracy | 84.8 | — | Unverified
5 | Gopher-280B (few-shot, k=5) | Accuracy | 84.2 | — | Unverified
6 | Gopher-280B (few-shot, k=5) | Accuracy | 84.1 | — | Unverified
7 | Gopher-280B (few-shot, k=5) | Accuracy | 83.9 | — | Unverified
8 | Gopher-280B (few-shot, k=5) | Accuracy | 83.3 | — | Unverified
9 | Gopher-280B (few-shot, k=5) | Accuracy | 81.8 | — | Unverified
10 | Gopher-280B (few-shot, k=5) | Accuracy | 81 | — | Unverified