SOTAVerified

Known Unknowns

Language models tend to generate text containing false statements, often referred to as "hallucinations." The primary purpose of this task is to test for that failure case by probing whether a model can correctly identify that the answer to a question is unknown. A common failure mode is to prefer a confident but false answer over predicting that the answer is unknown.
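The scoring idea behind such a probe can be sketched minimally: treat "Unknown" as a valid target answer and check whether the model prefers it over a fabricated one. The example questions and the `model_answer` stub below are hypothetical stand-ins, not the BIG-bench task's actual data or harness.

```python
# Minimal sketch of scoring a known-unknowns probe.
# EXAMPLES and model_answer are illustrative assumptions; a real
# evaluation would load the task's dataset and query a language model.

EXAMPLES = [
    # An answerable question with a known target.
    {"question": "What is the capital of France?", "target": "Paris"},
    # A question whose true answer is not knowable; the correct
    # prediction here is the literal string "Unknown".
    {"question": "What was Abraham Lincoln's favorite color?", "target": "Unknown"},
]

def model_answer(question: str) -> str:
    """Stand-in for a language-model call; replace with a real model."""
    return "Unknown" if "favorite color" in question else "Paris"

def score(examples) -> float:
    """Exact-match accuracy, counting 'Unknown' as a valid target."""
    correct = sum(model_answer(e["question"]) == e["target"] for e in examples)
    return correct / len(examples)

print(score(EXAMPLES))
```

A model that always commits to a concrete answer would be penalized on the second example, which is exactly the failure mode this task measures.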

Source: BIG-bench

Papers

Showing 1–15 of 15 papers

| Title | Status | Hype |
| --- | --- | --- |
| Training Compute-Optimal Large Language Models | Code | 6 |
| PaLM: Scaling Language Modeling with Pathways | Code | 2 |
| Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2 |
| Known Unknowns: Out-of-Distribution Property Prediction in Materials and Molecules | Code | 1 |
| Generative ODE Modeling with Known Unknowns | Code | 1 |
| Machine learning for advancing low-temperature plasma modeling and simulation | | 0 |
| Domain Concretization from Examples: Addressing Missing Domain Knowledge via Robust Planning | | 0 |
| Classification Uncertainty of Deep Neural Networks Based on Gradient Information | | 0 |
| Researchy Questions: A Dataset of Multi-Perspective, Decompositional Questions for LLM Web Agents | | 0 |
| The known unknowns of the Hsp90 chaperone | | 0 |
| Toward Open-Set Face Recognition | | 0 |
| The division of labor in communication: Speakers help listeners account for asymmetries in visual perspective | Code | 0 |
| Known Unknowns: Uncertainty Quality in Bayesian Neural Networks | Code | 0 |
| High-dimensional forecasting with known knowns and known unknowns | Code | 0 |
| Knowledge of Knowledge: Exploring Known-Unknowns Uncertainty with Large Language Models | Code | 0 |

No leaderboard results yet.