Known Unknowns

Language models tend to generate text containing false statements, often referred to as "hallucinations." The primary purpose of this task is to test for this failure case by probing whether a model can correctly identify that the answer to a question is unknown. A common failure mode is to prefer a definite but false answer over a prediction that the answer is unknown.
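The evaluation described above can be sketched as a simple scorer that counts correct answers and, separately, the failure mode of giving a definite answer when the gold label is "unknown." This is a minimal illustration; the example questions, the `predict` callable, and the exact-match scoring are assumptions for demonstration, not the BIG-bench task's actual format or metric.

```python
# Minimal sketch of scoring a known-unknowns probe.
# The example data and exact-match scoring are illustrative assumptions,
# not the actual BIG-bench task definition.

def score(examples, predict):
    """Return (correct, definite_on_unknown), where the second count
    tracks the failure mode: a definite answer when gold is "unknown"."""
    correct = 0
    definite_on_unknown = 0
    for question, gold in examples:
        pred = predict(question)
        if pred == gold:
            correct += 1
        elif gold == "unknown":
            definite_on_unknown += 1
    return correct, definite_on_unknown

# Hypothetical examples: one answerable question, one unanswerable one.
EXAMPLES = [
    ("What is the capital of France?", "Paris"),
    ("What number am I thinking of?", "unknown"),
]

# A model that always guesses a definite answer exhibits the failure mode
# on the second example.
always_guess = lambda question: "Paris"
print(score(EXAMPLES, always_guess))  # → (1, 1)
```

A model that abstains appropriately would score `(2, 0)` on these two examples; the gap between the two counts is what the task is designed to expose.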

Source: BIG-bench

Papers

Showing 1–10 of 15 papers

Title | Status | Hype
Training Compute-Optimal Large Language Models | Code | 6
Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2
PaLM: Scaling Language Modeling with Pathways | Code | 2
Known Unknowns: Out-of-Distribution Property Prediction in Materials and Molecules | Code | 1
Generative ODE Modeling with Known Unknowns | Code | 1
Knowledge of Knowledge: Exploring Known-Unknowns Uncertainty with Large Language Models | Code | 0
Known Unknowns: Uncertainty Quality in Bayesian Neural Networks | Code | 0
The division of labor in communication: Speakers help listeners account for asymmetries in visual perspective | Code | 0
High-dimensional forecasting with known knowns and known unknowns | Code | 0
Machine learning for advancing low-temperature plasma modeling and simulation | | 0

No leaderboard results yet.