SOTAVerified

Known Unknowns

Language models have a tendency to generate text containing false statements, often referred to as "hallucinations." The primary purpose of this task is to test for this failure case by probing whether a model can correctly identify that the answer to a question is unknown. A common failure mode is to prefer a concrete (but false) answer to a question whose true answer is unknowable, rather than predicting that the answer is unknown.
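The probe described above can be sketched as a tiny scoring loop. This is an illustrative sketch only, not BIG-bench code: the example questions, the `score` helper, and the `predict` callable are all hypothetical, and the gold label `"unknown"` stands in for questions with no knowable answer.

```python
# Minimal sketch of a known-unknowns probe (hypothetical examples and
# helper names; not the actual BIG-bench task implementation).

EXAMPLES = [
    # (question, gold answer); "unknown" marks an unanswerable question
    ("What is the capital of France?", "Paris"),
    ("What did Alan Turing eat for breakfast on his 30th birthday?", "unknown"),
]

def score(predict, examples):
    """Fraction of examples where the model's answer matches the gold
    answer. A model that guesses a concrete answer for an "unknown"
    question is penalized -- the failure mode described above."""
    correct = sum(predict(q).strip().lower() == a.strip().lower()
                  for q, a in examples)
    return correct / len(examples)

# A toy "model" that always answers confidently and never says "unknown":
overconfident = lambda q: "Paris"
print(score(overconfident, EXAMPLES))  # 0.5: it fails the unknown question
```

A well-calibrated model must do more than answer factual questions correctly; it must also earn credit on the `"unknown"` items by declining to guess.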

Source: BIG-bench

Papers

Showing 11–15 of 15 papers

Title | Status | Hype
----- | ------ | ----
Domain Concretization from Examples: Addressing Missing Domain Knowledge via Robust Planning | | 0
The division of labor in communication: Speakers help listeners account for asymmetries in visual perspective | Code | 0
Classification Uncertainty of Deep Neural Networks Based on Gradient Information | | 0
Toward Open-Set Face Recognition | | 0
Known Unknowns: Uncertainty Quality in Bayesian Neural Networks | Code | 0
Page 2 of 2

No leaderboard results yet.