
Novel Concepts

Measures the ability of models to uncover an underlying concept that unites several ostensibly disparate entities — entities that ideally would not co-occur frequently. This provides a limited test of a model's ability to creatively construct the abstraction needed to make sense of a situation it cannot have memorized in training.

Source: BIG-bench

Papers

Showing 91–100 of 158 papers

Title | Status | Hype
PaLM: Scaling Language Modeling with Pathways | Code | 2
A Closer Look at Rehearsal-Free Continual Learning | | 0
FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic descriptions, and Conceptual Relations | | 0
Training Compute-Optimal Large Language Models | Code | 6
Emergence of hierarchical reference systems in multi-agent communication | Code | 0
Statistical Depth Functions for Ranking Distributions: Definitions, Statistical Learning and Applications | | 0
Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2
Learning Instance and Task-Aware Dynamic Kernels for Few Shot Learning | Code | 1
Extract Free Dense Labels from CLIP | Code | 1
Generative Pre-Trained Transformer for Design Concept Generation: An Exploration | | 0
Page 10 of 16

No leaderboard results yet.