
Novel Concepts

Measures a model's ability to uncover an underlying concept that unites several ostensibly disparate entities, ones that ideally do not co-occur frequently. This provides a limited test of a model's ability to creatively construct the abstraction needed to make sense of a situation it cannot have memorized during training.
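To make the task concrete, here is a minimal sketch of what evaluating such a benchmark item could look like. The item text and the exact-match scorer below are illustrative assumptions, not taken from the actual BIG-bench task data; BIG-bench tasks are commonly defined as JSON lists of `input`/`target` examples, which this sketch mimics.

```python
# Hedged sketch: a hypothetical Novel Concepts-style item plus a simple
# exact-match scorer. The example item is invented for illustration.

def exact_match_accuracy(predictions, targets):
    """Fraction of predictions that exactly match their target string,
    ignoring case and surrounding whitespace."""
    matches = sum(p.strip().lower() == t.strip().lower()
                  for p, t in zip(predictions, targets))
    return matches / len(targets)

# Hypothetical task items in a BIG-bench-like JSON example format:
examples = [
    {"input": "What concept unites: a zipper, a glacier, and rush-hour traffic?",
     "target": "slow or gradual movement"},
]

preds = ["Slow or gradual movement"]
score = exact_match_accuracy(preds, [ex["target"] for ex in examples])
print(score)  # → 1.0
```

Real evaluations of tasks like this typically use more forgiving matching (multiple accepted targets, or model-graded scoring), since a correct unifying concept can be phrased many ways.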

Source: BIG-bench

Papers

Showing 1 to 10 of 158 papers

Title | Status | Hype
Explaining deep neural network models for electricity price forecasting with XAI | | 0
What happens when generative AI models train recursively on each others' generated outputs? | | 0
From Data to Modeling: Fully Open-vocabulary Scene Graph Generation | | 0
Neuro-Symbolic Concepts | | 0
Exploring internal representation of self-supervised networks: few-shot learning abilities and comparison with human semantics and recognition of objects | | 0
Contrastive Visual Data Augmentation | | 0
Efficient Transmission of Radiomaps via Physics-Enhanced Semantic Communications | | 0
Towards A Litmus Test for Common Sense | | 0
Open Ad-hoc Categorization with Contextualized Feature Learning | | 0
AFANet: Adaptive Frequency-Aware Network for Weakly-Supervised Few-Shot Semantic Segmentation | Code | 1
Page 1 of 16

No leaderboard results yet.