SOTAVerified

World Knowledge

Papers

Showing 431–440 of 818 papers

| Title | Status | Hype |
| --- | --- | --- |
| Vision Language Models are In-Context Value Learners | | 0 |
| Vision-Language Models Provide Promptable Representations for Reinforcement Learning | | 0 |
| Visual Commonsense in Pretrained Unimodal and Multimodal Models | | 0 |
| Visual Language Tracking with Multi-modal Interaction: A Robust Benchmark | | 0 |
| Visual Programming for Text-to-Image Generation and Evaluation | | 0 |
| Visual Riddles: a Commonsense and World Knowledge Challenge for Large Vision and Language Models | | 0 |
| VLABench: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks | | 0 |
| VQA-Diff: Exploiting VQA and Diffusion for Zero-Shot Image-to-3D Vehicle Asset Generation in Autonomous Driving | | 0 |
| We Usually Don't Like Going to the Dentist: Using Common Sense to Detect Irony on Twitter | | 0 |
| What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models? | | 0 |
Page 44 of 82

No leaderboard results yet.