Visual Prompting

Visual Prompting is the task of adapting computer vision models with prompts rather than conventional labeling and fine-tuning, inspired by the success of text prompting in NLP. A few visual prompts, such as labeled points, boxes, or example input-output image pairs, can turn an unlabeled dataset into a deployed model, substantially reducing development time for both individual projects and enterprise solutions.
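As a concrete illustration, here is a minimal sketch of point-based visual prompting with Segment Anything (the first paper listed below), using the `segment_anything` package's `SamPredictor`. The checkpoint file, image path, and click coordinate are placeholders, and the ViT-B backbone choice is an assumption for the example.

```python
# Minimal sketch: a single foreground click serves as the visual prompt,
# producing segmentation masks with no task-specific training.
# Assumes `pip install segment-anything` plus opencv-python, and a
# downloaded SAM ViT-B checkpoint; paths and coordinates are placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # compute the image embedding once

# The visual prompt: one (x, y) click marked as foreground (label 1).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)
best_mask = masks[np.argmax(scores)]  # HxW boolean mask
```

Because the image embedding is computed once in `set_image`, additional prompts on the same image are cheap, which is what makes interactive, prompt-driven labeling of an unlabeled dataset practical.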

Papers

Showing 50 of 127 papers

Title | Status | Hype
Segment Anything | Code | 5
GPT4Scene: Understand 3D Scenes from Videos with Vision-Language Models | Code | 4
Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models | Code | 4
Visual In-Context Prompting | Code | 4
Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V | Code | 4
Generative Multimodal Models are In-Context Learners | Code | 3
Chameleon: Fast-slow Neuro-symbolic Lane Topology Extraction | Code | 2
Attention Prompting on Image for Large Vision-Language Models | Code | 2
Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models | Code | 2
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning | Code | 2
Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want | Code | 2
Tokenize Anything via Prompting | Code | 2
Explicit Visual Prompting for Universal Foreground Segmentations | Code | 2
Explicit Visual Prompting for Low-Level Structure Segmentations | Code | 2
Visual Prompting via Image Inpainting | Code | 2
Exploring Visual Prompts for Adapting Large-Scale Models | Code | 2
Vision Graph Prompting via Semantic Low-Rank Decomposition | Code | 1
Token Coordinated Prompt Attention is Needed for Visual Prompting | Code | 1
LoR-VP: Low-Rank Visual Prompting for Efficient Vision Model Adaptation | Code | 1
Selective Visual Prompting in Vision Mamba | Code | 1
Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning | Code | 1
Improved GUI Grounding via Iterative Narrowing | Code | 1
Improving Visual Object Tracking through Visual Prompting | Code | 1
Open-Vocabulary Action Localization with Iterative Visual Prompting | Code | 1
EarthMarker: A Visual Prompting Multi-modal Large Language Model for Remote Sensing | Code | 1
By My Eyes: Grounding Multimodal Large Language Models with Sensor Data via Visual Prompting | Code | 1
Dynamic Domains, Dynamic Solutions: DPCore for Continual Test-Time Adaptation | Code | 1
OT-VP: Optimal Transport-guided Visual Prompting for Test-Time Adaptation | Code | 1
Visual Prompting for Generalized Few-shot Segmentation: A Multi-scale Approach | Code | 1
Exploring the Transferability of Visual Prompting for Multimodal Large Language Models | Code | 1
Finding Visual Task Vectors | Code | 1
Scaffolding Coordinates to Promote Vision-Language Coordination in Large Multi-Modal Models | Code | 1
Tune-An-Ellipse: CLIP Has Potential to Find What You Want | Code | 1
EZ-CLIP: Efficient Zeroshot Video Action Recognition | Code | 1
ViscoNet: Bridging and Harmonizing Visual and Textual Conditioning for ControlNet | Code | 1
Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective | Code | 1
GeoSAM: Fine-tuning SAM with Multi-Modal Prompts for Mobility Infrastructure Segmentation | Code | 1
AutoVP: An Automated Visual Prompting Framework and Benchmark | Code | 1
Visual Instruction Inversion: Image Editing via Visual Prompting | Code | 1
Fine-Grained Visual Prompting | Code | 1
UPGPT: Universal Diffusion Model for Person Image Generation, Editing and Pose Transfer | Code | 1
BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning | Code | 1
Diversity-Aware Meta Visual Prompting | Code | 1
Text-Visual Prompting for Efficient 2D Temporal Video Grounding | Code | 1
Understanding and Improving Visual Prompting: A Label-Mapping Perspective | Code | 1
Visual Prompting for Adversarial Robustness | Code | 1
Stepwise Decomposition and Dual-stream Focus: A Novel Approach for Training-free Camouflaged Object Segmentation | Code | 0
RSVP: Reasoning Segmentation via Visual Prompting and Multi-modal Chain-of-Thought | - | 0
Grid-LOGAT: Grid Based Local and Global Area Transcription for Video Question Answering | - | 0
DINO-R1: Incentivizing Reasoning Capability in Vision Foundation Models | - | 0
Page 1 of 3

Leaderboard

No leaderboard results yet.