SOTAVerified

Video Editing

Papers

Showing 1–50 of 346 papers

| Title | Status | Hype |
| --- | --- | --- |
| Wan: Open and Advanced Large-Scale Video Generative Models | Code | 11 |
| VACE: All-in-One Video Creation and Editing | Code | 7 |
| VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild | Code | 5 |
| Segment Anything for Videos: A Systematic Survey | Code | 5 |
| Mora: Enabling Generalist Video Generation via A Multi-Agent Framework | Code | 5 |
| Video Seal: Open and Efficient Video Watermarking | Code | 4 |
| AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks | Code | 4 |
| VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play Context Control | Code | 4 |
| A Survey on Video Diffusion Models | Code | 4 |
| Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators | Code | 4 |
| Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis | Code | 4 |
| Taming Rectified Flow for Inversion and Editing | Code | 4 |
| JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing | Code | 3 |
| MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion | Code | 3 |
| Diffusion Model-Based Video Editing: A Survey | Code | 3 |
| DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Longer Video Generation | Code | 3 |
| Movie Gen: A Cast of Media Foundation Models | Code | 3 |
| FateZero: Fusing Attentions for Zero-shot Text-based Video Editing | Code | 3 |
| TokenFlow: Consistent Diffusion Features for Consistent Video Editing | Code | 3 |
| StableVideo: Text-driven Consistency-aware Diffusion Video Editing | Code | 3 |
| FlashDepth: Real-time Streaming Video Depth Estimation at 2K Resolution | Code | 3 |
| AutoVFX: Physically Realistic Video Editing from Natural Language Instructions | Code | 3 |
| Lumiere: A Space-Time Diffusion Model for Video Generation | Code | 3 |
| A Survey of Multimodal-Guided Image Editing with Text-to-Image Diffusion Models | Code | 3 |
| Exploring Temporally-Aware Features for Point Tracking | Code | 2 |
| Video-P2P: Video Editing with Cross-attention Control | Code | 2 |
| VidToMe: Video Token Merging for Zero-Shot Video Editing | Code | 2 |
| Unveiling Deep Shadows: A Survey and Benchmark on Image and Video Shadow Detection, Removal, and Generation in the Deep Learning Era | Code | 2 |
| FRAG: Frequency Adapting Group for Diffusion Video Editing | Code | 2 |
| Towards Unified Keyframe Propagation Models | Code | 2 |
| FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing | Code | 2 |
| LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation | Code | 2 |
| UniVST: A Unified Framework for Training-free Localized Video Style Transfer | Code | 2 |
| Video-P2P: Video Editing with Cross-attention Control | Code | 2 |
| Clearer Frames, Anytime: Resolving Velocity Ambiguity in Video Frame Interpolation | Code | 2 |
| Alias-Free Latent Diffusion Models: Improving Fractional Shift Equivariance of Diffusion Latent Space | Code | 2 |
| Text2LIVE: Text-Driven Layered Image and Video Editing | Code | 2 |
| Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices | Code | 2 |
| DPE: Disentanglement of Pose and Expression for General Video Portrait Editing | Code | 2 |
| StableV2V: Stablizing Shape Consistency in Video-to-Video Editing | Code | 2 |
| Third Time's the Charm? Image and Video Editing with StyleGAN3 | Code | 2 |
| NaRCan: Natural Refined Canonical Image with Integration of Diffusion Prior for Video Editing | Code | 2 |
| RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models | Code | 2 |
| Contextualized Diffusion Models for Text-Guided Image and Video Generation | Code | 2 |
| ControlVideo: Conditional Control for One-shot Text-driven Video Editing and Beyond | Code | 2 |
| Compositional Video Generation as Flow Equalization | Code | 2 |
| Control-A-Video: Controllable Text-to-Video Diffusion Models with Motion Prior and Reward Feedback Learning | Code | 2 |
| Sketch Video Synthesis | Code | 2 |
| VE-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment | Code | 2 |
| Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models | Code | 2 |
Page 1 of 7

No leaderboard results yet.