SOTAVerified

Image to Video Generation

Image to Video Generation is the task of synthesizing a sequence of video frames from a single still image (or a small set of images). The goal is a video that stays consistent with the input in appearance and style while also being temporally coherent: consecutive frames should show smooth, plausible motion rather than flicker or abrupt changes. The task is typically tackled with deep generative models, such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), or diffusion models, trained on large video datasets. These models learn to generate plausible frames conditioned on the input image and, optionally, on auxiliary signals such as a text prompt or an audio track.
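At the interface level, an image-to-video model maps one conditioning image of shape (H, W, C) to a clip of shape (T, H, W, C) whose frames vary smoothly over time. The sketch below is a toy stand-in for that contract, not a real generative model: `generate_video` and its shift-plus-noise "motion" are illustrative assumptions, where a real GAN, VAE, or diffusion model would instead predict learned, plausible motion.

```python
import numpy as np

def generate_video(image: np.ndarray, num_frames: int = 16, seed: int = 0) -> np.ndarray:
    """Toy stand-in for an image-to-video model: given one still image of
    shape (H, W, C), return a temporally ordered clip of shape (T, H, W, C).
    A growing horizontal shift fakes smooth motion so each frame stays
    coherent with the conditioning image while changing gradually."""
    rng = np.random.default_rng(seed)
    frames = []
    for t in range(num_frames):
        # shift the image by t pixels to imitate smooth left-to-right motion
        frame = np.roll(image, shift=t, axis=1)
        # small noise stands in for the variation a generative model adds
        frame = frame + rng.normal(0.0, 0.01, size=frame.shape)
        frames.append(frame)
    return np.stack(frames, axis=0)

image = np.zeros((64, 64, 3))
clip = generate_video(image, num_frames=8)
print(clip.shape)  # (8, 64, 64, 3)
```

The point of the sketch is the shape contract and the frame-to-frame smoothness requirement; everything inside the loop is what the trained model replaces.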

Papers

Showing 11–20 of 85 papers

Title | Status | Hype
MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance | - | 0
Step-Video-TI2V Technical Report: A State-of-the-Art Text-Driven Image-to-Video Generation Model | Code | 3
I2V3D: Controllable image-to-video generation with 3D guidance | - | 0
DualDiff+: Dual-Branch Diffusion for High-Fidelity Video Generation with Reward Guidance | Code | 1
Extrapolating and Decoupling Image-to-Video Generation Models: Motion Modeling is Easier Than You Think | Code | 1
Object-Centric Image to Video Generation with Language Guidance | Code | 1
RealCam-I2V: Real-World Image-to-Video Generation with Interactive Complex Camera Control | - | 0
Magic 1-For-1: Generating One Minute Video Clips within One Minute | Code | 0
VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation | - | 0
MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation | - | 0
Page 2 of 9

No leaderboard results yet.