SOTAVerified

Image to 3D

Papers

Showing 150 of 162 papers

Title | Status | Hype
InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models | Code | 7
Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image | Code | 7
Direct3D-S2: Gigascale 3D Generation Made Easy with Spatial Sparse Attention | Code | 5
Wonder3D: Single Image to 3D using Cross-Domain Diffusion | Code | 5
MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images | Code | 5
Zero-1-to-3: Zero-shot One Image to 3D Object | Code | 4
GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation | Code | 4
Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors | Code | 3
3D-Adapter: Geometry-Consistent Multi-View Diffusion for High-Quality 3D Generation | Code | 3
SOAP: Style-Omniscient Animatable Portraits | Code | 3
LRM: Large Reconstruction Model for Single Image to 3D | Code | 3
SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion | Code | 3
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization | Code | 3
Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation | Code | 3
Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting | Code | 3
Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer | Code | 3
Generic 3D Diffusion Adapter Using Controlled Multi-View Editing | Code | 3
Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models | Code | 3
PhysX: Physical-Grounded 3D Asset Generation | Code | 3
REPARO: Compositional 3D Assets Generation with Differentiable 3D Layout Alignment | Code | 2
Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting | Code | 2
SVAD: From Single Image to 3D Avatar via Synthetic Data Generation with Video Diffusion and Data Augmentation | Code | 2
Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion | Code | 2
Objaverse++: Curated 3D Object Dataset with Quality Annotations | Code | 2
NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views | Code | 2
Collaborative Neural Rendering using Anime Character Sheets | Code | 2
DiMeR: Disentangled Mesh Reconstruction Model | Code | 2
Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation | Code | 2
Diffusion Time-step Curriculum for One Image to 3D Generation | Code | 2
Isotropic3D: Image-to-3D Generation Based on a Single CLIP Embedding | Code | 2
The More You See in 2D, the More You Perceive in 3D | Code | 2
Idea23D: Collaborative LMM Agents Enable 3D Model Generation from Interleaved Multimodal Inputs | Code | 2
Garment3DGen: 3D Garment Stylization and Texture Generation | Code | 2
Efficient4D: Fast Dynamic 3D Object Generation from a Single-view Video | Code | 2
Fancy123: One Image to High-Quality 3D Mesh Generation via Plug-and-Play Deformation | Code | 2
FlexiDreamer: Single Image-to-3D Generation with FlexiCubes | Code | 2
Hash3D: Training-free Acceleration for 3D Generation | Code | 2
6Img-to-3D: Few-Image Large-Scale Outdoor Driving Scene Reconstruction | Code | 2
Envision3D: One Image to 3D with Anchor Views Interpolation | Code | 2
Hybrid Fourier Score Distillation for Efficient One Image to 3D Object Generation | Code | 2
An Empirical Study of GPT-4o Image Generation Capabilities | Code | 1
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image | Code | 1
OneTo3D: One Image to Re-editable Dynamic 3D Model and Video Generation | Code | 1
Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation | Code | 1
Monocular 3D Human Pose Estimation by Generation and Ordinal Ranking | Code | 1
IPDreamer: Appearance-Controllable 3D Object Generation with Complex Image Prompts | Code | 1
MEAT: Multiview Diffusion Model for Human Generation on Megapixels with Mesh Attention | Code | 1
HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D | Code | 1
Human Parsing Based Texture Transfer from Single Image to 3D Human via Cross-View Consistency | Code | 1

No leaderboard results yet.