SOTAVerified

3D Generation

Papers

Showing 251–300 of 430 papers

| Title | Status | Hype |
| --- | --- | --- |
| Hash3D: Training-free Acceleration for 3D Generation | Code | 2 |
| DreamView: Injecting View-specific Text Guidance into Text-to-3D Generation | Code | 1 |
| StylizedGS: Controllable Stylization for 3D Gaussian Splatting | | 0 |
| Diffusion Time-step Curriculum for One Image to 3D Generation | Code | 2 |
| Idea23D: Collaborative LMM Agents Enable 3D Model Generation from Interleaved Multimodal Inputs | Code | 2 |
| Towards Robust 3D Pose Transfer with Adversarial Learning | | 0 |
| Diffusion^2: Dynamic 3D Content Generation via Score Composition of Video and Multi-view Diffusion Models | Code | 2 |
| Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation | | 0 |
| FlexiDreamer: Single Image-to-3D Generation with FlexiCubes | Code | 2 |
| Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation | | 0 |
| Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction | Code | 2 |
| VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation | | 0 |
| DreamPolisher: Towards High-Quality Text-to-3D Generation via Geometric Diffusion | | 0 |
| Exploiting Priors from 3D Diffusion Models for RGB-Based One-Shot View Planning | Code | 0 |
| InterFusion: Text-Driven Generation of 3D Human-Object Interaction | Code | 2 |
| DreamFlow: High-Quality Text-to-3D Generation by Approximating Probability Flow | | 0 |
| STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians | | 0 |
| ThemeStation: Generating Theme-Aware 3D Assets from Few Exemplars | Code | 3 |
| LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis | | 0 |
| DreamReward: Text-to-3D Generation with Human Preference | | 0 |
| Compress3D: a Compressed Latent Space for 3D Generation from a Single Image | | 0 |
| GVGEN: Text-to-3D Generation with Volumetric Representation | | 0 |
| ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance | | 0 |
| Precise-Physics Driven Text-to-3D Generation | | 0 |
| Generic 3D Diffusion Adapter Using Controlled Multi-View Editing | Code | 3 |
| SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion | Code | 3 |
| LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation | Code | 3 |
| BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis | Code | 2 |
| Isotropic3D: Image-to-3D Generation Based on a Single CLIP Embedding | Code | 2 |
| Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting | Code | 3 |
| Hyper-3DG: Text-to-3D Gaussian Generation via Hypergraph | Code | 2 |
| Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior | | 0 |
| Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation | | 0 |
| V3D: Video Diffusion Models are Effective 3D Generators | Code | 4 |
| 3DTopia: Large Text-to-3D Generation Model with Hybrid Diffusion Priors | Code | 4 |
| TripoSR: Fast 3D Object Reconstruction from a Single Image | Code | 9 |
| MVD^2: Efficient Multiview 3D Reconstruction for Multiview Diffusion | | 0 |
| Place Anything into Any Video | | 0 |
| Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability | | 0 |
| IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation | | 0 |
| GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting | | 0 |
| SPAD: Spatially Aware Multiview Diffusers | | 0 |
| Retrieval-Augmented Score Distillation for Text-to-3D Generation | Code | 2 |
| Advances in 3D Generation: A Survey | | 0 |
| Geometry aware 3D generation from in-the-wild images in ImageNet | | 0 |
| BoostDream: Efficient Refining for High-Quality Text-to-3D Generation from Multi-View Diffusion | | 0 |
| StableIdentity: Inserting Anybody into Anywhere at First Sight | Code | 3 |
| Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation | | 0 |
| Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion | Code | 1 |
| Consistent3D: Towards Consistent High-Fidelity Text-to-3D Generation with Deterministic Sampling Prior | Code | 2 |
Page 6 of 9

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MDM | FD_ClaTr | 6.79 | | Unverified |
| 2 | DIRECTOR B | FD_ClaTr | 6.1 | | Unverified |
| 3 | DIRECTOR A | FD_ClaTr | 3.88 | | Unverified |
| 4 | DIRECTOR C | FD_ClaTr | 3.76 | | Unverified |