SOTAVerified

Denoising

Denoising is a task in image processing and computer vision that aims to remove or reduce noise from an image. Noise can be introduced by camera sensor limitations, lighting conditions, compression artifacts, and other factors. The goal of denoising is to recover the original, noise-free image from a noisy observation.
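As a minimal sketch of the setup described above (not the method of any paper listed below): simulate additive Gaussian sensor noise on a synthetic image, denoise with a plain 3x3 mean filter, and score the result with PSNR, the metric used in the benchmark tables. The helpers `mean_filter3x3` and `psnr` are illustrative, NumPy-only implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean" image: a smooth horizontal gradient in [0, 1].
clean = np.linspace(0, 1, 64).reshape(1, -1).repeat(64, axis=0)

# Additive Gaussian noise model for sensor noise (sigma = 0.1).
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

def mean_filter3x3(img):
    """Denoise by averaging each pixel with its 8 neighbours (edges replicated)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

denoised = mean_filter3x3(noisy)

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

print(f"noisy    PSNR: {psnr(clean, noisy):.2f} dB")
print(f"denoised PSNR: {psnr(clean, denoised):.2f} dB")
```

Even this naive averaging filter raises PSNR markedly, because it shrinks the noise variance roughly ninefold while only slightly blurring the smooth gradient; the learned methods in the tables below trade far less detail for the same noise reduction.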

(Image credit: Beyond a Gaussian Denoiser)

Papers

Showing 2476-2500 of 7282 papers

Title | Hype
Align-A-Video: Deterministic Reward Tuning of Image Diffusion Models for Consistent Video Editing | 0
LITA-GS: Illumination-Agnostic Novel View Synthesis via Reference-Free 3D Gaussian Splatting and Physical Priors | 0
Fingerprinting Denoising Diffusion Probabilistic Models | 0
Encapsulated Composition of Text-to-Image and Text-to-Video Models for High-Quality Video Synthesis | 0
Decouple-Then-Merge: Finetune Diffusion Models as Multi-Task Learning | 0
Multi-Modal Contrastive Masked Autoencoders: A Two-Stage Progressive Pre-training Approach for RGBD Datasets | 0
SeaLion: Semantic Part-Aware Latent Point Diffusion Models for 3D Generation | 0
FeedEdit: Text-Based Image Editing with Dynamic Feedback Regulation | 0
TFCustom: Customized Image Generation with Time-Aware Frequency Feature Guidance | 0
CoordFlow: Coordinate Flow for Pixel-wise Neural Video Representation | 0
Pioneering 4-Bit FP Quantization for Diffusion Models: Mixup-Sign Quantization and Timestep-Aware Fine-Tuning | 0
EdgeDiff: Edge-aware Diffusion Network for Building Reconstruction from Point Clouds | 0
APT: Adaptive Personalized Training for Diffusion Models with Limited Data | 0
VODiff: Controlling Object Visibility Order in Text-to-Image Generation | 0
V2X-R: Cooperative LiDAR-4D Radar Fusion with Denoising Diffusion for 3D Object Detection | 0
Random Conditioning for Diffusion Model Compression with Distillation | 0
DriveScape: High-Resolution Driving Video Generation by Multi-View Feature Fusion | 0
Composing Parts for Expressive Object Generation | 0
Layered Motion Fusion: Lifting Motion Segmentation to 3D in Egocentric Videos | 0
Towards Precise Embodied Dialogue Localization via Causality Guided Diffusion | 0
Unboxed: Geometrically and Temporally Consistent Video Outpainting | 0
RaSS: Improving Denoising Diffusion Samplers with Reinforced Active Sampling Scheduler | 0
Discrete to Continuous: Generating Smooth Transition Poses from Sign Language Observations | 0
Diffusion Prism: Enhancing Diversity and Morphology Consistency in Mask-to-Image Diffusion | 0
STINR: Deciphering Spatial Transcriptomics via Implicit Neural Representation | 0
Page 100 of 292

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SINDy | PSNR | 81 | - | Unverified
2 | Pixel-shuffling Downsampling | PSNR | 38.4 | - | Unverified
3 | TWSC | PSNR | 37.93 | - | Unverified
4 | CBDNet(Syn) | PSNR | 37.57 | - | Unverified
5 | MCWNNM | PSNR | 37.38 | - | Unverified
6 | Han et al | PSNR | 35.95 | - | Unverified
7 | FFDNet | PSNR | 34.4 | - | Unverified
8 | TNRD | PSNR | 33.65 | - | Unverified
9 | CDnCNN-B | PSNR | 32.43 | - | Unverified
10 | NLRN | PSNR | 30.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DRUnet_Poisson_0.01 | Average PSNR (dB) | 33.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DRANet | Average PSNR | 39.64 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PCNN+RL+HME | Average | 84.61 | - | Unverified