SOTAVerified

Denoising

Denoising is a task in image processing and computer vision that aims to remove or reduce noise from an image. Noise can be introduced into an image for various reasons, such as camera sensor limitations, lighting conditions, and compression artifacts. The goal of denoising is to recover the underlying image, which is assumed to be noise-free, from a noisy observation.

(Image credit: Beyond a Gaussian Denoiser)
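The additive-noise formulation above can be illustrated with a small, self-contained sketch. This is an illustrative analogue only: a 1-D signal stands in for an image, and a simple moving-average filter stands in for a learned denoiser (all names here are made up for the example; no method from the paper list is implemented):

```python
import math
import random

random.seed(0)

# Synthetic "clean" signal and a noisy observation of it
# (additive Gaussian noise, a common model for sensor noise).
n = 200
clean = [math.sin(2 * math.pi * i / 50) for i in range(n)]
noisy = [c + random.gauss(0.0, 0.3) for c in clean]

def moving_average(signal, k=5):
    """Toy denoiser: replace each sample with the mean of its k-sample window."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

denoised = moving_average(noisy)
```

Averaging trades a small amount of bias (the smoothed signal is slightly attenuated) for a large reduction in noise variance, so `mse(denoised, clean)` comes out well below `mse(noisy, clean)`; real denoisers aim for the same trade-off with far less blurring.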

Papers

Showing 526–550 of 7282 papers

Title | Status | Hype
A-IDE : Agent-Integrated Denoising Experts | - | 0
Generating, Fast and Slow: Scalable Parallel Video Generation with Video Interface Networks | - | 0
Recovering Pulse Waves from Video Using Deep Unrolling and Deep Equilibrium Models | - | 0
Exploring the Efficacy of Partial Denoising Using Bit Plane Slicing for Enhanced Fracture Identification: A Comparative Study of Deep Learning-Based Approaches and Handcrafted Feature Extraction Techniques | - | 0
Fed-NDIF: A Noise-Embedded Federated Diffusion Model For Low-Count Whole-Body PET Denoising | - | 0
Denoising-based Contractive Imitation Learning | Code | 0
Scale-wise Distillation of Diffusion Models | - | 0
DnLUT: Ultra-Efficient Color Image Denoising via Channel-Aware Lookup Tables | Code | 2
BlockDance: Reuse Structurally Similar Spatio-Temporal Features to Accelerate Diffusion Transformers | - | 0
Temporal Score Analysis for Understanding and Correcting Diffusion Artifacts | - | 0
SceneMI: Motion In-betweening for Modeling Human-Scene Interactions | - | 0
MiLA: Multi-view Intensive-fidelity Long-term Video Generation World Model for Autonomous Driving | Code | 1
ScalingNoise: Scaling Inference-Time Search for Generating Infinite Videos | - | 0
Shining Yourself: High-Fidelity Ornaments Virtual Try-on with Diffusion Model | - | 0
Patch-based learning of adaptive Total Variation parameter maps for blind image denoising | - | 0
Analysis and Extension of Noisy-target Training for Unsupervised Target Signal Enhancement | - | 0
SuperPC: A Single Diffusion Model for Point Cloud Completion, Upsampling, Denoising, and Colorization | - | 0
MagicComp: Training-free Dual-Phase Refinement for Compositional Video Generation | - | 0
MOSAIC: Generating Consistent, Privacy-Preserving Scenes from Multiple Depth Views in Multi-Room Environments | - | 0
Revealing higher-order neural representations of uncertainty with the Noise Estimation through Reinforcement-based Diffusion (NERD) model | - | 0
SIR-DIFF: Sparse Image Sets Restoration with Multi-View Diffusion Model | - | 0
DIFFVSGG: Diffusion-Driven Online Video Scene Graph Generation | Code | 1
SketchFusion: Learning Universal Sketch Features through Fusing Foundation Models | - | 0
Fundamental Limits of Matrix Sensing: Exact Asymptotics, Universality, and Applications | - | 0
SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing | Code | 2
Page 22 of 292

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SINDy | PSNR | 81 | - | Unverified
2 | Pixel-shuffling Downsampling | PSNR | 38.4 | - | Unverified
3 | TWSC | PSNR | 37.93 | - | Unverified
4 | CBDNet(Syn) | PSNR | 37.57 | - | Unverified
5 | MCWNNM | PSNR | 37.38 | - | Unverified
6 | Han et al. | PSNR | 35.95 | - | Unverified
7 | FFDNet | PSNR | 34.4 | - | Unverified
8 | TNRD | PSNR | 33.65 | - | Unverified
9 | CDnCNN-B | PSNR | 32.43 | - | Unverified
10 | NLRN | PSNR | 30.8 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DRUnet_Poisson_0.01 | Average PSNR (dB) | 33.92 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DRANet | Average PSNR | 39.64 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PCNN+RL+HME | Average | 84.61 | - | Unverified
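Most of the leaderboards above report PSNR (peak signal-to-noise ratio), which measures how close a denoised estimate is to the clean ground truth. As a reference, here is a minimal implementation (pure Python; the function name and the 8-bit default of 255 are conventions chosen for this sketch, not taken from any listed paper):

```python
import math

def psnr(reference, estimate, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences.

    Higher is better; identical inputs give infinity. max_value is the peak
    possible pixel value (255 for 8-bit images, 1.0 for normalized floats).
    """
    if len(reference) != len(estimate):
        raise ValueError("inputs must have the same length")
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)
    if mse == 0.0:
        return math.inf
    return 10.0 * math.log10(max_value ** 2 / mse)
```

For example, a uniform error of 2.55 on 8-bit pixels gives 20·log10(255/2.55) = 40 dB, which is the ballpark of the strongest claimed values in the tables above; each additional ~6 dB corresponds to halving the RMS error.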