SOTAVerified

Dimensionality Reduction

Dimensionality reduction is the task of mapping high-dimensional data to a lower-dimensional representation while preserving as much of its essential structure as possible.

( Image credit: openTSNE )
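As a minimal illustration of the task, the sketch below projects data onto its top principal components using plain NumPy (PCA via SVD); this is one of the simplest dimensionality reduction methods, not the approach of any particular paper listed here.

```python
import numpy as np

# Minimal sketch of dimensionality reduction via PCA, implemented
# directly with NumPy (no external ML library assumed).
def pca(X, n_components):
    """Project X (n_samples, n_features) onto its top principal components."""
    X_centered = X - X.mean(axis=0)  # center each feature
    # SVD of the centered data: the rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T  # low-dimensional coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # 100 samples in 10 dimensions
Z = pca(X, n_components=2)
print(Z.shape)                   # (100, 2)
```

Methods such as t-SNE and UMAP (referenced in the tables below) pursue the same goal but optimize nonlinear, neighborhood-preserving embeddings instead of a linear projection.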

Papers

Showing 126–150 of 3304 papers

| Title | Status | Hype |
| --- | --- | --- |
| The Signature Kernel is the solution of a Goursat PDE | Code | 1 |
| Correlation-based feature selection to identify functional dynamics in proteins | Code | 1 |
| DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation | Code | 1 |
| Adversarial Autoencoders | Code | 1 |
| Aha! Adaptive History-Driven Attack for Decision-Based Black-Box Models | Code | 1 |
| Clustering with UMAP: Why and How Connectivity Matters | Code | 1 |
| Scalable conditional deep inverse Rosenblatt transports using tensor-trains and gradient-based dimension reduction | Code | 1 |
| Effective Sample Size, Dimensionality, and Generalization in Covariate Shift Adaptation | Code | 1 |
| Curvature-based Feature Selection with Application in Classifying Electronic Health Records | Code | 1 |
| Deep Convolutional Autoencoders for reconstructing magnetic resonance images of the healthy brain | Code | 1 |
| Deep Dimension Reduction for Supervised Representation Learning | Code | 1 |
| Deep Learning for Functional Data Analysis with Adaptive Basis Layers | Code | 1 |
| Deep Learning for Reduced Order Modelling and Efficient Temporal Evolution of Fluid Simulations | Code | 1 |
| A local approach to parameter space reduction for regression and classification tasks | Code | 1 |
| DeepView: Visualizing Classification Boundaries of Deep Neural Networks as Scatter Plots Using Discriminative Dimensionality Reduction | Code | 1 |
| A hyperparameter-tuning approach to automated inverse planning | Code | 1 |
| Derivative-Informed Neural Operator: An Efficient Framework for High-Dimensional Parametric Derivative Learning | Code | 1 |
| An efficient aggregation method for the symbolic representation of temporal data | Code | 1 |
| Dimensionality reduction to maximize prediction generalization capability | Code | 1 |
| A Memory Efficient Baseline for Open Domain Question Answering | Code | 1 |
| A Hybrid Architecture for Out of Domain Intent Detection and Intent Discovery | Code | 1 |
| Disentangling Identifiable Features from Noisy Data with Structured Nonlinear ICA | Code | 1 |
| DistilProtBert: A distilled protein language model used to distinguish between real proteins and their randomly shuffled counterparts | Code | 1 |
| Distributional Principal Autoencoders | Code | 1 |
| An Embedding is Worth a Thousand Noisy Labels | Code | 1 |
Page 6 of 133

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UDRN | Classification Accuracy | 90.9 | – | Unverified |
| 2 | tSNE | Classification Accuracy | 51.5 | – | Unverified |
| 3 | IVIS | Classification Accuracy | 46.6 | – | Unverified |
| 4 | UMAP | Classification Accuracy | 41.3 | – | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UDRN | Classification Accuracy | 71.1 | – | Unverified |
| 2 | QS | Classification Accuracy | 68 | – | Unverified |