SOTAVerified

Two-sample testing

In statistical hypothesis testing, a two-sample test is performed on two random samples, each independently drawn from a different population. The purpose of the test is to determine whether the difference between the two populations is statistically significant. The statistics used in two-sample tests also underlie many machine learning problems, such as domain adaptation, detecting covariate shift, and evaluating generative adversarial networks.
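As a concrete illustration, here is a minimal sketch of a kernel two-sample test: the (biased) maximum mean discrepancy (MMD) statistic with a Gaussian kernel, calibrated by a permutation null. The fixed bandwidth and the biased estimator are simplifying assumptions; practical tests (including the MMD-D variants benchmarked below) select or learn the kernel from data.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    """Pairwise Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2(X, Y, bandwidth):
    """Biased estimate of squared MMD between samples X and Y."""
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def mmd_permutation_test(X, Y, bandwidth=1.0, n_permutations=500, seed=0):
    """p-value for H0: X and Y are drawn from the same distribution.

    The null distribution of the statistic is approximated by repeatedly
    re-splitting the pooled sample at random and recomputing MMD^2.
    """
    rng = np.random.default_rng(seed)
    observed = mmd2(X, Y, bandwidth)
    pooled = np.vstack([X, Y])
    n = len(X)
    count = 0
    for _ in range(n_permutations):
        perm = rng.permutation(len(pooled))
        stat = mmd2(pooled[perm[:n]], pooled[perm[n:]], bandwidth)
        count += stat >= observed
    # Add-one correction keeps the p-value strictly positive.
    return (count + 1) / (n_permutations + 1)
```

On two samples with well-separated means the test yields a small p-value (reject H0); on two samples from the same distribution the p-value is approximately uniform on (0, 1].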

Papers

Showing 251–275 of 338 papers

Title | Status | Hype
Variable Selection in Maximum Mean Discrepancy for Interpretable Distribution Comparison | | 0
Visual Scene Representations: Contrast, Scaling and Occlusion | | 0
Wald-Kernel: Learning to Aggregate Information for Sequential Inference | | 0
Wavelet based multi-scale shape features on arbitrary surfaces for cortical thickness discrimination | | 0
Limits of Deepfake Detection: A Robust Estimation Viewpoint | | 0
Weighted Sampling for Combined Model Selection and Hyperparameter Tuning | | 0
Active Sequential Two-Sample Testing | | 0
Adaptive Active Hypothesis Testing under Limited Information | | 0
Adaptive Concentration Inequalities for Sequential Decision Problems | | 0
Adaptive learning of density ratios in RKHS | | 0
Adaptivity and Computation-Statistics Tradeoffs for Kernel and Distance based High Dimensional Two Sample Testing | | 0
Weakly Supervised Instance Learning for Thyroid Malignancy Prediction from Whole Slide Cytopathology Images | | 0
Advanced Tutorial: Label-Efficient Two-Sample Tests | | 0
Adversarial learning for product recommendation | | 0
Adversarially Robust Classification based on GLRT | | 0
A Flexible Framework for Hypothesis Testing in High-dimensions | | 0
A framework for paired-sample hypothesis testing for high-dimensional data | | 0
A General Framework for Distributed Inference with Uncertain Models | | 0
A Mean-Field Theory for Kernel Alignment with Random Features in Generative and Discriminative Models | | 0
A More Powerful Two-Sample Test in High Dimensions using Random Projection | | 0
A New Approach for Distributed Hypothesis Testing with Extensions to Byzantine-Resilience | | 0
A New Approach to Distributed Hypothesis Testing and Non-Bayesian Learning: Improved Learning Rate and Byzantine-Resilience | | 0
A New Framework for Distance and Kernel-based Metrics in High Dimensions | | 0
An explainable deep vision system for animal classification and detection in trail-camera images with automatic post-deployment retraining | | 0
Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing | | 0
Page 11 of 14

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMD-D | Avg accuracy | 98.5 | | Unverified
1 | MMD-D | Avg accuracy | 74.4 | | Unverified
1 | MMD-D | Avg accuracy | 65.9 | | Unverified
1 | MMD-D | Avg accuracy | 57.9 | | Unverified
1 | MMD-D | Avg accuracy | 91 | | Unverified