SOTAVerified

Two-sample testing

In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each obtained independently from a different population. The purpose of the test is to determine whether the difference between the two populations is statistically significant. The statistics used in two-sample tests arise in many machine learning problems, such as domain adaptation, covariate shift detection, and the evaluation of generative adversarial networks.
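As a concrete illustration, the following is a minimal sketch of a kernel two-sample test: the squared Maximum Mean Discrepancy (MMD) between the two samples is the test statistic, and a permutation test calibrates its p-value. This uses a fixed Gaussian kernel with an illustrative bandwidth; it is not the learned-kernel MMD-D method whose results appear below, and all function names and parameters here are assumptions for the sketch.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Pairwise Gaussian kernel values between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy.
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kyy = gaussian_kernel(Y, Y, bandwidth).mean()
    kxy = gaussian_kernel(X, Y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

def permutation_test(X, Y, n_perms=500, bandwidth=1.0, seed=0):
    # p-value: fraction of random relabellings of the pooled sample
    # whose MMD is at least as large as the observed one.
    rng = np.random.default_rng(seed)
    observed = mmd2(X, Y, bandwidth)
    pooled = np.vstack([X, Y])
    n = len(X)
    count = 0
    for _ in range(n_perms):
        perm = rng.permutation(len(pooled))
        stat = mmd2(pooled[perm[:n]], pooled[perm[n:]], bandwidth)
        count += stat >= observed
    # Add-one correction keeps the p-value strictly positive.
    return (count + 1) / (n_perms + 1)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))  # sample from population P
Y = rng.normal(1.0, 1.0, size=(100, 2))  # sample from a mean-shifted Q
p = permutation_test(X, Y)
```

With a mean shift this large the permutation p-value comes out well below 0.05, so the null hypothesis that both samples come from the same distribution is rejected.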

Papers

Showing 226–250 of 338 papers

Title | Status | Hype
Minimum Description Length Revisited | — | 0
Modelling and Quantifying Membership Information Leakage in Machine Learning | — | 0
Multi-level hypothesis testing for populations of heterogeneous networks | — | 0
Nearly Optimal Sample Size in Hypothesis Testing for High-Dimensional Regression | — | 0
Negative Results in Computer Vision: A Perspective | — | 0
Network two-sample test for block models | — | 0
Noiseless Privacy | — | 0
Nonmyopic View Planning for Active Object Detection | — | 0
Nonparametric Detection of Anomalous Data Streams | — | 0
Notes on Computational Hardness of Hypothesis Testing: Predictions using the Low-Degree Likelihood Ratio | — | 0
A review of Gaussian Markov models for conditional independence | — | 0
Online Rules for Control of False Discovery Rate and False Discovery Exceedance | — | 0
On Semiparametric Exponential Family Graphical Models | — | 0
On the Decreasing Power of Kernel and Distance based Nonparametric Hypothesis Tests in High Dimensions | — | 0
On the Exploration of Local Significant Differences For Two-Sample Test | — | 0
On the High-dimensional Power of Linear-time Kernel Two-Sample Testing under Mean-difference Alternatives | — | 0
On the Learnability of Concepts: With Applications to Comparing Word Embedding Algorithms | — | 0
On the Self-Similarity of Natural Stochastic Textures | — | 0
Optimal Algorithms for Augmented Testing of Discrete Distributions | — | 0
Optimal Nonparametric Inference via Deep Neural Network | — | 0
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing | — | 0
Optimal Statistical Hypothesis Testing for Social Choice | — | 0
Optimal Tuning for Divide-and-conquer Kernel Ridge Regression with Massive Data | — | 0
Optional Stopping with Bayes Factors: a categorization and extension of folklore results, with an application to invariant situations | — | 0
PAC Quasi-automatizability of Resolution over Restricted Distributions | — | 0
Page 10 of 14

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMD-D | Avg accuracy | 98.5 | — | Unverified
1 | MMD-D | Avg accuracy | 74.4 | — | Unverified
1 | MMD-D | Avg accuracy | 65.9 | — | Unverified
1 | MMD-D | Avg accuracy | 57.9 | — | Unverified
1 | MMD-D | Avg accuracy | 91.0 | — | Unverified