SOTAVerified

Active Learning

Active Learning is a supervised machine learning paradigm that achieves strong performance with fewer labeled training examples by training a predictor iteratively: at each iteration, the current predictor is used to select the training examples most likely to improve the model's configuration and prediction accuracy.

Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
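The iterative loop described above can be sketched in a few lines. The toy task, the function names (oracle, fit_threshold, active_learn), and the choice of uncertainty sampling as the query strategy are all illustrative assumptions, not taken from any specific paper listed below:

```python
# Minimal sketch of pool-based active learning with uncertainty sampling.
# Toy task: learn an unknown threshold t on [0, 1); label(x) = 1 iff x >= t.

def oracle(x, t=0.637):
    """Ground-truth labeler (stands in for the human annotator)."""
    return 1 if x >= t else 0

def fit_threshold(labeled):
    """Estimate the decision boundary as the midpoint of the labeled margin."""
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    lo = max(zeros) if zeros else 0.0
    hi = min(ones) if ones else 1.0
    return (lo + hi) / 2

def active_learn(pool, n_queries):
    labeled = []
    unlabeled = list(pool)
    for _ in range(n_queries):
        est = fit_threshold(labeled)
        # Uncertainty sampling: query the unlabeled point closest to the
        # current decision boundary, i.e. the one the model is least sure about.
        x = min(unlabeled, key=lambda p: abs(p - est))
        unlabeled.remove(x)
        labeled.append((x, oracle(x)))
    return fit_threshold(labeled)

pool = [i / 1000 for i in range(1000)]
estimate = active_learn(pool, n_queries=12)
```

Because each query roughly halves the uncertain margin, about log2(1000) ≈ 10 labels suffice to locate the boundary to pool resolution, whereas labeling randomly drawn points would need far more; this is the "fewer training examples" advantage the definition refers to.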

Papers

Showing 1301-1350 of 3073 papers

Title | Status | Hype
Deep Kernel Methods Learn Better: From Cards to Process Optimization | | 0
AL-iGAN: An Active Learning Framework for Tunnel Geological Reconstruction Based on TBM Operational Data | | 0
Active and Semi-Supervised Learning in ASR: Benefits on the Acoustic and Language Models | | 0
Deep Probabilistic Ensembles: Approximate Variational Inference through KL Regularization | | 0
Deep reinforced active learning for multi-class image classification | | 0
Deep Reinforcement Active Learning for Human-in-the-Loop Person Re-Identification | | 0
Deep Submodular Peripteral Networks | | 0
Deep Surrogate of Modular Multi Pump using Active Learning | | 0
Deep Unsupervised Active Learning on Learnable Graphs | | 0
DeepVisualInsight: Time-Travelling Visualization for Spatio-Temporal Causality of Deep Classification Training | | 0
Confidence Estimation for Object Detection in Document Images | | 0
Confidence Decision Trees via Online and Active Learning for Streaming (BIG) Data | | 0
DEMAU: Decompose, Explore, Model and Analyse Uncertainties | | 0
Active Learning from Peers | | 0
Active Learning-based Isolation Forest (ALIF): Enhancing Anomaly Detection in Decision Support Systems | | 0
Dependency-aware Maximum Likelihood Estimation for Active Learning | | 0
Dependency Parsing with Partial Annotations: An Empirical Comparison | | 0
Depression Symptoms Modelling from Social Media Text: A Semi-supervised Learning Approach | | 0
Depth Uncertainty Networks for Active Learning | | 0
ALLWAS: Active Learning on Language models in WASserstein space | | 0
Designing and Contextualising Probes for African Languages | | 0
Design of an Active Learning System with Human Correction for Content Analysis | | 0
Adaptive Active Hypothesis Testing under Limited Information | | 0
Detecting annotation noise in automatically labelled data | | 0
Diverse mini-batch Active Learning | | 0
Confidence Calibration for Convolutional Neural Networks Using Structured Dropout | | 0
Confidence-based Active Learning Methods for Machine Translation | | 0
Detecting Mitoses with a Convolutional Neural Network for MIDOG 2022 Challenge | | 0
Detecting Repeating Objects using Patch Correlation Analysis | | 0
ALPINE: Active Link Prediction using Network Embedding | | 0
Deterministic Langevin Unconstrained Optimization with Normalizing Flows | | 0
ActiveLab: Active Learning with Re-Labeling by Multiple Annotators | | 0
Adapting Behaviour via Intrinsic Reward: A Survey and Empirical Study | | 0
DIAGNOSE: Avoiding Out-of-distribution Data using Submodular Information Measures | | 0
Cell Library Characterization for Composite Current Source Models Based on Gaussian Process Regression and Active Learning | | 0
Dialog Policy Learning for Joint Clarification and Active Learning Queries | | 0
Diameter-Based Active Learning | | 0
Diameter-based Interactive Structure Discovery | | 0
Differentiable Submodular Maximization | | 0
ALVIN: Active Learning Via INterpolation | | 0
Difficult Cases: From Data to Learning, and Back | | 0
Active Learning Guided Fine-Tuning for enhancing Self-Supervised Based Multi-Label Classification of Remote Sensing Images | | 0
Diffusion Active Learning: Towards Data-Driven Experimental Design in Computed Tomography | | 0
Diffusion-based Deep Active Learning | | 0
Diminishing Uncertainty within the Training Pool: Active Learning for Medical Image Segmentation | | 0
DiRaC-I: Identifying Diverse and Rare Training Classes for Zero-Shot Learning | | 0
Distribution-Dependent Sample Complexity of Large Margin Learning | | 0
DIRECT: Deep Active Learning under Imbalance and Label Noise | | 0
Confidence Adjusted Surprise Measure for Active Resourceful Trials (CA-SMART): A Data-driven Active Learning Framework for Accelerating Material Discovery under Resource Constraints | | 0
AdaptiFont: Increasing Individuals' Reading Speed with a Generative Font Model and Bayesian Optimization | | 0
Page 27 of 62

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | TypiClust | Accuracy | 93.2 | | Unverified
2 | PT4AL | Accuracy | 93.1 | | Unverified
3 | Learning loss | Accuracy | 91.01 | | Unverified
4 | CoreGCN | Accuracy | 90.7 | | Unverified
5 | Core-set | Accuracy | 89.92 | | Unverified
6 | Random Baseline (ResNet18) | Accuracy | 88.45 | | Unverified
7 | Random Baseline (VGG16) | Accuracy | 85.09 | | Unverified