SOTAVerified

Active Learning

Active Learning is a supervised machine learning paradigm that aims to reach strong performance with fewer labeled training examples. It iteratively trains a predictor and, at each iteration, uses the current predictor to select the examples whose labels are most likely to lead to better model configurations, improving the accuracy of the prediction model while reducing labeling cost.

Source: Polystore++: Accelerated Polystore System for Heterogeneous Workloads
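The loop described above can be sketched as a minimal pool-based active learner with uncertainty sampling (one common query strategy; the dataset, model, seed-set size, and query budget below are illustrative assumptions, not from the source):

```python
# Minimal pool-based active learning with uncertainty sampling (a sketch).
# Dataset, model, and budgets are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Seed set: 5 labeled examples per class; everything else is the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):  # 5 query rounds, 10 queries each
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Uncertainty sampling: query the pool points whose top-class
    # probability is lowest, i.e. where the model is least confident.
    uncertainty = 1.0 - probs.max(axis=1)
    queried = np.argsort(uncertainty)[-10:]
    for q in sorted(queried, reverse=True):  # pop from the back to keep indices valid
        labeled.append(pool.pop(q))

model.fit(X[labeled], y[labeled])  # final model trained on 60 chosen labels
```

In practice the `predict_proba` call would be replaced by whatever acquisition score the chosen strategy uses (entropy, margin, committee disagreement, etc.), and the oracle lookup `y[labeled]` by a human annotator.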

Papers

Showing 801–825 of 3,073 papers

Title ([Code] = code available; Hype score in parentheses)

- Unsupervised Learning of Distributional Properties can Supplement Human Labeling and Increase Active Learning Efficiency in Anomaly Detection (0)
- OpenAL: An Efficient Deep Active Learning Framework for Open-Set Pathology Image Classification [Code] (0)
- Active Learning for Video Classification with Frame Level Queries (0)
- DADO -- Low-Cost Query Strategies for Deep Active Design Optimization (0)
- Active Learning in Physics: From 101, to Progress, and Perspective (0)
- Training Ensembles with Inliers and Outliers for Semi-supervised Active Learning [Code] (0)
- For Women, Life, Freedom: A Participatory AI-Based Social Web Analysis of a Watershed Moment in Iran's Gender Struggles (0)
- Active Learning with Contrastive Pre-training for Facial Expression Recognition [Code] (0)
- Understanding Uncertainty Sampling [Code] (0)
- Optimal and Efficient Binary Questioning for Human-in-the-Loop Annotation (0)
- Human in the AI loop via xAI and Active Learning for Visual Inspection (0)
- Robust Surgical Tools Detection in Endoscopic Videos with Noisy Data (0)
- REAL: A Representative Error-Driven Approach for Active Learning [Code] (0)
- Revisiting Sample Size Determination in Natural Language Understanding [Code] (0)
- ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models [Code] (1)
- Thompson sampling for improved exploration in GFlowNets (0)
- Ticket-BERT: Labeling Incident Management Tickets with Language Models (0)
- PCDAL: A Perturbation Consistency-Driven Active Learning Approach for Medical Image Segmentation and Classification [Code] (0)
- Increasing Performance And Sample Efficiency With Model-agnostic Interactive Feature Attributions (0)
- Large Language Models as Annotators: Enhancing Generalization of NLP Models at Minimal Cost (0)
- BatchGFN: Generative Flow Networks for Batch Active Learning [Code] (0)
- Exploring Data Redundancy in Real-world Image Classification through Data Selection [Code] (0)
- Are Good Explainers Secretly Human-in-the-Loop Active Learners? (0)
- M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks [Code] (1)
- Multi-Task Consistency for Active Learning (0)
Page 33 of 123

Benchmark Results

#  Model                       Metric    Claimed  Verified  Status
1  TypiClust                   Accuracy  93.2     -         Unverified
2  PT4AL                       Accuracy  93.1     -         Unverified
3  Learning loss               Accuracy  91.01    -         Unverified
4  CoreGCN                     Accuracy  90.7     -         Unverified
5  Core-set                    Accuracy  89.92    -         Unverified
6  Random Baseline (ResNet18)  Accuracy  88.45    -         Unverified
7  Random Baseline (VGG16)     Accuracy  85.09    -         Unverified