SOTAVerified

Fairness

Papers

Showing 151–200 of 5676 papers

| Title | Status | Hype |
| --- | --- | --- |
| Addressing Shortcomings in Fair Graph Learning Datasets: Towards a New Benchmark | Code | 1 |
| Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models | Code | 1 |
| Towards Fair Graph Anomaly Detection: Problem, Benchmark Datasets, and Evaluation | Code | 1 |
| Fair Resource Allocation in Multi-Task Learning | Code | 1 |
| Demographic Bias of Expert-Level Vision-Language Foundation Models in Medical Imaging | Code | 1 |
| TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks | Code | 1 |
| FedAA: A Reinforcement Learning Perspective on Adaptive Aggregation for Fair and Robust Federated Learning | Code | 1 |
| UOEP: User-Oriented Exploration Policy for Enhancing Long-Term User Experiences in Recommender Systems | Code | 1 |
| A Closer Look at AUROC and AUPRC under Class Imbalance | Code | 1 |
| New Job, New Gender? Measuring the Social Bias in Image Generation Models | Code | 1 |
| Quality-Diversity Generative Sampling for Learning with Synthetic Data | Code | 1 |
| Towards Fair Graph Federated Learning via Incentive Mechanisms | Code | 1 |
| The Limits of Fair Medical Imaging AI In The Wild | Code | 1 |
| Removing Biases from Molecular Representations via Information Maximization | Code | 1 |
| Improving fairness for spoken language understanding in atypical speech with Text-to-Speech | Code | 1 |
| Fair Abstractive Summarization of Diverse Perspectives | Code | 1 |
| Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark | Code | 1 |
| Flames: Benchmarking Value Alignment of LLMs in Chinese | Code | 1 |
| Finetuning Text-to-Image Diffusion Models for Fairness | Code | 1 |
| Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs | Code | 1 |
| FairSeg: A Large-Scale Medical Image Segmentation Dataset for Fairness Learning Using Segment Anything Model with Fair Error-Bound Scaling | Code | 1 |
| DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues | Code | 1 |
| fairret: a Framework for Differentiable Fairness Regularization Terms | Code | 1 |
| Causality-Inspired Fair Representation Learning for Multimodal Recommendation | Code | 1 |
| Data Optimization in Deep Learning: A Survey | Code | 1 |
| MUSER: A Multi-View Similar Case Retrieval Dataset | Code | 1 |
| Adversarial Attacks on Fairness of Graph Neural Networks | Code | 1 |
| "Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters | Code | 1 |
| A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics | Code | 1 |
| FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis | Code | 1 |
| Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color | Code | 1 |
| When to Learn What: Model-Adaptive Data Augmentation Curriculum | Code | 1 |
| Bias Propagation in Federated Learning | Code | 1 |
| Bias and Fairness in Large Language Models: A Survey | Code | 1 |
| Unlocking Accuracy and Fairness in Differentially Private Image Classification | Code | 1 |
| Equitable Restless Multi-Armed Bandits: A General Framework Inspired By Digital Health | Code | 1 |
| Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment | Code | 1 |
| Elucidate Gender Fairness in Singing Voice Transcription | Code | 1 |
| Towards Fair Graph Neural Networks via Graph Counterfactual | Code | 1 |
| Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators | Code | 1 |
| FairPrism: Evaluating Fairness-Related Harms in Text Generation | Code | 1 |
| Improving Fairness in Deepfake Detection | Code | 1 |
| CAPRI: Context-Aware Interpretable Point-of-Interest Recommendation Framework | Code | 1 |
| FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods | Code | 1 |
| Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness Learning and Fair Identity Normalization | Code | 1 |
| Unprocessing Seven Years of Algorithmic Fairness | Code | 1 |
| Towards Fair and Explainable AI using a Human-Centered AI Approach | Code | 1 |
| Fair yet Asymptotically Equal Collaborative Learning | Code | 1 |
| Path-Specific Counterfactual Fairness for Recommender Systems | Code | 1 |
| Multi-Objective Population Based Training | Code | 1 |
Page 4 of 114

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 1D-CSNN | Predictive Equality (age) | 99.86 | — | Unverified |
| 2 | 1D-CSNN | Predictive Equality (age) | 97.8 | — | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 96.87 | — | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.97 | — | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.45 | — | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.68 | — | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 99.31 | — | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 0.49 | — | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 6.26 | — | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 1.96 | — | Unverified |