SOTAVerified

Fairness

Papers

Showing 201–250 of 5,676 papers

| Title | Status | Hype |
|---|---|---|
| BAD: BiAs Detection for Large Language Models in the context of candidate screening | Code | 1 |
| From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models | Code | 1 |
| Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation | Code | 1 |
| MoCA: Memory-Centric, Adaptive Execution for Multi-Tenant Deep Neural Networks | Code | 1 |
| Statistical Inference for Fairness Auditing | Code | 1 |
| FLAC: Fairness-Aware Representation Learning by Suppressing Attribute-Class Associations | Code | 1 |
| Optimizing fairness tradeoffs in machine learning with multiobjective meta-models | Code | 1 |
| EvalRS 2023. Well-Rounded Recommender Systems For Real-World Deployments | Code | 1 |
| FairRec: Fairness Testing for Deep Recommender Systems | Code | 1 |
| HRS-Bench: Holistic, Reliable and Scalable Benchmark for Text-to-Image Models | Code | 1 |
| VARS: Video Assistant Referee System for Automated Soccer Decision Making from Multiple Views | Code | 1 |
| Interpretable Unified Language Checking | Code | 1 |
| GPT detectors are biased against non-native English writers | Code | 1 |
| Incremental Verification of Neural Networks | Code | 1 |
| CFA: Class-wise Calibrated Fair Adversarial Training | Code | 1 |
| Predicting and Enhancing the Fairness of DNNs with the Curvature of Perceptual Manifolds | Code | 1 |
| Better Understanding Differences in Attribution Methods via Systematic Evaluations | Code | 1 |
| PFSL: Personalized & Fair Split Learning with Data & Label Privacy for thin clients | Code | 1 |
| DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision | Code | 1 |
| Can ChatGPT Assess Human Personalities? A General Evaluation Framework | Code | 1 |
| A Closer Look at the Intervention Procedure of Concept Bottleneck Models | Code | 1 |
| SurvivalGAN: Generating Time-to-Event Data for Survival Analysis | Code | 1 |
| Scalable Infomin Learning | Code | 1 |
| Efficiency 360: Efficient Vision Transformers | Code | 1 |
| Enhancing SMT-based Weighted Model Integration by Structure Awareness | Code | 1 |
| Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment | Code | 1 |
| Improving Recommendation Fairness via Data Augmentation | Code | 1 |
| Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness | Code | 1 |
| Knowledge is Power, Understanding is Impact: Utility and Beyond Goals, Explanation Quality, and Fairness in Path Reasoning Recommendation | Code | 1 |
| Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning | Code | 1 |
| Fair Scratch Tickets: Finding Fair Sparse Networks Without Weight Training | Code | 1 |
| Federated Domain Generalization With Generalization Adjustment | Code | 1 |
| Explainable AI for Bioinformatics: Methods, Tools, and Applications | Code | 1 |
| Efficient Conditionally Invariant Representation Learning | Code | 1 |
| Speeding Up Multi-Objective Hyperparameter Optimization by Task Similarity-Based Meta-Learning for the Tree-Structured Parzen Estimator | Code | 1 |
| MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition | Code | 1 |
| Dark patterns in e-commerce: a dataset and its baseline evaluations | Code | 1 |
| Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach | Code | 1 |
| HERB: Measuring Hierarchical Regional Bias in Pre-trained Language Models | Code | 1 |
| Fair and Optimal Classification via Post-Processing | Code | 1 |
| Rethinking and Improving Robustness of Convolutional Neural Networks: a Shapley Value-based Approach in Frequency Domain | Code | 1 |
| Track2Vec: fairness music recommendation with a GPU-free customizable-driven framework | Code | 1 |
| Private and Reliable Neural Network Inference | Code | 1 |
| MABEL: Attenuating Gender Bias using Textual Entailment Data | Code | 1 |
| Item-based Variational Auto-encoder for Fair Music Recommendation | Code | 1 |
| Mitigating Gradient Bias in Multi-objective Learning: A Provably Convergent Stochastic Approach | Code | 1 |
| A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges | Code | 1 |
| Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition | Code | 1 |
| Prompting GPT-3 To Be Reliable | Code | 1 |
| BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation | Code | 1 |
Page 5 of 114

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 1D-CSNN | Predictive Equality (age) | 99.86 | | Unverified |
| 2 | 1D-CSNN | Predictive Equality (age) | 97.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 1D-CSNN | Predictive Equality (age) | 96.87 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 1D-CSNN | Predictive Equality (age) | 98.97 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 1D-CSNN | Predictive Equality (age) | 98.45 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 1D-CSNN | Predictive Equality (age) | 98.68 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 1D-CSNN | Predictive Equality (age) | 99.31 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neighbour Learning | Degree of Bias (DoB) | 0.49 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neighbour Learning | Degree of Bias (DoB) | 6.26 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neighbour Learning | Degree of Bias (DoB) | 1.96 | | Unverified |
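The benchmark tables above report a "Predictive Equality (age)" score. The page does not define the score, but predictive equality is conventionally the condition that false-positive rates are equal across protected groups; a minimal sketch of one plausible scoring convention (100 minus the FPR gap in percentage points — an assumption, not SOTAVerified's documented formula) is:

```python
# Hypothetical sketch of a Predictive Equality score like the one tabled above.
# Predictive equality holds when false-positive rates (FPR) are equal across
# groups (here, age groups). The 0-100 scaling below is an illustrative
# assumption; function and variable names are not from SOTAVerified.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN) over binary labels and predictions."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def predictive_equality_score(y_true, y_pred, group):
    """Score in [0, 100]; 100 means identical FPRs across all groups."""
    fprs = [
        false_positive_rate(
            [t for t, g in zip(y_true, group) if g == gid],
            [p for p, g in zip(y_pred, group) if g == gid],
        )
        for gid in set(group)
    ]
    # Penalize the worst-case gap between any two groups' FPRs.
    return 100.0 * (1.0 - (max(fprs) - min(fprs)))

# Example: two age groups with different false-positive rates
# (FPR 0.25 for "<40" vs 0.50 for "40+", so the score is 75.0).
y_true = [0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0, 0, 1]
group  = ["<40", "<40", "<40", "<40", "<40",
          "40+", "40+", "40+", "40+", "40+"]
print(predictive_equality_score(y_true, y_pred, group))  # → 75.0
```

Under this reading, scores near 100 (as claimed for 1D-CSNN) mean near-identical false-positive rates across age groups; the "Verified" column stays empty until the claim is reproduced.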