SOTAVerified

Fairness

Papers

Showing 5051–5075 of 5676 papers

| Title | Status | Hype |
|---|---|---|
| Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification | | 0 |
| Are There Exceptions to Goodhart's Law? On the Moral Justification of Fairness-Aware Machine Learning | | 0 |
| Does the Prompt-based Large Language Model Recognize Students' Demographics and Introduce Bias in Essay Scoring? | | 0 |
| Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers | | 0 |
| Doing Right by Not Doing Wrong in Human-Robot Collaboration | | 0 |
| Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings | | 0 |
| Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers | | 0 |
| Do LLM Agents Exhibit Social Behavior? | | 0 |
| Domain Adaptation meets Individual Fairness. And they get along | | 0 |
| Domain-Incremental Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition | | 0 |
| Dominant Resource Fairness with Meta-Types | | 0 |
| Do Not Harm Protected Groups in Debiasing Language Representation Models | | 0 |
| Don't Forget What I did?: Assessing Client Contributions in Federated Learning | | 0 |
| Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection | | 0 |
| Don't Judge Me by My Face: An Indirect Adversarial Approach to Remove Sensitive Information From Multimodal Neural Representation in Asynchronous Job Video Interviews | | 0 |
| Don't Kill the Baby: The Case for AI in Arbitration | | 0 |
| Doubly Constrained Fair Clustering | | 0 |
| Doubly Fair Dynamic Pricing | | 0 |
| Doubly Robust Fusion of Many Treatments for Policy Learning | | 0 |
| Downstream Effects of Affirmative Action | | 0 |
| Downstream Fairness Caveats with Synthetic Healthcare Data | | 0 |
| DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated Learning as a Service | | 0 |
| Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework | | 0 |
| DR.GAP: Mitigating Bias in Large Language Models using Gender-Aware Prompting with Demonstration and Reasoning | | 0 |
| Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks | | 0 |
Page 203 of 228

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 1D-CSNN | Predictive Equality (age) | 99.86 | | Unverified |
| 2 | 1D-CSNN | Predictive Equality (age) | 97.8 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 96.87 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.97 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.45 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.68 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 99.31 | | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 0.49 | | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 6.26 | | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 1.96 | | Unverified |
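The tables above report Predictive Equality scores. Predictive equality is commonly defined as equal false positive rates across demographic groups; how this site converts the FPR gap into the percentage scores shown is not documented here, so the sketch below only computes the underlying gap. It is a minimal illustration, assuming binary labels (1 = positive), binary predictions, and a binary group attribute; the function name `predictive_equality_gap` is our own, not the site's API.

```python
def predictive_equality_gap(y_true, y_pred, group):
    """Absolute difference in false positive rates between two groups.

    Predictive equality holds exactly when the two FPRs are equal,
    i.e. when this gap is 0.
    """
    def fpr(pairs):
        # False positive rate: P(prediction = 1 | true label = 0).
        negatives = [(t, p) for t, p in pairs if t == 0]
        if not negatives:
            return 0.0
        false_pos = sum(1 for t, p in negatives if p == 1)
        return false_pos / len(negatives)

    group0 = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 0]
    group1 = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 1]
    return abs(fpr(group0) - fpr(group1))


# Example: group 0 gets one false positive, group 1 gets none.
gap = predictive_equality_gap(
    y_true=[0, 0, 0, 0],
    y_pred=[1, 0, 0, 0],
    group=[0, 0, 1, 1],
)
```

In this example, group 0 has an FPR of 0.5 and group 1 an FPR of 0.0, so the gap is 0.5; a score such as "99.86" on the page presumably corresponds to a near-zero gap, but that mapping is an assumption.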