SOTAVerified

Fairness

Papers

Showing 4701–4750 of 5676 papers

| Title | Status | Hype |
| --- | --- | --- |
| Self Reward Design with Fine-grained Interpretability | Code | 0 |
| Distributionally Robust Survival Analysis: A Novel Fairness Loss Without Demographics | Code | 0 |
| Centralized Selection with Preferences in the Presence of Biases | Code | 0 |
| HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0 |
| Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes | Code | 0 |
| A Reproducibility Study of Product-side Fairness in Bundle Recommendation | Code | 0 |
| Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging | Code | 0 |
| Censoring Representations with an Adversary | Code | 0 |
| Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI | Code | 0 |
| Unmasking Societal Biases in Respiratory Support for ICU Patients through Social Determinants of Health | Code | 0 |
| AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias | Code | 0 |
| Are Pretrained Multilingual Models Equally Fair Across Languages? | Code | 0 |
| How Biased are Your Features?: Computing Fairness Influence Functions with Global Sensitivity Analysis | Code | 0 |
| Explainable Global Fairness Verification of Tree-Based Classifiers | Code | 0 |
| Distributional Individual Fairness in Clustering | Code | 0 |
| Towards Robust NLG Bias Evaluation with Syntactically-diverse Prompts | Code | 0 |
| How Do Fair Decisions Fare in Long-term Qualification? | Code | 0 |
| PrivFair: a Library for Privacy-Preserving Fairness Auditing | Code | 0 |
| Semantic Scheduling for LLM Inference | Code | 0 |
| Explaining Bias in Deep Face Recognition via Image Characteristics | Code | 0 |
| Explaining Explanations: An Overview of Interpretability of Machine Learning | Code | 0 |
| How fair can we go in machine learning? Assessing the boundaries of fairness in decision trees | Code | 0 |
| A Causal Framework to Measure and Mitigate Non-binary Treatment Discrimination | Code | 0 |
| Unsupervised bias discovery in medical image segmentation | Code | 0 |
| Explaining Neural Networks with Reasons | Code | 0 |
| Montague semantics and modifier consistency measurement in neural language models | Code | 0 |
| "Explain it in the Same Way!" -- Model-Agnostic Group Fairness of Counterfactual Explanations | Code | 0 |
| How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification | Code | 0 |
| Explanation-Guided Fairness Testing through Genetic Algorithm | Code | 0 |
| Exploiting Biased Models to De-bias Text: A Gender-Fair Rewriting Model | Code | 0 |
| Fairness in Rating Prediction by Awareness of Verbal and Gesture Quality of Public Speeches | Code | 0 |
| How Knowledge Distillation Mitigates the Synthetic Gap in Fair Face Recognition | Code | 0 |
| Ultra-marginal Feature Importance: Learning from Data with Causal Guarantees | Code | 0 |
| Probabilistic Permutation Graph Search: Black-Box Optimization for Fairness in Ranking | Code | 0 |
| The Better Angels of Machine Personality: How Personality Relates to LLM Safety | Code | 0 |
| ABROCA Distributions For Algorithmic Bias Assessment: Considerations Around Interpretation | Code | 0 |
| How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies | Code | 0 |
| Probabilistic Verification of Neural Networks using Branch and Bound | Code | 0 |
| SensitiveNets: Learning Agnostic Representations with Application to Face Images | Code | 0 |
| Probably Approximate Shapley Fairness with Applications in Machine Learning | Code | 0 |
| Motley: Benchmarking Heterogeneity and Personalization in Federated Learning | Code | 0 |
| A Brief Tutorial on Sample Size Calculations for Fairness Audits | Code | 0 |
| Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness | Code | 0 |
| The Bouncer Problem: Challenges to Remote Explainability | Code | 0 |
| Towards Exploring Fairness in Visual Transformer based Natural and GAN Image Detection Systems | Code | 0 |
| A Classification of Feedback Loops and Their Relation to Biases in Automated Decision-Making Systems | Code | 0 |
| Procedural Fairness and Its Relationship with Distributive Fairness in Machine Learning | Code | 0 |
| Procedural Fairness in Machine Learning | Code | 0 |
| How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Code | 0 |
| Procedural Fairness Through Decoupling Objectionable Data Generating Components | Code | 0 |
Page 95 of 114

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 1D-CSNN | Predictive Equality (age) | 99.86 | | Unverified |
| 2 | 1D-CSNN | Predictive Equality (age) | 97.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 1D-CSNN | Predictive Equality (age) | 96.87 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.97 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.45 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.68 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 1D-CSNN | Predictive Equality (age) | 99.31 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 0.49 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 6.26 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 1.96 | | Unverified |