SOTAVerified

Fairness

Papers

Showing 601–650 of 5676 papers

| Title | Status | Hype |
|---|---|---|
| Marginal Fairness: Fair Decision-Making under Risk Measures | | 0 |
| Soft Weighted Machine Unlearning | | 0 |
| Smart Energy Guardian: A Hybrid Deep Learning Model for Detecting Fraudulent PV Generation | | 0 |
| The Staircase of Ethics: Probing LLM Value Priorities through Multi-Step Induction to Complex Moral Dilemmas | | 0 |
| Embracing Contradiction: Theoretical Inconsistency Will Not Impede the Road of Building Responsible AI Systems | | 0 |
| Will Large Language Models Transform Clinical Prediction? | | 0 |
| High-Fidelity Functional Ultrasound Reconstruction via A Visual Auto-Regressive Framework | | 0 |
| Evaluating the Performance of Nigerian Lecturers using Multilayer Perceptron | | 0 |
| Reasoning in Neurosymbolic AI | | 0 |
| On the Deployment of RIS-mounted UAV Networks | | 0 |
| Liouville PDE-based sliced-Wasserstein flow for fair regression | | 0 |
| Reconsidering Fairness Through Unawareness from the Perspective of Model Multiplicity | | 0 |
| AIDRIN 2.0: A Framework to Assess Data Readiness for AI | | 0 |
| Accuracy vs. Accuracy: Computational Tradeoffs Between Classification Rates and Utility | | 0 |
| Fairness under Competition | | 0 |
| A Generic Framework for Conformal Fairness | Code | 0 |
| Internal and External Impacts of Natural Language Processing Papers | | 0 |
| DISCO Balances the Scales: Adaptive Domain- and Difficulty-Aware Reinforcement Learning on Imbalanced Data | | 0 |
| Cultural Value Alignment in Large Language Models: A Prompt-based Analysis of Schwartz Values in Gemini, ChatGPT, and DeepSeek | | 0 |
| Evaluate Bias without Manual Test Sets: A Concept Representation Perspective for LLMs | | 0 |
| Distributionally Robust Federated Learning with Client Drift Minimization | | 0 |
| OpenEthics: A Comprehensive Ethical Evaluation of Open-Source Generative Large Language Models | Code | 0 |
| HAVA: Hybrid Approach to Value-Alignment through Reward Weighing for Reinforcement Learning | Code | 0 |
| Are the confidence scores of reviewers consistent with the review content? Evidence from top conference proceedings in AI | Code | 0 |
| Mitigating Subgroup Disparities in Multi-Label Speech Emotion Recognition: A Pseudo-Labeling and Unsupervised Learning Approach | | 0 |
| Unlearning Algorithmic Biases over Graphs | | 0 |
| Explaining Neural Networks with Reasons | Code | 0 |
| Algorithmic Hiring and Diversity: Reducing Human-Algorithm Similarity for Better Outcomes | | 0 |
| Enforcing Hard Linear Constraints in Deep Learning Models with Decision Rules | | 0 |
| DECASTE: Unveiling Caste Stereotypes in Large Language Models through Multi-Dimensional Bias Analysis | | 0 |
| Accuracy and Fairness of Facial Recognition Technology in Low-Quality Police Images: An Experiment With Synthetic Faces | | 0 |
| Adversarial Testing in LLMs: Insights into Decision-Making Vulnerabilities | | 0 |
| Continuous Fair SMOTE -- Fairness-Aware Stream Learning from Imbalanced Data | | 0 |
| Automated Bias Assessment in AI-Generated Educational Content Using CEAT Framework | Code | 0 |
| Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks | | 0 |
| Language Models That Walk the Talk: A Framework for Formal Fairness Certificates | | 0 |
| Seeing the Unseen: How EMoE Unveils Bias in Text-to-Image Diffusion Models | | 0 |
| Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields in Efficient CNNs for Fair Medical Image Classification | | 0 |
| Power Allocation for Delay Optimization in Device-to-Device Networks: A Graph Reinforcement Learning Approach | | 0 |
| Enforcing Fairness Where It Matters: An Approach Based on Difference-of-Convex Constraints | | 0 |
| Teach2Eval: An Indirect Evaluation Method for LLM by Judging How It Teaches | Code | 0 |
| Interactional Fairness in LLM Multi-Agent Systems: An Evaluation Framework | | 0 |
| Attribution Projection Calculus: A Novel Framework for Causal Inference in Bayesian Networks | | 0 |
| Behind the Screens: Uncovering Bias in AI-Driven Video Interview Assessments Using Counterfactuals | | 0 |
| Improving Fairness in LLMs Through Testing-Time Adversaries | | 0 |
| HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0 |
| Finding Counterfactual Evidences for Node Classification | Code | 0 |
| Equal is Not Always Fair: A New Perspective on Hyperspectral Representation Non-Uniformity | | 0 |
| MPMA: Preference Manipulation Attack Against Model Context Protocol | | 0 |
| Fairness-aware Anomaly Detection via Fair Projection | | 0 |
Page 13 of 114

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 1D-CSNN | Predictive Equality (age) | 99.86 | | Unverified |
| 2 | 1D-CSNN | Predictive Equality (age) | 97.8 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 96.87 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.97 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.45 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 98.68 | | Unverified |
| 1 | 1D-CSNN | Predictive Equality (age) | 99.31 | | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 0.49 | | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 6.26 | | Unverified |
| 1 | Neighbour Learning | Degree of Bias (DoB) | 1.96 | | Unverified |