SOTAVerified

Ethics

Papers

Showing 51–75 of 832 papers

Title | Status | Hype
Surgeons Awareness, Expectations, and Involvement with Artificial Intelligence: a Survey Pre and Post the GPT Era | | 0
FG 2025 TrustFAA: the First Workshop on Towards Trustworthy Facial Affect Analysis: Advancing Insights of Fairness, Explainability, and Safety (TrustFAA) | | 0
A Comprehensive Study on Medical Image Segmentation using Deep Neural Networks | | 0
Multi Layered Autonomy and AI Ecologies in Robotic Art Installations | | 0
Higher-Order Responsibility | | 0
HADA: Human-AI Agent Decision Alignment Architecture | | 0
Position: Olfaction Standardization is Essential for the Advancement of Embodied Artificial Intelligence | | 0
Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products | | 0
Responsible Data Stewardship: Generative AI and the Digital Waste Problem | | 0
Ten Principles of AI Agent Economics | | 0
My Answer Is NOT 'Fair': Mitigating Social Bias in Vision-Language Models via Fair and Biased Residuals | | 0
When Ethics and Payoffs Diverge: LLM Agents in Morally Charged Social Dilemmas | Code | 0
Debate-to-Detect: Reformulating Misinformation Detection as a Real-World Debate with Large Language Models | | 0
The Staircase of Ethics: Probing LLM Value Priorities through Multi-Step Induction to Complex Moral Dilemmas | | 0
A Toolkit for Compliance, a Toolkit for Justice: Drawing on Cross-sectoral Expertise to Develop a Pro-justice EU AI Act Toolkit | | 0
Internal and External Impacts of Natural Language Processing Papers | | 0
A Participatory Strategy for AI Ethics in Education and Rehabilitation grounded in the Capability Approach | | 0
AI vs. Human Judgment of Content Moderation: LLM-as-a-Judge and Ethics-Based Response Refusals | | 0
Kaleidoscope Gallery: Exploring Ethics and Generative AI Through Art | | 0
More-than-Human Storytelling: Designing Longitudinal Narrative Engagements with Generative AI | | 0
Inter(sectional) Alia(s): Ambiguity in Voice Agent Identity via Intersectional Japanese Self-Referents | | 0
Exploring Moral Exercises for Human Oversight of AI systems: Insights from Three Pilot Studies | | 0
HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0
Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach | | 0
Clicking some of the silly options: Exploring Player Motivation in Static and Dynamic Educational Interactive Narratives | | 0
Page 3 of 34

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | RuGPT-3 Large | Accuracy | 68.6 | | Unverified
2 | RuGPT-3 Medium | Accuracy | 68.3 | | Unverified
3 | RuGPT-3 Small | Accuracy | 55.5 | | Unverified
4 | Human benchmark | Accuracy | 52.9 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Human benchmark | Accuracy | 67.6 | | Unverified
2 | RuGPT-3 Small | Accuracy | 60.9 | | Unverified
3 | RuGPT-3 Large | Accuracy | 44.9 | | Unverified
4 | RuGPT-3 Medium | Accuracy | 44.1 | | Unverified