SOTAVerified

Ethics

Papers

Showing 1–25 of 832 papers

| Title | Status | Hype |
|---|---|---|
| RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment | Code | 5 |
| TrustLLM: Trustworthiness in Large Language Models | Code | 4 |
| Visual Large Language Models for Generalized and Specialized Applications | Code | 3 |
| How Can Recommender Systems Benefit from Large Language Models: A Survey | Code | 3 |
| A Survey on Evaluation of Large Language Models | Code | 3 |
| Aligning AI With Shared Human Values | Code | 2 |
| Getting pwn'd by AI: Penetration Testing with Large Language Models | Code | 2 |
| JailbreakRadar: Comprehensive Assessment of Jailbreak Attacks Against LLMs | Code | 2 |
| Data-Centric Foundation Models in Computational Healthcare: A Survey | Code | 2 |
| GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher | Code | 2 |
| A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law | Code | 2 |
| On the State of NLP Approaches to Modeling Depression in Social Media: A Post-COVID-19 Outlook | Code | 2 |
| PsycoLLM: Enhancing LLM for Psychological Understanding and Evaluation | Code | 2 |
| Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2 |
| Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey | Code | 1 |
| Large Language Models to Identify Social Determinants of Health in Electronic Health Records | Code | 1 |
| Ego4D: Around the World in 3,000 Hours of Egocentric Video | Code | 1 |
| Ethics Sheets for AI Tasks | Code | 1 |
| MoralBench: Moral Evaluation of LLMs | Code | 1 |
| Can Machines Learn Morality? The Delphi Experiment | Code | 1 |
| Deontological Ethics By Monotonicity Shape Constraints | Code | 1 |
| CATS: Conditional Adversarial Trajectory Synthesis for Privacy-Preserving Trajectory Data Publication Using Deep Learning Approaches | Code | 1 |
| Brain tumor segmentation using synthetic MR images -- A comparison of GANs and diffusion models | Code | 1 |
| Ethics Sheet for Automatic Emotion Recognition and Sentiment Analysis | Code | 1 |
| Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark | Code | 1 |
Page 1 of 34

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RuGPT-3 Large | Accuracy | 68.6 | | Unverified |
| 2 | RuGPT-3 Medium | Accuracy | 68.3 | | Unverified |
| 3 | RuGPT-3 Small | Accuracy | 55.5 | | Unverified |
| 4 | Human benchmark | Accuracy | 52.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Human benchmark | Accuracy | 67.6 | | Unverified |
| 2 | RuGPT-3 Small | Accuracy | 60.9 | | Unverified |
| 3 | RuGPT-3 Large | Accuracy | 44.9 | | Unverified |
| 4 | RuGPT-3 Medium | Accuracy | 44.1 | | Unverified |