SOTAVerified

Ethics

Papers

Showing 1–25 of 832 papers

| Title | Status | Hype |
| --- | --- | --- |
| RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment | Code | 5 |
| TrustLLM: Trustworthiness in Large Language Models | Code | 4 |
| Visual Large Language Models for Generalized and Specialized Applications | Code | 3 |
| A Survey on Evaluation of Large Language Models | Code | 3 |
| How Can Recommender Systems Benefit from Large Language Models: A Survey | Code | 3 |
| On the State of NLP Approaches to Modeling Depression in Social Media: A Post-COVID-19 Outlook | Code | 2 |
| PsycoLLM: Enhancing LLM for Psychological Understanding and Evaluation | Code | 2 |
| A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law | Code | 2 |
| JailbreakRadar: Comprehensive Assessment of Jailbreak Attacks Against LLMs | Code | 2 |
| Data-Centric Foundation Models in Computational Healthcare: A Survey | Code | 2 |
| GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher | Code | 2 |
| Getting pwn'd by AI: Penetration Testing with Large Language Models | Code | 2 |
| Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2 |
| Aligning AI With Shared Human Values | Code | 2 |
| XTRUST: On the Multilingual Trustworthiness of Large Language Models | Code | 1 |
| Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey | Code | 1 |
| Language Model Alignment in Multilingual Trolley Problems | Code | 1 |
| MoralBench: Moral Evaluation of LLMs | Code | 1 |
| MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models | Code | 1 |
| NewsBench: A Systematic Evaluation Framework for Assessing Editorial Capabilities of Large Language Models in Chinese Journalism | Code | 1 |
| E-EVAL: A Comprehensive Chinese K-12 Education Evaluation Benchmark for Large Language Models | Code | 1 |
| A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics | Code | 1 |
| CATS: Conditional Adversarial Trajectory Synthesis for Privacy-Preserving Trajectory Data Publication Using Deep Learning Approaches | Code | 1 |
| Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics | Code | 1 |
| TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection | Code | 1 |
Page 1 of 34

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RuGPT-3 Large | Accuracy | 68.6 | | Unverified |
| 2 | RuGPT-3 Medium | Accuracy | 68.3 | | Unverified |
| 3 | RuGPT-3 Small | Accuracy | 55.5 | | Unverified |
| 4 | Human benchmark | Accuracy | 52.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Human benchmark | Accuracy | 67.6 | | Unverified |
| 2 | RuGPT-3 Small | Accuracy | 60.9 | | Unverified |
| 3 | RuGPT-3 Large | Accuracy | 44.9 | | Unverified |
| 4 | RuGPT-3 Medium | Accuracy | 44.1 | | Unverified |