EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models
Samuel J. Paech
Code
- github.com/eq-bench/eq-bench (official, PyTorch, ★ 417)
Abstract
We introduce EQ-Bench, a novel benchmark designed to evaluate aspects of emotional intelligence in Large Language Models (LLMs). We assess the ability of LLMs to understand complex emotions and social interactions by asking them to predict the intensity of emotional states of characters in a dialogue. The benchmark is able to discriminate effectively between a wide range of models. We find that EQ-Bench correlates strongly with comprehensive multi-domain benchmarks like MMLU (Hendrycks et al., 2020) (r=0.97), indicating that we may be capturing similar aspects of broad intelligence. Our benchmark produces highly repeatable results using a set of 60 English-language questions. We also provide open-source code for an automated benchmarking pipeline at https://github.com/EQ-bench/EQ-Bench and a leaderboard at https://eqbench.com
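The abstract describes the core task: a model rates the intensity of several emotional states for a character in a dialogue, and its ratings are compared against reference ratings. The sketch below shows one plausible way such a question could be scored; the emotion names, reference values, and the specific penalty/normalisation scheme are illustrative assumptions, not the paper's exact formula.

```python
# Hedged sketch of scoring a single EQ-Bench-style question.
# Assumption: each question asks for intensity ratings (0-10) for a fixed
# set of emotions, and the question score penalises the absolute distance
# from reference ratings. The exact formula in the paper may differ.

def score_question(predicted: dict, reference: dict) -> float:
    """Score predicted emotion intensities against reference ratings.

    Returns a value in [0, 10]; 10.0 means a perfect match.
    """
    if set(predicted) != set(reference):
        raise ValueError("predicted and reference must rate the same emotions")
    # Sum of absolute differences across all rated emotions, floored at 0
    # so one wildly wrong rating cannot drive the score negative.
    total_diff = sum(abs(predicted[e] - reference[e]) for e in reference)
    return max(0.0, 10.0 - total_diff)

# Hypothetical example question with four emotions:
pred = {"anger": 7, "guilt": 2, "relief": 0, "surprise": 5}
ref  = {"anger": 8, "guilt": 1, "relief": 0, "surprise": 6}
print(score_question(pred, ref))  # 7.0
```

Summing such per-question scores over the benchmark's 60 questions (and rescaling to 0–100) would yield an aggregate score of the kind reported in the table below.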
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| EQ-Bench | OpenAI gpt-4-0613 | EQ-Bench Score | 62.52 | — | Unverified |
| EQ-Bench | migtissera/SynthIA-70B-v1.5 | EQ-Bench Score | 54.83 | — | Unverified |
| EQ-Bench | OpenAI gpt-4-0314 | EQ-Bench Score | 53.39 | — | Unverified |
| EQ-Bench | Qwen/Qwen-72B-Chat | EQ-Bench Score | 52.44 | — | Unverified |
| EQ-Bench | Anthropic Claude 2 | EQ-Bench Score | 52.14 | — | Unverified |
| EQ-Bench | meta-llama/Llama-2-70b-chat-hf | EQ-Bench Score | 51.56 | — | Unverified |
| EQ-Bench | 01-ai/Yi-34B-Chat | EQ-Bench Score | 51.03 | — | Unverified |
| EQ-Bench | OpenAI gpt-3.5-0613 | EQ-Bench Score | 49.17 | — | Unverified |
| EQ-Bench | OpenAI gpt-3.5-turbo-0301 | EQ-Bench Score | 47.61 | — | Unverified |
| EQ-Bench | Open-Orca/Mistral-7B-OpenOrca | EQ-Bench Score | 44.4 | — | Unverified |
| EQ-Bench | Qwen/Qwen-14B-Chat | EQ-Bench Score | 43.76 | — | Unverified |
| EQ-Bench | OpenAI text-davinci-003 | EQ-Bench Score | 43.73 | — | Unverified |
| EQ-Bench | Intel/neural-chat-7b-v3-1 | EQ-Bench Score | 43.61 | — | Unverified |
| EQ-Bench | OpenAI text-davinci-002 | EQ-Bench Score | 39.44 | — | Unverified |
| EQ-Bench | openchat/openchat_3.5 | EQ-Bench Score | 37.08 | — | Unverified |
| EQ-Bench | lmsys/vicuna-33b-v1.3 | EQ-Bench Score | 36.52 | — | Unverified |
| EQ-Bench | meta-llama/Llama-2-13b-chat-hf | EQ-Bench Score | 33.02 | — | Unverified |
| EQ-Bench | lmsys/vicuna-13b-v1.1 | EQ-Bench Score | 32.85 | — | Unverified |
| EQ-Bench | meta-llama/Llama-2-7b-chat-hf | EQ-Bench Score | 25.43 | — | Unverified |
| EQ-Bench | Koala 13B | EQ-Bench Score | 24.92 | — | Unverified |
| EQ-Bench | lmsys/vicuna-7b-v1.1 | EQ-Bench Score | 22.24 | — | Unverified |
| EQ-Bench | OpenAI text-davinci-001 | EQ-Bench Score | 15.19 | — | Unverified |
| EQ-Bench | OpenAI ADA | EQ-Bench Score | 2.25 | — | Unverified |
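The abstract's r=0.97 claim refers to the Pearson correlation between EQ-Bench scores and MMLU scores across models. The sketch below shows how such a correlation is computed; the score pairs are made-up illustrative numbers, not results from this table or from MMLU.

```python
# Hedged sketch: Pearson correlation between two benchmark score lists,
# of the kind behind the claimed r=0.97 with MMLU.
import math

def pearson_r(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (EQ-Bench, MMLU) score pairs for four models:
eq_scores   = [62.5, 51.6, 33.0, 25.4]
mmlu_scores = [86.4, 68.9, 54.8, 45.3]
print(round(pearson_r(eq_scores, mmlu_scores), 3))  # 0.988
```

A correlation this high between a 60-question emotional-intelligence test and a broad multi-domain benchmark is the basis for the paper's suggestion that EQ-Bench captures similar aspects of general capability.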