SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing answers that are both controllable and safe.

Papers

Showing 851–860 of 1135 papers

| Title | Status | Hype |
|---|---|---|
| Instruction-Following Evaluation for Large Language Models | Code | 5 |
| MART: Improving LLM Safety with Multi-round Automatic Red-Teaming | | 0 |
| Generalization Analogies: A Testbed for Generalizing AI Oversight to Hard-To-Measure Domains | Code | 0 |
| To See is to Believe: Prompting GPT-4V for Better Visual Instruction Tuning | Code | 2 |
| WaterBench: Towards Holistic Evaluation of Watermarks for Large Language Models | Code | 1 |
| InfMLLM: A Unified Framework for Visual-Language Tasks | Code | 1 |
| Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer | Code | 1 |
| DialMAT: Dialogue-Enabled Transformer with Moment-Based Adversarial Training | Code | 0 |
| u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model | Code | 1 |
| LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
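The "Inst-level loose-accuracy" metric above comes from IFEval-style evaluation: each prompt contains several verifiable instructions, and instruction-level accuracy is the fraction of individual instructions satisfied across all prompts (the "loose" variant additionally accepts relaxed transformations of the response before checking). A minimal sketch, assuming per-instruction pass/fail flags have already been computed; `inst_level_accuracy` and the example data are illustrative, not the benchmark's actual implementation:

```python
def inst_level_accuracy(results):
    """Compute instruction-level accuracy.

    results: one list of booleans per prompt, with one flag per
    verifiable instruction (True if the response satisfied it).
    Returns the fraction of satisfied instructions over all prompts.
    """
    flags = [ok for prompt_flags in results for ok in prompt_flags]
    return sum(flags) / len(flags) if flags else 0.0


# Example: two prompts carrying 2 and 3 verifiable instructions;
# 3 of the 5 instructions are satisfied overall.
outcomes = [[True, False], [True, True, False]]
print(inst_level_accuracy(outcomes))  # → 0.6
```

Note that this differs from prompt-level accuracy, where a prompt only counts as correct if *all* of its instructions are satisfied, so instruction-level scores are typically higher.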