SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 821–830 of 1135 papers

Title | Status | Hype
Generative Parameter-Efficient Fine-Tuning | Code | 1
FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity | Code | 0
Contrastive Vision-Language Alignment Makes Efficient Instruction Learner | Code | 1
Text as Images: Can Multimodal Large Language Models Follow Printed Instructions in Pixels? | Code | 1
Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following | Code | 5
Releasing the CRaQAn (Coreference Resolution in Question-Answering): An open-source dataset and dataset creation methodology using instruction-following models | | 0
MoDS: Model-oriented Data Selection for Instruction Tuning | Code | 1
Towards Vision Enhancing LLMs: Empowering Multimodal Knowledge Storage and Sharing in LLMs | | 0
GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation | | 0
GeoChat: Grounded Large Vision-Language Model for Remote Sensing | Code | 2
Page 83 of 114

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
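The "Inst-level loose-accuracy" metric above comes from instruction-following evaluations in the style of IFEval: each prompt carries one or more automatically verifiable instructions, and instruction-level accuracy is the fraction of individual instructions satisfied. The "loose" variant also accepts a response if a lightly normalized version of it passes (e.g. with markdown emphasis or a boilerplate first/last line removed). A minimal sketch, assuming a caller-supplied `check(instruction, text)` verifier; the function names and the exact set of normalizations are illustrative, not the benchmark's actual API:

```python
def inst_level_loose_accuracy(examples, check):
    """Instruction-level loose accuracy, in percent.

    examples: list of (response_text, [instructions]) pairs.
    check(instruction, text) -> bool: verifies one instruction on one text.

    Loose variant: an instruction counts as followed if it passes on the
    raw response or on any lightly normalized variant of it.
    """
    def variants(text):
        yield text
        yield text.replace("*", "")              # drop markdown emphasis markers
        lines = text.splitlines()
        if len(lines) > 1:
            yield "\n".join(lines[1:])           # drop a leading boilerplate line
            yield "\n".join(lines[:-1])          # drop a trailing boilerplate line

    followed = total = 0
    for response, instructions in examples:
        for inst in instructions:
            total += 1
            if any(check(inst, v) for v in variants(response)):
                followed += 1
    return 100.0 * followed / total if total else 0.0
```

With a toy substring verifier, `[("hello world", ["hello", "bye"]), ("ok", ["ok"])]` has 3 instructions of which 2 pass, giving roughly 66.7.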