
Models Alignment

Models Alignment is the process of ensuring that the multiple models used in a machine learning system are consistent with one another and with the goals of the system. In practice this involves defining clear, consistent objectives for each model; identifying and addressing inconsistencies or biases in each model's training data; testing and validating each model for accuracy; and checking that the predictions and decisions the models produce agree with each other and serve the system's overall goals, as in the sketch below.
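One concrete form of the consistency check described above is to measure how often two models agree on the same inputs and to surface the cases where they disagree. The following is a minimal sketch of such a check; the `prediction_agreement` helper and the sample predictions are hypothetical illustrations, not taken from any paper listed on this page.

```python
import numpy as np

def prediction_agreement(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    """Fraction of inputs on which two models make the same prediction."""
    if preds_a.shape != preds_b.shape:
        raise ValueError("Prediction arrays must have the same shape")
    return float(np.mean(preds_a == preds_b))

# Hypothetical predictions from two models over the same evaluation set.
model_a_preds = np.array([0, 1, 1, 0, 1, 0, 0, 1])
model_b_preds = np.array([0, 1, 0, 0, 1, 0, 1, 1])

agreement = prediction_agreement(model_a_preds, model_b_preds)
print(f"Agreement rate: {agreement:.2%}")  # 75.00%

# Indices where the models disagree: candidates for manual review,
# since these are the points where the system's models are not aligned.
disagreements = np.nonzero(model_a_preds != model_b_preds)[0]
print(f"Inputs needing review: {disagreements.tolist()}")  # [2, 6]
```

A low agreement rate on held-out data is a signal that the models' objectives or training data should be revisited before their outputs are combined in the system.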

Papers

Showing 1–10 of 21 papers

Title | Status | Hype
A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges | Code | 4
A Survey on LLM-as-a-Judge | Code | 2
Re-basin via implicit Sinkhorn differentiation | Code | 1
Answering Numerical Reasoning Questions in Table-Text Hybrid Contents with Graph-based Encoder and Tree-based Decoder | Code | 1
Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment | Code | 1
VFA: Vision Frequency Analysis of Foundation Models and Human | Code | 0
Constructive Large Language Models Alignment with Diverse Feedback | | 0
Competence-Based Analysis of Language Models | | 0
Improving and Assessing the Fidelity of Large Language Models Alignment to Online Communities | | 0
FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR Imaging | | 0

Leaderboard

No leaderboard results yet.