SOTA Verified

Models Alignment

Models Alignment is the process of ensuring that the multiple models used in a machine learning system are consistent with each other and aligned with the goals of the system. It involves defining clear, consistent objectives for each model; identifying and addressing inconsistencies or biases in the data used to train each model; testing and validating each model for accuracy; and checking that the predictions and decisions made by the models agree with one another and with the overall goals of the system.
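One simple consistency check described above is verifying that two models in the same system agree on their predictions. The sketch below is a minimal, hypothetical illustration (the models and helper function are not from any paper listed here): it measures the fraction of a shared evaluation set on which two classifiers return the same label.

```python
# Minimal sketch: checking prediction consistency between two models
# in a shared pipeline. `model_a`, `model_b`, and `agreement_rate`
# are hypothetical stand-ins, not an API from any listed paper.

def agreement_rate(model_a, model_b, inputs):
    """Fraction of inputs on which the two models return the same label."""
    matches = sum(model_a(x) == model_b(x) for x in inputs)
    return matches / len(inputs)

# Toy example: two threshold classifiers with slightly different cutoffs.
model_a = lambda x: int(x > 0.5)
model_b = lambda x: int(x > 0.6)

inputs = [0.1, 0.3, 0.55, 0.7, 0.9]
print(agreement_rate(model_a, model_b, inputs))  # 0.8: they disagree only on 0.55
```

A low agreement rate on inputs the system cares about would flag the kind of inconsistency that alignment work aims to surface before deployment.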

Papers

Showing 1–10 of 21 papers

Title | Status | Hype
Into the Unknown: From Structure to Disorder in Protein Function Prediction | — | 0
Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters | — | 0
AKD: Adversarial Knowledge Distillation For Large Language Models Alignment on Coding tasks | — | 0
Stackelberg Game Preference Optimization for Data-Efficient Alignment of Language Models | — | 0
A Survey of State of the Art Large Vision Language Models: Alignment, Benchmark, Evaluations and Challenges | Code | 4
InfAlign: Inference-aware language model alignment | — | 0
A Survey on LLM-as-a-Judge | Code | 2
Magnetic Preference Optimization: Achieving Last-iterate Convergence for Language Model Alignment | — | 0
Negative-Prompt-driven Alignment for Generative Language Model | — | 0
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment | — | 0

No leaderboard results yet.