SOTAVerified

Language Modeling

Papers

Showing 7001–7025 of 14182 papers

Title | Status | Hype
Who Writes the Review, Human or AI? | | 0
Towards Ontology-Enhanced Representation Learning for Large Language Models | Code | 0
SeamlessExpressiveLM: Speech Language Model for Expressive Speech-to-Speech Translation with Chain-of-Thought | | 0
MindSemantix: Deciphering Brain Visual Experiences with a Brain-Language Model | | 0
Nearest Neighbor Speculative Decoding for LLM Generation and Attribution | | 0
X-VILA: Cross-Modality Alignment for Large Language Model | | 0
Multi-Modal Generative Embedding Model | | 0
Kotlin ML Pack: Technical Report | | 0
Conveyor: Efficient Tool-aware LLM Serving with Tool Partial Execution | Code | 0
LLaMA-Reg: Using LLaMA 2 for Unsupervised Medical Image Registration | | 0
A Full-duplex Speech Dialogue Scheme Based On Large Language Models | | 0
Contextual Position Encoding: Learning to Count What's Important | | 0
Learning from Litigation: Graphs and LLMs for Retrieval and Reasoning in eDiscovery | | 0
ChartFormer: A Large Vision Language Model for Converting Chart Images into Tactile Accessible SVGs | Code | 0
IAPT: Instruction-Aware Prompt Tuning for Large Language Models | | 0
Black-Box Detection of Language Model Watermarks | | 0
Large Language Model-Driven Curriculum Design for Mobile Networks | Code | 0
A Context-Aware Approach for Enhancing Data Imputation with Pre-trained Language Models | | 0
Don't Forget to Connect! Improving RAG with Graph-based Reranking | | 0
Automated Real-World Sustainability Data Generation from Images of Buildings | | 0
Unified Preference Optimization: Language Model Alignment Beyond the Preference Frontier | | 0
Facilitating Holistic Evaluations with LLMs: Insights from Scenario-Based Experiments | | 0
XL3M: A Training-free Framework for LLM Length Extension Based on Segment-wise Inference | | 0
Pipette: Automatic Fine-grained Large Language Model Training Configurator for Real-World Clusters | Code | 0
Semantic are Beacons: A Semantic Perspective for Unveiling Parameter-Efficient Fine-Tuning in Knowledge Learning | | 0
Page 281 of 568

No leaderboard results yet.