SOTAVerified

Open-Ended Question Answering

Open-ended questions are those that simply pose a question without imposing any constraints on the format of the response. This distinguishes them from questions with a predetermined answer format, such as multiple choice.
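The distinction can be made concrete with two prompt sketches (illustrative examples only, not drawn from any specific benchmark in the list below):

```python
# Open-ended: the model may answer in any form it chooses --
# free-form text, any length, no fixed answer set.
open_ended = "What causes the seasons on Earth?"

# Format-constrained: the expected answer format is fixed in advance,
# so the response space is predetermined (here, a single letter).
multiple_choice = (
    "What causes the seasons on Earth?\n"
    "A) Distance from the Sun\n"
    "B) The tilt of Earth's axis\n"
    "C) Solar flares\n"
    "Answer with a single letter."
)

# The open-ended variant carries no answer options or format instruction.
assert "A)" not in open_ended
assert "Answer with a single letter" in multiple_choice
```

Benchmarks in this category evaluate the free-form variant, which is why scoring typically requires human or model-based judging rather than exact-match accuracy.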

Papers

Showing 1–25 of 796 papers

Title | Status | Hype
----- | ------ | ----
Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering | Code | 4
Neptune: The Long Orbit to Benchmarking Long Video Understanding | Code | 2
Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation | Code | 2
Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models | Code | 2
Legal Case Document Summarization: Extractive and Abstractive Methods and their Evaluation | Code | 2
Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training | Code | 2
Language Models Can See: Plugging Visual Controls in Text Generation | Code | 2
M2I: From Factored Marginal Trajectory Prediction to Interactive Prediction | Code | 2
GreaseLM: Graph REASoning Enhanced Language Models for Question Answering | Code | 2
Would Mega-scale Datasets Further Enhance Spatiotemporal 3D CNNs? | Code | 2
Leveraging Latent Features for Local Explanations | Code | 2
O^2-Searcher: A Searching-based Agent Model for Open-Domain Open-Ended Question Answering | Code | 1
Ranked Voting based Self-Consistency of Large Language Models | Code | 1
FSPO: Few-Shot Preference Optimization of Synthetic Preference Data in LLMs Elicits Effective Personalization to Real Users | Code | 1
LLaSA: A Multimodal LLM for Human Activity Analysis Through Wearable and Smartphone Sensors | Code | 1
SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking | Code | 1
SciQAG: A Framework for Auto-Generated Science Question Answering Dataset with Fine-grained Evaluation | Code | 1
BiMediX: Bilingual Medical Mixture of Experts LLM | Code | 1
On Early Detection of Hallucinations in Factual Question Answering | Code | 1
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Code | 1
Learning to Do or Learning While Doing: Reinforcement Learning and Bayesian Optimisation for Online Continuous Tuning | Code | 1
mPLM-Sim: Better Cross-Lingual Similarity and Transfer in Multilingual Pretrained Language Models | Code | 1
When should we prefer Decision Transformers for Offline Reinforcement Learning? | Code | 1
Improving Implicit Feedback-Based Recommendation through Multi-Behavior Alignment | Code | 1
Masked Structural Growth for 2x Faster Language Model Pre-training | Code | 1
Page 1 of 32

No leaderboard results yet.