SOTAVerified

Semantic Parsing

Semantic Parsing is the task of transducing natural language utterances into formal meaning representations. The target meaning representations can be defined according to a wide variety of formalisms. These include linguistically motivated semantic representations designed to capture the meaning of any sentence, such as λ-calculus or Abstract Meaning Representation (AMR). Alternatively, in more task-driven approaches to Semantic Parsing, the meaning representations are often executable programs such as SQL queries, robotic commands, smartphone instructions, or even general-purpose programming languages like Python and Java.

Source: Tranx: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation
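To make the task concrete, the sketch below transduces a question into an executable SQL meaning representation with a toy pattern grammar. This is purely illustrative: real semantic parsers such as Tranx learn the mapping with neural models, and the patterns, table names, and column names here are hypothetical.

```python
# Toy pattern-based semantic parser: natural language -> SQL.
# Illustrative only; the grammar and schema (tables, "region" column)
# are invented for this example.
import re

# Each entry pairs an utterance pattern with a function that builds
# the corresponding SQL meaning representation from the match.
PATTERNS = [
    (re.compile(r"how many (\w+) are there", re.I),
     lambda m: f"SELECT COUNT(*) FROM {m.group(1)}"),
    (re.compile(r"list all (\w+) in (\w+)", re.I),
     lambda m: f"SELECT * FROM {m.group(1)} WHERE region = '{m.group(2)}'"),
]

def parse(utterance: str) -> str:
    """Transduce an utterance into a SQL query, or fail if uncovered."""
    for pattern, build in PATTERNS:
        m = pattern.search(utterance)
        if m:
            return build(m)
    raise ValueError(f"no parse for: {utterance!r}")

print(parse("How many cities are there?"))
# SELECT COUNT(*) FROM cities
```

A grammar-based neural parser replaces the hand-written patterns with a learned decoder, but the input/output contract (utterance in, formal program out) is the same.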

Papers

Showing 426–450 of 1,202 papers

Title | Status | Hype
From Treebank Parses to Episodic Logic and Commonsense Inference | — | 0
Fully Automatic Semantic MT Evaluation | — | 0
Constrained Semantic Forests for Improved Discriminative Semantic Parsing | — | 0
Constructing Large Proposition Databases | — | 0
PMB5: Gaining More Insight into Neural Semantic Parsing with Challenging Benchmarks | — | 0
GCN-Sem at SemEval-2019 Task 1: Semantic Parsing using Graph Convolutional and Recurrent Neural Networks | — | 0
Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing | — | 0
A Discriminative Graph-Based Parser for the Abstract Meaning Representation | — | 0
Generating Logical Forms from Graph Representations of Text and Entities | — | 0
Generating Syntactic Paraphrases | — | 0
Generating Synthetic Data for Task-Oriented Semantic Parsing with Hierarchical Representations | — | 0
GRILLBot: An Assistant for Real-World Tasks with Neural Semantic Parsing and Graph-Based Representations | — | 0
Active Learning for Multilingual Semantic Parser | — | 0
Did You Mean...? Confidence-based Trade-offs in Semantic Parsing | — | 0
German and French Neural Supertagging Experiments for LTAG Parsing | — | 0
GKR: the Graphical Knowledge Representation for semantic parsing | — | 0
Global Methods for Cross-lingual Semantic Role and Predicate Labelling | — | 0
Book Reviews: Ontology-Based Interpretation of Natural Language by Philipp Cimiano, Christina Unger and John McCrae | — | 0
A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges | — | 0
Grammar-based Neural Text-to-SQL Generation | — | 0
Grammar-Constrained Neural Semantic Parsing with LR Parsers | — | 0
DialSQL: Dialogue Based Structured Query Generation | — | 0
BME-UW at SRST-2019: Surface realization with Interpreted Regular Tree Grammars | — | 0
Graph Algebraic Combinatory Categorial Grammar | — | 0
A Comparison of the Events and Relations Across ACE, ERE, TAC-KBP, and FrameNet Annotation Standards | — | 0
Page 18 of 49

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ARTEMIS-DA | Accuracy (Test) | 80.8 | — | Unverified
2 | SynTQA (Oracle) | Test Accuracy | 77.5 | — | Unverified
3 | TabLaP | Accuracy (Test) | 76.6 | — | Unverified
4 | SynTQA (GPT) | Accuracy (Test) | 74.4 | — | Unverified
5 | Mix SC | Accuracy (Test) | 73.6 | — | Unverified
6 | SynTQA (RF) | Accuracy (Test) | 71.6 | — | Unverified
7 | CABINET | Accuracy (Test) | 69.1 | — | Unverified
8 | NormTab+TabSQLify | Accuracy (Test) | 68.63 | — | Unverified
9 | Chain-of-Table | Accuracy (Test) | 67.31 | — | Unverified
10 | Tab-PoT | Accuracy (Test) | 66.78 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RESDSQL-3B + NatSQL | Accuracy | 84.1 | — | Unverified
2 | code-davinci-002 175B (LEVER) | Accuracy | 81.9 | — | Unverified
3 | RASAT+PICARD | Accuracy | 75.5 | — | Unverified
4 | Graphix-3B + PICARD | Accuracy | 74 | — | Unverified
5 | T5-3B + PICARD | Accuracy | 71.9 | — | Unverified
6 | SADGA + GAP | Accuracy | 70.1 | — | Unverified
7 | RATSQL + GAP | Accuracy | 69.7 | — | Unverified
8 | RATSQL + Grammar-Augmented Pre-Training | Accuracy | 69.6 | — | Unverified
9 | RATSQL + BERT | Accuracy | 65.6 | — | Unverified
10 | Exact Set Matching | Accuracy | 19.7 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Dynamic Least-to-Most Prompting | Exact Match | 95 | — | Unverified
2 | LeAR | Exact Match | 90.9 | — | Unverified
3 | T5-3B w/ Intermediate Representations | Exact Match | 83.8 | — | Unverified
4 | Hierarchical Poset Decoding | Exact Match | 69 | — | Unverified
5 | Universal Transformer | Exact Match | 18.9 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ReaRev | Accuracy | 76.4 | — | Unverified
2 | NSM+h | Accuracy | 74.3 | — | Unverified
3 | CBR-KBQA | Accuracy | 70 | — | Unverified
4 | STAGG (Yih et al., 2016) | Accuracy | 63.9 | — | Unverified
5 | T5-11B (Raffel et al., 2020) | Accuracy | 56.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CABINET | Denotation accuracy (test) | 89.5 | — | Unverified
2 | TAPEX-Large (weak supervision) | Denotation accuracy (test) | 89.5 | — | Unverified
3 | ReasTAP-Large (weak supervision) | Denotation accuracy (test) | 89.2 | — | Unverified
4 | NL2SQL-BERT | Accuracy | 89 | — | Unverified
5 | TAPAS-Large (weak supervision) | Denotation accuracy (test) | 83.6 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PhraseTransformer | Accuracy | 90.4 | — | Unverified
2 | Tranx | Accuracy | 86.2 | — | Unverified
3 | ASN (Rabinovich et al., 2017) | Accuracy | 85.3 | — | Unverified
4 | ZH15 (Zhao and Huang, 2015) | Accuracy | 84.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | coarse2fine | Accuracy | 88.2 | — | Unverified
2 | PhraseTransformer | Accuracy | 87.9 | — | Unverified
3 | Tranx | Accuracy | 87.7 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN + RobeCzech | F1 | 92.36 | — | Unverified
2 | PERIN | F1 | 92.24 | — | Unverified
3 | HUJI-KU | F1 | 58 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN | F1 | 80.52 | — | Unverified
2 | HUJI-KU | F1 | 45 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN | F1 | 80.23 | — | Unverified
2 | HUJI-KU | F1 | 52 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN | F1 | 94.16 | — | Unverified
2 | HUJI-KU | F1 | 63 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN | F1 | 89.83 | — | Unverified
2 | HUJI-KU | F1 | 62 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN | F1 | 92.73 | — | Unverified
2 | HUJI-KU | F1 | 80 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN | F1 | 89.19 | — | Unverified
2 | HUJI-KU | F1 | 54 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TAPEX-Large | Denotation Accuracy | 74.5 | — | Unverified
2 | TAPAS-Large | Accuracy | 67.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN | F1 | 76.4 | — | Unverified
2 | HUJI-KU | F1 | 73 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PERIN | F1 | 81.01 | — | Unverified
2 | HUJI-KU | F1 | 75 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HSP | EM | 66.18 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ReasonBERT-R | F1 Score | 41.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MeMCE | Exact | 40.3 | — | Unverified