Logic Pre-Training of Language Models

2021-09-29

Siru Ouyang, Zhuosheng Zhang, Hai Zhao

Abstract

Pre-trained language models (PrLMs) have been shown to be useful for enhancing a broad range of natural language understanding (NLU) tasks. However, capturing logical relations in challenging NLU remains a bottleneck even for state-of-the-art PrLMs, which greatly limits their reasoning abilities. We therefore propose logic pre-training of language models, yielding a PrLM equipped with logical reasoning ability, Prophet. To ground logic pre-training in a clear, accurate, and generalizable knowledge basis, we introduce the fact in place of the plain language units used in previous PrLMs. Facts are extracted through syntactic parsing, avoiding unnecessarily complex knowledge injection, and allow logic-aware models to be trained on more general language text. To explicitly guide the PrLM to capture logical relations, three pre-training objectives are introduced: 1) logical connectives masking, to capture sentence-level logic; 2) logical structure completion, to accurately recover facts from the original context; and 3) logical path prediction on a logical graph, to uncover global logical relationships among facts. We evaluate our model on a broad range of NLP and NLU tasks, including natural language inference, relation extraction, and machine reading comprehension with logical reasoning. Results show that the extracted facts and the newly introduced pre-training tasks help Prophet achieve significant improvements on all downstream tasks, especially those involving logical reasoning.
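The first objective, logical connectives masking, can be pictured as an MLM-style corruption step that targets connective words rather than random tokens. The sketch below is illustrative only and is not the paper's implementation: the connective list, mask token, and function name are assumptions for the example.

```python
import re

# Hypothetical connective inventory (assumption, not from the paper):
# a real system would use a curated list of logical connectives.
CONNECTIVES = ["because", "therefore", "however", "although", "thus", "unless"]

MASK = "[MASK]"

def mask_logical_connectives(text):
    """Replace each logical connective with a mask token.

    Returns the masked text and the list of masked-out connectives,
    which would serve as the prediction targets during pre-training.
    """
    pattern = re.compile(r"\b(" + "|".join(CONNECTIVES) + r")\b", re.IGNORECASE)
    labels = [m.group(0) for m in pattern.finditer(text)]
    masked = pattern.sub(MASK, text)
    return masked, labels

masked, labels = mask_logical_connectives(
    "He stayed home because it rained; therefore the match was cancelled."
)
print(masked)  # He stayed home [MASK] it rained; [MASK] the match was cancelled.
print(labels)  # ['because', 'therefore']
```

Predicting the masked connective forces the model to infer the logical relation (cause, contrast, consequence) between the surrounding clauses, which is the sentence-level signal this objective is meant to provide.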
