Understanding Natural Language with Commonsense Knowledge Representation, Reasoning, and Simulation



Author: Antoine Bosselut
Language: English
Pages: 154

Book Description
For machines to understand language, they must intuitively grasp the commonsense knowledge that underlies the situations they encounter in text. A simple statement such as "it is raining" immediately implies a bank of shared context for any human reader: they should bring an umbrella, roads will be slippery, increased traffic may make them late, rain boots are preferable to sandals, and many more. Language understanding systems must be able to use this commonsense knowledge robustly to make decisions or take actions. Observations of the world are always richer and more detailed than the information explicitly transmitted through language, and machines must be able to fill in the remaining details with commonsense inferences.

Recent advances in natural language processing have made considerable progress in identifying the commonsense implications of situations described in text. These methods generally involve training high-parameter language models on large language corpora, and they have shown marked improvement on a variety of benchmark end tasks in natural language understanding. However, these systems are brittle -- often failing when presented with out-of-distribution inputs -- and uninterpretable -- incapable of providing insight into why these different inputs cause shifted behavior. Meanwhile, traditional approaches to natural language understanding, which focus on linking language to background knowledge from large ontologies, remain limited by their inability to scale to the situational diversity expressed through language.

In this dissertation, we argue that for natural language understanding agents to function in less controlled test environments, they must learn to reason more explicitly about the commonsense knowledge underlying textual situations. In furtherance of these goals, we draw from both traditional symbolic and modern neural approaches to natural language understanding. We present four studies on learning commonsense representations from language, and on integrating and reasoning about these representations in NLP systems to achieve more robust textual understanding.