• Media type: E-Book; Doctoral Thesis; Electronic Thesis
  • Title: Deep Learning Methods for Semantic Parsing and Question Answering over Knowledge Graphs
  • Contributor: Lukovnikov, Denis [Author]
  • Imprint: Universitäts- und Landesbibliothek Bonn, 2022-05-20
  • Language: English
  • Identifier: https://hdl.handle.net/20.500.11811/9810 (Handle); DOI: https://doi.org/10.1145/3038912.3052675
  • Keywords: semantic parsing ; tree decoding ; tree transformer ; natural language processing ; deep learning ; insertion decoding ; compositional generalization ; knowledge graphs ; question answering ; transfer learning ; neural networks ; out-of-vocabulary generalization ; out-of-distribution detection
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: Recently, advances in deep learning have led to a surge of research on semantic parsing and question answering over knowledge graphs (KGQA). Significant improvements in these fields, and in natural language processing (NLP) in general, have been achieved thanks to the use and better understanding of neural-network-based models. Particularly important when training a model for any task is its generalization ability. While the generalization ability of machine learning models can be improved with general techniques (e.g. dropout), semantic parsing and KGQA present unique generalization challenges that have been a focal point of research in the field. Other important aspects of using machine learning are computational efficiency and response time, as well as the ability to measure the reliability of predictions on given inputs. In this thesis, we explore several questions regarding the generalization challenges in semantic parsing and KGQA. We also explore the task of out-of-distribution (OOD) detection for semantic parsing models, as well as the challenge of reducing the number of decoding steps. In particular, we investigate zero-shot, or out-of-vocabulary, generalization in KGQA with simple questions, which require only a single triple pattern to find the answers. Here, we are concerned with the ability to generalize to entities and relations that were not observed during training. Another question we investigate is the ability to detect compositionally OOD examples. Recent work has shown that standard neural semantic parsers fail to generalize to novel combinations of observed elements, which humans can easily handle. While different works have investigated specialized inductive biases and training techniques, to the best of our knowledge, none have focused on detecting whether the inputs are compositionally OOD, which is the focus of our work.
The third question we focus on is transfer learning in the context of KGQA, where we investigate its impact on both simple questions and more complex ...
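To illustrate the notion of a "simple question" mentioned in the description, the following is a minimal sketch (not from the thesis itself) of answering such a question over a toy knowledge graph: the question is mapped to a single triple pattern (subject, relation, ?x), and the answers are all objects matching that pattern. All entity and relation names below are illustrative assumptions.

```python
# Toy knowledge graph as a set of (subject, relation, object) triples.
# Names are hypothetical examples, not data from the thesis.
triples = {
    ("Bonn", "locatedIn", "Germany"),
    ("Bonn", "population", "330000"),
    ("Berlin", "locatedIn", "Germany"),
}

def answer_simple_question(subject, relation, kg):
    """Return all objects matching the single triple pattern (subject, relation, ?x)."""
    return sorted(o for (s, p, o) in kg if s == subject and p == relation)

# "Where is Bonn located?" corresponds to the pattern (Bonn, locatedIn, ?x).
print(answer_simple_question("Bonn", "locatedIn", triples))  # ['Germany']
```

The zero-shot challenge studied in the thesis is then: correctly producing such a pattern even when the subject entity or relation was never seen during training.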
  • Access State: Open Access
  • Rights information: Attribution - Share Alike (CC BY-SA)