Notes:
This data source also contains holdings records that do not lead to a full text.
Description:
Recently, advances in deep learning have led to a surge in research on semantic parsing and question answering over knowledge graphs (KGQA). Significant improvements in these fields, and in natural language processing (NLP) in general, have been achieved thanks to the use and better understanding of neural-network-based models and their training. Particularly important when training any model for any task is its generalization ability. While the generalization ability of machine learning models can be improved with general techniques (e.g. dropout), semantic parsing and KGQA present unique generalization challenges that have been a focal point of research in the field. Other important aspects of using machine learning are computational efficiency and response time, as well as the ability to measure the reliability of predictions on given inputs. In this thesis, we explore several questions regarding the generalization challenges in semantic parsing and KGQA. We also explore the task of out-of-distribution (OOD) detection for semantic parsing models, as well as the challenge of reducing the number of decoding steps. In particular, we investigate zero-shot or out-of-vocabulary generalization in KGQA with simple questions, which require only a single triple pattern to find the answers. Here, we are concerned with the ability to generalize to entities and relations that were not observed during training. Another question we investigate is the ability to detect compositionally OOD examples. Recent work has shown that standard neural semantic parsers fail to generalize to novel combinations of observed elements, which humans can handle easily. While different works have investigated specialized inductive biases and training techniques, to the best of our knowledge none have focused on detecting whether the inputs are compositionally OOD, which is the focus of our work. The third question we focus on is transfer learning in the context of KGQA, where we investigate its impact on both simple questions and more complex ...
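To make the notion of a "simple question" concrete, the following minimal sketch (not taken from the thesis) shows how such a question maps to a query whose WHERE clause consists of a single triple pattern. The example question and the Wikidata-style entity and relation identifiers are illustrative assumptions, not identifiers used in the work itself.

    # Illustrative sketch: a "simple question" answered by one triple pattern.
    # The question and the identifiers (Q25188 for "Inception", P57 for
    # "director") are assumed Wikidata-style examples for illustration only.

    def single_triple_pattern_query(entity_id: str, relation_id: str) -> str:
        """Build a SPARQL query whose WHERE clause is a single triple pattern."""
        return (
            "PREFIX wd: <http://www.wikidata.org/entity/>\n"
            "PREFIX wdt: <http://www.wikidata.org/prop/direct/>\n"
            "SELECT ?answer WHERE { "
            f"wd:{entity_id} wdt:{relation_id} ?answer . "
            "}"
        )

    if __name__ == "__main__":
        question = "Who directed Inception?"
        # Zero-shot / out-of-vocabulary generalization concerns cases where the
        # linked entity or relation was never observed during training.
        print(question)
        print(single_triple_pattern_query("Q25188", "P57"))

Answering such a question thus reduces to identifying one entity and one relation; the generalization challenge studied in the thesis arises when either of them is unseen at training time.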