• Media type: E-book; electronic university thesis; dissertation
  • Titel: Knowledge Graph Question Answering with Generative Language Models
  • Contributors: Banerjee, Debayan [author]
  • Published: Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky, 2024-01-01
  • Language: English
  • DOIs: https://doi.org/10.1145/3477495.3531841; https://doi.org/10.18653/v1/2023.findings-acl.774; https://doi.org/10.1007/978-3-031-33455-9_17
  • Subjects: Knowledge Graphs; Question Answering; 54.72: Artificial Intelligence; Large Language Model; Generative Language Models; Question Answering System; Knowledge Graph
  • Notes: This data source also contains holdings records that do not lead to a full text.
  • Description: A Knowledge Graph (KG) is a data structure that stores information about the world in the form of nodes and edges. The nodes represent people, places, things, etc., while the edges store the relationships between them. The nodes are also known as entities, and the edges as relations or predicates. Several popular search engines today make use of such KGs in the background. Some well-known and freely available KGs are DBpedia and Wikidata. One way to access information from a KG is through Question Answering. For example, web-based search engines today let people type their questions and receive answers. Unfortunately, the current state of search engines leaves much to be desired with respect to the complexity of the questions a user may type. Current search engines work best when the search term is a keyword or a set of keywords; processing complete sentences with complex logical structure is still an open problem. One large step towards language understanding has been the arrival of pre-trained Language Models, such as BERT. Such models have been trained on large text corpora, and, surprisingly, some variants of these models, such as T5 and BART, develop a remarkable ability to generate text that is difficult to distinguish from text produced by a human author. These models are also called generative Language Models and are a central focus of this thesis. Given a question from a user, how does one fetch an answer from the KG? This task is commonly known as Knowledge Graph Question Answering (KGQA). One technique is to convert the user's question into a logical form, or structured query. One popular query language the reader might be familiar with is SQL; SQL, though, is designed for relational databases. In the KG world, the analogue is a language called SPARQL. The task of converting natural language text into such a logical form is known as semantic parsing (see the query sketch following this record). To be able to execute a SPARQL query on a KG, the SPARQL schema must be valid, e.g., ...
  • Access status: Open access
  • Rights/usage notes: Attribution (CC BY)
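
The description outlines the semantic-parsing view of KGQA: a natural-language question is translated into a structured SPARQL query and executed against the KG. The sketch below is a hedged illustration of that end state, not the pipeline developed in the thesis: the SPARQL that a semantic parser might produce for the question "What is the capital of Germany?" is written by hand and sent to the public Wikidata endpoint. The identifiers wd:Q183 (Germany) and wdt:P36 (capital) are standard Wikidata terms; the use of the `requests` library and the endpoint URL are assumptions made for this example.

```python
# Minimal sketch: run a SPARQL query against Wikidata, the kind of structured
# query that a semantic parser would generate from a user question.
# Illustration only; not the method described in the thesis.
import requests

# Natural-language question a user might type into a KGQA system.
question = "What is the capital of Germany?"

# SPARQL a semantic parser could produce for the question above.
# wd:Q183 is the Wikidata entity for Germany, wdt:P36 the "capital" property.
sparql = """
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?capital WHERE {
  wd:Q183 wdt:P36 ?capital .
}
"""

# Public Wikidata SPARQL endpoint, requesting JSON results.
response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": sparql, "format": "json"},
    headers={"User-Agent": "kgqa-example/0.1"},  # hypothetical client name
    timeout=30,
)
response.raise_for_status()

# Print each returned binding; the URI is expected to resolve to Q64 (Berlin).
for row in response.json()["results"]["bindings"]:
    print(question, "->", row["capital"]["value"])
```

In the setting the thesis studies, the hand-written query above would instead be generated by a generative Language Model such as T5 or BART fine-tuned for semantic parsing, and the generated query must conform to the KG's vocabulary before it can be executed.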