• Media type: Doctoral Thesis; Electronic Thesis; E-Book
  • Title: Representing and reconstructing general non-rigid objects with neural models
  • Contributor: Tretschk, Edith [Author]
  • Published: Saarländische Universitäts- und Landesbibliothek, 2023
  • Language: English
  • DOI: https://doi.org/10.22028/D291-41650
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: Digitizing the real world is a broad problem area at the intersection of computer vision, computer graphics, and, lately, machine learning. Despite much effort, creating virtual clones of real-world objects remains an unsolved scientific challenge. Still, it is of great interest, as it enables interactions with the environment in augmented reality, digital clones for virtual reality, and consistent visual effects. While human-centered approaches are already advanced, the handling of general deformable objects is far less explored and is the topic of this thesis. To digitize an object, it first needs to be reconstructed from sensor observations and then represented in a manner suitable for downstream tasks. Many classical methods have explored these closely related areas. However, these reconstruction methods still fall short of practical applicability, and representing general deformable objects is unduly limited by hand-crafted priors. Over the past decade, neural techniques have led to great advances in both areas: meshes have become accessible to deep learning thanks to graph convolutions, graphics representations have expanded to include coordinate-based neural networks, and the entire reconstruction field has been revolutionized by neural radiance fields. This thesis contributes to both areas. In the first part, it focuses on representing deformations and geometry. In particular, it introduces a low-dimensional deformation model. Unlike prior work, which hand-crafts such models for specific categories, it can be trained for any general non-rigid object category via mesh auto-encoding with graph convolutions. Furthermore, by integrating insights from classical deformation modeling, it avoids artifacts common to prior, purely learning-based work. Next, coordinate-based networks model geometry at infinite resolution, but they do not generalize due to their global representation. This thesis makes them generalizable, thereby making these new models much easier to apply to general objects where training ... (An illustrative sketch of a coordinate-based network in this spirit follows this record.)
  • Access State: Open Access
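
The description mentions coordinate-based neural networks that represent geometry continuously and that can be made generalizable by conditioning on a per-object code. The following is only a minimal, hypothetical sketch of that idea, not the thesis code: a small PyTorch MLP mapping a 3D query point plus a latent object code to an occupancy value. All class names, dimensions, and hyperparameters here are assumptions chosen for brevity.

```python
# Minimal illustrative sketch (assumed, not the thesis implementation):
# a coordinate-based occupancy network conditioned on a per-object latent code.
import torch
import torch.nn as nn

class CoordinateOccupancyNet(nn.Module):
    def __init__(self, latent_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        # Conditioning on a latent code is what lets one network represent
        # many objects rather than a single globally encoded shape.
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # occupancy logit at the query point
        )

    def forward(self, points: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) query coordinates; latent: (latent_dim,) object code
        code = latent.unsqueeze(0).expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, code], dim=-1)).squeeze(-1)

# Usage sketch: query occupancy at random points for one (hypothetical) object.
net = CoordinateOccupancyNet()
points = torch.rand(1024, 3) * 2.0 - 1.0   # query points in [-1, 1]^3
latent = torch.zeros(64)                   # placeholder per-object code
occupancy_logits = net(points, latent)     # (1024,) predictions
```

Because the network is queried at arbitrary continuous coordinates, the represented surface is not tied to a fixed voxel or mesh resolution; the per-object latent code is one common way such models are made to generalize across object instances.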