• Media type: E-book; Dissertation; Other publication; Electronic university thesis
  • Title: Methods for efficient resource utilization in statistical machine learning algorithms
  • Contributors: Kotthaus, Helena [Author]
  • Published: Eldorado - Repository of TU Dortmund, 2018-01-01
  • Language: English
  • DOI: https://doi.org/10.17877/DE290R-18928
  • Keywords: Embedded systems ; Black-box optimization ; Model-based optimization ; Resource optimization ; Hyperparameter tuning ; Parallelization ; Performance analysis ; Memory optimization ; Machine learning ; Resource-aware scheduling ; Model selection ; Resource-constrained systems ; R language ; Profiling
  • Notes: This data source also contains holdings records that do not lead to a full text.
  • Description: In recent years, statistical machine learning has emerged as a key technique for tackling problems that elude a classic algorithmic approach. One such problem, with a major impact on human life, is the analysis of complex biomedical data. Solving this problem in a fast and efficient manner is of major importance, as it enables, e.g., the prediction of the efficacy of different drugs for therapy selection. While achieving the highest possible prediction quality appears desirable, doing so is often simply infeasible due to resource constraints. Statistical learning algorithms for predicting the health status of a patient or for finding the best algorithm configuration for that prediction require an excessively high amount of resources. Furthermore, these algorithms are often implemented with no awareness of the underlying system architecture, which leads to sub-optimal resource utilization. This thesis presents methods for efficient resource utilization of statistical learning applications. The goal is to reduce the resource demands of these algorithms to meet a given time budget while simultaneously preserving the prediction quality. As a first step, the resource consumption characteristics of learning algorithms are analyzed, as well as their scheduling on underlying parallel architectures, in order to develop optimizations that enable these algorithms to scale to larger problem sizes. For this purpose, new profiling mechanisms are incorporated into a holistic profiling framework. The results show that one major contributor to the resource issues is memory consumption. To overcome this obstacle, a new optimization based on dynamic sharing of memory is developed that speeds up computation by several orders of magnitude in situations where available main memory is the bottleneck and the system has to swap memory to disk. One important technique for automated parameter tuning of learning algorithms is model-based optimization; a minimal illustration follows this record. Within a huge search space, algorithm configurations are evaluated to find the ...
  • Access status: Open access
  • Rights/usage notes: Protected by copyright
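The model-based optimization mentioned in the description can be made concrete with a small, self-contained sketch. The objective function, search space, and random-forest surrogate below are illustrative assumptions for exposition only and are not taken from the thesis (which targets the R ecosystem); the sketch merely shows the core loop of fitting a surrogate to the configurations evaluated so far and proposing the next configuration to evaluate.

```python
# Minimal sketch of model-based (surrogate-assisted) optimization for
# hyperparameter tuning. The objective, search space, and surrogate choice
# are hypothetical examples, not the thesis's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def objective(x):
    # Hypothetical expensive evaluation of one algorithm configuration
    # (e.g., the cross-validated error of a learner with hyperparameter x).
    return np.sin(3 * x) + 0.1 * x ** 2

# Initial design: a few randomly sampled configurations.
X = rng.uniform(-3, 3, size=(8, 1))
y = np.array([objective(x[0]) for x in X])

for _ in range(20):
    # Fit a surrogate model to all configurations evaluated so far.
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    # Propose candidates and pick the one the surrogate predicts to be best
    # (a simple "predicted minimum" rule instead of expected improvement).
    candidates = rng.uniform(-3, 3, size=(1000, 1))
    best = candidates[np.argmin(surrogate.predict(candidates))]
    # Evaluate the proposed configuration and extend the design.
    X = np.vstack([X, best[None, :]])
    y = np.append(y, objective(best[0]))

print("best configuration:", X[np.argmin(y)][0], "value:", y.min())
```

In a real tuning setting, each call to the objective would train and validate a learner under the proposed configuration, and an acquisition criterion such as expected improvement would typically replace the plain predicted-minimum rule used here.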