Media type:
E-Article
Title:
Enabling Scalable Data Processing and Management through Standards-based Job Execution and the Global Federated File System
Contributor:
Memon, Mohammad Shahbaz [Author]; Riedel, Morris [Author]; Memon, Ahmed [Author]; Koeritz, Chris [Author]; Grimshaw, Andrew [Author]; Neukirchen, Helmut [Author]
Published in: Scalable Computing 17(2), 115-128 (2016). doi:10.12694/scpe.v17i2.1160
Language:
English
DOI:
https://doi.org/10.12694/scpe.v17i2.1160
ISSN:
1895-1767
Footnote:
This data source also contains holdings records that do not lead to a full text.
Description:
An emerging challenge for scientific communities is to efficiently process the big data obtained from experiments and computational simulations. Supercomputing architectures are available to provide scalable, high-performance processing environments, but many existing algorithm implementations are still unable to cope with their architectural complexity. One approach is to offer innovative technologies that use these resources effectively and also handle large, geographically dispersed datasets. Such technologies should be accessible in a way that lets data scientists running data-intensive computations avoid the technical intricacies of the underlying execution system. Our work primarily focuses on providing data scientists with transparent access to these resources so that they can easily analyze data. The impact of our work is demonstrated by describing how we enabled access to multiple high-performance computing resources through open standards-based middleware that takes advantage of the unified data management provided by the Global Federated File System. Our architectural design and its associated implementation are validated by a use case that requires massively parallel DBSCAN outlier detection on a 3D point cloud dataset.
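The use case named in the abstract is density-based outlier detection with DBSCAN on 3D point cloud data. The following is a minimal serial sketch of that idea only, assuming scikit-learn and NumPy and hypothetical parameter values (eps, min_samples); it is not the massively parallel HPC implementation validated in the article.

    # Illustrative serial sketch: DBSCAN-based outlier detection on a 3D point cloud.
    # Assumptions: scikit-learn/NumPy available; eps and min_samples are hypothetical.
    import numpy as np
    from sklearn.cluster import DBSCAN

    # Synthetic 3D point cloud: one dense cluster plus a few scattered points.
    rng = np.random.default_rng(42)
    cluster = rng.normal(loc=0.0, scale=0.5, size=(1000, 3))
    scattered = rng.uniform(low=-10.0, high=10.0, size=(20, 3))
    points = np.vstack([cluster, scattered])

    # DBSCAN labels points not reachable from any dense region as noise (-1),
    # which serves directly as an outlier flag.
    labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(points)
    outlier_mask = labels == -1

    print(f"{outlier_mask.sum()} of {len(points)} points flagged as outliers")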