Media type:
E-Article
Title:
DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies
Contributor:
Mieth, Bettina;
Rozier, Alexandre;
Rodriguez, Juan Antonio;
Höhne, Marina M C;
Görnitz, Nico;
Müller, Klaus-Robert
Published:
Oxford University Press (OUP), 2021
Published in:
NAR Genomics and Bioinformatics, 3 (2021) 3
Language:
English
DOI:
10.1093/nargab/lqab065
ISSN:
2631-9268
Description:
Abstract: Deep learning has revolutionized data science in many fields by greatly improving prediction performance in comparison to conventional approaches. Recently, explainable artificial intelligence has emerged as an area of research that goes beyond pure prediction improvement by extracting knowledge from deep learning methodologies through the interpretation of their results. We investigate such explanations to explore the genetic architectures of phenotypes in genome-wide association studies. Instead of testing each position in the genome individually, the novel three-step algorithm, called DeepCOMBI, first trains a neural network to classify subjects into their respective phenotypes. Second, it explains the classifier’s decisions by applying layer-wise relevance propagation as one example from the pool of explanation techniques. The resulting importance scores are eventually used to determine a subset of the most relevant locations for multiple hypothesis testing in the third step. The performance of DeepCOMBI in terms of power and precision is investigated on generated datasets and a 2007 study. Verification of the latter is achieved by validating all findings with independent studies published up until 2020. DeepCOMBI is shown to outperform ordinary raw P-value thresholding and other baseline methods. Two novel disease associations (rs10889923 for hypertension, rs4769283 for type 1 diabetes) were identified.
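The three-step pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it substitutes logistic regression for the neural network (for a linear model, layer-wise relevance propagation reduces to the relevance r_j = w_j * x_j), uses synthetic genotype data with two planted causal SNPs (indices 3 and 17 are arbitrary choices for this toy example), and applies a normal-approximation two-sample test with Bonferroni correction in place of the paper's testing procedure.

```python
from math import erfc, sqrt

import numpy as np

rng = np.random.default_rng(0)

# Toy GWAS-like data: n subjects x p SNP positions (minor-allele counts 0/1/2).
n, p = 200, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)
# Phenotype depends on two planted causal SNPs (hypothetical ground truth).
logits = 1.5 * X[:, 3] - 1.2 * X[:, 17]
y = (logits + rng.normal(0.0, 1.0, n) > 0).astype(float)

# Step 1: train a classifier on phenotype labels
# (logistic regression stands in for the paper's neural network).
w, b, lr = np.zeros(p), 0.0, 0.05
for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (pred - y)) / n
    b -= lr * float(np.mean(pred - y))

# Step 2: relevance scores. For this linear model, LRP assigns each
# input the relevance w_j * x_j; averaging |w_j * x_j| over subjects
# gives one importance score per SNP.
relevance = np.mean(np.abs(X * w), axis=0)

# Step 3: restrict multiple hypothesis testing to the k most relevant SNPs
# (two-sample z-test per SNP, Bonferroni-corrected over the k tests).
k = 5
candidates = np.argsort(relevance)[-k:]
pvals = {}
for j in candidates:
    cases, controls = X[y == 1, j], X[y == 0, j]
    se = sqrt(cases.var(ddof=1) / len(cases)
              + controls.var(ddof=1) / len(controls))
    z = (cases.mean() - controls.mean()) / se
    pvals[int(j)] = min(1.0, k * erfc(abs(z) / sqrt(2)))
```

Testing only the k pre-selected positions instead of all p is what gives the method its power advantage over raw P-value thresholding: the multiple-testing burden shrinks from p comparisons to k.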