• Media type: E-Article
  • Title: Contrast phase recognition in liver computer tomography using deep learning
  • Contributor: Rocha, Bruno Aragão; Ferreira, Lorena Carneiro; Vianna, Luis Gustavo Rocha; Ferreira, Luma Gallacio Gomes; Ciconelle, Ana Claudia Martins; Da Silva Noronha, Alex; Cortez Filho, João Martins; Nogueira, Lucas Salume Lima; Leite, Jean Michel Rocha Sampaio; da Silva Filho, Maurício Ricardo Moreira; da Costa Leite, Claudia; de Maria Felix, Marcelo; Gutierrez, Marco Antônio; Nomura, Cesar Higa; Cerri, Giovanni Guido; Carrilho, Flair José; Ono, Suzane Kioko
  • Published: Springer Science and Business Media LLC, 2022
  • Published in: Scientific Reports, 12 (2022) 1
  • Language: English
  • DOI: 10.1038/s41598-022-24485-y
  • ISSN: 2045-2322
  • Description: Abstract: Hepatocellular carcinoma (HCC) has become the fourth leading cause of cancer-related deaths, with high social, economic and health implications. Imaging techniques such as multiphase computed tomography (CT) have been used successfully for feasible and accurate diagnosis of liver tumors such as HCC, and their interpretation relies mainly on comparing the appearance of lesions across the different contrast phases of the exam. Recently, researchers have developed tools based on machine learning (ML) algorithms, especially deep learning techniques, to improve the diagnosis of liver lesions in imaging exams. However, the lack of standardization in the naming of CT contrast phases in the DICOM metadata is a problem for real-life deployment of machine learning tools. It is therefore important to identify the exam phase from the image alone rather than from the exam metadata, which is unreliable. Motivated by this problem, we created an annotation platform and implemented a convolutional neural network (CNN) to automatically identify the CT scan phases in the HCFMUSP database in the city of São Paulo, Brazil. We improved this algorithm with hyperparameter tuning and evaluated it with cross-validation. Compared with the radiologists' annotations, it achieved accuracies of 94.6%, 98% and 100% on the testing dataset for the slice, volume and exam evaluation, respectively.
  • Access State: Open Access
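The abstract reports separate accuracies at the slice, volume and exam levels, which implies that per-slice CNN predictions are aggregated into coarser labels. A minimal sketch of one plausible aggregation scheme, majority vote across slices; the function name, phase labels and voting rule are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

# Hypothetical CT contrast phase labels (illustrative, not from the paper).
PHASES = ["non-contrast", "arterial", "portal venous", "delayed"]

def aggregate_phase(slice_predictions):
    """Collapse per-slice phase predictions into a single volume-level
    label by majority vote (an assumed aggregation rule)."""
    if not slice_predictions:
        raise ValueError("no slice predictions given")
    counts = Counter(slice_predictions)
    return counts.most_common(1)[0][0]

# Example: a volume whose slices mostly vote "arterial".
preds = ["arterial", "arterial", "portal venous", "arterial"]
print(aggregate_phase(preds))  # -> arterial
```

The same rule could be applied once more over volume-level labels to obtain an exam-level prediction, which would be consistent with accuracy rising from slice (94.6%) to volume (98%) to exam (100%) as voting smooths out individual misclassified slices.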