• Media type: E-Article
  • Title: Tumor Response Evaluation Using iRECIST: Feasibility and Reliability of Manual Versus Software-Assisted Assessments
  • Contributor: Ristow, Inka; Well, Lennart; Wiese, Nis Jesper; Warncke, Malte; Tintelnot, Joseph; Karimzadeh, Amir; Koehler, Daniel; Adam, Gerhard; Bannas, Peter; Sauer, Markus
  • Published: MDPI AG, 2024
  • Published in: Cancers, 16 (2024) 5, page 993
  • Language: English
  • DOI: 10.3390/cancers16050993
  • ISSN: 2072-6694
  • Keywords: Cancer Research ; Oncology
  • Description: Objectives: To compare the feasibility and reliability of manual versus software-assisted assessments of computed tomography scans according to iRECIST in patients undergoing immune-based cancer treatment. Methods: Computed tomography scans of 30 tumor patients undergoing cancer treatment were evaluated by four independent radiologists at baseline (BL) and two follow-ups (FU1, FU2), resulting in a total of 360 tumor assessments (120 each at BL/FU1/FU2). After image interpretation, tumor burden and response status were calculated either manually or semi-automatically by the software. The reading time, calculated sum of longest diameters (SLD), and tumor response (e.g., “iStable Disease”) were recorded for each assessment. After complete data collection, a consensus reading among the four readers was performed to establish a reference standard for the correct response assignments. Reading times, error rates, and inter-reader agreement on SLDs were statistically compared between the manual and software-assisted approaches. Results: Reading time was significantly longer for the manual than for the software-assisted assessments at both follow-ups (median [interquartile range] FU1: 4.00 min [2.17 min] vs. 2.50 min [1.00 min]; FU2: 3.75 min [1.88 min] vs. 2.00 min [1.50 min]; both p < 0.001). Regarding reliability, 2.5% of all response assessments were incorrect at FU1 (3.3% manual; 0% software-assisted), increasing to 5.8% at FU2 (10% manual; 1.7% software-assisted), i.e., error rates were higher for the manual readings. Inter-reader agreement on the SLD was lower for the manual than for the software-assisted assessments at both FUs (FU1: ICC = 0.91 vs. 0.93; FU2: ICC = 0.75 vs. 0.86). Conclusions: Software-assisted assessments may facilitate iRECIST response evaluation of cancer patients in clinical routine by decreasing reading time and reducing response misclassifications. (An illustrative sketch of the iRECIST response thresholds follows this record.)
  • Access State: Open Access
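
The response labels in the abstract (e.g., “iStable Disease”) come from iRECIST, which adapts the RECIST 1.1 thresholds on the SLD and adds a confirmation step for progression. As a minimal illustrative sketch only, assuming the standard RECIST 1.1 cut-offs (≥30% SLD decrease from baseline for partial response; ≥20% and ≥5 mm increase over the nadir for progression) and simplifying away new-lesion and non-target-lesion logic, the classification might look like this in Python (all function and variable names are hypothetical, not the authors' software):

```python
# Minimal sketch, not the study's software: classify an iRECIST response
# from sum-of-longest-diameters (SLD) measurements in millimetres.
# Simplifications: target lesions only; no new-lesion or non-target logic.

def irecist_response(baseline_sld: float, nadir_sld: float,
                     current_sld: float, prior_iupd: bool = False) -> str:
    if current_sld == 0:
        return "iCR"  # complete response (simplified; ignores lymph-node rules)
    # Progression: >=20% increase over the nadir AND >=5 mm absolute growth.
    if current_sld >= 1.2 * nadir_sld and current_sld - nadir_sld >= 5:
        # iRECIST: first progression is "unconfirmed" (iUPD); it becomes
        # "confirmed" (iCPD) only if upheld at the next assessment.
        return "iCPD" if prior_iupd else "iUPD"
    # Partial response: >=30% decrease from baseline.
    if current_sld <= 0.7 * baseline_sld:
        return "iPR"
    return "iSD"  # otherwise: stable disease

# Example: baseline 52 mm, nadir 48 mm, current 50 mm -> stable disease.
print(irecist_response(52.0, 48.0, 50.0))  # iSD
```

Whether computed manually or by software, the category depends only on the SLD trajectory, which is why the study could compare error rates of the two workflows against a single consensus reference standard.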