Deep hybrid neural-like P systems for multiorgan segmentation in head and neck CT/MR images

  • Xue, Jie
  • Wang, Yuan
  • Kong, Deting
  • Wu, Feiyang
  • Yin, Anjie
  • Qu, Jianhua
  • Liu, Xiyu
Expert Systems with Applications 2021 Journal Article, cited 0 times
Automatic segmentation of organs-at-risk (OARs) of the head and neck, such as the brainstem, the left and right parotid glands, the mandible, the optic chiasm, and the left and right optic nerves, is crucial when formulating radiotherapy plans. However, segmentation is difficult because of (1) the small sizes of these organs (especially the optic chiasm and optic nerves) and (2) the varying positions and phenotypes of the OARs. In this paper, we propose a novel automatic multiorgan segmentation algorithm based on a new hybrid neural-like P system to alleviate these challenges. The new P system combines the advantages of cell-like and neural-like P systems and includes new structures and rules, allowing it to solve more real-world problems in parallel. In the new P system, ensembles of convolutional neural networks (CNNs) with different initializations are run simultaneously to perform pixel-wise segmentation of OARs, which yields more effective features and leverages the strength of ensemble learning. Evaluations on three public datasets show the effectiveness and robustness of the proposed algorithm for accurate OAR segmentation across image modalities.

Building an X-ray Database for Mammography on Vietnamese Patients and Automatically Detecting ROI Using Mask-RCNN

  • Thang, Nguyen Duc
  • Dung, Nguyen Viet
  • Duc, Tran Vinh
  • Nguyen, Anh
  • Nguyen, Quang H.
  • Anh, Nguyen Tu
  • Cuong, Nguyen Ngoc
  • Linh, Le Tuan
  • Hanh, Bui My
  • Phu, Phan Huy
  • Phuong, Nguyen Hoang
2021 Book Section, cited 0 times
This paper describes the method of building an X-ray database for mammography of Vietnamese patients, collected at Hanoi Medical University Hospital. The dataset contains 4664 DICOM images from 1161 patients, uniformly distributed across BI-RADS categories 0 to 5. The paper also presents a method for detecting regions of interest (ROIs) in mammograms based on the Mask R-CNN architecture. ROI detection achieves mAP@0.5 = 0.8109, and the accuracy of BI-RADS level classification is 58.44%.

Ensemble of Convolutional Neural Networks for the Detection of Prostate Cancer in Multi-parametric MRI Scans

  • Nguyen, Quang H.
  • Gong, Mengnan
  • Liu, Tao
  • Youheng, Ou Yang
  • Nguyen, Binh P.
  • Chua, Matthew Chin Heng
2021 Book Section, cited 0 times
Prostate MP-MRI is a non-invasive method of detecting early-stage prostate cancer that is increasing in popularity. However, this imaging modality requires highly skilled radiologists to interpret the images, which incurs significant time and cost. Convolutional neural networks may alleviate the workload of radiologists by discriminating between prostate-tumor-positive scans and negative ones, allowing radiologists to focus their attention on the subset of scans that are neither clearly positive nor negative. The major challenges of such a system are speed and accuracy. To address these two challenges, this paper proposes a new approach using ensemble learning of convolutional neural networks (CNNs), which leverages different imaging modalities, including T2-weighted, B-value, ADC and Ktrans images, in a multi-parametric MRI clinical dataset with 330 samples from 204 patients for training and evaluation. Each scan is classified as benign or malignant within seconds based on features extracted by the individual CNN models. The ensemble of the four individual CNN models for the different image types improves the prediction accuracy to 92%, with a sensitivity of 94.28% and a specificity of 86.67% on the 50 test samples. The proposed framework can potentially provide rapid classification for high-volume quantitative prostate tumor samples.
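The ensemble step described in this abstract, combining per-modality CNN outputs into one decision, can be sketched as simple probability averaging. This is a hedged toy (the paper's exact fusion rule is not stated here), and the modality names in the comments are illustrative:

```python
def ensemble_predict(prob_list, threshold=0.5):
    """Average per-model malignancy probabilities and threshold the mean.

    prob_list holds one probability list per hypothetical single-modality
    model (e.g. T2-weighted, B-value, ADC, Ktrans); these names are
    illustrative assumptions, not the authors' code.
    """
    n_models = len(prob_list)
    avg = [sum(col) / n_models for col in zip(*prob_list)]
    labels = [int(p >= threshold) for p in avg]
    return labels, avg

# Toy example: four per-modality models scoring three scans.
probs = [
    [0.90, 0.20, 0.60],   # hypothetical T2-weighted model
    [0.80, 0.30, 0.40],   # hypothetical B-value model
    [0.70, 0.10, 0.55],   # hypothetical ADC model
    [0.95, 0.25, 0.50],   # hypothetical Ktrans model
]
labels, avg = ensemble_predict(probs)
print(labels)  # -> [1, 0, 1]
```

Averaging probabilities (rather than majority-voting hard labels) keeps a borderline scan like the third one (mean 0.5125) near the decision boundary, which is exactly the subset the abstract suggests routing to radiologists.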

Prognostic relevance of CSF and peri-tumoral edema volumes in glioblastoma

  • Mummareddy, Nishit
  • Salwi, Sanjana R
  • Kumar, Nishant Ganesh
  • Zhao, Zhiguo
  • Ye, Fei
  • Le, Chi H
  • Mobley, Bret C
  • Thompson, Reid C
  • Chambless, Lola B
  • Mistry, Akshitkumar M
Journal of Clinical Neuroscience 2021 Journal Article, cited 0 times

Algorithms applied to spatially registered multi-parametric MRI for prostate tumor volume measurement

  • Mayer, Rulon
  • Simone, Charles B., II
  • Turkbey, Baris
  • Choyke, Peter
Quantitative Imaging in Medicine and Surgery 2021 Journal Article, cited 0 times

Quantitative integration of radiomic and genomic data improves survival prediction of low-grade glioma patients

  • Ma, Chen
  • Yao, Zhihao
  • Zhang, Qinran
  • Zou, Xiufen
Mathematical Biosciences and Engineering 2021 Journal Article, cited 0 times

Fast automated detection of COVID-19 from medical images using convolutional neural networks

  • Liang, Shuang
  • Liu, Huixiang
  • Gu, Yu
  • Guo, Xiuhua
  • Li, Hongjun
  • Li, Li
  • Wu, Zhiyuan
  • Liu, Mengyang
  • Tao, Lixin
Communications Biology 2021 Journal Article, cited 0 times

Evaluation of brain tumor using brain MRI with modified-moth-flame algorithm and Kapur’s thresholding: a study

  • Kadry, Seifedine
  • Rajinikanth, V
  • Raja, N Sri Madhava
  • Hemanth, D Jude
  • Hannon, Naeem MS
  • Raj, Alex Noel Joseph
Evolutionary Intelligence 2021 Journal Article, cited 0 times

Relationship between visceral adipose tissue and genetic mutations (VHL and KDM5C) in clear cell renal cell carcinoma

  • Greco, Federico
  • Mallio, Carlo Augusto
La radiologia medica 2021 Journal Article, cited 0 times

Interpretable Machine Learning Model for Locoregional Relapse Prediction in Oropharyngeal Cancers

  • Giraud, Paul
  • Giraud, Philippe
  • Nicolas, Eliot
  • Boisselier, Pierre
  • Alfonsi, Marc
  • Rives, Michel
  • Bardet, Etienne
  • Calugaru, Valentin
  • Noel, Georges
  • Chajon, Enrique
Cancers 2021 Journal Article, cited 0 times

Extraction of Cancer Section from 2D Breast MRI Slice Using Brain Storm Optimization

  • Elanthirayan, R
  • Kubra, K Sakeenathul
  • Rajinikanth, V
  • Raja, N Sri Madhava
  • Satapathy, Suresh Chandra
2021 Book Section, cited 0 times

Mammography and breast tomosynthesis simulator for virtual clinical trials

  • Badal, Andreu
  • Sharma, Diksha
  • Graff, Christian G.
  • Zeng, Rongping
  • Badano, Aldo
Computer Physics Communications 2021 Journal Article, cited 0 times
Computer modeling and simulations are increasingly being used to predict the clinical performance of x-ray imaging devices in silico, and to generate synthetic patient images for training and testing of machine learning algorithms. We present a detailed description of the computational models implemented in the open source GPU-accelerated Monte Carlo x-ray imaging simulation code MC-GPU. This code, originally developed to simulate radiography and computed tomography, has been extended to replicate a commercial full-field digital mammography and digital breast tomosynthesis (DBT) device. The code was recently used to image 3000 virtual breast models with the aim of reproducing in silico a clinical trial used in support of the regulatory approval of DBT as a replacement of mammography for breast cancer screening. The updated code implements a more realistic x-ray source model (extended 3D focal spot, tomosynthesis acquisition trajectory, tube motion blurring) and an improved detector model (direct-conversion Selenium detector with depth-of-interaction effects, fluorescence tracking, electronic noise and anti-scatter grid). The software uses a high resolution voxelized geometry model to represent the breast anatomy. To reduce the GPU memory requirements, the code stores the voxels in memory within a binary tree structure. The binary tree is an efficient compression mechanism because many voxels with the same composition are combined in common tree branches while preserving random access to the phantom composition at any location. A delta scattering ray-tracing algorithm which does not require computing ray-voxel interfaces is used to minimize memory access. Multiple software verification and validation steps intended to establish the credibility of the implemented computational models are reported. The software verification was done using a digital quality control phantom and an ideal pinhole camera. 
The validation was performed by reproducing standard bench-testing experiments used in clinical practice and comparing with experimental measurements. A sensitivity study intended to assess the robustness of the simulated results to variations in some of the input parameters was performed using an in silico clinical trial pipeline with simulated lesions and mathematical observers. We show that MC-GPU is able to simulate x-ray projections that incorporate many of the sources of variability found in clinical images, and that the simulated results are robust to some uncertainty in the input parameters. Limitations of the implemented computational models are discussed.
Program summary
Program title: MCGPU_VICTRE
CPC Library link to program files:
Licensing provisions: CC0 1.0
Programming language: C (with NVIDIA CUDA extensions)
Nature of problem: The health risks associated with ionizing radiation impose a limit on the amount of clinical testing that can be done with x-ray imaging devices. In addition, radiation dose cannot be directly measured inside the body. For these reasons, a computational replica of an x-ray imaging device that simulates radiographic images of synthetic anatomical phantoms is of great value for device evaluation. The simulated radiographs and dosimetric estimates can be used for system design and optimization, task-based evaluation of image quality, machine learning software training, and in silico imaging trials.
Solution method: Computational models of a mammography x-ray source and detector have been implemented. X-ray transport through matter is simulated using Monte Carlo methods customized for parallel execution on multiple Graphics Processing Units. The input patient anatomy is represented by voxels, which are efficiently stored in video memory using a new binary-tree compression mechanism.
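The binary-tree voxel compression described in this summary can be illustrated with a small Python toy (a sketch under stated assumptions, not the MC-GPU code): runs of voxels with identical composition collapse into shared leaves, yet any voxel index remains retrievable by descending the tree, so random access is preserved as the abstract states.

```python
def build_tree(voxels, lo=0, hi=None):
    """Compress voxels[lo:hi] into a nested-tuple binary tree.

    A uniform run collapses to ('leaf', value); otherwise the run is
    split in half into ('node', mid, left_subtree, right_subtree).
    """
    if hi is None:
        hi = len(voxels)
    first = voxels[lo]
    if all(v == first for v in voxels[lo:hi]):
        return ('leaf', first)
    mid = (lo + hi) // 2
    return ('node', mid, build_tree(voxels, lo, mid), build_tree(voxels, mid, hi))

def lookup(tree, i):
    """Random access: walk down to the leaf covering absolute index i."""
    while tree[0] == 'node':
        _, mid, left, right = tree
        tree = left if i < mid else right
    return tree[1]

# Toy 1-D phantom: long uniform runs of tissue compress into few leaves.
phantom = ['adipose'] * 6 + ['glandular'] * 3 + ['skin'] * 7
tree = build_tree(phantom)
assert all(lookup(tree, i) == phantom[i] for i in range(len(phantom)))
```

The real code works on a 3-D voxel grid in GPU memory; the design point is the same: common branches deduplicate uniform regions while lookups stay O(log n).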

The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping

  • Zwanenburg, Alex
  • Vallières, Martin
  • Abdalah, Mahmoud A
  • Aerts, Hugo J W L
  • Andrearczyk, Vincent
  • Apte, Aditya
  • Ashrafinia, Saeed
  • Bakas, Spyridon
  • Beukinga, Roelof J
  • Boellaard, Ronald
  • Bogowicz, Marta
  • Boldrini, Luca
  • Buvat, Irene
  • Cook, Gary J R
  • Davatzikos, Christos
  • Depeursinge, Adrien
  • Desseroit, Marie-Charlotte
  • Dinapoli, Nicola
  • Dinh, Cuong Viet
  • Echegaray, Sebastian
  • El Naqa, Issam
  • Fedorov, Andriy Y
  • Gatta, Roberto
  • Gillies, Robert J
  • Goh, Vicky
  • Gotz, Michael
  • Guckenberger, Matthias
  • Ha, Sung Min
  • Hatt, Mathieu
  • Isensee, Fabian
  • Lambin, Philippe
  • Leger, Stefan
  • Leijenaar, Ralph T H
  • Lenkowicz, Jacopo
  • Lippert, Fiona
  • Losnegard, Are
  • Maier-Hein, Klaus H
  • Morin, Olivier
  • Müller, Henning
  • Napel, Sandy
  • Nioche, Christophe
  • Orlhac, Fanny
  • Pati, Sarthak
  • Pfaehler, Elisabeth A G
  • Rahmim, Arman
  • Rao, Arvind U K
  • Scherer, Jonas
  • Siddique, Muhammad Musib
  • Sijtsema, Nanna M
  • Socarras Fernandez, Jairo
  • Spezi, Emiliano
  • Steenbakkers, Roel J H M
  • Tanadini-Lang, Stephanie
  • Thorwarth, Daniela
  • Troost, Esther G C
  • Upadhaya, Taman
  • Valentini, Vincenzo
  • van Dijk, Lisanne V
  • van Griethuysen, Joost
  • van Velden, Floris H P
  • Whybra, Philip
  • Richter, Christian
  • Löck, Steffen
Radiology 2020 Journal Article, cited 247 times

Prognostic value of baseline [18F]-fluorodeoxyglucose positron emission tomography parameters MTV, TLG and asphericity in an international multicenter cohort of nasopharyngeal carcinoma patients

  • Zschaeck, S.
  • Li, Y.
  • Lin, Q.
  • Beck, M.
  • Amthauer, H.
  • Bauersachs, L.
  • Hajiyianni, M.
  • Rogasch, J.
  • Ehrhardt, V. H.
  • Kalinauskaite, G.
  • Weingartner, J.
  • Hartmann, V.
  • van den Hoff, J.
  • Budach, V.
  • Stromberger, C.
  • Hofheinz, F.
PLoS One 2020 Journal Article, cited 1 times
PURPOSE: [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET) parameters have shown prognostic value in nasopharyngeal carcinomas (NPC), mostly in monocenter studies. The aim of this study was to assess the prognostic impact of standard and novel PET parameters in a multicenter cohort of patients. METHODS: The established PET parameters metabolic tumor volume (MTV), total lesion glycolysis (TLG) and maximal standardized uptake value (SUVmax), as well as the novel parameter tumor asphericity (ASP), were evaluated in a retrospective multicenter cohort of 114 NPC patients with FDG-PET staging, treated with (chemo)radiation at 8 international institutions. Uni- and multivariable Cox regression and Kaplan-Meier analyses with respect to overall survival (OS), event-free survival (EFS), distant metastases-free survival (FFDM), and locoregional control (LRC) were performed for clinical and PET parameters. RESULTS: When analyzing metric PET parameters, ASP showed a significant association with EFS (p = 0.035) and a trend for OS (p = 0.058). MTV was significantly associated with EFS (p = 0.026), OS (p = 0.008) and LRC (p = 0.012), and TLG with LRC (p = 0.019). TLG and MTV showed a very high correlation (Spearman's rho = 0.95); therefore, TLG was subsequently not analysed further. Optimal cutoff values for defining high- and low-risk groups were determined by maximization of the p-value in univariate Cox regression considering all possible cutoff values. Generation of stable cutoff values was feasible for MTV (p<0.001), ASP (p = 0.023) and the combination of both (MTV+ASP = occurrence of one or both risk factors, p<0.001) for OS, and for MTV regarding the endpoints OS (p<0.001) and LRC (p<0.001). In multivariable Cox regression (age >55 years + one binarized PET parameter), MTV >11.1 ml (hazard ratio (HR): 3.57, p<0.001) and ASP >14.4% (HR: 3.2, p = 0.031) remained prognostic for OS. MTV additionally remained prognostic for LRC (HR: 4.86, p<0.001) and EFS (HR: 2.51, p = 0.004).
Bootstrapping analyses showed that the combination of high MTV and ASP significantly improved the prognostic value for OS compared with each single variable (p = 0.005 and p = 0.04, respectively). When the cohort from China (n = 57 patients) was used to establish the prognostic parameters and all other patients for validation (n = 57 patients), MTV could be successfully validated as a prognostic parameter regarding OS, EFS and LRC (all p-values <0.05 for both cohorts). CONCLUSIONS: In this analysis, PET parameters were associated with the outcome of NPC patients. MTV showed a robust association with OS, EFS and LRC. Our data suggest that the combination of MTV and ASP may further improve the risk stratification of NPC patients.

Age-related copy number variations and expression levels of F-box protein FBXL20 predict ovarian cancer prognosis

  • Zheng, S.
  • Fu, Y.
Transl Oncol 2020 Journal Article, cited 0 times
About 70% of ovarian cancer (OvCa) cases are diagnosed at advanced stages (stage III/IV), and only 20-40% of these patients survive more than 5 years after diagnosis. A reliable screening marker could enable a paradigm shift in OvCa early diagnosis and risk stratification. Age is one of the most significant risk factors for OvCa: older women have much higher rates of OvCa diagnosis and poorer clinical outcomes. In this article, we studied the correlation between aging and genetic alterations in The Cancer Genome Atlas Ovarian Cancer dataset. We demonstrated that copy number variations (CNVs) and expression levels of the F-Box and Leucine-Rich Repeat Protein 20 (FBXL20), a substrate-recognizing protein in the SKP1-Cullin1-F-box-protein E3 ligase, can predict OvCa overall survival, disease-free survival and progression-free survival. More importantly, FBXL20 copy number loss predicts diagnosis of OvCa at a younger age; over 60% of patients in that subgroup had OvCa diagnosed before age 60. Clinicopathological studies further demonstrated malignant histological and radiographical features associated with elevated FBXL20 expression levels. This study has thus identified a potential biomarker for OvCa prognosis.

Spline curve deformation model with prior shapes for identifying adhesion boundaries between large lung tumors and tissues around lungs in CT images

  • Zhang, Xin
  • Wang, Jie
  • Yang, Ying
  • Wang, Bing
  • Gu, Lixu
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: Automated segmentation of lung tumors attached to anatomic structures such as the chest wall or mediastinum remains a technical challenge because of the similar Hounsfield units of these structures. To address this challenge, we propose herein a spline curve deformation model that combines prior shapes to correct large spatially contiguous errors (LSCEs) in input shapes derived from image-appearance cues. The model is then used to identify the adhesion boundaries between large lung tumors and the tissue around the lungs. METHODS: The deformation of the whole curve is driven by the transformation of the control points (CPs) of the spline curve, which are influenced by external and internal forces. The external force drives the model to fit the positions of the non-LSCEs of the input shapes, while the internal force ensures the local similarity of the displacements of neighboring CPs. The proposed model corrects the gross errors in the lung input shape caused by large lung tumors, where the initial lung shape for the model is inferred from the training shapes by shape-group-based sparse prior information and the input lung shape is inferred by adaptive-thresholding-based segmentation followed by morphological refinement. RESULTS: The accuracy of the proposed model is verified by applying it to images of lungs with either moderate large-sized (ML) tumors or giant large-sized (GL) tumors. The quantitative results in terms of the averages of the dice similarity coefficient (DSC) and the Jaccard similarity index (SI) are 0.982 +/- 0.006 and 0.965 +/- 0.012 for segmentation of lungs adhered by ML tumors, and 0.952 +/- 0.048 and 0.926 +/- 0.059 for segmentation of lungs adhered by GL tumors, which give 0.943 +/- 0.021 and 0.897 +/- 0.041 for segmentation of the ML tumors, and 0.907 +/- 0.057 and 0.888 +/- 0.091 for segmentation of the GL tumors, respectively.
In addition, the bidirectional Hausdorff distances are 5.7 +/- 1.4 and 11.3 +/- 2.5 mm for segmentation of lungs with ML and GL tumors, respectively. CONCLUSIONS: When combined with prior shapes, the proposed spline curve deformation can deal with large spatially consecutive errors in object shapes obtained from image-appearance information. We verified this method by applying it to the segmentation of lungs with large tumors adhered to the tissue around the lungs and the large tumors. Both the qualitative and quantitative results are more accurate and repeatable than results obtained with current state-of-the-art techniques.
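The force-driven control-point update described in this abstract can be illustrated with a hedged 1-D toy (an assumption-laden sketch, not the authors' implementation): an external force pulls reliable control points toward the observed shape, while an internal force smooths each point's displacement toward its neighbours', so a point inside a large contiguous error is interpolated from the prior and its neighbours instead of being fitted to the bad value.

```python
def deform(prior, observed, reliable, alpha=0.5, beta=0.5, iters=200):
    """Iteratively move control points (CPs) under two forces.

    prior/observed: CP values of the prior shape and the input shape.
    reliable[i] is False inside the large spatially contiguous error
    (LSCE), where the observed shape is ignored.
    alpha scales the external (data-fitting) force; beta scales the
    internal force that pulls each CP's displacement toward the mean
    displacement of its neighbours.
    """
    cps = list(prior)
    for _ in range(iters):
        disp = [c - p for c, p in zip(cps, prior)]   # displacement field
        new = []
        for i, c in enumerate(cps):
            ext = alpha * (observed[i] - c) if reliable[i] else 0.0
            nbr = [disp[j] for j in (i - 1, i + 1) if 0 <= j < len(cps)]
            internal = beta * (sum(nbr) / len(nbr) - disp[i])
            new.append(c + ext + internal)
        cps = new
    return cps

prior = [0.0] * 7
observed = [1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0]   # index 3 is a gross error
reliable = [True, True, True, False, True, True, True]
result = deform(prior, observed, reliable)
# Reliable points converge near 1.0; the unreliable point is bridged by
# its neighbours' displacements rather than jumping to the bad value 9.0.
```

The fixed point of this iteration has every displacement equal to its neighbours' mean where data are unreliable, which is the 1-D analogue of the paper's bridging of LSCEs by local displacement similarity.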

Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation

  • Zhang, Ling
  • Xu, Daguang
  • Xu, Ziyue
  • Wang, Xiaosong
  • Yang, Dong
  • Sanford, Thomas
  • Harmon, Stephanie
  • Turkbey, Baris
  • Wood, Bradford J
  • Roth, Holger
  • Myronenko, Andriy
IEEE Trans Med Imaging 2020 Journal Article, cited 0 times
Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, application of these models in clinically realistic environments can result in poor generalization and decreased accuracy, mainly due to the domain shift across different hospitals, scanner vendors, imaging protocols, and patient populations. Common transfer learning and domain adaptation techniques have been proposed to address this bottleneck. However, these solutions require data (and annotations) from the target domain to retrain the model, and are therefore restrictive in practice for widespread model deployment. Ideally, we wish to have a trained (locked) model that can work uniformly well across unseen domains without further training. In this paper, we propose a deep stacked transformation approach for domain generalization. Specifically, a series of n stacked transformations are applied to each image during network training. The underlying assumption is that the “expected” domain shift for a specific medical imaging modality could be simulated by applying extensive data augmentation on a single source domain, and consequently, a deep model trained on the augmented “big” data (BigAug) could generalize well on unseen domains. We exploit four surprisingly effective, but previously understudied, image-based characteristics for data augmentation to overcome the domain generalization problem. We train and evaluate the BigAug model (with n = 9 transformations) on three different 3D segmentation tasks (prostate gland, left atrium, left ventricle) covering two medical imaging modalities (MRI and ultrasound) involving eight publicly available challenge datasets.
The results show that when training on a relatively small dataset (n = 10~32 volumes, depending on the size of the available datasets) from a single source domain: (i) BigAug models degrade an average of 11% (Dice score change) from source to unseen domain, substantially better than conventional augmentation (degrading 39%) and a CycleGAN-based domain adaptation method (degrading 25%); (ii) BigAug is better than “shallower” stacked transforms (i.e. those with fewer transforms) on unseen domains and demonstrates modest improvement over conventional augmentation on the source domain; (iii) after training with BigAug on one source domain, performance on an unseen domain is similar to training a model from scratch on that domain when using the same number of training samples. When training on large datasets (n = 465 volumes) with BigAug, (iv) application to unseen domains reaches the performance of state-of-the-art fully supervised models that are trained and tested on their source domains. These findings establish a strong benchmark for the study of domain generalization in medical imaging and can be generalized to the design of highly robust deep segmentation models for clinical deployment.
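The stacked-transformation idea can be sketched in a few lines. This is a hedged toy with three illustrative transforms and made-up parameter ranges (the paper's BigAug uses n = 9 transformations spanning four image-based characteristics); the point is only the mechanism of applying a randomly parameterised stack to each training image:

```python
import random

def gamma_adjust(img, rng):
    g = rng.uniform(0.5, 2.0)                      # random contrast change
    return [[p ** g for p in row] for row in img]

def add_noise(img, rng):
    return [[p + rng.gauss(0, 0.05) for p in row] for row in img]

def random_flip(img, rng):
    return [row[::-1] for row in img] if rng.random() < 0.5 else img

STACK = [gamma_adjust, add_noise, random_flip]     # toy stand-in for n = 9

def big_aug(img, seed=0):
    """Apply the whole transform stack in order with a seeded RNG."""
    rng = random.Random(seed)
    for transform in STACK:
        img = transform(img, rng)
    return img

image = [[0.1, 0.5, 0.9], [0.2, 0.6, 0.8]]         # toy 2x3 intensity image
augmented = big_aug(image)
```

At training time a fresh seed per sample would make every epoch see a differently shifted version of the single source domain, which is the mechanism the abstract credits for generalization to unseen domains.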

Convection enhanced delivery of anti-angiogenic and cytotoxic agents in combination therapy against brain tumour

  • Zhan, W.
Eur J Pharm Sci 2020 Journal Article, cited 0 times
Convection enhanced delivery is an effective alternative to routine delivery methods for overcoming the blood-brain barrier. However, its treatment efficacy remains disappointing in the clinic owing to rapid drug elimination in tumour tissue. In this study, multiphysics modelling is employed to investigate the combined delivery of anti-angiogenic and cytotoxic drugs from the perspective of intratumoural transport. Simulations are based on a 3-D realistic brain tumour model reconstructed from patient magnetic resonance images. The tumour microvasculature is targeted by bevacizumab, and six cytotoxic drugs are considered: doxorubicin, carmustine, cisplatin, fluorouracil, methotrexate and paclitaxel. Treatment efficacy is evaluated in terms of the distribution volume in which the drug concentration is above the corresponding LD90. Results demonstrate that the infusion of bevacizumab can only slightly improve interstitial fluid flow, but is highly effective in reducing fluid loss from the blood circulatory system and thereby inhibiting concentration dilution. As the transport of bevacizumab is dominated by convection, its spatial distribution and anti-angiogenic effectiveness are highly sensitive to the directional interstitial fluid flow. Infusing bevacizumab could enhance the delivery outcomes of all six drugs, although the degree of enhancement differs. The delivery of doxorubicin can be improved the most, whereas the impacts on methotrexate and paclitaxel are limited. Fluorouracil could cover a distribution volume comparable to paclitaxel's in the combination therapy for effective cell killing. The results obtained in this study could guide the design of this co-delivery treatment.

Effects of Focused-Ultrasound-and-Microbubble-Induced Blood-Brain Barrier Disruption on Drug Transport under Liposome-Mediated Delivery in Brain Tumour: A Pilot Numerical Simulation Study

  • Zhan, Wenbo
Pharmaceutics 2020 Journal Article, cited 0 times

The prognostic value of CT radiomic features from primary tumours and pathological lymphnodes in head and neck cancer patients

  • Zhai, Tiantian
2020 Thesis, cited 0 times
Head and neck cancer (HNC) is responsible for about 0.83 million new cancer cases and 0.43 million cancer deaths worldwide every year. Around 30%-50% of patients with locally advanced HNC experience treatment failures, predominantly occurring at the site of the primary tumor, followed by regional failures and distant metastases. In order to optimize treatment strategy, the overall aim of this thesis is to identify the patients who are at high risk of treatment failure. We developed and externally validated a series of models on the different patterns of failure to predict the risk of local failures, regional failures, distant metastases and individual nodal failures in HNC patients. New types of radiomic features based on CT images were included in our modelling analyses, and we showed for the first time that these radiomic features significantly improved the prognostic performance of models containing clinical factors. Our studies provide clinicians with new tools to predict the risk of treatment failures. This may support optimization of the treatment strategy for this disease and subsequently improve patient survival.

Predictive Modeling for Voxel-Based Quantification of Imaging-Based Subtypes of Pancreatic Ductal Adenocarcinoma (PDAC): A Multi-Institutional Study

  • Zaid, Mohamed
  • Widmann, Lauren
  • Dai, Annie
  • Sun, Kevin
  • Zhang, Jie
  • Zhao, Jun
  • Hurd, Mark W
  • Varadhachary, Gauri R
  • Wolff, Robert A
  • Maitra, Anirban
Cancers 2020 Journal Article, cited 0 times


  • Yektai, Homayoon
  • Manthouri, Mohammad
Biomedical Engineering: Applications, Basis and Communications 2020 Journal Article, cited 0 times
Lung cancer is one of the most dangerous diseases, causing a huge number of cancer deaths worldwide. Early detection of lung cancer is the only possible way to improve a patient's chance of survival. This study presents an innovative automated diagnostic classification method for computed tomography (CT) images of the lungs. In this paper, CT scans of the lungs were analyzed with multiscale convolution. The entire lung is segmented from the CT images, and parameters are calculated from the segmented image. The use of image-processing techniques and pattern identification in detecting lung cancer from CT images reduces human error in detecting tumors and speeds up diagnosis. Artificial neural networks (ANNs) have been widely used to detect lung cancer and have significantly reduced the percentage of errors. Therefore, in this paper, a convolutional neural network (CNN), the most effective method, is used for the detection of various types of cancers. This study presents a multiscale convolutional neural network (MCNN) approach for the classification of tumors. Because the MCNN presents the CT image to several deep convolutional neural networks at different sizes and resolutions, the classical handcrafted feature-extraction step is avoided. The proposed approach gives better classification rates than classical state-of-the-art methods, allowing safer computer-aided diagnosis of pleural cancer. This study reaches a diagnostic accuracy of 93.7±0.3 using the multiscale convolution technique, which reveals the efficiency of the proposed method.

CT images with expert manual contours of thoracic cancer for benchmarking auto-segmentation accuracy

  • Yang, J.
  • Veeraraghavan, H.
  • van Elmpt, W.
  • Dekker, A.
  • Gooding, M.
  • Sharp, G.
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: Automatic segmentation offers many benefits for radiotherapy treatment planning; however, the lack of publicly available benchmark datasets limits the clinical use of automatic segmentation. In this work, we present a well-curated computed tomography (CT) dataset of high-quality manually drawn contours from patients with thoracic cancer that can be used to evaluate the accuracy of thoracic normal tissue auto-segmentation systems. ACQUISITION AND VALIDATION METHODS: Computed tomography scans of 60 patients undergoing treatment simulation for thoracic radiotherapy were acquired from three institutions: MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, and the MAASTRO clinic. Each institution provided CT scans from 20 patients, including mean intensity projection four-dimensional CT (4D CT), exhale-phase (4D CT), or free-breathing CT scans, depending on their clinical practice. All CT scans covered the entire thoracic region with a 50-cm field of view and slice spacing of 1, 2.5, or 3 mm. Manual contours of the left/right lungs, esophagus, heart, and spinal cord were retrieved from the clinical treatment plans. These contours were checked for quality and edited if necessary to ensure adherence to RTOG 1106 contouring guidelines. DATA FORMAT AND USAGE NOTES: The CT images and RTSTRUCT files are available in DICOM format. The regions of interest were named according to the nomenclature recommended by American Association of Physicists in Medicine Task Group 263 as Lung_L, Lung_R, Esophagus, Heart, and SpinalCord. This dataset is available on The Cancer Imaging Archive (funded by the National Cancer Institute) under Lung CT Segmentation Challenge 2017. POTENTIAL APPLICATIONS: This dataset provides CT scans with well-delineated manually drawn contours from patients with thoracic cancer that can be used to evaluate auto-segmentation systems. Additional anatomies could be supplied in the future to enhance the existing library of contours.

Research of Multimodal Medical Image Fusion Based on Parameter-Adaptive Pulse-Coupled Neural Network and Convolutional Sparse Representation

  • Xia, J.
  • Lu, Y.
  • Tan, L.
Comput Math Methods Med 2020 Journal Article, cited 0 times
Visual effects of medical images have a great impact on clinical assistant diagnosis. At present, medical image fusion has become a powerful means of clinical application. Traditional medical image fusion methods produce poor fusion results because detailed feature information is lost during fusion. To deal with this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, which adaptively sets its parameters and optimizes the connection strength beta to improve performance. The low-frequency coefficients are merged by the convolutional sparse representation (CSR) model. The experimental results show that the proposed method solves the problems of difficult parameter setting and poor detail preservation of sparse representation during image fusion in traditional PCNN algorithms, and it has significant advantages in visual effect and objective indices compared with existing mainstream fusion algorithms.

Three-Plane–assembled Deep Learning Segmentation of Gliomas

  • Wu, Shaocheng
  • Li, Hongyang
  • Quang, Daniel
  • Guan, Yuanfang
Radiology: Artificial Intelligence 2020 Journal Article, cited 0 times
An accurate and fast deep learning approach developed for automatic segmentation of brain glioma on multimodal MRI scans achieved Sørensen–Dice scores of 0.80, 0.83, and 0.91 for enhancing tumor, tumor core, and whole tumor, respectively. Purpose To design a computational method for automatic brain glioma segmentation of multimodal MRI scans with high efficiency and accuracy. Materials and Methods The 2018 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset was used in this study, consisting of routine clinically acquired preoperative multimodal MRI scans. Three subregions of glioma—the necrotic and nonenhancing tumor core, the peritumoral edema, and the contrast-enhancing tumor—were manually labeled by experienced radiologists. Two-dimensional U-Net models were built using a three-plane–assembled approach to segment three subregions individually (three-region model) or to segment only the whole tumor (WT) region (WT-only model). The term three-plane–assembled means that coronal and sagittal images were generated by reformatting the original axial images. The model performance for each case was evaluated in three classes: enhancing tumor (ET), tumor core (TC), and WT. Results On the internal unseen testing dataset split from the 2018 BraTS training dataset, the proposed models achieved mean Sørensen–Dice scores of 0.80, 0.84, and 0.91, respectively, for ET, TC, and WT. On the BraTS validation dataset, the proposed models achieved mean 95% Hausdorff distances of 3.1 mm, 7.0 mm, and 5.0 mm, respectively, for ET, TC, and WT and mean Sørensen–Dice scores of 0.80, 0.83, and 0.91, respectively, for ET, TC, and WT. On the BraTS testing dataset, the proposed models ranked fourth out of 61 teams. 
The source code is publicly available. Conclusion This deep learning method consistently segmented subregions of brain glioma with high accuracy, efficiency, reliability, and generalization ability on screening images from a large population, and it can be efficiently implemented in clinical practice to assist neuro-oncologists or radiologists. Supplemental material is available for this article.
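The Sørensen–Dice scores reported in this and several other entries measure voxel overlap between a predicted and a reference segmentation mask. A minimal sketch of the computation (illustrative only, not any of the cited authors' code):

```python
import numpy as np

def dice_score(pred, truth):
    """Sørensen-Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / denom

# Toy example: two overlapping 2-D masks
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True    # 4 voxels
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:4] = True  # 6 voxels
print(dice_score(pred, truth))  # 2*4/(4+6) = 0.8
```

The same formula applies slice-wise or volume-wise; reported per-class scores (ET, TC, WT) are simply this computed on each label's binary mask.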

Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning

  • Wu, Panpan
  • Sun, Xuanchao
  • Zhao, Ziping
  • Wang, Haishuai
  • Pan, Shirui
  • Schuller, Bjorn
Comput Intell Neurosci 2020 Journal Article, cited 0 times
The classification process of lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result is heavily dependent on the performance of each step in lung nodule detection, causing low classification accuracy and a high false positive rate. To alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet network structure as the initial model, the deep residual network is constructed by combining residual learning and migration learning. The proposed approach is verified by conducting experiments on lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained using ten-fold cross-validation. Compared with a conventional support vector machine (SVM)-based CAD system, the accuracy of our method improved by 9.96% and the false positive rate decreased by 6.95%; compared with the VGG19 and InceptionV3 convolutional neural networks, the accuracy improved by 1.75% and 2.42%, respectively, and the false positive rate decreased by 2.07% and 2.22%, respectively. The experimental results demonstrate the effectiveness of our proposed method in lung nodule classification for CT images.

Dosiomics improves prediction of locoregional recurrence for intensity modulated radiotherapy treated head and neck cancer cases

  • Wu, A.
  • Li, Y.
  • Qi, M.
  • Lu, X.
  • Jia, Q.
  • Guo, F.
  • Dai, Z.
  • Liu, Y.
  • Chen, C.
  • Zhou, L.
  • Song, T.
Oral Oncol 2020 Journal Article, cited 0 times
OBJECTIVES: To investigate whether dosiomics can improve locoregional recurrence (LR) prediction for IMRT-treated head and neck cancer patients, through a comparison of prediction performance between radiomics-only models and models integrating dosiomics. MATERIALS AND METHODS: A cohort of 237 patients with head and neck cancer from four different institutions was obtained from The Cancer Imaging Archive and utilized to train and validate the radiomics-only prognostic model and the dosiomics-integrated prognostic model. For radiomics, the radiomics features were initially extracted from images, including CTs and PETs, and selected on the basis of their concordance index (CI) values, then condensed via principal component analysis. Lastly, multivariate Cox proportional hazards regression models were constructed with class-imbalance adjustment as the LR prediction models by inputting those condensed features. For the dosiomics integration model, the initial features were similar, but with the additional 3-dimensional dose distribution from the radiation treatment plans. The CI and Kaplan-Meier curves with log-rank analysis were used to assess and compare these models. RESULTS: On the independent validation dataset, the CI of the dosiomics integration model (0.66) was significantly different from that of the radiomics model (0.59) (Wilcoxon test, p = 5.9 x 10^-31). The integrated model successfully classified the patients into high- and low-risk groups (log-rank test, p = 2.5 x 10^-2), whereas the radiomics model was not able to provide such classification (log-rank test, p = 0.37). CONCLUSION: Dosiomics can improve prediction of LR in IMRT-treated patients and should not be neglected in related investigations.

Determining patient abdomen thickness from a single digital radiograph with a computational model: clinical results from a proof of concept study

  • Worrall, M.
  • Vinnicombe, S.
  • Sutton, D.
Br J Radiol 2020 Journal Article, cited 0 times
OBJECTIVE: A computational model has been created to estimate the abdominal thickness of a patient following an X-ray examination; its intended application is assisting with patient dose audit of paediatric X-ray examinations. This work evaluates the accuracy of the computational model in a clinical setting for adult patients undergoing anteroposterior (AP) abdomen X-ray examinations. METHODS: The model estimates patient thickness using the radiographic image, the exposure factors with which the image was acquired, a priori knowledge of the characteristics of the X-ray unit and detector and the results of extensive Monte Carlo simulation of patient examinations. For 20 patients undergoing AP abdominal X-ray examinations, the model was used to estimate the patient thickness; these estimates were compared against a direct measurement made at the time of the examination. RESULTS: Estimates of patient thickness made using the model were on average within +/-5.8% of the measured thickness. CONCLUSION: The model can be used to accurately estimate the thickness of a patient undergoing an AP abdominal X-ray examination where the patient's size falls within the range of the size of patients used to create the computational model. ADVANCES IN KNOWLEDGE: This work demonstrates that it is possible to accurately estimate the AP abdominal thickness of an adult patient using the digital X-ray image and a computational model.

Quantifying the incremental value of deep learning: Application to lung nodule detection

  • Warsavage, Theodore Jr
  • Xing, Fuyong
  • Baron, Anna E
  • Feser, William J
  • Hirsch, Erin
  • Miller, York E
  • Malkoski, Stephen
  • Wolf, Holly J
  • Wilson, David O
  • Ghosh, Debashis
PLoS One 2020 Journal Article, cited 0 times
We present a case study for implementing a machine learning algorithm with an incremental value framework in the domain of lung cancer research. Machine learning methods have often been shown to be competitive with prediction models in some domains; however, implementation of these methods is in early development. Often these methods are only directly compared to existing methods; here we present a framework for assessing the value of a machine learning model by assessing its incremental value. We developed a machine learning model to identify and classify lung nodules and assessed the incremental value added to existing risk prediction models. Multiple external datasets were used for validation. We found that our image model, trained on a dataset from The Cancer Imaging Archive (TCIA), improves upon existing models that are restricted to patient characteristics, but results were inconclusive as to whether it improves on models that consider nodule features. Another interesting finding is the variable performance on different datasets, suggesting that population generalization with machine learning models may be more challenging than is often assumed.

Semi-supervised mp-MRI data synthesis with StitchLayer and auxiliary distance maximization

  • Wang, Zhiwei
  • Lin, Yi
  • Cheng, Kwang-Ting Tim
  • Yang, Xin
Medical Image Analysis 2020 Journal Article, cited 0 times

Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography

  • Wang, Yi
  • Zhang, Hao
  • Chae, Kum Ju
  • Choi, Younhee
  • Jin, Gong Yong
  • Ko, Seok-Bum
Multidimensional Systems and Signal Processing 2020 Journal Article, cited 0 times
Computed tomography (CT) is widely used to locate pulmonary nodules for preliminary diagnosis of lung cancer. However, due to high visual similarities between malignant (cancer) and benign (non-cancer) nodules, distinguishing malignant from benign nodules is not an easy task for a thoracic radiologist. In this paper, a novel convolutional neural network (ConvNet) architecture is proposed to classify pulmonary nodules as either benign or malignant. Due to the high variance of nodule characteristics in CT scans, such as size and shape, a multi-path, multi-scale architecture is proposed and applied in the proposed ConvNet to improve the classification performance. The multi-scale method utilizes filters with different sizes to more effectively extract nodule features from local regions, and the multi-path architecture combines features extracted from different ConvNet layers, thereby enhancing the nodule features with respect to global regions. The proposed ConvNet is trained and evaluated on the LUNGx Challenge database, and achieves a sensitivity of 0.887 and a specificity of 0.924 with an area under the curve (AUC) of 0.948. The proposed ConvNet achieves a 14% AUC improvement compared to the state-of-the-art unsupervised learning approach. The proposed ConvNet also outperforms other state-of-the-art ConvNets explicitly designed for pulmonary nodule classification. For clinical usage, the proposed ConvNet could potentially assist radiologists in making diagnostic decisions in CT screening.

A prognostic analysis method for non-small cell lung cancer based on the computed tomography radiomics

  • Wang, Xu
  • Duan, Huihong
  • Li, Xiaobing
  • Ye, Xiaodan
  • Huang, Gang
  • Nie, Shengdong
Phys Med Biol 2020 Journal Article, cited 0 times
In order to assist doctors in arranging postoperative treatments and re-examinations for non-small cell lung cancer (NSCLC) patients, this study explored a prognostic analysis method for NSCLC based on computed tomography (CT) radiomics. The data of 173 NSCLC patients were collected retrospectively, and the clinically meaningful 3-year survival was used as the predictive limit to predict each patient's prognosis survival time range. Firstly, lung tumors were segmented and the radiomics features were extracted. Secondly, a feature weighting algorithm was used to screen and optimize the extracted original feature data. Then, the selected feature data, combined with the prognosis survival of patients, were used to train machine learning classification models. Finally, a prognostic survival prediction model and radiomics prognostic factors were obtained to predict the prognosis survival time range of NSCLC patients. The classification accuracy rate under cross-validation was up to 88.7% in the prognosis survival analysis model. When verified on an independent data set, the model also yielded a high prediction accuracy of up to 79.6%. Inverse difference moment, lobulation sign, and angular second moment were NSCLC prognostic factors based on radiomics. This study proved that CT radiomics features could effectively assist doctors in making more accurate prognosis survival predictions for NSCLC patients, helping to optimize treatment and re-examination so as to extend survival time.
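The texture features named as prognostic factors above, inverse difference moment and angular second moment, are classical Haralick statistics computed from a gray level co-occurrence matrix (GLCM). A small illustrative sketch of how such features are derived (not the study's code; single pixel offset, symmetric normalised GLCM):

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset (symmetric, normalised)."""
    dr, dc = offset
    rows, cols = img.shape
    P = np.zeros((levels, levels))
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    P = P + P.T          # make symmetric
    return P / P.sum()   # normalise to joint probabilities

def haralick_features(P):
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    asm = (P ** 2).sum()                    # angular second moment (energy)
    idm = (P / (1.0 + (i - j) ** 2)).sum()  # inverse difference moment (homogeneity)
    return {"contrast": contrast, "ASM": asm, "IDM": idm}

# Toy 3-level image
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 2]])
features = haralick_features(glcm(img, levels=3))
print(features)
```

In practice radiomics toolkits average such features over several offsets and directions; this sketch shows only the horizontal neighbour case.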

A multi-objective radiomics model for the prediction of locoregional recurrence in head and neck squamous cell cancer

  • Wang, K.
  • Zhou, Z.
  • Wang, R.
  • Chen, L.
  • Zhang, Q.
  • Sher, D.
  • Wang, J.
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: Locoregional recurrence (LRR) is the predominant pattern of relapse after nonsurgical treatment of head and neck squamous cell cancer (HNSCC). Therefore, accurately identifying patients with HNSCC who are at high risk for LRR is important for optimizing personalized treatment plans. In this work, we developed a multi-classifier, multi-objective, and multi-modality (mCOM) radiomics-based outcome prediction model for HNSCC LRR. METHODS: In mCOM, we considered sensitivity and specificity simultaneously as the objectives to guide the model optimization. We used multiple classifiers, comprising support vector machine (SVM), discriminant analysis (DA), and logistic regression (LR), to build the model. We used features from multiple modalities as model inputs, comprising clinical parameters and radiomics feature extracted from X-ray computed tomography (CT) images and positron emission tomography (PET) images. We proposed a multi-task multi-objective immune algorithm (mTO) to train the mCOM model and used an evidential reasoning (ER)-based method to fuse the output probabilities from different classifiers and modalities in mCOM. We evaluated the effectiveness of the developed method using a retrospective public pretreatment HNSCC dataset downloaded from The Cancer Imaging Archive (TCIA). The input for our model included radiomics features extracted from pretreatment PET and CT using an open source radiomics software and clinical characteristics such as sex, age, stage, primary disease site, human papillomavirus (HPV) status, and treatment paradigm. In our experiment, 190 patients from two institutions were used for model training while the remaining 87 patients from the other two institutions were used for testing. RESULTS: When we built the predictive model using features from single modality, the multi-classifier (MC) models achieved better performance over the models built with the three base-classifiers individually. 
When we built the model using features from multiple modalities, the proposed method achieved area under the receiver operating characteristic curve (AUC) values of 0.76 for the radiomics-only model, and 0.77 for the model built with radiomics and clinical features, which is significantly higher than the AUCs of models built with single-modality features. The statistical analysis was performed using MATLAB software. CONCLUSIONS: Comparisons with other methods demonstrated the efficiency of the mTO algorithm and the superior performance of the proposed mCOM model for predicting HNSCC LRR.

Deep learning based image reconstruction algorithm for limited-angle translational computed tomography

  • Wang, Jiaxi
  • Liang, Jun
  • Cheng, Jingye
  • Guo, Yumeng
  • Zeng, Li
PLoS One 2020 Journal Article, cited 0 times

Auto‐segmentation of organs at risk for head and neck radiotherapy planning: from atlas‐based to deep learning methods

  • Vrtovec, Tomaž
  • Močnik, Domen
  • Strojan, Primož
  • Pernuš, Franjo
  • Ibragimov, Bulat
Medical physics 2020 Journal Article, cited 2 times

Efficient CT Image Reconstruction in a GPU Parallel Environment

  • Valencia Pérez, Tomas A
  • Hernández López, Javier M
  • Moreno-Barbosa, Eduardo
  • de Celis Alonso, Benito
  • Palomino Merino, Martin R
  • Castaño Meneses, Victor M
Tomography 2020 Journal Article, cited 0 times
Computed tomography is nowadays an indispensable tool in medicine used to diagnose multiple diseases. In clinical and emergency room environments, the speed of acquisition and information processing are crucial. CUDA is a software architecture used to work with NVIDIA graphics processing units. In this paper a methodology to accelerate tomographic image reconstruction based on maximum likelihood expectation maximization iterative algorithm and combined with the use of graphics processing units programmed in CUDA framework is presented. Implementations developed here are used to reconstruct images with clinical use. Timewise, parallel versions showed improvement with respect to serial implementations. These differences reached, in some cases, 2 orders of magnitude in time while preserving image quality. The image quality and reconstruction times were not affected significantly by the addition of Poisson noise to projections. Furthermore, our implementations showed good performance when compared with reconstruction methods provided by commercial software. One of the goals of this work was to provide a fast, portable, simple, and cheap image reconstruction system, and our results support the statement that the goal was achieved.
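The maximum likelihood expectation maximization (MLEM) update at the core of the reconstruction can be sketched in a few lines; the version below is a toy serial implementation for illustration (the paper's contribution is the CUDA-parallel version, which this does not reproduce):

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """MLEM image reconstruction.

    A : (n_detectors, n_pixels) system matrix
    y : (n_detectors,) measured projections
    """
    x = np.ones(A.shape[1])   # uniform initial image
    sens = A.sum(axis=0)      # sensitivity term A^T 1
    for _ in range(n_iters):
        proj = A @ x                              # forward projection
        ratio = y / np.maximum(proj, 1e-12)       # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x

# Toy 2-pixel phantom observed through 3 projections
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y, n_iters=200)
print(np.round(x_hat, 3))  # converges toward [2. 3.]
```

The per-iteration forward and back projections (`A @ x` and `A.T @ ratio`) are the operations that a GPU implementation parallelises across detector bins and pixels.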

Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training

  • Thakur, S.
  • Doshi, J.
  • Pati, S.
  • Rathore, S.
  • Sako, C.
  • Bilello, M.
  • Ha, S. M.
  • Shukla, G.
  • Flanders, A.
  • Kotrotsou, A.
  • Milchenko, M.
  • Liem, S.
  • Alexander, G. S.
  • Lombardo, J.
  • Palmer, J. D.
  • LaMontagne, P.
  • Nazeri, A.
  • Talbar, S.
  • Kulkarni, U.
  • Marcus, D.
  • Colen, R.
  • Davatzikos, C.
  • Erus, G.
  • Bakas, S.
Neuroimage 2020 Journal Article, cited 0 times
Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.

Is an analytical dose engine sufficient for intensity modulated proton therapy in lung cancer?

  • Teoh, Suliana
  • Fiorini, Francesca
  • George, Ben
  • Vallis, Katherine A
  • Van den Heuvel, Frank
Br J Radiol 2020 Journal Article, cited 0 times
OBJECTIVE: To identify a subgroup of lung cancer plans where the analytical dose calculation (ADC) algorithm may be clinically acceptable compared to Monte Carlo (MC) dose calculation in intensity modulated proton therapy (IMPT). METHODS: Robust-optimised IMPT plans were generated for 20 patients to a dose of 70 Gy (relative biological effectiveness) in 35 fractions in RayStation. For each case, four plans were generated: three with ADC optimisation using the pencil beam (PB) algorithm followed by a final dose calculation with the following algorithms: PB (PB-PB), MC (PB-MC), and MC normalised to prescription dose (PB-MC scaled). A fourth plan was generated where MC optimisation and final dose calculation were performed (MC-MC). Dose comparison and gamma analysis (PB-PB vs PB-MC) were performed at two dose thresholds, 20% (D20) and 99% (D99), with PB-PB plans as reference. RESULTS: Overestimation of the dose to 99% and the mean dose of the clinical target volume was observed in all PB-MC compared to PB-PB plans (median: 3.7 Gy(RBE) (5%) (range: 2.3 to 6.9 Gy(RBE)) and 1.8 Gy(RBE) (3%) (0.5 to 4.6 Gy(RBE))). PB-MC scaled plans resulted in significantly higher CTV D2 compared to PB-PB (median difference: -4 Gy(RBE) (-6%) (-5.3 to -2.4 Gy(RBE)), p </= .001). The overall median gamma pass rates (3%/3 mm) at D20 and D99 were 93.2% (range: 62.2-97.5%) and 71.3% (15.4-92.0%). On multivariate analysis, presence of mediastinal disease and absence of range shifters were significantly associated with high gamma pass rates. Median D20 and D99 pass rates with these predictors were 96.0% (95.3-97.5%) and 85.4% (75.1-92.0%). MC-MC achieved similar target coverage and doses to OARs compared to PB-PB plans. CONCLUSION: In the presence of mediastinal involvement and absence of range shifters, RayStation ADC may be clinically acceptable in lung IMPT. Otherwise, the MC algorithm would be recommended to ensure accuracy of treatment plans.
ADVANCES IN KNOWLEDGE: Although the MC algorithm is more accurate than ADC in lung IMPT, ADC may be clinically acceptable where there is mediastinal involvement and an absence of range shifters.

Staging of clear cell renal cell carcinoma using random forest and support vector machine

  • Talaat, D.
  • Zada, F.
  • Kadry, R.
2020 Conference Paper, cited 0 times
Kidney cancer is one of the deadliest types of cancer affecting the human body. It is regarded as the seventh most common type of cancer affecting men and the ninth affecting women. Early diagnosis of kidney cancer can improve survival rates for many patients. Clear cell renal cell carcinoma (ccRCC) accounts for 90% of renal cancers. Although the exact cause of kidney cancer is still unknown, early diagnosis can help patients get the proper treatment at the proper time. In this paper, a novel semi-automated model is proposed for early detection and staging of clear cell renal cell carcinoma. The proposed model consists of three phases: segmentation, feature extraction, and classification. The first phase is the image segmentation phase, where images were masked to segment the kidney lobes and the masked images were fed into a watershed algorithm to extract the tumor from the kidney. The second phase is the feature extraction phase, where the gray level co-occurrence matrix (GLCM) method was integrated with normal statistical methods to extract the feature vectors from the segmented images. The last phase is the classification phase, where the resulting feature vectors were introduced to random forest (RF) and support vector machine (SVM) classifiers. Experiments have been carried out to validate the effectiveness of the proposed model using the TCGA-KIRC dataset, which contains 228 CT scans of ccRCC patients, of which 150 scans were used for learning and 78 for validation. The proposed model showed an improvement in accuracy of 15.12% over previous work.

Development and Validation of a Modified Three-Dimensional U-Net Deep-Learning Model for Automated Detection of Lung Nodules on Chest CT Images From the Lung Image Database Consortium and Japanese Datasets

  • Suzuki, K.
  • Otsuka, Y.
  • Nomura, Y.
  • Kumamaru, K. K.
  • Kuwatsuru, R.
  • Aoki, S.
Acad Radiol 2020 Journal Article, cited 0 times
RATIONALE AND OBJECTIVES: A more accurate lung nodule detection algorithm is needed. We developed a modified three-dimensional (3D) U-net deep-learning model for the automated detection of lung nodules on chest CT images. The purpose of this study was to evaluate the accuracy of the developed modified 3D U-net deep-learning model. MATERIALS AND METHODS: In this Health Insurance Portability and Accountability Act-compliant, Institutional Review Board-approved retrospective study, the 3D U-net based deep-learning model was trained using the Lung Image Database Consortium and Image Database Resource Initiative dataset. For internal model validation, we used 89 chest CT scans that were not used for model training. For external model validation, we used 450 chest CT scans taken at an urban university hospital in Japan. Each case included at least one nodule of >5 mm identified by an experienced radiologist. We evaluated model accuracy using the competition performance metric (CPM) (average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false-positives per scan). The 95% confidence interval (CI) was computed by bootstrapping 1000 times. RESULTS: In the internal validation, the CPM was 94.7% (95% CI: 89.1%-98.6%). In the external validation, the CPM was 83.3% (95% CI: 79.4%-86.1%). CONCLUSION: The modified 3D U-net deep-learning model showed high performance in both internal and external validation.
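The competition performance metric (CPM) used to evaluate the model averages FROC sensitivity at seven fixed false-positive rates per scan. A minimal sketch of the computation (illustrative; the toy FROC curve below is invented for demonstration, not data from the study):

```python
import numpy as np

# The seven operating points that define the CPM
CPM_FP_RATES = [0.125, 0.25, 0.5, 1, 2, 4, 8]

def cpm(fp_per_scan, sensitivity):
    """Average sensitivity at the seven CPM false-positive rates.

    fp_per_scan, sensitivity: FROC curve samples, sorted by fp_per_scan.
    """
    fp = np.asarray(fp_per_scan, dtype=float)
    sens = np.asarray(sensitivity, dtype=float)
    # Linearly interpolate the FROC curve at each required FP rate
    return float(np.mean(np.interp(CPM_FP_RATES, fp, sens)))

# Toy FROC curve
fp = [0.1, 0.5, 1, 2, 4, 8]
sens = [0.70, 0.80, 0.85, 0.90, 0.93, 0.95]
print(round(cpm(fp, sens), 4))
```

Bootstrap confidence intervals, as reported in the abstract, are obtained by resampling scans and recomputing this average.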

Radiomics for glioblastoma survival analysis in pre-operative MRI: exploring feature robustness, class boundaries, and machine learning techniques

  • Suter, Y.
  • Knecht, U.
  • Alao, M.
  • Valenzuela, W.
  • Hewer, E.
  • Schucht, P.
  • Wiest, R.
  • Reyes, M.
Cancer Imaging 2020 Journal Article, cited 0 times
BACKGROUND: This study aims to identify robust radiomic features for Magnetic Resonance Imaging (MRI), assess feature selection and machine learning methods for overall survival classification of Glioblastoma multiforme patients, and robustify models trained on single-center data when applied to multi-center data. METHODS: Tumor regions were automatically segmented on MRI data, and 8327 radiomic features extracted from these regions. Single-center data was perturbed to assess radiomic feature robustness, with over 16 million tests of typical perturbations. Robust features were selected based on the Intraclass Correlation Coefficient to measure agreement across perturbations. Feature selectors and machine learning methods were compared to classify overall survival. Models trained on single-center data (63 patients) were tested on multi-center data (76 patients). Priors using feature robustness and clinical knowledge were evaluated. RESULTS: We observed a very large performance drop when applying models trained on single-center data to unseen multi-center data, e.g. a decrease of 0.56 in the area under the receiver operating characteristic curve (AUC) for the overall survival classification boundary at 1 year. By using robust features alongside priors for two overall survival classes, the AUC drop could be reduced by 21.2%. In contrast, sensitivity was 12.19% lower when applying a prior. CONCLUSIONS: Our experiments show that it is possible to attain improved levels of robustness and accuracy when models need to be applied to unseen multi-center data. The performance on multi-center data of models trained on single-center data can be increased by using robust features and introducing prior knowledge. For successful model robustification, tailoring perturbations for robustness testing to the target dataset is key.

ROI-based feature learning for efficient true positive prediction using convolutional neural network for lung cancer diagnosis

  • Suresh, Supriya
  • Mohan, Subaji
Neural Computing and Applications 2020 Journal Article, cited 0 times

Multisite Technical and Clinical Performance Evaluation of Quantitative Imaging Biomarkers from 3D FDG PET Segmentations of Head and Neck Cancer Images

  • Smith, Brian J
  • Buatti, John M
  • Bauer, Christian
  • Ulrich, Ethan J
  • Ahmadvand, Payam
  • Budzevich, Mikalai M
  • Gillies, Robert J
  • Goldgof, Dmitry
  • Grkovski, Milan
  • Hamarneh, Ghassan
  • Kinahan, Paul E
  • Muzi, John P
  • Muzi, Mark
  • Laymon, Charles M
  • Mountz, James M
  • Nehmeh, Sadek
  • Oborski, Matthew J
  • Zhao, Binsheng
  • Sunderland, John J
  • Beichel, Reinhard R
Tomography 2020 Journal Article, cited 1 times
Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.

Brain tumor segmentation approach based on the extreme learning machine and significantly fast and robust fuzzy C-means clustering algorithms running on Raspberry Pi hardware

  • ŞİŞİK, Fatih
  • Sert, Eser
Medical Hypotheses 2020 Journal Article, cited 0 times
Automatic decision support systems have gained importance in the health sector in recent years. In parallel with recent developments in the fields of artificial intelligence and image processing, embedded systems are also used in decision support systems for tumor diagnosis. The extreme learning machine (ELM) is a recently developed, quick and efficient algorithm which can quickly and flawlessly diagnose tumors using machine learning techniques. Similarly, the significantly fast and robust fuzzy C-means clustering algorithm (FRFCM) is a novel, fast algorithm with high performance. In the present study, a brain tumor segmentation approach is proposed based on the extreme learning machine and significantly fast and robust fuzzy C-means clustering algorithms (BTS-ELM-FRFCM), running on Raspberry Pi (PRI) hardware. The present study mainly aims to introduce to the health sector a new segmentation system hardware containing new algorithms and offering a high level of accuracy. PRIs are useful mobile devices due to their cost-effectiveness and satisfying hardware. 3200 training images were used to train the ELM in the present study, and 20 MRI images were used for the testing process. Figure of merit (FOM), Jaccard similarity coefficient (JSC) and Dice indexes were used in order to evaluate the performance of the proposed approach. In addition, the proposed method was compared with brain tumor segmentation based on support vector machines (BTS-SVM), brain tumor segmentation based on fuzzy C-means (BTS-FCM) and brain tumor segmentation based on self-organizing maps and k-means (BTS-SOM). The statistical analysis of the FOM, JSC and Dice results obtained using the four approaches indicated that BTS-ELM-FRFCM displayed the highest performance. Thus, it can be concluded that the embedded system designed in the present study can perform brain tumor segmentation with a high accuracy rate.

Unsupervised domain adaptation with adversarial learning for mass detection in mammogram

  • Shen, Rongbo
  • Yao, Jianhua
  • Yan, Kezhou
  • Tian, Kuan
  • Jiang, Cheng
  • Zhou, Ke
Neurocomputing 2020 Journal Article, cited 0 times
Many medical image datasets have been collected without proper annotations for deep learning training. In this paper, we propose a novel unsupervised domain adaptation framework with adversarial learning to minimize the annotation effort. Our framework employs a task-specific network, i.e., a fully convolutional network (FCN), for spatial density prediction. Moreover, we employ a domain discriminator, in which adversarial learning is adopted to align the less-annotated target domain features with the well-annotated source domain features in the feature space. We further propose a novel training strategy for the adversarial learning by coupling data from the source and target domains and alternating the subnet updates. We employ the public CBIS-DDSM dataset as the source domain, and perform two sets of experiments on two target domains (the public INbreast dataset and a self-collected dataset), respectively. Experimental results suggest consistent and comparable performance improvement over the state-of-the-art methods. Our proposed training strategy is also shown to converge much faster.

Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data

  • Sheller, Micah J
  • Edwards, Brandon
  • Reina, G Anthony
  • Martin, Jason
  • Pati, Sarthak
  • Kotrotsou, Aikaterini
  • Milchenko, Mikhail
  • Xu, Weilin
  • Marcus, Daniel
  • Colen, Rivka R
  • Bakas, Spyridon
Scientific Reports 2020 Journal Article, cited 4 times
Several studies underscore the potential of deep learning in identifying complex patterns, leading to diagnostic and prognostic biomarkers. Identifying sufficiently large and diverse datasets, required for training, is a significant challenge in medicine and can rarely be found in individual institutions. Multi-institutional collaborations based on centrally-shared patient data face privacy and ownership challenges. Federated learning is a novel paradigm for data-private multi-institutional collaborations, where model-learning leverages all available data without sharing data between institutions, by distributing the model-training to the data-owners and aggregating their results. We show that federated learning among 10 institutions results in models reaching 99% of the model quality achieved with centralized data, and evaluate generalizability on data from institutions outside the federation. We further investigate the effects of data distribution across collaborating institutions on model quality and learning patterns, indicating that increased access to data through data private multi-institutional collaborations can benefit model quality more than the errors introduced by the collaborative method. Finally, we compare with other collaborative-learning approaches demonstrating the superiority of federated learning, and discuss practical implementation considerations. Clinical adoption of federated learning is expected to lead to models trained on datasets of unprecedented size, hence have a catalytic impact towards precision/personalized medicine.
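The aggregation step described above (distributing model training to the data owners and combining their results) is commonly implemented as a data-size-weighted average of model parameters, in the style of federated averaging. A minimal sketch, with illustrative parameter vectors and dataset sizes rather than the paper's actual model:

```python
# Sketch of a federated-averaging-style aggregation step: each institution
# trains locally and sends back updated parameters; the server combines them
# weighted by local training-set size. Values below are purely illustrative.

def federated_average(updates, sizes):
    """Aggregate per-institution parameter vectors, weighted by data size.

    updates: list of parameter vectors (lists of floats), one per institution
    sizes:   list of local training-set sizes, aligned with updates
    """
    total = sum(sizes)
    averaged = [0.0] * len(updates[0])
    for params, size in zip(updates, sizes):
        weight = size / total
        for i, p in enumerate(params):
            averaged[i] += weight * p
    return averaged

# Three institutions report parameters after one local training round.
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(federated_average(updates, sizes))  # [3.5, 4.5]
```

In a full system the server would redistribute the averaged model for the next round, so no institution's raw data ever leaves its site.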

An efficient denoising of impulse noise from MRI using adaptive switching modified decision based unsymmetric trimmed median filter

  • Sheela, C. Jaspin Jeba
  • Suganthi, G.
Biomedical Signal Processing and Control 2020 Journal Article, cited 0 times

Prediction of Molecular Mutations in Diffuse Low-Grade Gliomas using MR Imaging Features

  • Shboul, Zeina A
  • Chen, James
  • Iftekharuddin, Khan M
Scientific Reports 2020 Journal Article, cited 0 times
Diffuse low-grade gliomas (LGG) have been reclassified based on molecular mutations, which require invasive tumor tissue sampling. Tissue sampling by biopsy may be limited by sampling error, whereas non-invasive imaging can evaluate the entirety of a tumor. This study presents a non-invasive analysis of low-grade gliomas using imaging features based on the updated classification. We introduce molecular (MGMT methylation, IDH mutation, 1p/19q co-deletion, ATRX mutation, and TERT mutations) prediction methods for low-grade gliomas with imaging. Imaging features are extracted from magnetic resonance imaging data and include texture features, fractal and multi-resolution fractal texture features, and volumetric features. Model training includes nested leave-one-out cross-validation to select features, train the model, and estimate model performance. The prediction models of MGMT methylation, IDH mutations, 1p/19q co-deletion, ATRX mutation, and TERT mutations achieve a test performance AUC of 0.83 +/- 0.04, 0.84 +/- 0.03, 0.80 +/- 0.04, 0.70 +/- 0.09, and 0.82 +/- 0.04, respectively. Furthermore, our analysis shows that the fractal features have a significant effect on the predictive performance for MGMT methylation, IDH mutations, 1p/19q co-deletion, and ATRX mutations. The performance of our prediction methods indicates the potential of correlating computed imaging features with LGG molecular mutation types and identifies candidates that may be considered potential predictive biomarkers of LGG molecular classification.
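The leave-one-out validation scheme named above can be illustrated with a plain leave-one-out loop. This sketch uses a toy 1-nearest-neighbour classifier on made-up data and omits the study's inner feature-selection loop; none of the names or values are from the paper:

```python
# Leave-one-out cross-validation: hold out each sample once, fit on the rest,
# and score the held-out prediction. A toy 1-NN classifier stands in for the
# study's actual model; data below is illustrative.

def loo_accuracy(X, y):
    """Leave-one-out CV accuracy of a 1-NN classifier on (X, y)."""
    correct = 0
    for i in range(len(X)):
        # Hold out sample i; "train" on all remaining samples.
        train = [(x, lab) for j, (x, lab) in enumerate(zip(X, y)) if j != i]
        # 1-NN prediction: label of the closest training sample.
        pred = min(train,
                   key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], X[i])))[1]
        correct += pred == y[i]
    return correct / len(X)

X = [(0.0, 0.1), (0.2, 0.0), (1.0, 0.9), (0.9, 1.1)]
y = [0, 0, 1, 1]
print(loo_accuracy(X, y))  # 1.0
```

In the nested variant, each outer fold would additionally run its own inner cross-validation to choose features before fitting, so the test estimate stays unbiased.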

An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor

  • Sharif, Muhammad
  • Amin, Javaria
  • Raza, Mudassar
  • Yasmin, Mussarat
  • Satapathy, Suresh Chandra
Pattern Recognition Letters 2020 Journal Article, cited 0 times
Brain tumors are a major cause of death in human beings. If not treated properly and in a timely manner, there is a high chance of a tumor becoming malignant. Therefore, brain tumor detection at an initial stage is a significant requirement. In this work, the skull is first removed through the brain surface extraction (BSE) method. The skull-removed image is then fed to particle swarm optimization (PSO) to achieve better segmentation. In the next step, local binary patterns (LBP) and deep features of the segmented images are extracted, and a genetic algorithm (GA) is applied for best-feature selection. Finally, an artificial neural network (ANN) and other classifiers are utilized to classify the tumor grades. The publicly available complex brain datasets RIDER and BRATS 2018 Challenge are utilized for evaluation of the method, and a maximum accuracy of 99% is attained. The results are also compared with existing methods, showing that the presented technique provides improved outcomes, which is clear proof of its effectiveness and novelty.

Improved pulmonary lung nodules risk stratification in computed tomography images by fusing shape and texture features in a machine-learning paradigm

  • Sahu, Satya Prakash
  • Londhe, Narendra D.
  • Verma, Shrish
  • Singh, Bikesh K.
  • Banchhor, Sumit Kumar
International Journal of Imaging Systems and Technology 2020 Journal Article, cited 0 times
Lung cancer is one of the deadliest cancers in both men and women. Accurate and early diagnosis of pulmonary lung nodules is critical. This study presents an accurate computer-aided diagnosis (CADx) system for risk stratification of pulmonary nodules in computed tomography (CT) lung images by fusing shape and texture-based features in a machine-learning (ML) based paradigm. A database with 114 (28 high-risk) patients acquired from the Lung Image Database Consortium (LIDC) is used in this study. After nodule segmentation using K-means clustering, features based on shape and texture attributes are extracted. Seven different filter and wrapper-based feature selection techniques are used for dominant feature selection. Lastly, the classification of nodules is performed by a support vector machine using six different kernel functions. The classification results are evaluated using 10-fold cross-validation and hold-out data division protocols. The performance of the proposed system is evaluated using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). Using 30 dominant features from the pool of shape and texture-based features, the proposed system achieves the highest classification accuracy and AUC of 89% and 0.92, respectively. The proposed ML-based system showed an improvement in risk stratification accuracy by fusing shape and texture-based features.

A Hybrid Approach for 3D Lung Segmentation in CT Images Using Active Contour and Morphological Operation

  • Sahu, Satya Praksh
  • Kamble, Bhawna
2020 Book Section, cited 0 times
Lung segmentation is the initial step in the detection and diagnosis of lung-related abnormalities and diseases. In a CAD system for lung cancer, this step traces the boundary of the pulmonary region from the thorax in CT images. It decreases the overhead of further steps in the CAD system by reducing the search space for finding ROIs. The major issue and challenging task for the segmentation is the inclusion of juxtapleural nodules in the segmented lungs. The chapter attempts 3D lung segmentation of CT images using active contours and morphological operations. The major steps in the proposed approach are: preprocessing through various techniques; Otsu's thresholding for binarizing the image; morphological operations for elimination of undesired regions; and, finally, active contours for the segmentation of the lungs in 3D. For the experiment, 10 subjects are taken from the public LIDC-IDRI dataset. The proposed method achieved a Jaccard similarity index of 0.979, a Dice similarity coefficient of 0.989, and a volume overlap error of 0.073 when compared to the ground truth.
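The overlap metrics reported above (Jaccard similarity index, Dice similarity coefficient) are standard set-overlap measures on binary masks. A small sketch with illustrative masks, treating volume overlap error as 1 - Jaccard (an assumption; the chapter may define it differently):

```python
# Overlap metrics between a predicted and a ground-truth binary mask,
# represented here as flat 0/1 voxel lists. Mask values are illustrative.

def overlap_metrics(pred, truth):
    """Return (Jaccard, Dice, volume overlap error) for two binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))   # |P ∩ T|
    p_sum, t_sum = sum(pred), sum(truth)              # |P|, |T|
    union = p_sum + t_sum - inter                     # |P ∪ T|
    jaccard = inter / union
    dice = 2 * inter / (p_sum + t_sum)
    vol_err = 1 - jaccard  # assumed definition: 1 - Jaccard
    return jaccard, dice, vol_err

pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
j, d, v = overlap_metrics(pred, truth)
print(round(j, 3), round(d, 3), round(v, 3))  # 0.6 0.75 0.4
```

Dice is always at least as large as Jaccard for the same masks (D = 2J / (1 + J)), which matches the 0.989 vs. 0.979 ordering reported in the abstract.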

A manifold learning regularization approach to enhance 3D CT image-based lung nodule classification

  • Ren, Y.
  • Tsai, M. Y.
  • Chen, L.
  • Wang, J.
  • Li, S.
  • Liu, Y.
  • Jia, X.
  • Shen, C.
Int J Comput Assist Radiol Surg 2020 Journal Article, cited 2 times
PURPOSE: Diagnosis of lung cancer requires radiologists to review every lung nodule in CT images. Such a process can be very time-consuming, and the accuracy is affected by many factors, such as the experience of radiologists and available diagnosis time. To address this problem, we proposed to develop a deep learning-based system to automatically classify benign and malignant lung nodules. METHODS: The proposed method automatically determines benignity or malignancy given the 3D CT image patch of a lung nodule to assist the diagnosis process. Motivated by the fact that the real structure among data is often embedded on a low-dimensional manifold, we developed a novel manifold regularized classification deep neural network (MRC-DNN) to perform classification directly based on the manifold representation of lung nodule images. The concise manifold representation revealing important data structure is expected to benefit the classification, while the manifold regularization enforces strong, but natural, constraints on network training, preventing over-fitting. RESULTS: The proposed method achieves accurate manifold learning with a reconstruction error of ~ 30 HU on real lung nodule CT image data. In addition, the classification accuracy on testing data is 0.90, with a sensitivity of 0.81 and a specificity of 0.95, which outperforms state-of-the-art deep learning methods. CONCLUSION: The proposed MRC-DNN facilitates an accurate manifold learning approach for lung nodule classification based on 3D CT images. More importantly, MRC-DNN suggests a new and effective idea of enforcing regularization for network training, with potential impact on a broad range of applications.

An unsupervised semi-automated pulmonary nodule segmentation method based on enhanced region growing

  • Ren, He
  • Zhou, Lingxiao
  • Liu, Gang
  • Peng, Xueqing
  • Shi, Weiya
  • Xu, Huilin
  • Shan, Fei
  • Liu, Lei
Quantitative Imaging in Medicine and Surgery 2020 Journal Article, cited 0 times

Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks

  • Reddy, Annapareddy V. N.
  • Krishna, Ch Phani
  • Mallick, Pradeep Kumar
  • Satapathy, Sandeep Kumar
  • Tiwari, Prayag
  • Zymbler, Mikhail
  • Kumar, Sachin
Journal of Big Data 2020 Journal Article, cited 0 times
Glioblastoma (GBM) is a stage 4 malignant tumor in which a large portion of tumor cells are reproducing and dividing at any moment. These tumors are life threatening and may result in partial or complete mental and physical disability. In this study, we have proposed a classification model using hybrid deep belief networks (DBN) to classify magnetic resonance imaging (MRI) for GBM tumors. A DBN is composed of stacked restricted Boltzmann machines (RBM). DBNs often require a large number of hidden layers consisting of large numbers of neurons to learn the best features from the raw image data; hence, computational and space complexity is high and a lot of training time is required. The proposed approach combines DTW with DBN to improve the efficiency of the existing DBN model. The results are validated using several statistical parameters. Statistical validation verifies that the combination of DTW and DBN outperformed the other classifiers in terms of training time, space complexity and classification accuracy.

Deeply supervised U‐Net for mass segmentation in digital mammograms

  • Ravitha Rajalakshmi, N.
  • Vidhyapriya, R.
  • Elango, N.
  • Ramesh, Nikhil
International Journal of Imaging Systems and Technology 2020 Journal Article, cited 0 times
Mass detection is a critical process in the examination of mammograms. The shape and texture of the mass are key parameters used in the diagnosis of breast cancer. To recover the shape of the mass, semantic segmentation is found to be more useful than mere object detection or localization. The main challenges involved in mass segmentation include: (a) a low signal-to-noise ratio, (b) indiscernible mass boundaries, and (c) more false positives. These problems arise due to the significant overlap in the intensities of both the normal parenchymal region and the mass region. To address these challenges, a deeply supervised U‐Net model (DS U‐Net) coupled with dense conditional random fields (CRFs) is proposed. Here, the input images are preprocessed using CLAHE and a modified encoder‐decoder‐based deep learning model is used for segmentation. In general, the encoder captures the textural information of various regions in an input image, whereas the decoder recovers the spatial location of the desired region of interest. Encoder‐decoder‐based models lack the ability to recover non‐conspicuous and spiculated mass boundaries. In the proposed work, deep supervision is integrated with a popular encoder‐decoder model (U‐Net) to improve the attention of the network toward the boundary of the suspicious regions. The final segmentation map is also created as a linear combination of the intermediate feature maps and the output feature map. The dense CRF is then used to fine‐tune the segmentation map for the recovery of definite edges. The DS U‐Net with dense CRF is evaluated on two publicly available benchmark datasets, CBIS‐DDSM and INBREAST. It provides a Dice score of 82.9% for CBIS‐DDSM and 79% for INBREAST.

Imaging Signature of 1p/19q Co-deletion Status Derived via Machine Learning in Lower Grade Glioma

  • Rathore, Saima
  • Chaddad, Ahmad
  • Bukhari, Nadeem Haider
  • Niazi, Tamim
2020 Book Section, cited 0 times
We present a new approach to quantify the co-deletion of chromosomal arms 1p/19q status in lower grade glioma (LGG). Though the surgical biopsy followed by fluorescence in-situ hybridization test is the gold standard currently to identify mutational status for diagnosis and treatment planning, there are several imaging studies to predict the same. Our study aims to determine the 1p/19q mutational status of LGG non-invasively by advanced pattern analysis using multi-parametric MRI. The publicly available dataset at TCIA was used. T1-W and T2-W MRIs of a total 159 patients with grade-II and grade-III glioma, who had biopsy proven 1p/19q status consisting either no deletion (n = 57) or co-deletion (n = 102), were used in our study. We quantified the imaging profile of these tumors by extracting diverse imaging features, including the tumor’s spatial distribution pattern, volumetric, texture, and intensity distribution measures. We integrated these diverse features via support vector machines, to construct an imaging signature of 1p/19q, which was evaluated in independent discovery (n = 85) and validation (n = 74) cohorts, and compared with the 1p/19q status obtained through fluorescence in-situ hybridization test. The classification accuracy on complete, discovery and replication cohorts was 86.16%, 88.24%, and 85.14%, respectively. The classification accuracy when the model developed on training cohort was applied on unseen replication set was 82.43%. Non-invasive prediction of 1p/19q status from MRIs would allow improved treatment planning for LGG patients without the need of surgical biopsies and would also help in potentially monitoring the dynamic mutation changes during the course of the treatment.

Comparison of iterative parametric and indirect deep learning-based reconstruction methods in highly undersampled DCE-MR Imaging of the breast

  • Rastogi, A.
  • Yalavarthy, P. K.
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: To compare the performance of iterative direct and indirect parametric reconstruction methods with indirect deep learning-based reconstruction methods in estimating tracer-kinetic parameters from highly undersampled DCE-MR imaging breast data, and to provide a systematic comparison of the same. METHODS: Estimating tracer-kinetic parameters from undersampled data using indirect methods requires first reconstructing the anatomical images by solving an inverse problem. These reconstructed images are in turn utilized to estimate the tracer-kinetic parameters. In direct estimation, the parameters are estimated without reconstructing the anatomical images. Both problems are ill-posed and are typically solved using prior-based regularization or using deep learning. In this study, for indirect estimation, two deep learning-based reconstruction frameworks, namely ISTA-Net(+) and MODL, were utilized. For direct and indirect parametric estimation, sparsity-inducing priors (L1 and total variation) were deployed, with the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm as the solver. The performance of these techniques was compared systematically in the estimation of vascular permeability ( K trans ) from undersampled DCE-MRI breast data using Patlak as the pharmacokinetic model. The experiments involved retrospective undersampling of the data 20x, 50x, and 100x, and the results were compared using the PSNR, nRMSE, SSIM, and Xydeas metrics. The K trans maps estimated from fully sampled data were utilized as ground truth. The developed code was made available as open source. RESULTS: The reconstruction methods' performance was evaluated using breast data from ten patients (five each for training and testing). Consistent with other studies, the results indicate that direct parametric reconstruction methods provide improved performance compared to the indirect parametric reconstruction methods.
The results also indicate that for 20x undersampling, deep learning-based methods perform better than or on par with direct estimation in terms of PSNR, SSIM, and nRMSE. However, for higher undersampling rates (50x and 100x), direct estimation performs better in all metrics. For all undersampling rates, direct reconstruction performed better in terms of the Xydeas metric, which indicates fidelity in the magnitude and orientation of edges. CONCLUSION: Deep learning-based indirect techniques perform on par with direct estimation techniques for lower undersampling rates in breast DCE-MR imaging. At higher undersampling rates, they are not able to provide the much-needed generalization. Direct estimation techniques are able to provide more accurate results than both deep learning-based and parametric indirect methods in these high-undersampling scenarios.

Lung Cancer Diagnosis and Treatment Using AI and Mobile Applications

  • Rajesh, P.
  • Murugan, A.
  • Muruganantham, B.
  • Ganesh Kumar, S.
International Journal of Interactive Mobile Technologies (iJIM) 2020 Journal Article, cited 0 times
Cancer has become very common in this evolving world. Technological advancements and increased radiation exposure have made cancer a common syndrome. Various types of cancer exist, such as skin cancer, breast cancer, prostate cancer, blood cancer, colorectal cancer, kidney cancer and lung cancer. Among these, the mortality rate is highest in lung cancer, which is tough to diagnose and can be diagnosed only in advanced stages. Small cell lung cancer and non-small cell lung cancer are the two types, of which non-small cell lung cancer (NSCLC) is the most common, making up 80 to 85 percent of all cases [1]. Advancements in digital image processing and artificial intelligence have helped greatly in medical image analysis and computer-aided diagnosis (CAD). Numerous studies have been carried out in this field to improve the detection and prediction of cancerous tissues. In current methods, traditional image processing techniques are applied for image processing, noise removal and feature extraction. There are a few good approaches that apply artificial intelligence and produce better results. However, no research has achieved 100% accuracy in nodule detection, early detection of cancerous nodules or faster processing methods. The application of artificial intelligence techniques like machine learning and deep learning is very minimal and limited. In this paper [Figure 1], we have applied artificial intelligence techniques to process CT (computed tomography) scan images for data collection and data model training. The DICOM image data is saved as a numpy file, with all medical information extracted from the files for training. With the trained data, we apply deep learning for noise removal and feature extraction. We can process a huge volume of medical images for data collection, image processing, and detection and prediction of nodules. The patient is made well aware of the disease and enabled to track their health using various mobile applications available in the online stores for iOS and Android mobile devices.

A Clinical System for Non-invasive Blood-Brain Barrier Opening Using a Neuronavigation-Guided Single-Element Focused Ultrasound Transducer

  • Pouliopoulos, Antonios N
  • Wu, Shih-Ying
  • Burgess, Mark T
  • Karakatsani, Maria Eleni
  • Kamimura, Hermes A S
  • Konofagou, Elisa E
Ultrasound Med Biol 2020 Journal Article, cited 3 times
Focused ultrasound (FUS)-mediated blood-brain barrier (BBB) opening is currently being investigated in clinical trials. Here, we describe a portable clinical system with a therapeutic transducer suitable for humans, which eliminates the need for in-line magnetic resonance imaging (MRI) guidance. A neuronavigation-guided 0.25-MHz single-element FUS transducer was developed for non-invasive clinical BBB opening. Numerical simulations and experiments were performed to determine the characteristics of the FUS beam within a human skull. We also validated the feasibility of BBB opening obtained with this system in two non-human primates using U.S. Food and Drug Administration (FDA)-approved treatment parameters. Ultrasound propagation through a human skull fragment caused 44.4 +/- 1% pressure attenuation at a normal incidence angle, while the focal size decreased by 3.3 +/- 1.4% and 3.9 +/- 1.8% along the lateral and axial dimension, respectively. Measured lateral and axial shifts were 0.5 +/- 0.4 mm and 2.1 +/- 1.1 mm, while simulated shifts were 0.1 +/- 0.2 mm and 6.1 +/- 2.4 mm, respectively. A 1.5-MHz passive cavitation detector transcranially detected cavitation signals of Definity microbubbles flowing through a vessel-mimicking phantom. T1-weighted MRI confirmed a 153 +/- 5.5 mm(3) BBB opening in two non-human primates at a mechanical index of 0.4, using Definity microbubbles at the FDA-approved dose for imaging applications, without edema or hemorrhage. In conclusion, we developed a portable system for non-invasive BBB opening in humans, which can be achieved at clinically relevant ultrasound exposures without the need for in-line MRI guidance. The proposed FUS system may accelerate the adoption of non-invasive FUS-mediated therapies due to its fast application, low cost and portability.

Liver Segmentation in CT with MRI Data: Zero-Shot Domain Adaptation by Contour Extraction and Shape Priors

  • Pham, D. D.
  • Dovletov, G.
  • Pauli, J.
2020 Conference Paper, cited 0 times
In this work we address the problem of domain adaptation for segmentation tasks with deep convolutional neural networks. We focus on managing the domain shift from MRI to CT volumes on the example of 3D liver segmentation. Domain adaptation between modalities is particularly of practical importance, as different hospital departments usually tend to use different imaging modalities and protocols in their clinical routine. Thus, training a model with source data from one department may not be sufficient for application in another institution. Most adaptation strategies make use of target domain samples and often additionally incorporate the corresponding ground truths from the target domain during the training process. In contrast to these approaches, we investigate the possibility of training our model solely on source domain data sets, i.e. we apply zero-shot domain adaptation. To compensate the missing target domain data, we use prior knowledge about both modalities to steer the model towards more general features during the training process. We particularly make use of fixed Sobel kernels to enhance contour information and apply anatomical priors, learned separately by a convolutional autoencoder. Although we completely discard including the target domain in the training process, our proposed approach improves a vanilla U-Net implementation drastically and yields promising segmentation results.

Peritumoral and intratumoral radiomic features predict survival outcomes among patients diagnosed in lung cancer screening

  • Perez-Morales, J.
  • Tunali, I.
  • Stringfield, O.
  • Eschrich, S. A.
  • Balagurunathan, Y.
  • Gillies, R. J.
  • Schabath, M. B.
Scientific Reports 2020 Journal Article, cited 0 times
The National Lung Screening Trial (NLST) demonstrated that screening with low-dose computed tomography (LDCT) is associated with a 20% reduction in lung cancer mortality. One potential limitation of LDCT screening is overdiagnosis of slow-growing and indolent cancers. In this study, peritumoral and intratumoral radiomics was used to identify a vulnerable subset of lung cancer patients associated with poor survival outcomes. Incident lung cancer patients from the NLST were split into training and test cohorts, and an external cohort of non-screen-detected adenocarcinomas was used for further validation. After removing redundant and non-reproducible radiomics features, backward elimination analyses identified a single model, which was subjected to classification and regression tree analysis to stratify patients into three risk groups based on two radiomics features (NGTDM Busyness and Statistical Root Mean Square [RMS]). The final model was validated in the test cohort and the cohort of non-screen-detected adenocarcinomas. Using a radiogenomics dataset, Statistical RMS was significantly associated with the FOXF2 gene by both correlation and two-group analyses. Our rigorous approach generated a novel radiomics model that identified a vulnerable high-risk group of early-stage patients associated with poor outcomes. These patients may require aggressive follow-up and/or adjuvant therapy to mitigate their poor outcomes.

Efficient CT Image Reconstruction in a GPU Parallel Environment

  • Pérez, Tomás A Valencia
  • López, Javier M Hernández
  • Moreno-Barbosa, Eduardo
  • de Celis Alonso, Benito
  • Merino, Martín R Palomino
  • Meneses, Victor M Castaño
Tomography 2020 Journal Article, cited 0 times

Automated lung cancer diagnosis using three-dimensional convolutional neural networks

  • Perez, Gustavo
  • Arbelaez, Pablo
Med Biol Eng Comput 2020 Journal Article, cited 0 times
Lung cancer is the deadliest cancer worldwide. It has been shown that early detection using low-dose computer tomography (LDCT) scans can reduce deaths caused by this disease. We present a general framework for the detection of lung cancer in chest LDCT images. Our method consists of a nodule detector trained on the LIDC-IDRI dataset followed by a cancer predictor trained on the Kaggle DSB 2017 dataset and evaluated on the IEEE International Symposium on Biomedical Imaging (ISBI) 2018 Lung Nodule Malignancy Prediction test set. Our candidate extraction approach is effective in producing accurate candidates, with a recall of 99.6%. In addition, our false positive reduction stage classifies the candidates successfully and increases precision by a factor of 2000. Our cancer predictor obtained a ROC AUC of 0.913 and was ranked 1st place at the ISBI 2018 Lung Nodule Malignancy Prediction challenge.

A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing

  • Peng, Z.
  • Fang, X.
  • Yan, P.
  • Shan, H.
  • Liu, T.
  • Pei, X.
  • Wang, G.
  • Liu, B.
  • Kalra, M. K.
  • Xu, X. G.
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: One technical barrier to patient-specific computed tomography (CT) dosimetry has been the lack of computational tools for the automatic patient-specific multi-organ segmentation of CT images and rapid organ dose quantification. When previous CT images are available for the same body region of the patient, the ability to obtain patient-specific organ doses for CT - in a similar manner as radiation therapy treatment planning - will open the door to personalized and prospective CT scan protocols. This study aims to demonstrate the feasibility of combining deep-learning algorithms for automatic segmentation of multiple radiosensitive organs from CT images with the GPU-based Monte Carlo rapid organ dose calculation. METHODS: A deep convolutional neural network (CNN) based on the U-Net for organ segmentation is developed and trained to automatically delineate multiple radiosensitive organs from CT images. Two databases are used: The lung CT segmentation challenge 2017 (LCTSC) dataset that contains 60 thoracic CT scan patients, each consisting of five segmented organs, and the Pancreas-CT (PCT) dataset, which contains 43 abdominal CT scan patients each consisting of eight segmented organs. A fivefold cross-validation method is performed on both sets of data. Dice similarity coefficients (DSCs) are used to evaluate the segmentation performance against the ground truth. A GPU-based Monte Carlo dose code, ARCHER, is used to calculate patient-specific CT organ doses. The proposed method is evaluated in terms of relative dose errors (RDEs). To demonstrate the potential improvement of the new method, organ dose results are compared against those obtained for population-average patient phantoms used in an off-line dose reporting software, VirtualDose, at Massachusetts General Hospital. 
RESULTS: The median DSCs are found to be 0.97 (right lung), 0.96 (left lung), 0.92 (heart), 0.86 (spinal cord), 0.76 (esophagus) for the LCTSC dataset, along with 0.96 (spleen), 0.96 (liver), 0.95 (left kidney), 0.90 (stomach), 0.87 (gall bladder), 0.80 (pancreas), 0.75 (esophagus), and 0.61 (duodenum) for the PCT dataset. Comparing with organ dose results from population-averaged phantoms, the new patient-specific method achieved smaller absolute RDEs (mean +/- standard deviation) for all organs: 1.8% +/- 1.4% (vs 16.0% +/- 11.8%) for the lung, 0.8% +/- 0.7% (vs 34.0% +/- 31.1%) for the heart, 1.6% +/- 1.7% (vs 45.7% +/- 29.3%) for the esophagus, 0.6% +/- 1.2% (vs 15.8% +/- 12.7%) for the spleen, 1.2% +/- 1.0% (vs 18.1% +/- 15.7%) for the pancreas, 0.9% +/- 0.6% (vs 20.0% +/- 15.2%) for the left kidney, 1.7% +/- 3.1% (vs 19.1% +/- 9.8%) for the gallbladder, 0.3% +/- 0.3% (vs 24.2% +/- 18.7%) for the liver, and 1.6% +/- 1.7% (vs 19.3% +/- 13.6%) for the stomach. The trained automatic segmentation tool takes <5 s per patient for all 103 patients in the dataset. The Monte Carlo radiation dose calculations performed in parallel to the segmentation process using the GPU-accelerated ARCHER code take <4 s per patient to achieve <0.5% statistical uncertainty in all organ doses for all 103 patients in the database. CONCLUSION: This work shows the feasibility of performing combined automatic patient-specific multi-organ segmentation of CT images and rapid GPU-based Monte Carlo dose quantification with clinically acceptable accuracy and efficiency.

MRI and CT Identify Isocitrate Dehydrogenase (IDH)-Mutant Lower-Grade Gliomas Misclassified to 1p/19q Codeletion Status with Fluorescence in Situ Hybridization

  • Patel, Sohil H
  • Batchala, Prem P
  • Mrachek, E Kelly S
  • Lopes, Maria-Beatriz S
  • Schiff, David
  • Fadul, Camilo E
  • Patrie, James T
  • Jain, Rajan
  • Druzgal, T Jason
  • Williams, Eli S
Radiology 2020 Journal Article, cited 0 times
Background Fluorescence in situ hybridization (FISH) is a standard method for 1p/19q codeletion testing in diffuse gliomas but occasionally renders erroneous results. Purpose To determine whether MRI/CT analysis identifies isocitrate dehydrogenase (IDH)-mutant gliomas misassigned to 1p/19q codeletion status with FISH. Materials and Methods Data in patients with IDH-mutant lower-grade gliomas (World Health Organization grade II/III) and 1p/19q codeletion status determined with FISH that were accrued from January 1, 2010 to October 1, 2017, were included in this retrospective study. Two neuroradiologist readers analyzed the pre-resection MRI findings (and CT findings, when available) to predict 1p/19q status (codeleted or noncodeleted) and provided a prediction confidence score (1 = low, 2 = moderate, 3 = high). Percentage concordance between the consensus neuroradiologist 1p/19q prediction and the FISH result was calculated. For gliomas where (a) consensus neuroradiologist 1p/19q prediction differed from the FISH result and (b) consensus neuroradiologist confidence score was 2 or greater, further 1p/19q testing was performed with chromosomal microarray analysis (CMA). Nine control specimens were randomly chosen from the remaining study sample for CMA. Percentage concordance between FISH and CMA among the CMA-tested cases was calculated. Results A total of 112 patients (median age, 38 years [interquartile range, 31–51 years]; 57 men) were evaluated (112 gliomas). Percentage concordance between the consensus neuroradiologist 1p/19q prediction and the FISH result was 84.8% (95 of 112; 95% confidence interval: 76.8%, 90.9%). Among the 17 neuroradiologist-FISH discordances, there were nine gliomas associated with a consensus neuroradiologist confidence score of 2 or greater. In six (66.7%) of these nine gliomas, the 1p/19q codeletion status as determined with CMA disagreed with the FISH result and agreed with the consensus neuroradiologist prediction. 
For the nine control specimens, there was 100% agreement between CMA and FISH for 1p/19q determination. Conclusion MRI and CT analysis can identify diffuse gliomas misassigned to 1p/19q codeletion status with fluorescence in situ hybridization (FISH). Further molecular testing should be considered for gliomas with discordant neuroimaging and FISH results.

Radiomics risk score may be a potential imaging biomarker for predicting survival in isocitrate dehydrogenase wild-type lower-grade gliomas

  • Park, C. J.
  • Han, K.
  • Kim, H.
  • Ahn, S. S.
  • Choi, Y. S.
  • Park, Y. W.
  • Chang, J. H.
  • Kim, S. H.
  • Jain, R.
  • Lee, S. K.
Eur Radiol 2020 Journal Article, cited 0 times
OBJECTIVES: Isocitrate dehydrogenase wild-type (IDHwt) lower-grade gliomas of histologic grades II and III follow heterogeneous clinical outcomes, which necessitates risk stratification. We aimed to evaluate whether radiomics from MRI would allow prediction of overall survival in patients with IDHwt lower-grade gliomas and to investigate the added prognostic value of radiomics over clinical features. METHODS: Preoperative MRIs of 117 patients with IDHwt lower-grade gliomas from January 2007 to February 2018 were retrospectively analyzed. The external validation cohort consisted of 33 patients from The Cancer Genome Atlas. A total of 182 radiomic features were extracted. Radiomics risk scores (RRSs) for overall survival were derived from the least absolute shrinkage and selection operator (LASSO) and elastic net. Multivariable Cox regression analyses, including clinical features and RRSs, were performed. The integrated areas under the receiver operating characteristic curves (iAUCs) from models with and without RRSs were calculated for comparisons. The prognostic value of RRS was assessed in the validation cohort. RESULTS: The RRS derived from LASSO and elastic net independently predicted survival with hazard ratios of 9.479 (95% confidence interval [CI], 3.220-27.847) and 6.148 (95% CI, 3.009-12.563), respectively. Those RRSs enhanced model performance for predicting overall survival (iAUC increased to 0.780-0.797 from 0.726), which was externally validated. The RRSs stratified IDHwt lower-grade gliomas in the validation cohort with significantly different survival. CONCLUSION: Radiomics has the potential for noninvasive risk stratification and can improve prediction of overall survival in patients with IDHwt lower-grade gliomas when integrated with clinical features. KEY POINTS: * Isocitrate dehydrogenase wild-type lower-grade gliomas with histologic grades II and III follow heterogeneous clinical outcomes, which necessitates further risk stratification. 
* Radiomics risk scores derived from MRI independently predict survival even after incorporating strong clinical prognostic features (hazard ratios 6.148-9.479). * Radiomics risk scores derived from MRI have the potential to improve survival prediction when added to clinical features (integrated areas under the receiver operating characteristic curves increased from 0.726 to 0.780-0.797).

An expert system for brain tumor detection: Fuzzy C-means with super resolution and convolutional neural network with extreme learning machine

  • Ozyurt, F.
  • Sert, E.
  • Avci, D.
Med Hypotheses 2020 Journal Article, cited 10 times
Super-resolution, a prominent topic in recent research, increases image resolution to higher levels. Increasing the resolution of an information-rich image such as a brain magnetic resonance image (MRI) makes the important details it contains more visible and clearer, so tumor borders in the image can be delineated more successfully. In this study, a brain tumor detection approach based on fuzzy C-means with super-resolution and a convolutional neural network with an extreme learning machine (SR-FCM-CNN) is proposed. The aim of the study was to segment tumors with high performance by using the Super-Resolution Fuzzy C-Means (SR-FCM) approach for tumor detection in brain MR images. Afterward, features were extracted with the pretrained SqueezeNet convolutional neural network (CNN) architecture and classified with an extreme learning machine (ELM). The experiments showed that brain tumors were segmented and extracted more effectively with the SR-FCM method. Using the SqueezeNet architecture, features were extracted from a smaller neural network model with fewer parameters. The proposed method achieved a 98.33% accuracy rate in the diagnosis of brain tumors segmented with SR-FCM, which is 10% higher than the recognition rate for tumors segmented with fuzzy C-means (FCM) without super-resolution.
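As a rough illustration of the clustering step underlying SR-FCM, here is a minimal NumPy sketch of the standard fuzzy C-means updates (not the authors' SR-FCM-CNN implementation; the 1D "intensity" data, cluster count, fuzzifier m, and iteration budget are illustrative assumptions):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal standard fuzzy C-means; X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, X.shape[0]))
    u /= u.sum(axis=0)                        # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        # Centers: membership-weighted means of the samples
        centers = um @ X / um.sum(axis=1, keepdims=True)
        # Distances of every sample to every center, shape (c, n_samples)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                 # guard against zero distance
        # Standard membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)
    return centers, u

# Toy 1D "intensities": two clusters around 1.0 and 9.0
rng = np.random.default_rng(1)
X = np.concatenate([1.0 + 0.1 * rng.standard_normal(50),
                    9.0 + 0.1 * rng.standard_normal(50)]).reshape(-1, 1)
centers, u = fuzzy_c_means(X, c=2)
```

In an image-segmentation setting, X would hold pixel intensities (or feature vectors) of the super-resolved MR image, and the fuzzy memberships would be thresholded to obtain the tumor region.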

Optothermal tissue response for laser-based investigation of thyroid cancer

  • Okebiorun, Michael O.
  • ElGohary, Sherif H.
Informatics in Medicine Unlocked 2020 Journal Article, cited 0 times
To characterize imaging-based detection of thyroid cancer, we implemented a simulation of the optical and thermal response in an optical investigation of thyroid cancer. We employed the 3D Monte Carlo method and the bio-heat equation to determine the fluence and temperature distribution via the Molecular Optical Simulation Environment (MOSE) with a finite element (FE) simulator. The optothermal effect of a neck-surface source is also compared to a trachea-based source. Results show the fluence and temperature distribution in a realistic 3D neck model with both endogenous and hypothetical tissue-specific exogenous contrast agents. They also reveal that trachea illumination yields roughly ten times greater absorption and temperature change than neck-surface illumination, and that tumor-specific exogenous contrast agents produce relatively higher absorption and temperature change in the tumors; these findings could help clinicians and researchers improve and better understand the region's response to laser-based diagnosis.

Modified fast adaptive scatter kernel superposition (mfASKS) correction and its dosimetric impact on CBCT‐based proton therapy dose calculation

  • Nomura, Yusuke
  • Xu, Qiong
  • Peng, Hao
  • Takao, Seishin
  • Shimizu, Shinichi
  • Xing, Lei
  • Shirato, Hiroki
Medical physics 2020 Journal Article, cited 0 times

Image Quality Evaluation in Computed Tomography Using Super-resolution Convolutional Neural Network

  • Nm, Kibok
  • Cho, Jeonghyo
  • Lee, Seungwan
  • Kim, Burnyoung
  • Yim, Dobin
  • Lee, Dahye
Journal of the Korean Society of Radiology 2020 Journal Article, cited 0 times
High-quality computed tomography (CT) images enable precise lesion detection and accurate diagnosis. A lot of studies have been performed to improve CT image quality while reducing radiation dose. Recently, deep learning-based techniques for improving CT image quality have been developed and show superior performance compared to conventional techniques. In this study, a super-resolution convolutional neural network (SRCNN) model was used to improve the spatial resolution of CT images, and image quality according to the hyperparameters, which determine the performance of the SRCNN model, was evaluated in order to verify the effect of hyperparameters on the SRCNN model. Profile, structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and full-width at half-maximum (FWHM) were measured to evaluate the performance of the SRCNN model. The results showed that the performance of the SRCNN model was improved with an increase of the numbers of epochs and training sets, and the learning rate needed to be optimized for obtaining acceptable image quality. Therefore, the SRCNN model with optimal hyperparameters is able to improve CT image quality.

Homological radiomics analysis for prognostic prediction in lung cancer patients

  • Ninomiya, Kenta
  • Arimura, Hidetaka
Physica Medica 2020 Journal Article, cited 0 times

Integrative analysis of cross-modal features for the prognosis prediction of clear cell renal cell carcinoma

  • Ning, Zhenyuan
  • Pan, Weihao
  • Chen, Yuting
  • Xiao, Qing
  • Zhang, Xinsen
  • Luo, Jiaxiu
  • Wang, Jian
  • Zhang, Yuan
Bioinformatics 2020 Journal Article, cited 0 times
MOTIVATION: As a highly heterogeneous disease, clear cell renal cell carcinoma (ccRCC) has quite variable clinical behaviors. Prognostic biomarkers play a crucial role in stratifying patients suffering from ccRCC to avoid over- and under-treatment. Studies based on hand-crafted features and single-modal data have been widely conducted to predict the prognosis of ccRCC. However, these experience-dependent methods, neglecting the synergy among multimodal data, have limited capacity to perform accurate prediction. Inspired by the complementary information among multimodal data and the successful application of convolutional neural networks (CNNs) in medical image analysis, a novel framework was proposed to improve prediction performance. RESULTS: We proposed a cross-modal feature-based integrative framework, in which deep features extracted from computed tomography/histopathological images by using CNNs were combined with eigengenes generated from functional genomic data, to construct a prognostic model for ccRCC. Results showed that our proposed model can stratify high- and low-risk subgroups with significant difference (P-value < 0.05) and outperform the predictive performance of those models based on single-modality features in the independent testing cohort [C-index, 0.808 (0.728-0.888)]. In addition, we also explored the relationship between deep image features and eigengenes, and made an attempt to explain deep image features from the view of genomic data. Notably, the integrative framework is applicable to prognosis prediction of other cancers with matched multimodal data. AVAILABILITY AND IMPLEMENTATION: from=singlemessage. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi

  • Nemoto, Takafumi
  • Futakami, Natsumi
  • Yagi, Masamichi
  • Kumabe, Atsuhiro
  • Takeda, Atsuya
  • Kunieda, Etsuo
  • Shigematsu, Naoyuki
Journal of Radiation Research 2020 Journal Article, cited 0 times
This study aimed to examine the efficacy of semantic segmentation implemented by deep learning and to confirm whether this method is more effective than a commercially dominant auto-segmentation tool with regards to delineating normal lung excluding the trachea and main bronchi. A total of 232 non-small-cell lung cancer cases were examined. The computed tomography (CT) images of these cases were converted from Digital Imaging and Communications in Medicine (DICOM) Radiation Therapy (RT) formats to arrays of 32 x 128 x 128 voxels and input into both 2D and 3D U-Net, which are deep learning networks for semantic segmentation. The number of training, validation and test sets were 160, 40 and 32, respectively. Dice similarity coefficients (DSCs) of the test set were evaluated employing Smart Segmentation Knowledge Based Contouring (Smart segmentation is an atlas-based segmentation tool), as well as the 2D and 3D U-Net. The mean DSCs of the test set were 0.964 [95% confidence interval (CI), 0.960-0.968], 0.990 (95% CI, 0.989-0.992) and 0.990 (95% CI, 0.989-0.991) with Smart segmentation, 2D and 3D U-Net, respectively. Compared with Smart segmentation, both U-Nets presented significantly higher DSCs by the Wilcoxon signed-rank test (P < 0.01). There was no difference in mean DSC between the 2D and 3D U-Net systems. The newly-devised 2D and 3D U-Net approaches were found to be more effective than a commercial auto-segmentation tool. Even the relatively shallow 2D U-Net which does not require high-performance computational resources was effective enough for the lung segmentation. Semantic segmentation using deep learning was useful in radiation treatment planning for lung cancers.

Adding features from the mathematical model of breast cancer to predict the tumour size

  • Nave, Ophir
International Journal of Computer Mathematics: Computer Systems Theory 2020 Journal Article, cited 0 times
In this study, we combine a theoretical mathematical model with machine learning (ML) to predict tumour sizes in breast cancer. Our study is based on clinical data from 1869 women of various ages with breast cancer. To accurately predict tumour size for each woman individually, we solved our customized mathematical model for each woman, then added the solution vector of the dynamic variables in the model (in machine learning language, these are called features) to the clinical data and used a variety of machine learning algorithms. We compared the results obtained with and without the mathematical model and showed that by adding specific features from the mathematical model we were able to better predict tumour size for each woman.

Regularized Three-Dimensional Generative Adversarial Nets for Unsupervised Metal Artifact Reduction in Head and Neck CT Images

  • Nakao, Megumi
  • Imanishi, Keiho
  • Ueda, Nobuhiro
  • Imai, Yuichiro
  • Kirita, Tadaaki
  • Matsuda, Tetsuya
IEEE Access 2020 Journal Article, cited 1 times
The reduction of metal artifacts in computed tomography (CT) images, specifically for strong artifacts generated from multiple metal objects, is a challenging issue in medical imaging research. Although there have been some studies on supervised metal artifact reduction through the learning of synthesized artifacts, it is difficult for simulated artifacts to cover the complexity of the real physical phenomena that may be observed in X-ray propagation. In this paper, we introduce metal artifact reduction methods based on an unsupervised volume-to-volume translation learned from clinical CT images. We construct three-dimensional adversarial nets with a regularized loss function designed for metal artifacts from multiple dental fillings. The results of experiments using a CT volume database of 361 patients demonstrate that the proposed framework has an outstanding capacity to reduce strong artifacts and to recover underlying missing voxels, while preserving the anatomical features of soft tissues and tooth structures from the original images.

Reciprocal change in Glucose metabolism of Cancer and Immune Cells mediated by different Glucose Transporters predicts Immunotherapy response

  • Na, Kwon Joong
  • Choi, Hongyoon
  • Oh, Ho Rim
  • Kim, Yoon Ho
  • Lee, Sae Bom
  • Jung, Yoo Jin
  • Koh, Jaemoon
  • Park, Samina
  • Lee, Hyun Joo
  • Jeon, Yoon Kyung
  • Chung, Doo Hyun
  • Paeng, Jin Chul
  • Park, In Kyu
  • Kang, Chang Hyun
  • Cheon, Gi Jeong
  • Kang, Keon Wook
  • Lee, Dong Soo
  • Kim, Young Tae
THERANOSTICS 2020 Journal Article, cited 0 times
The metabolic properties of the tumor microenvironment (TME) are dynamically dysregulated to achieve immune escape and promote cancer cell survival. However, the in vivo properties of glucose metabolism in cancer and immune cells are poorly understood, and their clinical application to the development of a biomarker reflecting immune functionality is still lacking. Methods: We analyzed RNA-seq and fluorodeoxyglucose (FDG) positron emission tomography profiles of 63 lung squamous cell carcinoma (LUSC) specimens to correlate FDG uptake, expression of glucose transporters (GLUT) by RNA-seq, and immune cell enrichment score (ImmuneScore). Single-cell RNA-seq analysis in five lung cancer specimens was performed. We tested the GLUT3/GLUT1 ratio, the GLUT-ratio, as a surrogate representing immune metabolic functionality by investigating the association with immunotherapy response in two melanoma cohorts. Results: ImmuneScore showed a negative correlation with GLUT1 (r = -0.70, p < 0.01) and a positive correlation with GLUT3 (r = 0.39, p < 0.01) in LUSC. Single-cell RNA-seq showed GLUT1 and GLUT3 were mostly expressed in cancer and immune cells, respectively. In immune-poor LUSC, FDG uptake was positively correlated with GLUT1 (r = 0.27, p = 0.04) and negatively correlated with ImmuneScore (r = -0.28, p = 0.04). In immune-rich LUSC, FDG uptake was positively correlated with both GLUT3 (r = 0.78, p = 0.01) and ImmuneScore (r = 0.58, p = 0.10). The GLUT-ratio was higher in anti-PD1 responders than nonresponders (p = 0.08 for baseline; p = 0.02 for on-treatment) and associated with progression-free survival in melanoma patients who were treated with anti-CTLA4 (p = 0.04). Conclusions: Competitive uptake of glucose by cancer and immune cells in the TME could be mediated by differential GLUT expression in these cells.

A shallow convolutional neural network predicts prognosis of lung cancer patients in multi-institutional computed tomography image datasets

  • Mukherjee, Pritam
  • Zhou, Mu
  • Lee, Edward
  • Schicht, Anne
  • Balagurunathan, Yoganand
  • Napel, Sandy
  • Gillies, Robert
  • Wong, Simon
  • Thieme, Alexander
  • Leung, Ann
  • Gevaert, Olivier
Nature Machine Intelligence 2020 Journal Article, cited 0 times
Lung cancer is the most common fatal malignancy in adults worldwide, and non-small-cell lung cancer (NSCLC) accounts for 85% of lung cancer diagnoses. Computed tomography is routinely used in clinical practice to determine lung cancer treatment and assess prognosis. Here, we developed LungNet, a shallow convolutional neural network for predicting outcomes of patients with NSCLC. We trained and evaluated LungNet on four independent cohorts of patients with NSCLC from four medical centres: Stanford Hospital (n = 129), H. Lee Moffitt Cancer Center and Research Institute (n = 185), MAASTRO Clinic (n = 311) and Charité – Universitätsmedizin, Berlin (n = 84). We show that outcomes from LungNet are predictive of overall survival in all four independent survival cohorts as measured by concordance indices of 0.62, 0.62, 0.62 and 0.58 on cohorts 1, 2, 3 and 4, respectively. Furthermore, the survival model can be used, via transfer learning, for classifying benign versus malignant nodules on the Lung Image Database Consortium (n = 1,010), with improved performance (AUC = 0.85) versus training from scratch (AUC = 0.82). LungNet can be used as a non-invasive predictor for prognosis in patients with NSCLC and can facilitate interpretation of computed tomography images for lung cancer stratification and prognostication.

CT-based Radiomic Signatures for Predicting Histopathologic Features in Head and Neck Squamous Cell Carcinoma

  • Mukherjee, Pritam
  • Cintra, Murilo
  • Huang, Chao
  • Zhou, Mu
  • Zhu, Shankuan
  • Colevas, A Dimitrios
  • Fischbein, Nancy
  • Gevaert, Olivier
Radiol Imaging Cancer 2020 Journal Article, cited 0 times
Purpose: To determine the performance of CT-based radiomic features for noninvasive prediction of histopathologic features of tumor grade, extracapsular spread, perineural invasion, lymphovascular invasion, and human papillomavirus status in head and neck squamous cell carcinoma (HNSCC). Materials and Methods: In this retrospective study, which was approved by the local institutional ethics committee, CT images and clinical data from patients with pathologically proven HNSCC from The Cancer Genome Atlas (n = 113) and an institutional test cohort (n = 71) were analyzed. A machine learning model was trained with 2131 extracted radiomic features to predict tumor histopathologic characteristics. In the model, principal component analysis was used for dimensionality reduction, and regularized regression was used for classification. Results: The trained radiomic model demonstrated moderate capability of predicting HNSCC features. In the training cohort and the test cohort, the model achieved a mean area under the receiver operating characteristic curve (AUC) of 0.75 (95% confidence interval [CI]: 0.68, 0.81) and 0.66 (95% CI: 0.45, 0.84), respectively, for tumor grade; a mean AUC of 0.64 (95% CI: 0.55, 0.62) and 0.70 (95% CI: 0.47, 0.89), respectively, for perineural invasion; a mean AUC of 0.69 (95% CI: 0.56, 0.81) and 0.65 (95% CI: 0.38, 0.87), respectively, for lymphovascular invasion; a mean AUC of 0.77 (95% CI: 0.65, 0.88) and 0.67 (95% CI: 0.15, 0.80), respectively, for extracapsular spread; and a mean AUC of 0.71 (95% CI: 0.29, 1.0) and 0.80 (95% CI: 0.65, 0.92), respectively, for human papillomavirus status. Conclusion: Radiomic CT models have the potential to predict characteristics typically identified on pathologic assessment of HNSCC.Supplemental material is available for this article.(c) RSNA, 2020.

Prediction of Non-small Cell Lung Cancer Histology by a Deep Ensemble of Convolutional and Bidirectional Recurrent Neural Network

  • Moitra, Dipanjan
  • Mandal, Rakesh Kumar
Journal of Digital Imaging 2020 Journal Article, cited 0 times

Classification of non-small cell lung cancer using one-dimensional convolutional neural network

  • Moitra, Dipanjan
  • Kr. Mandal, Rakesh
Expert Systems with Applications 2020 Journal Article, cited 0 times
Non-Small Cell Lung Cancer (NSCLC) is a major lung cancer type. Proper diagnosis depends mainly on tumor staging and grading. Pathological prognosis often faces problems because of the limited availability of tissue samples. Machine learning methods may play a vital role in such cases. 2D or 3D Deep Neural Networks (DNNs) have been the predominant technology in this domain. Contemporary studies tried to classify NSCLC tumors as benign or malignant. The application of 1D CNNs in automated staging and grading of NSCLC is not very frequent. The aim of the present study is to develop a 1D CNN model for automated staging and grading of NSCLC. The updated NSCLC Radiogenomics Collection from The Cancer Imaging Archive (TCIA) was used in the study. The segmented tumor images were fed into a hybrid feature detection and extraction model (MSER-SURF). The extracted features were combined with the clinical TNM stage and histopathological grade information and fed into the 1D CNN model. The performance of the proposed CNN model was satisfactory. The accuracy and ROC-AUC score were higher than those of other leading machine learning methods. The study also compared favorably with state-of-the-art studies. The proposed model shows that a 1D CNN is as useful in NSCLC prediction as a conventional 2D/3D CNN model. The model may be further refined by carrying out experiments with varied hyper-parameters. Further studies may be conducted by considering semi-supervised or unsupervised learning techniques.

Brain image classification by the combination of different wavelet transforms and support vector machine classification

  • Mishra, Shailendra Kumar
  • Deepthi, V. Hima
Journal of Ambient Intelligence and Humanized Computing 2020 Journal Article, cited 0 times
The human brain is the primary organ, and it is located in the centre of the nervous system in the human body. The abnormal cells in the brain are known as a brain tumor. The tumor in the brain does not spread to the other parts of the human body. Early diagnosis of brain tumor is required. In this work, an efficient technique is presented for magnetic resonance imaging (MRI) brain image classification using different wavelet transforms like discrete wavelet transform (DWT), stationary wavelet transform (SWT) and dual tree M-band wavelet transform (DMWT) for feature extraction and selection of coefficients and support vector machine classifier is used for classification. The normal and abnormal MRI brain image features are decomposed by DWT, SWT and DMWT. The coefficients of sub-bands are selected by rank features for the classification. Results show that DWT, SWT and DMWT produce 98% accuracy for the MRI brain classification system.

“One Stop Shop” for Prostate Cancer Staging using Imaging Biomarkers and Spatially Registered Multi-Parametric MRI

  • Mayer, Rulon
2020 Patent, cited 0 times

A quantitative validation of segmented colon in virtual colonoscopy using image moments

  • Manjunath, K. N.
  • Prabhu, G. K.
  • Siddalingaswamy, P. C.
Biomedical Journal 2020 Journal Article, cited 1 times
Background: Evaluation of the segmented colon is one of the challenges in Computed Tomography Colonography (CTC). The objective of the study was to measure the segmented colon accurately using image processing techniques. Methods: This was a retrospective study, and Institutional Ethical clearance was obtained for the secondary dataset. The technique was tested on 85 CTC datasets. CTC datasets of 100-120 kVp, 100 mA, and slice thicknesses (ST) of 1.25 and 2.5 mm were used for empirical testing. The initial results of the work appear in the conference proceedings. After colon segmentation, three distance measurement techniques and one volumetric overlap computation were applied in Euclidean space, in which the distances were measured on MPR views of the segmented and unsegmented colons and the volumetric overlap was calculated between these two volumes. Results: The key finding was that the measurements on both the segmented and the unsegmented volumes remained the same, with no notable difference; this was proved statistically. The results were validated quantitatively on 2D MPR images. An accuracy of 95.265 ± 0.4551% was achieved through volumetric overlap computation. Through a paired t-test at alpha = 5%, the statistical values were p = 0.6769 and t = 0.4169, which indicate that there was no significant difference. Conclusion: A combination of different validation techniques was applied to check the robustness of the colon segmentation method, and good results were achieved with this approach. Through quantitative validation, the results were accepted at alpha = 5%.

Metal Artifacts Reduction in CT Scans using Convolutional Neural Network with Ground Truth Elimination

  • Mai, Q.
  • Wan, J. W. L.
Annu Int Conf IEEE Eng Med Biol Soc 2020 Journal Article, cited 0 times
Metal artifacts are very common in CT scans, since metal insertion or replacement is performed to enhance certain functions or mechanisms of a patient's body. These streak artifacts can severely degrade CT image quality and, consequently, influence a clinician's diagnosis. Many existing supervised learning methods approaching this problem assume the availability of clean image data, free of metal artifacts, at the part with the metal implant. However, in clinical practice, such clean images do not usually exist, so the existing supervised learning-based methods are not clinically applicable. We focus on reducing the streak artifacts in hip scans and propose a convolutional neural network-based method that eliminates the need for clean images of the implant region during model training. The idea is to use scans of the parts near the hip for model training. Our method is able to suppress the artifacts in corrupted images, greatly improve image quality, and preserve the details of surrounding tissues, without using any clean hip scans. We apply our method to clinical CT hip scans from multiple patients and obtain artifact-free images with high image quality.

Radiogenomics correlation between MR imaging features and mRNA-based subtypes in lower-grade glioma

  • Liu, Zhenyin
  • Zhang, Jing
BMC Neurology 2020 Journal Article, cited 0 times
To investigate associations between lower-grade glioma (LGG) mRNA-based subtypes (R1-R4) and MR features.

Radiomics-based prediction of survival in patients with head and neck squamous cell carcinoma based on pre- and post-treatment (18)F-PET/CT

  • Liu, Z.
  • Cao, Y.
  • Diao, W.
  • Cheng, Y.
  • Jia, Z.
  • Peng, X.
Aging (Albany NY) 2020 Journal Article, cited 0 times
BACKGROUND: 18-fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-PET/CT) has been widely applied for the imaging of head and neck squamous cell carcinoma (HNSCC). This study examined whether pre- and post-treatment (18)F-PET/CT features can help predict the survival of HNSCC patients. RESULTS: Three radiomics features were identified as prognostic factors. The radiomics score calculated from these features significantly predicted overall survival (OS) and disease-free survival (DFS). The clinicopathological characteristics combined with pre- or post-treatment nomograms showed better ROC curves and decision curves than the nomogram based only on clinicopathological characteristics. CONCLUSIONS: Combining clinicopathological characteristics with radiomics features of pre-treatment PET/CT or post-treatment PET/CT assessment of primary tumor sites as positive or negative may substantially improve prediction of OS and DFS of HNSCC patients. METHODS: 171 patients with HNSCC who received pre-treatment (18)F-PET/CT scans and 154 patients who received post-treatment (18)F-PET/CT scans in The Cancer Imaging Archive (TCIA) were included. Nomograms that combined clinicopathological features with either pre-treatment PET/CT radiomics features or post-treatment assessment of primary tumor sites were constructed using data from 154 HNSCC patients. Receiver operating characteristic (ROC) curves and decision curves were used to compare the predictions of these models with those of a model incorporating only clinicopathological features.

Context aware deep learning for brain tumor segmentation, subtype classification, and survival prediction using radiology images

  • Pei, Linmin
  • Vidyaratne, Lasitha
  • Rahman, Md Monibor
  • Iftekharuddin, Khan M
Scientific Reports (Nature Publisher Group) 2020 Journal Article, cited 0 times

MRI-based radiogenomics analysis for predicting genetic alterations in oncogenic signalling pathways in invasive breast carcinoma

  • Lin, P
  • Liu, WK
  • Li, X
  • Wan, D
  • Qin, H
  • Li, Q
  • Chen, G
  • He, Y
  • Yang, H
Clinical Radiology 2020 Journal Article, cited 0 times

Three-dimensional steerable discrete cosine transform with application to 3D image compression

  • Lima, Verusca S.
  • Madeiro, Francisco
  • Lima, Juliano B.
Multidimensional Systems and Signal Processing 2020 Journal Article, cited 0 times
This work introduces the three-dimensional steerable discrete cosine transform (3D-SDCT), which is obtained from the relationship between the discrete cosine transform (DCT) and the graph Fourier transform of a signal on a path graph. It employs the fact that the basis vectors of the 3D-DCT constitute a possible eigenbasis for the Laplacian of the product of such graphs. The proposed transform employs a rotated version of the 3D-DCT basis. We then evaluate the applicability of the 3D-SDCT in the field of 3D medical image compression. We consider the case where we have only one pair of rotation angles per block, rotating all the 3D-DCT basis vectors by the same pair. The obtained results show that the 3D-SDCT can be efficiently used in the referred application scenario, and it outperforms the classical 3D-DCT.
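The DCT/graph relationship this abstract builds on can be checked numerically in one dimension: the DCT-II basis vectors are eigenvectors of the combinatorial Laplacian of a path graph. The following is an independent illustrative sketch of that fact, not the authors' 3D-SDCT implementation:

```python
import math

def path_laplacian(n):
    # Combinatorial Laplacian L = D - A of the path graph on n vertices.
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 2.0 if 0 < i < n - 1 else 1.0
        if i > 0:
            L[i][i - 1] = -1.0
        if i < n - 1:
            L[i][i + 1] = -1.0
    return L

def dct2_vector(k, n):
    # k-th (unnormalized) DCT-II basis vector of length n.
    return [math.cos(math.pi * k * (i + 0.5) / n) for i in range(n)]

n = 8
L = path_laplacian(n)
max_err = 0.0
for k in range(n):
    v = dct2_vector(k, n)
    lam = 4.0 * math.sin(math.pi * k / (2 * n)) ** 2  # expected eigenvalue
    Lv = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
    max_err = max(max_err, max(abs(Lv[i] - lam * v[i]) for i in range(n)))
# max_err is numerically zero: each DCT-II vector satisfies L v = lambda v.
```

In 3D, the same identity holds for the Laplacian of the Cartesian product of three path graphs, which is what lets the 3D-DCT basis be treated (and rotated) as a graph Fourier basis.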

Influence of feature calculating parameters on the reproducibility of CT radiomic features: a thoracic phantom study

  • Li, Ying
  • Tan, Guanghua
  • Vangel, Mark
  • Hall, Jonathan
  • Cai, Wenli
Quantitative Imaging in Medicine and Surgery 2020 Journal Article, cited 0 times

The Impact of Obesity on Tumor Glucose Uptake in Breast and Lung Cancer

  • Leitner, Brooks P.
  • Perry, Rachel J.
JNCI Cancer Spectrum 2020 Journal Article, cited 0 times
Obesity confers an increased incidence and poorer clinical prognosis in over ten cancer types. Paradoxically, obesity provides protection from poor outcomes in lung cancer. Mechanisms for the obesity-cancer links are not fully elucidated, with altered glucose metabolism being a promising candidate. Using 18F-fluorodeoxyglucose positron-emission-tomography/computed-tomography images from The Cancer Imaging Archive, we explored the relationship between body mass index (BMI) and glucose metabolism in several cancers. In 188 patients (BMI: 27.7, SD = 5.1, range = 17.4-49.3 kg/m2), higher BMI was associated with greater tumor glucose uptake in obesity-associated breast cancer (r = 0.36, p = 0.02), and with lower tumor glucose uptake in non-small-cell lung cancer (r = -0.26, p = 0.048), using two-sided Pearson correlations. No relationship was observed in soft tissue sarcoma or squamous cell carcinoma. Harnessing The National Cancer Institute's open-access database, we demonstrate altered tumor glucose metabolism as a potential mechanism for the detrimental and protective effects of obesity on breast and lung cancer, respectively.
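The Pearson correlations reported above are straightforward to reproduce on one's own data. The sketch below uses invented BMI/tumor-uptake pairs purely for illustration; the two-sided significance test (a t-test with n-2 degrees of freedom) is omitted:

```python
def pearson_r(x, y):
    # Pearson product-moment correlation coefficient of two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical BMI (kg/m2) and tumor glucose uptake values, illustration only.
bmi = [22.1, 25.4, 28.0, 31.3, 34.7, 38.2]
suv = [3.1, 3.8, 4.0, 4.9, 5.6, 6.2]
r = pearson_r(bmi, suv)  # strongly positive for this made-up sample
```

On real cohorts one would typically use a library routine (e.g. a SciPy correlation function) to obtain the p-value as well.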

Integrative Radiogenomics Approach for Risk Assessment of Post-Operative Metastasis in Pathological T1 Renal Cell Carcinoma: A Pilot Retrospective Cohort Study

  • Lee, H. W.
  • Cho, H. H.
  • Joung, J. G.
  • Jeon, H. G.
  • Jeong, B. C.
  • Jeon, S. S.
  • Lee, H. M.
  • Nam, D. H.
  • Park, W. Y.
  • Kim, C. K.
  • Seo, S. I.
  • Park, H.
Cancers (Basel) 2020 Journal Article, cited 0 times
Despite the increasing incidence of pathological stage T1 renal cell carcinoma (pT1 RCC), postoperative distant metastases develop in many surgically treated patients, causing death in certain cases. Therefore, this study aimed to create a radiomics model using imaging features from multiphase computed tomography (CT) to more accurately predict the postoperative metastasis of pT1 RCC and further investigate the possible link between radiomics parameters and gene expression profiles generated by whole transcriptome sequencing (WTS). Four radiomic features, including the minimum value of a histogram feature from inner regions of interest (ROIs) (INNER_Min_hist), the histogram of the energy feature from outer ROIs (OUTER_Energy_Hist), the maximum probability of gray-level co-occurrence matrix (GLCM) feature from inner ROIs (INNER_MaxProb_GLCM), and the ratio of voxels under 80 Hounsfield units (HU) in the nephrographic phase of postcontrast CT (Under80HURatio), were detected to predict the postsurgical metastasis of patients with pathological stage T1 RCC, and the clinical outcomes of patients could be successfully stratified based on their radiomic risk scores. Furthermore, we identified heterogenous-trait-associated gene signatures correlated with these four radiomic features, which captured clinically relevant molecular pathways, tumor immune microenvironment, and potential treatment strategies. These accurate radiogenomic surrogates could help identify pT1 RCC patients likely to gain additional benefit from adjuvant therapy against postsurgical metastasis.

XGBoost Improves Classification of MGMT Promoter Methylation Status in IDH1 Wildtype Glioblastoma

  • Le, N. Q. K.
  • Do, D. T.
  • Chiu, F. Y.
  • Yapp, E. K. Y.
  • Yeh, H. Y.
  • Chen, C. Y.
J Pers Med 2020 Journal Article, cited 1 times
Approximately 96% of patients with glioblastomas (GBM) have IDH1 wildtype GBMs, characterized by extremely poor prognosis, partly due to resistance to standard temozolomide treatment. O6-Methylguanine-DNA methyltransferase (MGMT) promoter methylation status is a crucial prognostic biomarker for alkylating chemotherapy resistance in patients with GBM. However, MGMT methylation status identification methods, where the tumor tissue is often undersampled, are time consuming and expensive. Currently, presurgical noninvasive imaging methods are used to identify biomarkers to predict MGMT methylation status. We evaluated a novel radiomics-based eXtreme Gradient Boosting (XGBoost) model to identify MGMT promoter methylation status in patients with IDH1 wildtype GBM. This retrospective study enrolled 53 patients with pathologically proven GBM and tested MGMT methylation and IDH1 status. Radiomics features were extracted from multimodality MRI and tested by F-score analysis to identify important features to improve our model. We identified nine radiomics features that reached an area under the curve of 0.896, which outperformed other classifiers reported previously. These features could be important biomarkers for identifying MGMT methylation status in IDH1 wildtype GBM. The combination of radiomics feature extraction and F-score feature selection significantly improved the performance of the XGBoost model, which may have implications for patient stratification and therapeutic strategy in GBM.
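The F-score analysis mentioned above ranks features by between-class separation. One common definition (the Fisher score: squared mean difference over the sum of class variances) is sketched below on invented feature values; this illustrates the selection criterion, not the authors' pipeline:

```python
def fisher_score(pos, neg):
    # Fisher score of a single feature across two classes:
    # (mean difference)^2 / (sum of sample variances). Higher = more discriminative.
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    return (mean(pos) - mean(neg)) ** 2 / (var(pos) + var(neg))

# Hypothetical radiomics feature values for methylated vs. unmethylated tumors.
methylated    = [0.82, 0.91, 0.78, 0.88, 0.85]
unmethylated  = [0.41, 0.39, 0.52, 0.47, 0.44]
# A second, uninformative feature: the two classes overlap heavily.
noise_meth    = [0.50, 0.61, 0.38, 0.59, 0.45]
noise_unmeth  = [0.52, 0.40, 0.63, 0.44, 0.58]

informative = fisher_score(methylated, unmethylated)      # large score
uninformative = fisher_score(noise_meth, noise_unmeth)    # near zero
```

Features are then sorted by score and the top-ranked subset is fed to the classifier.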

A Hybrid End-to-End Approach Integrating Conditional Random Fields into CNNs for Prostate Cancer Detection on MRI

  • Lapa, Paulo
  • Castelli, Mauro
  • Gonçalves, Ivo
  • Sala, Evis
  • Rundo, Leonardo
Applied Sciences 2020 Journal Article, cited 0 times

Medical image segmentation using modified fuzzy c mean based clustering

  • Kumar, Dharmendra
  • Solanki, Anil Kumar
  • Ahlawat, Anil
  • Malhotra, Sukhnandan
2020 Conference Proceedings, cited 0 times
Locating the diseased area in medical images is one of the most challenging tasks in the field of image segmentation. This paper presents a new approach to image segmentation using modified fuzzy c-means (MFCM) clustering. Because medical images are often poorly illuminated, the input image is first enhanced using the histogram equalization (HE) technique. The enhanced image is then segmented into regions using the MFCM-based approach. Local information is employed in the objective function of MFCM to overcome the issue of noise sensitivity, after which the membership partition is improved by fast membership filtering. In experiments, the proposed scheme produces suitable results in terms of various evaluation parameters.
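The standard fuzzy c-means updates on which MFCM builds can be sketched for scalar intensities. This is a plain FCM sketch; the paper's modification (local spatial information in the objective and fast membership filtering) is not reproduced here:

```python
def fcm_1d(data, c, m=2.0, iters=50):
    # Plain fuzzy c-means on scalar intensities with fuzzifier m.
    # Alternates membership and center updates from evenly spaced initial centers.
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * i / (c - 1) for i in range(c)]
    for _ in range(iters):
        # Membership update: u[i][j] in [0, 1]; each pixel's memberships sum to 1.
        u = [[1.0 / sum(((abs(x - ci) or 1e-12) / (abs(x - ck) or 1e-12))
                        ** (2.0 / (m - 1.0)) for ck in centers)
              for x in data] for ci in centers]
        # Center update: fuzzily weighted mean of the data.
        centers = [sum(uij ** m * x for uij, x in zip(row, data)) /
                   sum(uij ** m for uij in row) for row in u]
    return sorted(centers)

# Two well-separated intensity clusters (illustrative pixel values).
pixels = [10, 11, 12, 9, 10, 50, 52, 51, 49, 50]
centers = fcm_1d(pixels, c=2)  # converges near the two cluster means
```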

Impact of internal target volume definition for pencil beam scanned proton treatment planning in the presence of respiratory motion variability for lung cancer: A proof of concept

  • Krieger, Miriam
  • Giger, Alina
  • Salomir, Rares
  • Bieri, Oliver
  • Celicanin, Zarko
  • Cattin, Philippe C
  • Lomax, Antony J
  • Weber, Damien C
  • Zhang, Ye
Radiotherapy and Oncology 2020 Journal Article, cited 0 times

Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on (18)F FDG-PET/CT

  • Koyasu, S.
  • Nishio, M.
  • Isoda, H.
  • Nakamoto, Y.
  • Togashi, K.
Ann Nucl Med 2020 Journal Article, cited 3 times
OBJECTIVE: To develop and evaluate a radiomics approach for classifying histological subtypes and epidermal growth factor receptor (EGFR) mutation status in lung cancer on PET/CT images. METHODS: PET/CT images of lung cancer patients were obtained from public databases and used to establish two datasets, respectively to classify histological subtypes (156 adenocarcinomas and 32 squamous cell carcinomas) and EGFR mutation status (38 mutant and 100 wild-type samples). Seven types of imaging features were obtained from PET/CT images of lung cancer. Two types of machine learning algorithms were used to predict histological subtypes and EGFR mutation status: random forest (RF) and gradient tree boosting (XGB). The classifiers used either a single type or multiple types of imaging features. In the latter case, the optimal combination of the seven types of imaging features was selected by Bayesian optimization. Receiver operating characteristic analysis, area under the curve (AUC), and tenfold cross validation were used to assess the performance of the approach. RESULTS: In the classification of histological subtypes, the AUC values of the various classifiers were as follows: RF, single type: 0.759; XGB, single type: 0.760; RF, multiple types: 0.720; XGB, multiple types: 0.843. In the classification of EGFR mutation status, the AUC values were: RF, single type: 0.625; XGB, single type: 0.617; RF, multiple types: 0.577; XGB, multiple types: 0.659. CONCLUSIONS: The radiomics approach to PET/CT images, together with XGB and Bayesian optimization, is useful for classifying histological subtypes and EGFR mutation status in lung cancer.
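The AUC values compared in this abstract can be computed without plotting an ROC curve, via the Mann-Whitney formulation: AUC is the probability that a randomly chosen positive case is scored above a randomly chosen negative case. The scores below are invented for illustration:

```python
def auc(scores_pos, scores_neg):
    # Area under the ROC curve via the Mann-Whitney U statistic:
    # fraction of (positive, negative) pairs ranked correctly, ties count 1/2.
    wins = sum((p > q) + 0.5 * (p == q)
               for p in scores_pos for q in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores (illustration only).
adeno = [0.9, 0.8, 0.75, 0.6, 0.55]  # e.g. adenocarcinoma as positive class
squam = [0.7, 0.4, 0.35, 0.2]        # squamous cell carcinoma as negative class
a = auc(adeno, squam)  # 18 of 20 pairs correctly ranked -> 0.9
```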

A Quantum-Inspired Self-Supervised Network model for automatic segmentation of brain MR images

  • Konar, Debanjan
  • Bhattacharyya, Siddhartha
  • Gandhi, Tapan Kr
  • Panigrahi, Bijaya Ketan
Applied Soft Computing 2020 Journal Article, cited 1 times
The classical self-supervised neural network architectures suffer from a slow convergence problem, and incorporating quantum computing into classical self-supervised networks is a potential solution. In this article, a fully self-supervised novel quantum-inspired neural network model, referred to as the Quantum-Inspired Self-Supervised Network (QIS-Net), is proposed and tailored for fully automatic segmentation of brain MR images to obviate the challenges faced by deeply supervised Convolutional Neural Network (CNN) architectures. The proposed QIS-Net architecture is composed of three layers of quantum neurons (input, intermediate, and output) expressed as qubits. The intermediate and output layers of the QIS-Net architecture are inter-linked through bi-directional propagation of quantum states, wherein the image pixel intensities (quantum bits) are self-organized between these two layers without any external supervision or training. Quantum observation allows the true output to be obtained once the superimposed quantum states interact with the external environment. The proposed self-supervised quantum-inspired network model has been tailored for and tested on Dynamic Susceptibility Contrast (DSC) brain MR images from Nature data sets for detecting complete tumor, and it reports promising accuracy and reasonable dice similarity scores in comparison with unsupervised Fuzzy C-Means clustering, self-trained QIBDS Net, Opti-QIBDS Net, deeply supervised U-Net, and Fully Convolutional Neural Networks (FCNNs).

Design and evaluation of an accurate CNR-guided small region iterative restoration-based tumor segmentation scheme for PET using both simulated and real heterogeneous tumors

  • Koç, Alpaslan
  • Güveniş, Albert
Med Biol Eng Comput 2020 Journal Article, cited 0 times
Tumor delineation accuracy directly affects the effectiveness of radiotherapy. This study presents a methodology that minimizes potential errors during the automated segmentation of tumors in PET images. Iterative blind deconvolution was implemented in a region of interest encompassing the tumor with the number of iterations determined from contrast-to-noise ratios. The active contour and random forest classification-based segmentation method was evaluated using three distinct image databases that included both synthetic and real heterogeneous tumors. Ground truths about tumor volumes were known precisely. The volumes of the tumors were in the range of 0.49-26.34 cm(3), 0.64-1.52 cm(3), and 40.38-203.84 cm(3), respectively. Widely available software tools, namely, MATLAB, MIPAV, and ITK-SNAP were utilized. When using the active contour method, image restoration reduced mean errors in volume estimation from 95.85 to 3.37%, from 815.63 to 17.45%, and from 32.61 to 6.80% for the three datasets. The accuracy gains were higher using datasets that include smaller tumors for which PVE is known to be more predominant. Computation time was reduced by a factor of about 10 in the smaller deconvolution region. Contrast-to-noise ratios were improved for all tumors in all data. The presented methodology has the potential to improve delineation accuracy, in particular for smaller tumors, at practically feasible computational times. Graphical abstract: Evaluation of accurate lesion volumes using the CNR-guided and ROI-based restoration method for PET images.

PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines

  • Kiser, Kendall J
  • Ahmed, Sara
  • Stieb, Sonja
  • Mohamed, Abdallah S R
  • Elhalawani, Hesham
  • Park, Peter Y S
  • Doyle, Nathan S
  • Wang, Brandon J
  • Barman, Arko
  • Li, Zhao
  • Zheng, W Jim
  • Fuller, Clifton D
  • Giancardo, Luca
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations we have annotated on 402 CT scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. ACQUISITION AND VALIDATION METHODS: Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four-hundred-two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist and radiologist corrections was acceptable. DATA FORMAT AND USAGE NOTES: All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. POTENTIAL APPLICATIONS: Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs - where current automated algorithms struggle most. In conjunction with gross tumor volume segmentations already available from "NSCLC Radiomics," pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor or training algorithms to discriminate between them.

Correlation between MR Image-Based Radiomics Features and Risk Scores Associated with Gene Expression Profiles in Breast Cancer

  • Kim, Ga Ram
  • Ku, You Jin
  • Kim, Jun Ho
  • Kim, Eun-Kyung
Journal of the Korean Society of Radiology 2020 Journal Article, cited 0 times

Application of Homomorphic Encryption on Neural Network in Prediction of Acute Lymphoid Leukemia

  • Khilji, Ishfaque Qamar
  • Saha, Kamonashish
  • Amin, Jushan
  • Iqbal, Muhammad
International Journal of Advanced Computer Science and Applications 2020 Journal Article, cited 0 times
Machine learning is now a widely used mechanism, and applying it in sensitive fields like medical and financial data has only made things easier. Accurate diagnosis of cancer is essential to treating it properly. Medical tests regarding cancer are currently quite expensive and unavailable in many parts of the world. CryptoNets, on the other hand, is an exhibit of the use of neural networks over data encrypted with Homomorphic Encryption. This project demonstrates the use of Homomorphic Encryption for outsourcing neural-network predictions in the case of Acute Lymphoid Leukemia (ALL). By using CryptoNets, the patients or doctors in need of the service can encrypt their data using Homomorphic Encryption and send only the encrypted message to the service provider (hospital or model owner). Since Homomorphic Encryption allows the provider to operate on the data while it is encrypted, the provider can make predictions using a pre-trained neural network while the data remains encrypted throughout the process, finally sending the prediction to the user, who can decrypt the results. During the process, the service provider (hospital or model owner) gains no knowledge about the data that was used or the result, since everything is encrypted throughout. Our work proposes a neural network model able to predict ALL (Acute Lymphoid Leukemia) with approximately 80% accuracy using the C_NMC Challenge dataset. Prior to building our own model, we pre-processed the dataset using a different approach. We then trained various machine learning and neural network models, including VGG16, SVM, AlexNet, and ResNet50, and compared their validation accuracies with that of our own model, which gives better accuracy than the rest. We then use our own pre-trained neural network to make predictions using CryptoNets. We were able to achieve an encrypted-prediction accuracy of about 78%, which is close to the 80% validation accuracy of our own CNN model for prediction of Acute Lymphoid Leukemia (ALL).

Arterial input function and tracer kinetic model-driven network for rapid inference of kinetic maps in Dynamic Contrast-Enhanced MRI (AIF-TK-net)

  • Kettelkamp, Joseph
  • Lingala, Sajan Goud
2020 Conference Paper, cited 0 times
We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. We term our network AIF-TK-net; it maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output image patch of the TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images, on the order of 0.34 s/slice for a 256x256x65 time-series dataset on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high-time-resolution DCE-MRI datasets where significant variability in AIFs across patients exists. We demonstrate that the proposed AIF-TK-net considerably improves the TK parameter estimation accuracy in comparison to a network that does not utilize the patient AIF.
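The extended Tofts-Kety forward model that such a network inverts has a simple discrete form: Ct(t) = vp*Cp(t) + Ktrans * integral of Cp(u)*exp(-kep*(t-u)) du, with kep = Ktrans/ve. The sketch below evaluates it with an invented AIF and parameter values chosen only for illustration:

```python
import math

def extended_tofts(cp, dt, ktrans, ve, vp):
    # Forward extended Tofts-Kety model via a rectangular-rule convolution:
    #   Ct[n] = vp*Cp[n] + Ktrans * sum_i Cp[i] * exp(-kep*(n-i)*dt) * dt
    kep = ktrans / ve
    ct = []
    for n in range(len(cp)):
        conv = sum(cp[i] * math.exp(-kep * (n - i) * dt)
                   for i in range(n + 1)) * dt
        ct.append(vp * cp[n] + ktrans * conv)
    return ct

# Hypothetical AIF: baseline, then a bolus with exponential washout (illustrative).
dt = 2.0  # seconds per time point
aif = [0.0] * 3 + [5.0 * math.exp(-0.05 * i) for i in range(30)]
ct = extended_tofts(aif, dt, ktrans=0.25 / 60, ve=0.3, vp=0.05)
```

Fitting reverses this: given measured Ct and the patient AIF, one searches for the (Ktrans, ve, vp) that best reproduce the curve, which is the per-voxel computation the network replaces.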

The Combination of Low Skeletal Muscle Mass and High Tumor Interleukin-6 Associates with Decreased Survival in Clear Cell Renal Cell Carcinoma

  • Kays, J. K.
  • Koniaris, L. G.
  • Cooper, C. A.
  • Pili, R.
  • Jiang, G.
  • Liu, Y.
  • Zimmers, T. A.
Cancers (Basel) 2020 Journal Article, cited 0 times
Clear cell renal carcinoma (ccRCC) is frequently associated with cachexia which is itself associated with decreased survival and quality of life. We examined relationships among body phenotype, tumor gene expression, and survival. Demographic, clinical, computed tomography (CT) scans and tumor RNASeq for 217 ccRCC patients were acquired from the Cancer Imaging Archive and The Cancer Genome Atlas (TCGA). Skeletal muscle and fat masses measured from CT scans and tumor cytokine gene expression were compared with survival by univariate and multivariate analysis. Patients in the lowest skeletal muscle mass (SKM) quartile had significantly shorter overall survival versus the top three SKM quartiles. Patients who fell into the lowest quartiles for visceral adipose mass (VAT) and subcutaneous adipose mass (SCAT) also demonstrated significantly shorter overall survival. Multiple tumor cytokines correlated with mortality, most strongly interleukin-6 (IL-6); high IL-6 expression was associated with significantly decreased survival. The combination of low SKM/high IL-6 was associated with significantly lower overall survival compared to high SKM/low IL-6 expression (26.1 months vs. not reached; p < 0.001) and an increased risk of mortality (HR = 5.95; 95% CI = 2.86-12.38). In conclusion, tumor cytokine expression, body composition, and survival are closely related, with low SKM/high IL-6 expression portending worse prognosis in ccRCC.

ECIDS-Enhanced Cancer Image Diagnosis and Segmentation Using Artificial Neural Networks and Active Contour Modelling

  • Kavitha, M. S.
  • Shanthini, J.
  • Bhavadharini, R. M.
Journal of Medical Imaging and Health Informatics 2020 Journal Article, cited 0 times
In the present decade, image processing techniques have been extensively utilized in various medical image diagnoses, specifically in dealing with cancer images for early detection and treatment. Image quality and accuracy are the significant factors to be considered while analyzing images for cancer diagnosis. With that note, in this paper, an Enhanced Cancer Image Diagnosis and Segmentation (ECIDS) framework has been developed for effective detection and segmentation of lung cancer cells. Initially, the computed tomography lung image (CT image) is denoised by employing a kernel-based global denoising function. Following that, the noise-free lung images are given for feature extraction. The images are further classified into normal and abnormal classes using feed-forward artificial neural network classification. The classified lung cancer images are then given for segmentation, which is performed using active contour modelling with reduced gradient. The segmented cancer images are further given for medical processing. Moreover, the framework is experimented with in MATLAB using the clinical LIDC-IDRI lung CT dataset. The results are analyzed and discussed based on performance evaluation metrics such as energy, entropy, correlation, and homogeneity, which are involved in effective classification.

Radiomic analysis identifies tumor subtypes associated with distinct molecular and microenvironmental factors in head and neck squamous cell carcinoma

  • Katsoulakis, Evangelia
  • Yu, Yao
  • Apte, Aditya P.
  • Leeman, Jonathan E.
  • Katabi, Nora
  • Morris, Luc
  • Deasy, Joseph O.
  • Chan, Timothy A.
  • Lee, Nancy Y.
  • Riaz, Nadeem
  • Hatzoglou, Vaios
  • Oh, Jung Hun
Oral Oncology 2020 Journal Article, cited 0 times
Purpose: To identify whether radiomic features from pre-treatment computed tomography (CT) scans can predict molecular differences between head and neck squamous cell carcinoma (HNSCC) using The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA). Methods: 77 patients from the TCIA with HNSCC had imaging suitable for analysis. Radiomic features were extracted and unsupervised consensus clustering was performed to identify subtypes. Genomic data was extracted from the matched patients in the TCGA database. We explored relationships between radiomic features and molecular profiles of tumors, including the tumor immune microenvironment. A machine learning method was used to build a model predictive of CD8 + T-cells. An independent cohort of 83 HNSCC patients was used to validate the radiomic clusters. Results: We initially extracted 104 two-dimensional radiomic features, and after feature stability tests and removal of volume-dependent features, reduced this to 67 features for subsequent analysis. Consensus clustering based on these features resulted in two distinct clusters. The radiomic clusters differed by primary tumor subsite (p = 0.0096), HPV status (p = 0.0127), methylation-based clustering results (p = 0.0025), and tumor immune microenvironment. A random forest model using radiomic features predicted CD8 + T-cells independent of HPV status with R2 = 0.30 (p < 0.0001) on cross validation. Consensus clustering on the validation cohort resulted in two distinct clusters that differ in tumor subsite (p = 1.3 × 10-7) and HPV status (p = 4.0 × 10-7). Conclusion: Radiomic analysis can identify biologic features of tumors such as HPV status and T-cell infiltration and may be able to provide other information in the near future to help with patient stratification.

Multi-Institutional Validation of Deep Learning for Pretreatment Identification of Extranodal Extension in Head and Neck Squamous Cell Carcinoma

  • Kann, B. H.
  • Hicks, D. F.
  • Payabvash, S.
  • Mahajan, A.
  • Du, J.
  • Gupta, V.
  • Park, H. S.
  • Yu, J. B.
  • Yarbrough, W. G.
  • Burtness, B. A.
  • Husain, Z. A.
  • Aneja, S.
J Clin Oncol 2020 Journal Article, cited 5 times
PURPOSE: Extranodal extension (ENE) is a well-established poor prognosticator and an indication for adjuvant treatment escalation in patients with head and neck squamous cell carcinoma (HNSCC). Identification of ENE on pretreatment imaging represents a diagnostic challenge that limits its clinical utility. We previously developed a deep learning algorithm that identifies ENE on pretreatment computed tomography (CT) imaging in patients with HNSCC. We sought to validate our algorithm performance for patients from a diverse set of institutions and compare its diagnostic ability to that of expert diagnosticians. METHODS: We obtained preoperative, contrast-enhanced CT scans and corresponding pathology results from two external data sets of patients with HNSCC: an external institution and The Cancer Genome Atlas (TCGA) HNSCC imaging data. Lymph nodes were segmented and annotated as ENE-positive or ENE-negative on the basis of pathologic confirmation. Deep learning algorithm performance was evaluated and compared directly to two board-certified neuroradiologists. RESULTS: A total of 200 lymph nodes were examined in the external validation data sets. For lymph nodes from the external institution, the algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.84 (83.1% accuracy), outperforming radiologists' AUCs of 0.70 and 0.71 (P = .02 and P = .01). Similarly, for lymph nodes from the TCGA, the algorithm achieved an AUC of 0.90 (88.6% accuracy), outperforming radiologist AUCs of 0.60 and 0.82 (P < .0001 and P = .16). Radiologist diagnostic accuracy improved when receiving deep learning assistance. CONCLUSION: Deep learning successfully identified ENE on pretreatment imaging across multiple institutions, exceeding the diagnostic ability of radiologists with specialized head and neck experience. 
Our findings suggest that deep learning has utility in the identification of ENE in patients with HNSCC and has the potential to be integrated into clinical decision making.

The contribution of axillary lymph node volume to recurrence-free survival status in breast cancer patients with sub-stratification by molecular subtypes and pathological complete response

  • Kang, James
  • Li, Haifang
  • Cattell, Renee
  • Talanki, Varsha
  • Cohen, Jules A.
  • Bernstein, Clifford S.
  • Duong, Tim
Breast Cancer Research 2020 Journal Article, cited 0 times
Purpose: This study sought to examine the contribution of axillary lymph node (LN) volume to recurrence-free survival (RFS) in breast cancer patients with sub-stratification by molecular subtypes and full or nodal PCR. Methods: The largest LN volumes per patient at pre-neoadjuvant chemotherapy on standard clinical breast 1.5-Tesla MRI, 3 molecular subtypes, full, breast, and nodal PCR, and 10-year RFS were tabulated (N = 110 patients from MRIs of the I-SPY-1 TRIAL). A volume threshold of two standard deviations was used to categorize large versus small LNs for sub-stratification. In addition, "normal" node volumes were determined from a different cohort of 218 axillary LNs. Results: LN volumes (4.07 ± 5.45 cm3) were significantly larger than normal axillary LN volumes (0.646 ± 0.657 cm3, P = 10(-16)). Full and nodal pathologic complete response (PCR) was not dependent on pre-neoadjuvant chemotherapy nodal volume (P > .05). The HR+/HER2– group had smaller axillary LN volumes than the HER2+ and triple-negative groups (P < .05). Survival was not dependent on pre-treatment axillary LN volumes alone (P = .29). However, when sub-stratified by PCR, the large LN group with full (P = .011) or nodal PCR (P = .0026) showed better recurrence-free survival than the small LN group. There was a significant difference in RFS when the small node group was separated by the 3 molecular subtypes (P = .036) but not the large node group (P = .97). Conclusions: This study found an interaction of axillary lymph node volume, pathological complete response, and molecular subtypes that informs recurrence-free survival status. Improved characterization of the axillary lymph nodes has the potential to improve the management of breast cancer patients.

FAIR-compliant clinical, radiomics and DICOM metadata of RIDER, interobserver, Lung1 and head-Neck1 TCIA collections

  • Kalendralis, Petros
  • Shi, Zhenwei
  • Traverso, Alberto
  • Choudhury, Ananya
  • Sloep, Matthijs
  • Zhovannik, Ivan
  • Starmans, Martijn P A
  • Grittner, Detlef
  • Feltens, Peter
  • Monshouwer, Rene
  • Klein, Stefan
  • Fijten, Rianne
  • Aerts, Hugo
  • Dekker, Andre
  • van Soest, Johan
  • Wee, Leonard
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: One of the most frequently cited radiomics investigations showed that features automatically extracted from routine clinical images could be used in prognostic modeling. These images have been made publicly accessible via The Cancer Imaging Archive (TCIA). There have been numerous requests for additional explanatory metadata on the following datasets - RIDER, Interobserver, Lung1, and Head-Neck1. To support repeatability, reproducibility, generalizability, and transparency in radiomics research, we publish the subjects' clinical data, extracted radiomics features, and digital imaging and communications in medicine (DICOM) headers of these four datasets with descriptive metadata, in order to be more compliant with findable, accessible, interoperable, and reusable (FAIR) data management principles. ACQUISITION AND VALIDATION METHODS: Overall survival time intervals were updated using a national citizens registry after internal ethics board approval. Spatial offsets of the primary gross tumor volume (GTV) regions of interest (ROIs) associated with the Lung1 CT series were improved on the TCIA. GTV radiomics features were extracted using the open-source Ontology-Guided Radiomics Analysis Workflow (O-RAW). We reshaped the output of O-RAW to map features and extraction settings to the latest version of Radiomics Ontology, so as to be consistent with the Image Biomarker Standardization Initiative (IBSI). Digital imaging and communications in medicine metadata was extracted using a research version of Semantic DICOM (SOHARD, GmbH, Fuerth; Germany). Subjects' clinical data were described with metadata using the Radiation Oncology Ontology. All of the above were published in Resource Descriptor Format (RDF), that is, triples. Example SPARQL queries are shared with the reader to use on the online triples archive, which are intended to illustrate how to exploit this data submission. 
DATA FORMAT: The accumulated RDF data are publicly accessible through a SPARQL endpoint where the triples are archived, remotely queried through a graph database web application. SPARQL queries are intrinsically federated, such that we can efficiently cross-reference clinical, DICOM, and radiomics data within a single query, while being agnostic to the original data format and coding system. The federated queries work in the same way even if the RDF data were partitioned across multiple servers and dispersed physical locations. POTENTIAL APPLICATIONS: The public availability of these data resources is intended to support radiomics feature replication, repeatability, and reproducibility studies by the academic community. The example SPARQL queries may be freely used and modified by readers depending on their research question. Data interoperability and reusability are supported by referencing existing public ontologies. The RDF data are readily findable and accessible through the aforementioned link. Scripts used to create the RDF are made available in a code repository linked to this submission.

Homology-based radiomic features for prediction of the prognosis of lung cancer based on CT-based radiomics

  • Kadoya, Noriyuki
  • Tanaka, Shohei
  • Kajikawa, Tomohiro
  • Tanabe, Shunpei
  • Abe, Kota
  • Nakajima, Yujiro
  • Yamamoto, Takaya
  • Takahashi, Noriyoshi
  • Takeda, Kazuya
  • Dobashi, Suguru
  • Takeda, Ken
  • Nakane, Kazuaki
  • Jingu, Keiichi
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: Radiomics is a new technique that enables noninvasive prognostic prediction by extracting features from medical images. Homology is a concept used in many branches of algebra and topology that can quantify the contact degree. In the present study, we developed homology-based radiomic features to predict the prognosis of non-small-cell lung cancer (NSCLC) patients and then evaluated the accuracy of this prediction method. METHODS: Four data sets were used: two to provide training and test data and two for the selection of robust radiomic features. All the data sets were downloaded from The Cancer Imaging Archive (TCIA). In two-dimensional cases, the Betti numbers consist of two values: b0 (zero-dimensional Betti number), which is the number of isolated components, and b1 (one-dimensional Betti number), which is the number of one-dimensional or "circular" holes. For homology-based evaluation, CT images must be converted to binarized images in which each pixel has two possible values: 0 or 1. All CT slices of the gross tumor volume were used for calculating the homology histogram. First, by changing the threshold of the CT value (range: -150 to 300 HU) for all its slices, we developed homology-based histograms for b0, b1, and b1/b0 using binarized images. All histograms were then summed, and the summed histogram was normalized by the number of slices. A total of 144 homology-based radiomic features were defined from the histogram. For comparison with the standard radiomic features, 107 radiomic features were calculated using the standard radiomics technique. To clarify the prognostic power, the relationship between the values of the homology-based radiomic features and overall survival was evaluated using a LASSO Cox regression model and the Kaplan-Meier method. The retained features with non-zero coefficients calculated by the LASSO Cox regression model were used for fitting the regression model. Moreover, these features were then integrated into a radiomics signature.
An individualized rad score was calculated from a linear combination of the selected features, which were weighted by their respective coefficients. RESULTS: When the patients in the training and test data sets were stratified into high-risk and low-risk groups according to the rad scores, the overall survival of the groups was significantly different. The C-index values for the homology-based features (rad score), standard features (rad score), and tumor size were 0.625, 0.603, and 0.607, respectively, for the training data sets and 0.689, 0.668, and 0.667 for the test data sets. This result showed that homology-based radiomic features had slightly higher prediction power than the standard radiomic features. CONCLUSIONS: Prediction performance using homology-based radiomic features was comparable to or slightly higher than that using standard radiomic features. These findings suggest that homology-based radiomic features may have great potential for improving the prognostic prediction accuracy of CT-based radiomics, although some limitations of the study should be noted.

Evaluation of Feature Robustness Against Technical Parameters in CT Radiomics: Verification of Phantom Study with Patient Dataset

  • Jin, Hyeongmin
  • Kim, Jong Hyo
Journal of Signal Processing Systems 2020 Journal Article, cited 1 times
Recent advances in radiomics have shown promising results in prognostic and diagnostic studies with high dimensional imaging feature analysis. However, radiomic features are known to be affected by technical parameters and feature extraction methodology. We evaluate the robustness of CT radiomic features against the technical parameters involved in CT acquisition and feature extraction procedures using a standardized phantom and verify the feature robustness by using patient cases. An ACR phantom was scanned with two tube currents, two reconstruction kernels, and two field-of-view sizes. A total of 47 radiomic features of textures and first-order statistics were extracted on the homogeneous region from all scans. Intrinsic variability was measured to identify unstable features vulnerable to inherent CT noise and texture. A susceptibility index was defined to represent the susceptibility to the variation of a given technical parameter. Eighteen radiomic features were shown to be intrinsically unstable under the reference condition. The features were more susceptible to the reconstruction kernel variation than to other sources of variation. The feature robustness evaluated on the phantom CT correlated with that evaluated on clinical CT scans. We revealed that a number of scan parameters can significantly affect the radiomic features. These characteristics should be considered in a radiomic study when different scan parameters are used in a clinical dataset.

Retina U-Net: Embarrassingly Simple Exploitation of Segmentation Supervision for Medical Object Detection

  • Jaeger, PF
  • Kohl, SAA
  • Bickelhaupt, S
  • Isensee, F
  • Kuder, TA
  • Schlemmer, H-P
  • Maier-Hein, KH
2020 Conference Paper, cited 33 times
The task of localizing and categorizing objects in medical images often remains formulated as a semantic segmentation problem. This approach, however, only indirectly solves the coarse localization task by predicting pixel-level scores, requiring ad-hoc heuristics when mapping back to object-level scores. State-of-the-art object detectors, on the other hand, allow for individual object scoring in an end-to-end fashion, while ironically trading in the ability to exploit the full pixel-wise supervision signal. This can be particularly disadvantageous in the setting of medical image analysis, where data sets are notoriously small. In this paper, we propose Retina U-Net, a simple architecture, which naturally fuses the Retina Net one-stage detector with the U-Net architecture widely used for semantic segmentation in medical images. The proposed architecture recaptures discarded supervision signals by complementing object detection with an auxiliary task in the form of semantic segmentation without introducing the additional complexity of previously proposed two-stage detectors. We evaluate the importance of full segmentation supervision on two medical data sets, provide an in-depth analysis on a series of toy experiments and show how the corresponding performance gain grows in the limit of small data sets. Retina U-Net yields strong detection performance only reached by its more complex two-staged counterparts. Our framework, including all methods implemented for operation on 2D and 3D images, is publicly available.

A Comparison of the Efficiency of Using a Deep CNN Approach with Other Common Regression Methods for the Prediction of EGFR Expression in Glioblastoma Patients

  • Hedyehzadeh, Mohammadreza
  • Maghooli, Keivan
  • MomenGharibvand, Mohammad
  • Pistorius, Stephen
J Digit Imaging 2020 Journal Article, cited 0 times
To estimate epidermal growth factor receptor (EGFR) expression level in glioblastoma (GBM) patients using radiogenomic analysis of magnetic resonance images (MRI). A comparative study using a deep convolutional neural network (CNN)-based regression, deep neural network, least absolute shrinkage and selection operator (LASSO) regression, elastic net regression, and linear regression with no regularization was carried out to estimate EGFR expression of 166 GBM patients. Except for the deep CNN case, overfitting was prevented by using feature selection, and loss values for each method were compared. The loss values in the training phase for deep CNN, deep neural network, elastic net, LASSO, and the linear regression with no regularization were 2.90, 8.69, 7.13, 14.63, and 21.76, respectively, while in the test phase, the loss values were 5.94, 10.28, 13.61, 17.32, and 24.19, respectively. These results illustrate that the efficiency of the deep CNN approach is better than that of the other methods, including LASSO regression, which is a regression method known for its advantage in high-dimension cases. A comparison between deep CNN, deep neural network, and three other common regression methods was carried out, and the efficiency of the deep CNN approach, in comparison with other regression models, was demonstrated.

Feasibility study of a multi-criteria decision-making based hierarchical model for multi-modality feature and multi-classifier fusion: Applications in medical prognosis prediction

  • He, Qiang
  • Li, Xin
  • Kim, DW Nathan
  • Jia, Xun
  • Gu, Xuejun
  • Zhen, Xin
  • Zhou, Linghong
Information Fusion 2020 Journal Article, cited 0 times

Descriptions and evaluations of methods for determining surface curvature in volumetric data

  • Hauenstein, Jacob D.
  • Newman, Timothy S.
Computers & Graphics 2020 Journal Article, cited 0 times
Highlights
  • Methods using convolution or fitting are often the most accurate.
  • The existing TE method is fast and accurate on noise-free data.
  • The OP method is faster than existing, similarly accurate methods on real data.
  • Even modest errors in curvature notably impact curvature-based renderings.
  • On real data, GSTH, GSTI, and OP produce the best curvature-based renderings.
Abstract: Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.

Breast cancer masses classification using deep convolutional neural networks and transfer learning

  • Hassan, Shayma’a A.
  • Sayed, Mohammed S.
  • Abdalla, Mahmoud I.
  • Rashwan, Mohsen A.
Multimedia Tools and Applications 2020 Journal Article, cited 0 times
With the recent advances in the deep learning field, the use of deep convolutional neural networks (DCNNs) in biomedical image processing has become very encouraging. This paper presents a new classification model for breast cancer masses based on DCNNs. We investigated the use of transfer learning from AlexNet and GoogleNet pre-trained models to suit this task. We experimentally determined the best DCNN model for accurate classification by comparing different models, which vary according to the design and hyper-parameters. The effectiveness of these models was demonstrated using four mammogram databases. All models were trained and tested using a mammographic dataset from the CBIS-DDSM and INbreast databases to select the best AlexNet and GoogleNet models. The performance of the two proposed models was further verified using images from the Egyptian National Cancer Institute (NCI) and the MIAS database. When tested on the CBIS-DDSM and INbreast databases, the proposed AlexNet model achieved an accuracy of 100% for both databases, while the proposed GoogleNet model achieved accuracies of 98.46% and 92.5%, respectively. When tested on NCI images and the MIAS database, AlexNet achieved an accuracy of 97.89% with AUC of 98.32%, and an accuracy of 98.53% with AUC of 98.95%, respectively. GoogleNet achieved an accuracy of 91.58% with AUC of 96.5%, and an accuracy of 88.24% with AUC of 94.65%, respectively. These results suggest that AlexNet has better performance and more robustness than GoogleNet. To the best of our knowledge, the proposed AlexNet model outperformed the latest methods. It achieved the highest accuracy and AUC score and the lowest testing time reported on the CBIS-DDSM, INbreast and MIAS databases.

Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features

  • Hasan, Ali M.
  • Al-Jawad, Mohammed M.
  • Jalab, Hamid A.
  • Shaiba, Hadil
  • Ibrahim, Rabha W.
  • Al-Shamasneh, Ala’a R.
Entropy 2020 Journal Article, cited 0 times
Many health systems over the world have collapsed due to limited capacity and a dramatic increase of suspected COVID-19 cases. What has emerged is the need for finding an efficient, quick and accurate method to mitigate the overloading of radiologists’ efforts to diagnose the suspected cases. This study presents the combination of deep learning of extracted features with the Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices. Then histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan undergoes a feature extraction which involves deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Subsequently, combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset comprising 321 patients is 99.68%.

Potential Added Value of PET/CT Radiomics for Survival Prognostication beyond AJCC 8th Edition Staging in Oropharyngeal Squamous Cell Carcinoma

  • Haider, S. P.
  • Zeevi, T.
  • Baumeister, P.
  • Reichel, C.
  • Sharaf, K.
  • Forghani, R.
  • Kann, B. H.
  • Judson, B. L.
  • Prasad, M. L.
  • Burtness, B.
  • Mahajan, A.
  • Payabvash, S.
Cancers (Basel) 2020 Journal Article, cited 2 times
Accurate risk-stratification can facilitate precision therapy in oropharyngeal squamous cell carcinoma (OPSCC). We explored the potential added value of baseline positron emission tomography (PET)/computed tomography (CT) radiomic features for prognostication and risk stratification of OPSCC beyond the American Joint Committee on Cancer (AJCC) 8th edition staging scheme. Using institutional and publicly available datasets, we included OPSCC patients with known human papillomavirus (HPV) status, without baseline distant metastasis and treated with curative intent. We extracted 1037 PET and 1037 CT radiomic features quantifying lesion shape, imaging intensity, and texture patterns from primary tumors and metastatic cervical lymph nodes. Utilizing random forest algorithms, we devised novel machine-learning models for OPSCC progression-free survival (PFS) and overall survival (OS) using "radiomics" features, "AJCC" variables, and the "combined" set as input. We designed both single- (PET or CT) and combined-modality (PET/CT) models. Harrell's C-index quantified survival model performance; risk stratification was evaluated in Kaplan-Meier analysis. A total of 311 patients were included. In HPV-associated OPSCC, the best "radiomics" model achieved an average C-index +/- standard deviation of 0.62 +/- 0.05 (p = 0.02) for PFS prediction, compared to 0.54 +/- 0.06 (p = 0.32) utilizing "AJCC" variables. Radiomics-based risk-stratification of HPV-associated OPSCC was significant for PFS and OS. Similar trends were observed in HPV-negative OPSCC. In conclusion, radiomics imaging features extracted from pre-treatment PET/CT may provide complementary information to the current AJCC staging scheme for survival prognostication and risk-stratification of HPV-associated OPSCC.

Prediction of post-radiotherapy locoregional progression in HPV-associated oropharyngeal squamous cell carcinoma using machine-learning analysis of baseline PET/CT radiomics

  • Haider, S. P.
  • Sharaf, K.
  • Zeevi, T.
  • Baumeister, P.
  • Reichel, C.
  • Forghani, R.
  • Kann, B. H.
  • Petukhova, A.
  • Judson, B. L.
  • Prasad, M. L.
  • Liu, C.
  • Burtness, B.
  • Mahajan, A.
  • Payabvash, S.
Transl Oncol 2020 Journal Article, cited 0 times
Locoregional failure remains a therapeutic challenge in oropharyngeal squamous cell carcinoma (OPSCC). We aimed to devise novel objective imaging biomarkers for prediction of locoregional progression in HPV-associated OPSCC. Following manual lesion delineation, 1037 PET and 1037 CT radiomic features were extracted from each primary tumor and metastatic cervical lymph node on baseline PET/CT scans. Applying random forest machine-learning algorithms, we generated radiomic models for censoring-aware locoregional progression prognostication (evaluated by Harrell's C-index) and risk stratification (evaluated in Kaplan-Meier analysis). A total of 190 patients were included; an optimized model yielded a median (interquartile range) C-index of 0.76 (0.66-0.81; p=0.01) in prognostication of locoregional progression, using combined PET/CT radiomic features from primary tumors. Radiomics-based risk stratification reliably identified patients at risk for locoregional progression within 2-, 3-, 4-, and 5-year follow-up intervals, with log-rank p-values of 0.003, 0.001, 0.02, and 0.006 in Kaplan-Meier analysis, respectively. Our results suggest PET/CT radiomic biomarkers can predict post-radiotherapy locoregional progression in HPV-associated OPSCC. Pending validation in large, independent cohorts, such objective biomarkers may improve patient selection for treatment de-intensification trials in this prognostically favorable OPSCC entity, and eventually facilitate personalized therapy.

PET/CT radiomics signature of human papilloma virus association in oropharyngeal squamous cell carcinoma

  • Haider, S. P.
  • Mahajan, A.
  • Zeevi, T.
  • Baumeister, P.
  • Reichel, C.
  • Sharaf, K.
  • Forghani, R.
  • Kucukkaya, A. S.
  • Kann, B. H.
  • Judson, B. L.
  • Prasad, M. L.
  • Burtness, B.
  • Payabvash, S.
Eur J Nucl Med Mol Imaging 2020 Journal Article, cited 1 times
PURPOSE: To devise, validate, and externally test PET/CT radiomics signatures for human papillomavirus (HPV) association in primary tumors and metastatic cervical lymph nodes of oropharyngeal squamous cell carcinoma (OPSCC). METHODS: We analyzed 435 primary tumors (326 for training, 109 for validation) and 741 metastatic cervical lymph nodes (518 for training, 223 for validation) using FDG-PET and non-contrast CT from a multi-institutional and multi-national cohort. Utilizing 1037 radiomics features per imaging modality and per lesion, we trained, optimized, and independently validated machine-learning classifiers for prediction of HPV association in primary tumors, lymph nodes, and combined "virtual" volumes of interest (VOI). PET-based models were additionally validated in an external cohort. RESULTS: Single-modality PET and CT final models yielded similar classification performance without significant difference in independent validation; however, models combining PET and CT features outperformed single-modality PET- or CT-based models, with receiver operating characteristic area under the curve (AUC) of 0.78, and 0.77 for prediction of HPV association using primary tumor lesion features, in cross-validation and independent validation, respectively. In the external PET-only validation dataset, final models achieved an AUC of 0.83 for a virtual VOI combining primary tumor and lymph nodes, and an AUC of 0.73 for a virtual VOI combining all lymph nodes. CONCLUSION: We found that PET-based radiomics signatures yielded similar classification performance to CT-based models, with potential added value from combining PET- and CT-based radiomics for prediction of HPV status. While our results are promising, radiomics signatures may not yet substitute tissue sampling for clinical decision-making.

Radiomics feature reproducibility under inter-rater variability in segmentations of CT images

  • Haarburger, C.
  • Muller-Franzes, G.
  • Weninger, L.
  • Kuhl, C.
  • Truhn, D.
  • Merhof, D.
Sci Rep 2020 Journal Article, cited 0 times
Identifying image features that are robust with respect to segmentation variability is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyse radiomics feature reproducibility in two phases: first with manual segmentations provided by four expert readers and second with probabilistic automated segmentations using a recently developed neural network (PHiseg). We test feature reproducibility on three publicly available datasets of lung, kidney and liver lesions. We find consistent results both over manual and automated segmentations in all three datasets and show that there are subsets of radiomic features which are robust against segmentation variability and other radiomic features which are prone to poor reproducibility under differing segmentations. By providing a detailed analysis of robustness of the most common radiomics features across several datasets, we envision that more reliable and reproducible radiomic models can be built in the future based on this work.

Optimal Statistical incorporation of independent feature Stability information into Radiomics Studies

  • Götz, Michael
  • Maier-Hein, Klaus H
Sci Rep 2020 Journal Article, cited 0 times
Conducting side experiments, termed robustness experiments, to identify features that are stable with respect to rescans, annotation, or other confounding effects is an important element in radiomics research. However, the matter of how to include the findings of these experiments in the model building process still needs to be explored. Three different methods for incorporating prior knowledge into a radiomics modelling process were evaluated: the naive approach (ignoring feature quality), the most common approach consisting of removing unstable features, and a novel approach using data augmentation for information transfer (DAFIT). Multiple experiments were conducted using both synthetic and publicly available real lung imaging patient data. Ignoring additional information from side experiments resulted in significantly overestimated model performances, meaning the estimated mean area under the curve achieved with a model was increased. Removing unstable features improved the performance estimation, while slightly decreasing the model performance, i.e., decreasing the area under the curve achieved with the model. The proposed approach was superior both in terms of the estimation of the model performance and the actual model performance. Our experiments show that data augmentation can prevent biases in performance estimation and has several advantages over the plain omission of unstable features. The actual gain that can be obtained depends on the quality and applicability of the prior information on the features in the given domain. This will be an important topic of future research.

T2-FDL: A robust sparse representation method using adaptive type-2 fuzzy dictionary learning for medical image classification

  • Ghasemi, Majid
  • Kelarestaghi, Manoochehr
  • Eshghi, Farshad
  • Sharifi, Arash
Expert Systems with Applications 2020 Journal Article, cited 0 times
In this paper, a robust sparse representation for medical image classification is proposed based on the adaptive type-2 fuzzy learning (T2-FDL) system. In the proposed method, sparse coding and dictionary learning processes are executed iteratively until a near-optimal dictionary is obtained. The sparse coding step aims at finding a combination of dictionary atoms to represent the input data efficiently, and the dictionary learning step rigorously adjusts a minimum set of dictionary items. The two-step operation helps create an adaptive sparse representation algorithm by involving the type-2 fuzzy sets in the design process of image classification. Since the existing image measurements are not made under the same conditions and with the same accuracy, the performance of medical diagnosis is always affected by noise and uncertainty. By introducing an adaptive type-2 fuzzy learning method, a better approximation in an environment with higher degrees of uncertainty and noise is achieved. The experiments are executed over two open-access brain tumor magnetic resonance image databases, REMBRANDT and TCGA-LGG, from The Cancer Imaging Archive (TCIA). The experimental results of a brain tumor classification task show that the proposed T2-FDL method can adequately minimize the negative effects of uncertainty in the input images. The results demonstrate that T2-FDL outperforms other important classification methods in the literature in terms of accuracy, specificity, and sensitivity.

Imaging-AMARETTO: An Imaging Genomics Software Tool to Interrogate Multiomics Networks for Relevance to Radiography and Histopathology Imaging Biomarkers of Clinical Outcomes

  • Gevaert, O.
  • Nabian, M.
  • Bakr, S.
  • Everaert, C.
  • Shinde, J.
  • Manukyan, A.
  • Liefeld, T.
  • Tabor, T.
  • Xu, J.
  • Lupberger, J.
  • Haas, B. J.
  • Baumert, T. F.
  • Hernaez, M.
  • Reich, M.
  • Quintana, F. J.
  • Uhlmann, E. J.
  • Krichevsky, A. M.
  • Mesirov, J. P.
  • Carey, V.
  • Pochet, N.
JCO Clin Cancer Inform 2020 Journal Article, cited 1 times
PURPOSE: The availability of increasing volumes of multiomics, imaging, and clinical data in complex diseases such as cancer opens opportunities for the formulation and development of computational imaging genomics methods that can link multiomics, imaging, and clinical data. METHODS: Here, we present the Imaging-AMARETTO algorithms and software tools to systematically interrogate regulatory networks derived from multiomics data within and across related patient studies for their relevance to radiography and histopathology imaging features predicting clinical outcomes. RESULTS: To demonstrate its utility, we applied Imaging-AMARETTO to integrate three patient studies of brain tumors, specifically, multiomics with radiography imaging data from The Cancer Genome Atlas (TCGA) glioblastoma multiforme (GBM) and low-grade glioma (LGG) cohorts and transcriptomics with histopathology imaging data from the Ivy Glioblastoma Atlas Project (IvyGAP) GBM cohort. Our results show that Imaging-AMARETTO recapitulates known key drivers of tumor-associated microglia and macrophage mechanisms, mediated by STAT3, AHR, and CCR2, and neurodevelopmental and stemness mechanisms, mediated by OLIG2. Imaging-AMARETTO provides interpretation of their underlying molecular mechanisms in light of imaging biomarkers of clinical outcomes and uncovers novel master drivers, THBS1 and MAP2, that establish relationships across these distinct mechanisms. CONCLUSION: Our network-based imaging genomics tools serve as hypothesis generators that facilitate the interrogation of known and uncovering of novel hypotheses for follow-up with experimental validation studies. We anticipate that our Imaging-AMARETTO imaging genomics tools will be useful to the community of biomedical researchers for applications to similar studies of cancer and other complex diseases with available multiomics, imaging, and clinical data.

Machine Learning Methods for Image Analysis in Medical Applications From Alzheimer’s Disease, Brain Tumors, to Assisted Living

  • Chenjie Ge
2020 Thesis, cited 0 times
Healthcare has progressed greatly in recent years owing to technological advances, where machine learning plays an important role in processing and analyzing a large amount of medical data. This thesis investigates four healthcare-related issues (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. For Alzheimer's disease (AD) diagnosis, apart from symptoms of patients, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized in both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed for determining different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs by using deep learning. A 2D multi-stream CNN architecture is used to learn the features of gliomas from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets for improving the performance of glioma classification. In the other two applications, we also address video-based human fall detection by using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN).
These investigations can benefit future research, where artificial intelligence/deep learning may open a new way for real medical applications.

Simultaneous emission and attenuation reconstruction in time-of-flight PET using a reference object

  • Garcia-Perez, P.
  • Espana, S.
EJNMMI Phys 2020 Journal Article, cited 0 times
BACKGROUND: Simultaneous reconstruction of emission and attenuation images in time-of-flight (TOF) positron emission tomography (PET) does not provide a unique solution. In this study, we propose to solve this limitation by including additional information given by a reference object with known attenuation placed outside the patient. Different configurations of the reference object were studied, including geometry, material composition, and activity, and an optimal configuration was defined. In addition, this configuration was tested for different timing resolutions and noise levels. RESULTS: The proposed strategy was tested in 2D simulations obtained by forward projection of available PET/CT data, and noise was included using Monte Carlo techniques. The obtained results suggest that the optimal configuration corresponds to a water cylinder inserted in the patient table and filled with activity. In that case, mean differences between reconstructed and true images were below 10%. However, better results can be obtained by increasing the activity of the reference object. CONCLUSION: This study shows promising results that may make it possible to obtain an accurate attenuation map from pure TOF-PET data without prior knowledge obtained from CT, MRI, or transmission scans.

A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks

  • Galib, Shaikat M
  • Lee, Hyoung K
  • Guy, Christopher L
  • Riblett, Matthew J
  • Hugo, Geoffrey D
Med Phys 2020 Journal Article, cited 1 times
PURPOSE: To develop and evaluate a method to automatically identify and quantify deformable image registration (DIR) errors between lung computed tomography (CT) scans for quality assurance (QA) purposes. METHODS: We propose a deep learning method to flag registration errors. The method involves preparation of a dataset for machine learning model training and testing, design of a three-dimensional (3D) convolutional neural network architecture that classifies registrations into good or poor classes, and evaluation of a metric called the registration error index (REI), which provides a quantitative measure of registration error. RESULTS: Our study shows that, despite the limited number of training images available (10 CT scan pairs for training and 17 CT scan pairs for testing), the method achieves 0.882 AUC-ROC on the test dataset. Furthermore, the combined standard uncertainty of the REI estimated by our model lies within +/- 0.11 (+/- 11% of the true REI value), with a confidence level of approximately 68%. CONCLUSIONS: We have developed and evaluated our method using original clinical registrations without generating any synthetic/simulated data. Moreover, test data were acquired from a different environment than that of the training data, so that the method was validated robustly. The results of this study showed that our algorithm performs reasonably well in challenging scenarios.

A novel approach to 2D/3D registration of X-ray images using Grangeat's relation

  • Frysch, R.
  • Pfeiffer, T.
  • Rose, G.
Med Image Anal 2020 Journal Article, cited 0 times
Fast and accurate 2D/3D registration plays an important role in many applications, ranging from scientific and engineering domains all the way to medical care. Today's predominant methods are based on computationally expensive approaches, such as virtual forward or back projections, that limit the real-time applicability of the routines. Here, we present a novel concept that makes use of Grangeat's relation to intertwine information from the 3D volume and the 2D projection space in a way that allows pre-computation of all time-intensive steps. The main effort within actual registration tasks is reduced to simple resampling of the pre-calculated values, which can be executed rapidly on modern GPU hardware. We analyze the applicability of the proposed method on simulated data under various conditions and evaluate the findings on real data from a C-arm CT scanner. Our results show high registration quality in both simulated as well as real data scenarios and demonstrate a reduction in computation time for the crucial computation step by a factor of six to eight when compared to state-of-the-art routines. With minor trade-offs in accuracy, this speed-up can even be increased up to a factor of 100 in particular settings. To our knowledge, this is the first application of Grangeat's relation to the topic of 2D/3D registration. Due to its high computational efficiency and broad range of potential applications, we believe it constitutes a highly relevant approach for various problems dealing with cone beam transmission images.

Identifying BAP1 Mutations in Clear-Cell Renal Cell Carcinoma by CT Radiomics: Preliminary Findings

  • Feng, Zhan
  • Zhang, Lixia
  • Qi, Zhong
  • Shen, Qijun
  • Hu, Zhengyu
  • Chen, Feng
Frontiers in Oncology 2020 Journal Article, cited 0 times
To evaluate the potential application of computed tomography (CT) radiomics in the prediction of BRCA1-associated protein 1 (BAP1) mutation status in patients with clear-cell renal cell carcinoma (ccRCC). In this retrospective study, clinical and CT imaging data of 54 patients were retrieved from The Cancer Genome Atlas–Kidney Renal Clear Cell Carcinoma database. Among these, 45 patients had wild-type BAP1 and nine patients had BAP1 mutation. The texture features of tumor images were extracted using the Matlab-based IBEX package. To produce class-balanced data and improve the stability of prediction, we performed data augmentation for the BAP1 mutation group during cross validation. A model to predict BAP1 mutation status was constructed using Random Forest Classification algorithms, and was evaluated using leave-one-out cross-validation. The Random Forest model for predicting BAP1 mutation status had an accuracy of 0.83, a sensitivity of 0.72, a specificity of 0.87, a precision of 0.65, an AUC of 0.77, and an F-score of 0.68. CT radiomics is a potential and feasible method for predicting BAP1 mutation status in patients with ccRCC.
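
The fold-internal augmentation step is the methodological crux here: synthetic minority-class copies must be generated inside each leave-one-out training fold, never from the held-out case. A minimal sketch of that structure, with a toy nearest-centroid classifier standing in for the paper's Random Forest and jitter-based oversampling standing in for its augmentation (all names and parameter values are illustrative, not the paper's pipeline):

```python
import numpy as np

def augment_minority(X, y, minority=1, factor=4, noise=0.05, seed=0):
    """Oversample the minority class with jittered copies (toy stand-in
    for the paper's augmentation; factor/noise are illustrative)."""
    rng = np.random.default_rng(seed)
    Xm = X[y == minority]
    copies = [Xm + noise * rng.standard_normal(Xm.shape) for _ in range(factor)]
    X_aug = np.vstack([X] + copies)
    y_aug = np.concatenate([y, np.full(factor * len(Xm), minority)])
    return X_aug, y_aug

def nearest_centroid_predict(X_train, y_train, x):
    """Trivial classifier standing in for the Random Forest."""
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

def loocv_accuracy(X, y):
    """Leave-one-out CV: augmentation is applied to the training fold only,
    so no jittered copy of the held-out case leaks into training."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        X_tr, y_tr = augment_minority(X[mask], y[mask])
        correct += int(nearest_centroid_predict(X_tr, y_tr, X[i]) == y[i])
    return correct / n
```

Augmenting before the split would place near-copies of each test case in its own training fold and inflate the reported accuracy.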

Quantitative Imaging Informatics for Cancer Research

  • Fedorov, Andrey
  • Beichel, Reinhard
  • Kalpathy-Cramer, Jayashree
  • Clunie, David
  • Onken, Michael
  • Riesmeier, Jorg
  • Herz, Christian
  • Bauer, Christian
  • Beers, Andrew
  • Fillion-Robin, Jean-Christophe
  • Lasso, Andras
  • Pinter, Csaba
  • Pieper, Steve
  • Nolden, Marco
  • Maier-Hein, Klaus
  • Herrmann, Markus D
  • Saltz, Joel
  • Prior, Fred
  • Fennessy, Fiona
  • Buatti, John
  • Kikinis, Ron
JCO Clin Cancer Inform 2020 Journal Article, cited 0 times
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS: QIICR was motivated by the 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing a programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION: Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community.
Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.

Recurrent Attention Network for False Positive Reduction in the Detection of Pulmonary Nodules in Thoracic CT Scans

  • Farhangi, M. Mehdi
  • Petrick, Nicholas
  • Sahiner, Berkman
  • Frigui, Hichem
  • Amini, Amir A.
  • Pezeshk, Aria
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: Multi-view 2-D Convolutional Neural Networks (CNNs) and 3-D CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives that are generated in Computer-Aided Detection (CADe) systems for pulmonary nodules in thoracic CT scans. METHODS: In our approach, a deep network consisting of 2-D CNNs first processes slices individually. The features extracted in this stage are then passed to a Recurrent Neural Network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighted before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the Lung Nodule Analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3-D CNNs. Our results show that the proposed approach can encode the 3-D information in volumetric data effectively by achieving a sensitivity > 0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2-D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2-D architectures are being developed at a much faster rate compared to 3-D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2-D architectures.

A genome-wide gain-of-function screen identifies CDKN2C as a HBV host factor

  • Eller, Carla
  • Heydmann, Laura
  • Colpitts, Che C.
  • El Saghire, Houssein
  • Piccioni, Federica
  • Jühling, Frank
  • Majzoub, Karim
  • Pons, Caroline
  • Bach, Charlotte
  • Lucifora, Julie
  • Lupberger, Joachim
  • Nassal, Michael
  • Cowley, Glenn S.
  • Fujiwara, Naoto
  • Hsieh, Sen-Yung
  • Hoshida, Yujin
  • Felli, Emanuele
  • Pessaux, Patrick
  • Sureau, Camille
  • Schuster, Catherine
  • Root, David E.
  • Verrier, Eloi R.
  • Baumert, Thomas F.
Nature Communications 2020 Journal Article, cited 0 times
Chronic HBV infection is a major cause of liver disease and cancer worldwide. Approaches for cure are lacking, and the knowledge of virus-host interactions is still limited. Here, we perform a genome-wide gain-of-function screen using a poorly permissive hepatoma cell line to uncover host factors enhancing HBV infection. Validation studies in primary human hepatocytes identified CDKN2C as an important host factor for HBV replication. CDKN2C is overexpressed in highly permissive cells and HBV-infected patients. Mechanistic studies show a role for CDKN2C in inducing cell cycle G1 arrest through inhibition of CDK4/6 associated with the upregulation of HBV transcription enhancers. A correlation between CDKN2C expression and disease progression in HBV-infected patients suggests a role in HBV-induced liver disease. Taken together, we identify a previously undiscovered clinically relevant HBV host factor, allowing the development of improved infectious model systems for drug discovery and the study of the HBV life cycle.

The Veterans Affairs Precision Oncology Data Repository, a Clinical, Genomic, and Imaging Research Database

  • Elbers, Danne C.
  • Fillmore, Nathanael R.
  • Sung, Feng-Chi
  • Ganas, Spyridon S.
  • Prokhorenkov, Andrew
  • Meyer, Christopher
  • Hall, Robert B.
  • Ajjarapu, Samuel J.
  • Chen, Daniel C.
  • Meng, Frank
  • Grossman, Robert L.
  • Brophy, Mary T.
  • Do, Nhan V.
Patterns 2020 Journal Article, cited 0 times
The Veterans Affairs Precision Oncology Data Repository (VA-PODR) is a large, nationwide repository of de-identified data on patients diagnosed with cancer at the Department of Veterans Affairs (VA). Data include longitudinal clinical data from the VA's nationwide electronic health record system and the VA Central Cancer Registry, targeted tumor sequencing data, and medical imaging data including computed tomography (CT) scans and pathology slides. A subset of the repository is available at the Genomic Data Commons (GDC) and The Cancer Imaging Archive (TCIA), and the full repository is available through the Veterans Precision Oncology Data Commons (VPODC). By releasing this de-identified dataset, we aim to advance Veterans' health care through enabling translational research on the Veteran population by a wide variety of researchers.

Long short-term memory networks predict breast cancer recurrence in analysis of consecutive MRIs acquired during the course of neoadjuvant chemotherapy

  • Drukker, Karen
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen
  • Hahn, Horst K.
  • Mazurowski, Maciej A.
2020 Conference Paper, cited 0 times
The purpose of this study was to assess long short-term memory networks in the prediction of recurrence-free survival in breast cancer patients using features extracted from MRIs acquired during the course of neoadjuvant chemotherapy. In the I-SPY1 dataset, up to 4 MRI exams were available per patient acquired at pre-treatment, early-treatment, interregimen, and pre-surgery time points. Breast cancers were automatically segmented and 8 features describing kinetic curve characteristics were extracted. We assessed performance of long short-term memory networks in the prediction of recurrence-free survival status at 2 years and at 5 years post-surgery. For these predictions, we analyzed MRIs from women who had at least 2 (or 5) years of recurrence-free follow-up or experienced recurrence or death within that timeframe: 157 women and 73 women, respectively. One approach used features extracted from all available exams and the other approach used features extracted from only exams prior to the second cycle of neoadjuvant chemotherapy. The areas under the ROC curve in the prediction of recurrence-free survival status at 2 years post-surgery were 0.80, 95% confidence interval [0.68; 0.88] and 0.75 [0.62; 0.83] for networks trained with all 4 available exams and only the ‘early’ exams, respectively. Hazard ratios at the lowest, median, and highest quartile cut-points were 6.29 [2.91; 13.62], 3.27 [1.77; 6.03], 1.65 [0.83; 3.27] and 2.56 [1.20; 5.48], 3.01 [1.61; 5.66], 2.30 [1.14; 4.67]. Long short-term memory networks were able to predict recurrence-free survival in breast cancer patients, even when analyzing only MRIs acquired ‘early on’ during neoadjuvant treatment.
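
The sequential use of per-exam features can be illustrated with a bare NumPy LSTM cell processing 4 exams of 8 kinetic-curve features each. This is a generic sketch of the architecture class, not the authors' trained network; the weights, sizes, and inputs below are random stand-ins:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(xs, W, U, b, hidden=16):
    """Run a single LSTM cell over a sequence of feature vectors.
    W, U, b hold the stacked input/forget/output/candidate parameters
    (shapes: (4*hidden, n_in), (4*hidden, hidden), (4*hidden,))."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = W @ x + U @ h + b
        i = sigmoid(z[0 * hidden:1 * hidden])   # input gate
        f = sigmoid(z[1 * hidden:2 * hidden])   # forget gate
        o = sigmoid(z[2 * hidden:3 * hidden])   # output gate
        g = np.tanh(z[3 * hidden:4 * hidden])   # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h  # final hidden state, to be fed to a classifier head

# Toy usage: 4 exams x 8 kinetic-curve features per patient (random stand-ins).
rng = np.random.default_rng(0)
n_in, hidden = 8, 16
W = 0.1 * rng.standard_normal((4 * hidden, n_in))
U = 0.1 * rng.standard_normal((4 * hidden, hidden))
b = np.zeros(4 * hidden)
exams = rng.standard_normal((4, n_in))
h_final = lstm_forward(exams, W, U, b, hidden)
# A logistic head on h_final would yield the recurrence probability.
```

Because the cell consumes the exams one at a time, the same network handles patients with 2, 3, or 4 available exams, which matches the study's 'early exams only' variant.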

Investigation of inter-fraction target motion variations in the context of pencil beam scanned proton therapy in non-small cell lung cancer patients

  • den Otter, L. A.
  • Anakotta, R. M.
  • Weessies, M.
  • Roos, C. T. G.
  • Sijtsema, N. M.
  • Muijs, C. T.
  • Dieters, M.
  • Wijsman, R.
  • Troost, E. G. C.
  • Richter, C.
  • Meijers, A.
  • Langendijk, J. A.
  • Both, S.
  • Knopf, A. C.
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: For locally advanced-stage non-small cell lung cancer (NSCLC), inter-fraction target motion variations during the whole time span of a fractionated treatment course are assessed in a large and representative patient cohort. The primary objective is to develop a suitable motion monitoring strategy for pencil beam scanning proton therapy (PBS-PT) treatments of NSCLC patients during free breathing. METHODS: Weekly 4D computed tomography (4DCT; 41 patients) and daily 4D cone beam computed tomography (4DCBCT; 10 of 41 patients) scans were analyzed for a fully fractionated treatment course. Gross tumor volumes (GTVs) were contoured and the 3D displacement vectors of the centroid positions were compared for all scans. Furthermore, motion amplitude variations in different lung segments were statistically analyzed. The dosimetric impact of target motion variations and target motion assessment was investigated in exemplary patient cases. RESULTS: The median observed centroid motion was 3.4 mm (range: 0.2-12.4 mm) with an average variation of 2.2 mm (range: 0.1-8.8 mm). Ten of 32 patients (31.3%) with an initial motion <5 mm increased beyond a 5-mm motion amplitude during the treatment course. Motion observed in the 4DCBCT scans deviated on average 1.5 mm (range: 0.0-6.0 mm) from the motion observed in the 4DCTs. Larger motion variations for one example patient compromised treatment plan robustness while no dosimetric influence was seen due to motion assessment biases in another example case. CONCLUSIONS: Target motion variations were investigated during the course of radiotherapy for NSCLC patients. Patients with initial GTV motion amplitudes of < 2 mm can be assumed to be stable in motion during the treatment course. For treatments of NSCLC patients who exhibit motion amplitudes of > 2 mm, 4DCBCT should be considered for motion monitoring due to substantial motion variations observed.

AI-based Prognostic Imaging Biomarkers for Precision Neurooncology: the ReSPOND Consortium

  • Davatzikos, C.
  • Barnholtz-Sloan, J. S.
  • Bakas, S.
  • Colen, R.
  • Mahajan, A.
  • Quintero, C. B.
  • Font, J. C.
  • Puig, J.
  • Jain, R.
  • Sloan, A. E.
  • Badve, C.
  • Marcus, D. S.
  • Choi, Y. S.
  • Lee, S. K.
  • Chang, J. H.
  • Poisson, L. M.
  • Griffith, B.
  • Dicker, A. P.
  • Flanders, A. E.
  • Booth, T. C.
  • Rathore, S.
  • Akbari, H.
  • Sako, C.
  • Bilello, M.
  • Shukla, G.
  • Kazerooni, A. F.
  • Brem, S.
  • Lustig, R.
  • Mohan, S.
  • Bagley, S.
  • Nasrallah, M.
  • O'Rourke, D. M.
Neuro-oncology 2020 Journal Article, cited 0 times

Immunotherapy in Metastatic Colorectal Cancer: Could the Latest Developments Hold the Key to Improving Patient Survival?

  • Damilakis, E.
  • Mavroudis, D.
  • Sfakianaki, M.
  • Souglakos, J.
Cancers (Basel) 2020 Journal Article, cited 0 times
Immunotherapy has considerably increased the number of anticancer agents in many tumor types including metastatic colorectal cancer (mCRC). Anti-PD-1 (programmed death 1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) immune checkpoint inhibitors (ICI) have been shown to benefit the mCRC patients with mismatch repair deficiency (dMMR) or high microsatellite instability (MSI-H). However, ICI is not effective in mismatch repair proficient (pMMR) colorectal tumors, which constitute a large population of patients. Several clinical trials evaluating the efficacy of immunotherapy combined with chemotherapy, radiation therapy, or other agents are currently ongoing to extend the benefit of immunotherapy to pMMR mCRC cases. In dMMR patients, MSI testing through immunohistochemistry and/or polymerase chain reaction can be used to identify patients that will benefit from immunotherapy. Next-generation sequencing has the ability to detect MSI-H using a low amount of nucleic acids and its application in clinical practice is currently being explored. Preliminary data suggest that radiomics is capable of discriminating MSI from microsatellite stable mCRC and may play a role as an imaging biomarker in the future. Tumor mutational burden, neoantigen burden, tumor-infiltrating lymphocytes, immunoscore, and gastrointestinal microbiome are promising biomarkers that require further investigation and validation.

Superpixel-based deep convolutional neural networks and active contour model for automatic prostate segmentation on 3D MRI scans

  • da Silva, Giovanni L F
  • Diniz, Petterson S
  • Ferreira, Jonnison L
  • Franca, Joao V F
  • Silva, Aristofanes C
  • de Paiva, Anselmo C
  • de Cavalcanti, Elton A A
Med Biol Eng Comput 2020 Journal Article, cited 0 times
Automatic and reliable prostate segmentation is an essential prerequisite for assisting the diagnosis and treatment, such as guiding biopsy procedure and radiation therapy. Nonetheless, automatic segmentation is challenging due to the lack of clear prostate boundaries owing to the similar appearance of prostate and surrounding tissues and the wide variation in size and shape among different patients ascribed to pathological changes or different resolutions of images. In this regard, the state-of-the-art includes methods based on a probabilistic atlas, active contour models, and deep learning techniques. However, these techniques have limitations that need to be addressed, such as MRI scans with the same spatial resolution, initialization of the prostate region with well-defined contours and a set of hyperparameters of deep learning techniques determined manually, respectively. Therefore, this paper proposes an automatic and novel coarse-to-fine segmentation method for prostate 3D MRI scans. The coarse segmentation step combines local texture and spatial information using the Intrinsic Manifold Simple Linear Iterative Clustering algorithm and probabilistic atlas in a deep convolutional neural networks model jointly with the particle swarm optimization algorithm to classify prostate and non-prostate tissues. Then, the fine segmentation uses the 3D Chan-Vese active contour model to obtain the final prostate surface. The proposed method has been evaluated on the Prostate 3T and PROMISE12 databases presenting a dice similarity coefficient of 84.86%, relative volume difference of 14.53%, sensitivity of 90.73%, specificity of 99.46%, and accuracy of 99.11%. Experimental results demonstrate the high performance potential of the proposed method compared to those previously published.

Predicting the ISUP grade of clear cell renal cell carcinoma with multiparametric MR and multiphase CT radiomics

  • Cui, Enming
  • Li, Zhuoyong
  • Ma, Changyi
  • Li, Qing
  • Lei, Yi
  • Lan, Yong
  • Yu, Juan
  • Zhou, Zhipeng
  • Li, Ronggang
  • Long, Wansheng
  • Lin, Fan
Eur Radiol 2020 Journal Article, cited 0 times
OBJECTIVE: To investigate externally validated magnetic resonance (MR)-based and computed tomography (CT)-based machine learning (ML) models for grading clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients with pathologically proven ccRCC in 2009-2018 were retrospectively included for model development and internal validation; patients from another independent institution and The Cancer Imaging Archive dataset were included for external validation. Features were extracted from T1-weighted, T2-weighted, corticomedullary-phase (CMP), and nephrographic-phase (NP) MR as well as precontrast-phase (PCP), CMP, and NP CT. CatBoost was used for ML-model investigation. The reproducibility of texture features was assessed using intraclass correlation coefficient (ICC). Accuracy (ACC) was used for ML-model performance evaluation. RESULTS: Twenty external and 440 internal cases were included. Among 368 and 276 texture features from MR and CT, 322 and 250 features with good to excellent reproducibility (ICC >/= 0.75) were included for ML-model development. The best MR- and CT-based ML models satisfactorily distinguished high- from low-grade ccRCCs in internal (MR-ACC = 73% and CT-ACC = 79%) and external (MR-ACC = 74% and CT-ACC = 69%) validation. Compared to single-sequence or single-phase images, the classifiers based on all-sequence MR (71% to 73% in internal and 64% to 74% in external validation) and all-phase CT (77% to 79% in internal and 61% to 69% in external validation) images had significant increases in ACC. CONCLUSIONS: MR- and CT-based ML models are valuable noninvasive techniques for discriminating high- from low-grade ccRCCs, and multiparameter MR- and multiphase CT-based classifiers are potentially superior to those based on single-sequence or single-phase imaging. KEY POINTS: * Both the MR- and CT-based machine learning models are reliable predictors for differentiating high- from low-grade ccRCCs. 
* ML models based on multiparameter MR sequences and multiphase CT images potentially outperform those based on single-sequence or single-phase images in ccRCC grading.

Parallel Implementation of the DRLSE Algorithm

  • Coelho, Daniel Popp
  • Furuie, Sérgio Shiguemi
2020 Conference Proceedings, cited 0 times
The Distance-Regularized Level Set Evolution (DRLSE) algorithm solves many problems that plague the class of Level Set algorithms, but has a significant computational cost and is sensitive to its many parameters. Configuring these parameters is a time-intensive trial-and-error task that limits the usability of the algorithm. This is especially true in the field of Medical Imaging, where it would otherwise be highly suitable. The aim of this work is to develop a parallel implementation of the algorithm using the Compute-Unified Device Architecture (CUDA) for Graphics Processing Units (GPU), which would reduce the computational cost of the algorithm, bringing it to the interactive regime. This would lessen the burden of configuring its parameters and broaden its application. Using consumer-grade hardware, we observed performance gains between roughly 800% and 1700% when comparing against a purely serial C++ implementation we developed, and gains between roughly 180% and 500% when comparing against the MATLAB reference implementation of DRLSE, both depending on input image resolution.
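
For reference, one explicit DRLSE-style update step can be sketched in NumPy. This simplified version uses the quadratic p(s) = s^2/2 regularization potential, for which the distance-regularization term reduces to the Laplacian of phi, rather than the double-well potential of the full algorithm; all parameter values are illustrative defaults, and the per-pixel independence of each term is what makes the GPU parallelization discussed above effective:

```python
import numpy as np

def grad2d(f):
    gy, gx = np.gradient(f)          # np.gradient returns d/dy (axis 0) first
    return gx, gy

def div2d(fx, fy):
    return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

def dirac(phi, eps=1.5):
    """Smoothed Dirac delta that localizes the forces to the zero level set."""
    d = (0.5 / eps) * (1.0 + np.cos(np.pi * phi / eps))
    return np.where(np.abs(phi) <= eps, d, 0.0)

def drlse_step(phi, g, mu=0.2, lam=5.0, alpha=-1.5, dt=1.0, eps=1.5):
    """One explicit DRLSE update on the level-set function phi with edge
    indicator g. Illustrative simplification: the distance-regularization
    term is the Laplacian (quadratic potential), not the double-well one."""
    gx, gy = grad2d(phi)
    mag = np.sqrt(gx**2 + gy**2) + 1e-10
    nx, ny = gx / mag, gy / mag
    lap = div2d(gx, gy)               # Laplacian = div(grad phi)
    edge = div2d(g * nx, g * ny)      # edge-attraction (curvature) term
    d = dirac(phi, eps)
    return phi + dt * (mu * lap + lam * d * edge + alpha * d * g)
```

Every array operation above is an elementwise stencil, so each pixel's update is independent; this is exactly the structure that maps well onto CUDA thread blocks.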

Acute Lymphoblastic Leukemia Detection Using Depthwise Separable Convolutional Neural Networks

  • Clinton Jr, Laurence P
  • Somes, Karen M
  • Chu, Yongjun
  • Javed, Faizan
SMU Data Science Review 2020 Journal Article, cited 0 times

Machine learning and radiomic phenotyping of lower grade gliomas: improving survival prediction

  • Choi, Yoon Seong
  • Ahn, Sung Soo
  • Chang, Jong Hee
  • Kang, Seok-Gu
  • Kim, Eui Hyun
  • Kim, Se Hoon
  • Jain, Rajan
  • Lee, Seung-Koo
Eur Radiol 2020 Journal Article, cited 0 times
BACKGROUND AND PURPOSE: Recent studies have highlighted the importance of isocitrate dehydrogenase (IDH) mutational status in stratifying biologically distinct subgroups of gliomas. This study aimed to evaluate whether MRI-based radiomic features could improve the accuracy of survival predictions for lower grade gliomas over clinical and IDH status. MATERIALS AND METHODS: Radiomic features (n = 250) were extracted from preoperative MRI data of 296 lower grade glioma patients from databases at our institution (n = 205) and The Cancer Genome Atlas (TCGA)/The Cancer Imaging Archive (TCIA) (n = 91) datasets. For predicting overall survival, random survival forest models were trained with radiomic features and non-imaging prognostic factors, including age, resection extent, WHO grade, and IDH status, on the institutional dataset, and validated on the TCGA/TCIA dataset. The performance of the random survival forest (RSF) model and the incremental value of radiomic features were assessed by time-dependent receiver operating characteristics. RESULTS: The radiomics RSF model identified 71 radiomic features to predict overall survival, which were successfully validated on the TCGA/TCIA dataset (iAUC, 0.620; 95% CI, 0.501-0.756). Relative to the RSF model from the non-imaging prognostic parameters, the addition of radiomic features significantly improved the overall survival prediction accuracy of the random survival forest model (iAUC, 0.627 vs. 0.709; difference, 0.097; 95% CI, 0.003-0.209). CONCLUSION: Radiomic phenotyping with machine learning can improve survival prediction over clinical profile and genomic data for lower grade gliomas. KEY POINTS: * Radiomics analysis with machine learning can improve survival prediction over the non-imaging factors (clinical and molecular profiles) for lower grade gliomas, across different institutions.

Reproducible and Interpretable Spiculation Quantification for Lung Cancer Screening

  • Choi, Wookjin
  • Nadeem, Saad
  • Alam, Sadegh R.
  • Deasy, Joseph O.
  • Tannenbaum, Allen
  • Lu, Wei
Computer methods and programs in biomedicine 2020 Journal Article, cited 0 times
Spiculations, spikes on the surface of pulmonary nodules, are important predictors of lung cancer malignancy. In this study, we proposed an interpretable and parameter-free technique to quantify spiculation using an area distortion metric obtained by conformal (angle-preserving) spherical parameterization. We exploit the insight that for an angle-preserved spherical mapping of a given nodule, the corresponding negative area distortion precisely characterizes the spiculations on that nodule. We introduced novel spiculation scores based on the area distortion metric and spiculation measures. We also semi-automatically segment the lung nodule (for reproducibility) as well as vessel and wall attachment to differentiate real spiculations from lobulation and attachment. A simple pathological malignancy prediction model is also introduced. We used the publicly available LIDC-IDRI dataset's pathologist (strong-label) and radiologist (weak-label) ratings to train and test radiomics models containing this feature, and then externally validated the models. We achieved AUC = 0.80 and 0.76, respectively, with the models trained on the 811 weakly-labeled LIDC datasets and tested on the 72 strongly-labeled LIDC and 73 LUNGx datasets; the previous best model for LUNGx had AUC = 0.68. The number-of-spiculations feature was found to be highly correlated (by Spearman's rank correlation) with the radiologists' spiculation score. We developed a reproducible and interpretable, parameter-free technique for quantifying spiculations on nodules. The spiculation quantification measures were then applied to the radiomics framework for pathological malignancy prediction with reproducible semi-automatic segmentation of the nodule. Using our interpretable features (size, attachment, spiculation, lobulation), we were able to achieve higher performance than previous models. In the future, we will exhaustively test our model for lung cancer screening in the clinic.

A Joint Detection and Recognition Approach to Lung Cancer Diagnosis From CT Images With Label Uncertainty

  • L. Chenyang
  • S. C. Chan
IEEE Access 2020 Journal Article, cited 0 times
Automatic lung cancer diagnosis from computer tomography (CT) images requires the detection of nodule location as well as nodule malignancy prediction. This article proposes a joint lung nodule detection and classification network for simultaneous lung nodule detection, segmentation and classification subject to possible label uncertainty in the training set. It operates in an end-to-end manner and provides detection and classification of nodules simultaneously together with a segmentation of the detected nodules. Both the nodule detection and classification subnetworks of the proposed joint network adopt a 3-D encoder-decoder architecture for better exploration of the 3-D data. Moreover, the classification subnetwork utilizes the features extracted from the detection subnetwork and multiscale nodule-specific features for boosting the classification performance. The former serves as valuable prior information for optimizing the more complicated 3D classification network directly to better distinguish suspicious nodules from other tissues compared with direct backpropagation from the decoder. Experimental results show that this co-training yields better performance on both tasks. The framework is validated on the LUNA16 and LIDC-IDRI datasets and a pseudo-label approach is proposed for addressing the label uncertainty problem due to inconsistent annotations/labels. Experimental results show that the proposed nodule detector outperforms the state-of-the-art algorithms and yields comparable performance to state-of-the-art nodule classification algorithms when classification alone is considered. Since our joint detection/recognition approach can directly detect nodules and classify their malignancy instead of performing the tasks separately, our approach is more practical for automatic cancer and nodule detection.

Aggregating Multi-scale Prediction Based on 3D U-Net in Brain Tumor Segmentation

  • Chen, Minglin
  • Wu, Yaozu
  • Wu, Jianhuang
2020 Conference Paper, cited 0 times
Magnetic resonance imaging (MRI) is the dominant modality used in the initial evaluation of patients with primary brain tumors due to its superior image resolution and high safety profile. Automated segmentation of brain tumors from MRI is critical in the determination of response to therapy. In this paper, we propose a novel method which aggregates multi-scale prediction from 3D U-Net to segment enhancing tumor (ET), whole tumor (WT) and tumor core (TC) from multimodal MRI. Multi-scale prediction is derived from the decoder part of 3D U-Net at different resolutions. The final prediction takes the minimum value of the corresponding pixel from the upsampling multi-scale prediction. Aggregating multi-scale prediction can add constraints to the network which is beneficial for limited data. Additionally, we employ model ensembling strategy to further improve the performance of the proposed network. Finally, we achieve dice scores of 0.7745, 0.8640 and 0.7914, and Hausdorff distances (95th percentile) of 4.2365, 6.9381 and 6.6026 for ET, WT and TC respectively on the test set in BraTS 2019.
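The aggregation step described above (taking the per-voxel minimum of upsampled multi-scale predictions) can be sketched as follows; the function name and the nearest-neighbour upsampling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def aggregate_multiscale(preds, full_shape):
    """Aggregate per-scale probability maps by the per-voxel minimum after
    nearest-neighbour upsampling to the full resolution (illustrative sketch)."""
    upsampled = []
    for p in preds:
        # Nearest-neighbour upsampling via index repetition along each axis.
        factors = [f // s for f, s in zip(full_shape, p.shape)]
        u = p
        for axis, fac in enumerate(factors):
            u = np.repeat(u, fac, axis=axis)
        upsampled.append(u)
    # The minimum acts as a conservative consensus across scales.
    return np.minimum.reduce(upsampled)
```

Taking the minimum means a voxel is labeled tumor only if every scale agrees, which is the constraint the abstract describes as beneficial for limited data.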

Automatic Classification of Brain Tumor Types with the MRI Scans and Histopathology Images

  • Chan, Hsiang-Wei
  • Weng, Yan-Ting
  • Huang, Teng-Yi
2020 Conference Paper, cited 0 times
In this study, we used two neural networks, VGG16 and ResNet50, to extract features from the whole slide images. To classify the three types of brain tumors (i.e., glioblastoma, oligodendroglioma, and astrocytoma), we tried several methods, including k-means clustering and random forest classification. In the prediction stage, we compared the prediction results with and without MRI features. The results support that the classification method performed with image features extracted by VGG16 has the highest prediction accuracy. Moreover, we found that combining them with radiomics generated from MR images slightly improved the accuracy of the classification.

Evaluation of data augmentation via synthetic images for improved breast mass detection on mammograms using deep learning

  • Cha, K. H.
  • Petrick, N.
  • Pezeshk, A.
  • Graff, C. G.
  • Sharma, D.
  • Badal, A.
  • Sahiner, B.
J Med Imaging (Bellingham) 2020 Journal Article, cited 1 times
We evaluated whether using synthetic mammograms for training data augmentation may reduce the effects of overfitting and increase the performance of a deep learning algorithm for breast mass detection. Synthetic mammograms were generated using in silico procedural analytic breast and breast mass modeling algorithms followed by simulated x-ray projections of the breast models into mammographic images. In silico breast phantoms containing masses were modeled across the four BI-RADS breast density categories, and the masses were modeled with different sizes, shapes, and margins. A Monte Carlo-based x-ray transport simulation code, MC-GPU, was used to project the three-dimensional phantoms into realistic synthetic mammograms. 2000 mammograms with 2522 masses were generated to augment a real data set during training. From the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set, we used 1111 mammograms (1198 masses) for training, 120 mammograms (120 masses) for validation, and 361 mammograms (378 masses) for testing. We used faster R-CNN for our deep learning network with pretraining from ImageNet using the Resnet-101 architecture. We compared the detection performance when the network was trained using different percentages of the real CBIS-DDSM training set (100%, 50%, and 25%), and when these subsets of the training set were augmented with 250, 500, 1000, and 2000 synthetic mammograms. Free-response receiver operating characteristic (FROC) analysis was performed to compare performance with and without the synthetic mammograms. We generally observed an improved test FROC curve when training with the synthetic images compared to training without them, and the amount of improvement depended on the number of real and synthetic images used in training. Our study shows that enlarging the training data with synthetic samples can increase the performance of deep learning systems.
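The study's training-set composition (a percentage of the real CBIS-DDSM cases augmented with a fixed number of synthetic mammograms) can be sketched as below; the helper name and list contents are hypothetical:

```python
import random

def build_training_set(real_cases, synthetic_cases, real_frac, n_synthetic, seed=0):
    """Compose a training set from a fraction of the real mammograms plus a
    fixed number of synthetic ones, mirroring the {100%, 50%, 25%} x
    {250, 500, 1000, 2000} experimental grid (illustrative helper)."""
    rng = random.Random(seed)
    n_real = int(len(real_cases) * real_frac)
    chosen = rng.sample(real_cases, n_real)
    chosen += rng.sample(synthetic_cases, min(n_synthetic, len(synthetic_cases)))
    rng.shuffle(chosen)
    return chosen
```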

Detection of Tumor Slice in Brain Magnetic Resonance Images by Feature Optimized Transfer Learning

  • Celik, Salih
  • KASIM, Ömer
Aksaray University Journal of Science and Engineering 2020 Journal Article, cited 0 times

The Impact of Normalization Approaches to Automatically Detect Radiogenomic Phenotypes Characterizing Breast Cancer Receptors Status

  • Castaldo, Rossana
  • Pane, Katia
  • Nicolai, Emanuele
  • Salvatore, Marco
  • Franzese, Monica
Cancers (Basel) 2020 Journal Article, cited 0 times
In breast cancer studies, combining quantitative radiomic with genomic signatures can help identify and characterize radiogenomic phenotypes as a function of molecular receptor status. Biomedical image processing lacks standards for radiomic feature normalization, and neglecting feature normalization can highly bias the overall analysis. This study evaluates the effect of several normalization techniques to predict four clinical phenotypes, namely estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and triple negative (TN) status, from quantitative features. The Cancer Imaging Archive (TCIA) radiomic features from 91 T1-weighted dynamic contrast-enhanced MRI scans of invasive breast cancers were investigated in association with breast invasive carcinoma miRNA expression profiling from The Cancer Genome Atlas (TCGA). Three advanced machine learning techniques (Support Vector Machine, Random Forest, and Naive Bayes) were investigated to distinguish between molecular prognostic indicators and achieved area under the ROC curve (AUC) values of 86%, 93%, 91%, and 91% for the prediction of ER+ versus ER-, PR+ versus PR-, HER2+ versus HER2-, and triple-negative, respectively. In conclusion, radiomic features enable discrimination of major breast cancer molecular subtypes and may yield a potential imaging biomarker for advancing precision medicine.

Multimodal mixed reality visualisation for intraoperative surgical guidance

  • Cartucho, João
  • Shapira, David
  • Ashrafian, Hutan
  • Giannarou, Stamatia
International journal of computer assisted radiology and surgery 2020 Journal Article, cited 0 times

Standardization of brain MR images across machines and protocols: bridging the gap for MRI-based radiomics

  • Carré, Alexandre
  • Klausner, Guillaume
  • Edjlali, Myriam
  • Lerousseau, Marvin
  • Briend-Diop, Jade
  • Sun, Roger
  • Ammari, Samy
  • Reuzé, Sylvain
  • Andres, Emilie Alvarez
  • Estienne, Théo
Scientific Reports 2020 Journal Article, cited 0 times

Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations

  • Cardenas, Carlos E
  • Mohamed, Abdallah S R
  • Yang, Jinzhong
  • Gooding, Mark
  • Veeraraghavan, Harini
  • Kalpathy-Cramer, Jayashree
  • Ng, Sweet Ping
  • Ding, Yao
  • Wang, Jihong
  • Lai, Stephen Y
  • Fuller, Clifton D
  • Sharp, Greg
Med Phys 2020 Dataset, cited 0 times
PURPOSE: The use of magnetic resonance imaging (MRI) in radiotherapy treatment planning has rapidly increased due to its ability to evaluate patient's anatomy without the use of ionizing radiation and due to its high soft tissue contrast. For these reasons, MRI has become the modality of choice for longitudinal and adaptive treatment studies. Automatic segmentation could offer many benefits for these studies. In this work, we describe a T2-weighted MRI dataset of head and neck cancer patients that can be used to evaluate the accuracy of head and neck normal tissue auto-segmentation systems through comparisons to available expert manual segmentations. ACQUISITION AND VALIDATION METHODS: T2-weighted MRI images were acquired for 55 head and neck cancer patients. These scans were collected after radiotherapy computed tomography (CT) simulation scans using a thermoplastic mask to replicate patient treatment position. All scans were acquired on a single 1.5 T Siemens MAGNETOM Aera MRI with two large four-channel flex phased-array coils. The scans covered the region encompassing the nasopharynx region cranially and supraclavicular lymph node region caudally, when possible, in the superior-inferior direction. Manual contours were created for the left/right submandibular gland, left/right parotids, left/right lymph node level II, and left/right lymph node level III. These contours underwent quality assurance to ensure adherence to predefined guidelines, and were corrected if edits were necessary. DATA FORMAT AND USAGE NOTES: The T2-weighted images and RTSTRUCT files are available in DICOM format. The regions of interest are named based on AAPM's Task Group 263 nomenclature recommendations (Glnd_Submand_L, Glnd_Submand_R, LN_Neck_II_L, Parotid_L, Parotid_R, LN_Neck_II_R, LN_Neck_III_L, LN_Neck_III_R). 
This dataset is available on The Cancer Imaging Archive (TCIA) by the National Cancer Institute under the collection "AAPM RT-MAC Grand Challenge 2019". POTENTIAL APPLICATIONS: This dataset provides head and neck patient MRI scans to evaluate auto-segmentation systems on T2-weighted images. Additional anatomies could be provided at a later time to enhance the existing library of contours.

FPB: Improving Multi-Scale Feature Representation Inside Convolutional Layer Via Feature Pyramid Block

  • Cao, Zheng
  • Zhang, Kailai
  • Wu, Ji
2020 Conference Paper, cited 0 times
Multi-scale features exist widely in biomedical images. For example, the scale of lesions may vary greatly according to different diseases. Effective representation of multi-scale features is essential for fully perceiving and understanding objects, which guarantees the performance of models. However, in biomedical image tasks, the insufficiency of data may prevent models from effectively capturing multi-scale features. In this paper, we propose Feature Pyramid Block (FPB), a novel structure to improve multi-scale feature representation within a single convolutional layer, which can be easily plugged into existing convolutional networks. Experiments on public biomedical image datasets prove consistent performance improvement with FPB. Furthermore, the convergence speed is faster and the computational costs are lower when using FPB, which proves high efficiency of our method.

A quantitative model based on clinically relevant MRI features differentiates lower grade gliomas and glioblastoma

  • Cao, H.
  • Erson-Omay, E. Z.
  • Li, X.
  • Gunel, M.
  • Moliterno, J.
  • Fulbright, R. K.
Eur Radiol 2020 Journal Article, cited 0 times
OBJECTIVES: To establish a quantitative MR model that uses clinically relevant features of tumor location and tumor volume to differentiate lower grade glioma (LRGG, grades II and III) and glioblastoma (GBM, grade IV). METHODS: We extracted tumor location and tumor volume (enhancing tumor, non-enhancing tumor, peritumoral edema) features from 229 The Cancer Genome Atlas (TCGA)-LGG and TCGA-GBM cases. Through two sampling strategies, i.e., institution-based sampling and repeated random sampling (10 times, 70% training set vs 30% validation set), LASSO (least absolute shrinkage and selection operator) regression and nine machine-learning-based models were established and evaluated. RESULTS: Principal component analysis of 229 TCGA-LGG and TCGA-GBM cases suggested that the LRGG and GBM cases could be differentiated by the extracted features. Among the nine machine learning methods, stack modeling and support vector machine achieved the highest performance (institution-based sampling validation set, AUC > 0.900, classifier accuracy > 0.790; repeated random sampling, average validation set AUC > 0.930, classifier accuracy > 0.850). For the LASSO method, a regression model based on tumor frontal lobe percentage and enhancing and non-enhancing tumor volume achieved the highest performance (institution-based sampling validation set, AUC 0.909, classifier accuracy 0.830). The formula for the best-performing LASSO model was established. CONCLUSIONS: Computer-generated, clinically meaningful MRI features of tumor location and component volumes resulted in models with high performance (validation set AUC > 0.900, classifier accuracy > 0.790) to differentiate lower grade glioma and glioblastoma. KEY POINTS: * Lower grade glioma and glioblastoma have significantly different location and component volume distributions. * We built machine learning prediction models that could help accurately differentiate lower grade glioma and GBM cases. * We introduced a fast evaluation model for possible clinical differentiation and further analysis.
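A LASSO model of this kind combines a handful of interpretable features linearly; a minimal sketch of applying such a model is below, with entirely hypothetical coefficients (the published formula is not reproduced here):

```python
import math

# Hypothetical coefficients for illustration only; the published LASSO model
# uses tumor frontal lobe percentage and enhancing/non-enhancing volumes.
COEFS = {"frontal_lobe_pct": -2.0, "enh_vol_ml": 0.05, "nonenh_vol_ml": 0.02}
INTERCEPT = -1.0

def gbm_score(features):
    """Linear LASSO-style score mapped to (0, 1) via a sigmoid; values above
    0.5 would suggest GBM over lower grade glioma in this toy setup."""
    z = INTERCEPT + sum(w * features[name] for name, w in COEFS.items())
    return 1.0 / (1.0 + math.exp(-z))
```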

Formal methods for prostate cancer gleason score and treatment prediction using radiomic biomarkers

  • Brunese, Luca
  • Mercaldo, Francesco
  • Reginelli, Alfonso
  • Santone, Antonella
Magnetic Resonance Imaging 2020 Journal Article, cited 11 times

3D automatic levels propagation approach to breast MRI tumor segmentation

  • Bouchebbah, Fatah
  • Slimani, Hachem
Expert Systems with Applications 2020 Journal Article, cited 0 times
Magnetic Resonance Imaging (MRI) is a relevant tool for breast cancer screening. Moreover, an accurate 3D segmentation of breast tumors from MRI scans plays a key role in the analysis of the disease. In this manuscript, we propose a novel 3D automatic method for segmenting MRI breast tumors, called 3D Automatic Levels Propagation Approach (3D-ALPA). The proposed method performs the segmentation automatically in two steps: in the first step, the entire MRI volume to process is segmented slice by slice, using a new automatic approach called 2D Automatic Levels Propagation Approach (2D-ALPA), which is an improved version of a previous semi-automatic approach named 2D Levels Propagation Approach (2D-LPA). In the second step, the partial segmentations obtained after the application of 2D-ALPA are recombined to rebuild the complete volume(s) of tumor(s). 3D-ALPA has many characteristics, mainly: it is an automatic method which can handle multi-tumor segmentation, and it is easily applicable in the Axial, Coronal, as well as Sagittal planes. Therefore, it offers a multi-view representation of the segmented tumor(s). To validate the new 3D-ALPA method, we first performed tests on a 2D private dataset composed of eighteen patients to estimate the accuracy of the new 2D-ALPA in comparison to the previous 2D-LPA. The obtained results have been in favor of the proposed 2D-ALPA, showing an improvement in accuracy after integrating the automatization into the 2D-ALPA approach. Then, we evaluated the complete 3D-ALPA method on a 3D private dataset constituted of MRI exams of twenty-two patients having real breast tumors of different types, and on the public RIDER dataset. Essentially, 3D-ALPA has been evaluated regarding two main features: segmentation accuracy and running time, considering two kinds of breast tumors: non-enhanced and enhanced tumors. The experimental studies have shown that 3D-ALPA produced better results for both kinds of tumors than a recent, concurrent method in the literature that addresses the same problem.

Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline

  • Bonavita, I.
  • Rafael-Palou, X.
  • Ceresa, M.
  • Piella, G.
  • Ribas, V.
  • Gonzalez Ballester, M. A.
Comput Methods Programs Biomed 2020 Journal Article, cited 3 times
BACKGROUND AND OBJECTIVE: The early identification of malignant pulmonary nodules is critical for a better lung cancer prognosis and less invasive chemo- or radiotherapies. Nodule malignancy assessment done by radiologists is extremely useful for planning a preventive intervention but is, unfortunately, a complex, time-consuming and error-prone task. This explains the lack of large datasets containing radiologists' malignancy characterization of nodules. METHODS: In this article, we propose to assess nodule malignancy through 3D convolutional neural networks and to integrate it into an automated end-to-end existing pipeline of lung cancer detection. For training and testing purposes we used independent subsets of the LIDC dataset. RESULTS: Adding the probabilities of nodule malignancy to a baseline lung cancer pipeline improved its F1-weighted score by 14.7%, whereas integrating the malignancy model itself using transfer learning outperformed the baseline prediction by 11.8% of F1-weighted score. CONCLUSIONS: Despite the limited size of the lung cancer datasets, integrating predictive models of nodule malignancy improves prediction of lung cancer.

Dynamic conformal arcs for lung stereotactic body radiation therapy: A comparison with volumetric-modulated arc therapy

  • Bokrantz, R.
  • Wedenberg, M.
  • Sandwall, P.
J Appl Clin Med Phys 2020 Journal Article, cited 1 times
This study constitutes a feasibility assessment of dynamic conformal arc (DCA) therapy as an alternative to volumetric-modulated arc therapy (VMAT) for stereotactic body radiation therapy (SBRT) of lung cancer. The rationale for DCA is lower geometric complexity and hence reduced risk for interplay errors induced by respiratory motion. Forward planned DCA and inverse planned DCA based on segment-weight optimization were compared to VMAT for single arc treatments of five lung patients. Analysis of dose-volume histograms and clinical goal fulfillment revealed that DCA can generate satisfactory and near equivalent dosimetric quality to VMAT, except for complex tumor geometries. Segment-weight optimized DCA provided spatial dose distributions qualitatively similar to those for VMAT. Our results show that DCA, and particularly segment-weight optimized DCA, may be an attractive alternative to VMAT for lung SBRT treatments if the patient anatomy is favorable.

Multiparametric MRI and auto-fixed volume of interest-based radiomics signature for clinically significant peripheral zone prostate cancer

  • Bleker, J.
  • Kwee, T. C.
  • Dierckx, Rajo
  • de Jong, I. J.
  • Huisman, H.
  • Yakar, D.
Eur Radiol 2020 Journal Article, cited 2 times
OBJECTIVES: To create a radiomics approach based on multiparametric magnetic resonance imaging (mpMRI) features extracted from an auto-fixed volume of interest (VOI) that quantifies the phenotype of clinically significant (CS) peripheral zone (PZ) prostate cancer (PCa). METHODS: This study included 206 patients with 262 prospectively called mpMRI prostate imaging reporting and data system 3-5 PZ lesions. Gleason scores > 6 were defined as CS PCa. Features were extracted with an auto-fixed 12-mm spherical VOI placed around a pin point in each lesion. The value of dynamic contrast-enhanced (DCE) imaging, multivariate feature selection and extreme gradient boosting (XGB) vs. univariate feature selection and random forest (RF), expert-based feature pre-selection, and the addition of image filters was investigated using the training (171 lesions) and test (91 lesions) datasets. RESULTS: The best model with features from T2-weighted (T2-w) + diffusion-weighted imaging (DWI) + DCE had an area under the curve (AUC) of 0.870 (95% CI 0.754-0.980). Removal of DCE features decreased the AUC to 0.816 (95% CI 0.710-0.920), although not significantly (p = 0.119). Multivariate and XGB outperformed univariate and RF (p = 0.028). Expert-based feature pre-selection and image filters had no significant contribution. CONCLUSIONS: The phenotype of CS PZ PCa lesions can be quantified using a radiomics approach based on features extracted from T2-w + DWI using an auto-fixed VOI. Although DCE features improve diagnostic performance, this is not statistically significant. Multivariate feature selection and XGB should be preferred over univariate feature selection and RF. The developed model may be a valuable addition to traditional visual assessment in diagnosing CS PZ PCa. 
KEY POINTS: * T2-weighted and diffusion-weighted imaging features are essential components of a radiomics model for clinically significant prostate cancer; addition of dynamic contrast-enhanced imaging does not significantly improve diagnostic performance. * Multivariate feature selection and extreme gradient boosting outperform univariate feature selection and random forest. * The developed radiomics model that extracts multiparametric MRI features with an auto-fixed volume of interest may be a valuable addition to visual assessment in diagnosing clinically significant prostate cancer.
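An auto-fixed 12-mm spherical VOI of the kind described can be realized as a Boolean mask around the seed voxel; a minimal NumPy sketch, assuming the voxel spacing in mm is known from the image header (function name is illustrative):

```python
import numpy as np

def spherical_voi_mask(shape, center, spacing, diameter_mm=12.0):
    """Boolean mask of an auto-fixed spherical VOI around a seed voxel.
    `spacing` is the voxel size in mm along each axis."""
    grids = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    # Squared physical distance of every voxel from the seed, in mm^2.
    r2 = sum(((g - c) * s) ** 2 for g, c, s in zip(grids, center, spacing))
    return r2 <= (diameter_mm / 2.0) ** 2
```

Radiomic features would then be computed over `image[mask]`, making the extraction independent of a manually drawn lesion contour.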

Convolutional neural networks for head and neck tumor segmentation on 7-channel multiparametric MRI: a leave-one-out analysis

  • Bielak, Lars
  • Wiedenmann, Nicole
  • Berlin, Arnie
  • Nicolay, Nils Henrik
  • Gunashekar, Deepa Darshini
  • Hagele, Leonard
  • Lottner, Thomas
  • Grosu, Anca-Ligia
  • Bock, Michael
Radiat Oncol 2020 Journal Article, cited 1 times
BACKGROUND: Automatic tumor segmentation based on Convolutional Neural Networks (CNNs) has been shown to be a valuable tool in treatment planning and clinical decision making. We investigate the influence of 7 MRI input channels of a CNN with respect to the segmentation performance of head and neck cancer. METHODS: Head and neck cancer patients underwent multi-parametric MRI including T2w, pre- and post-contrast T1w, T2*, perfusion (ktrans, ve) and diffusion (ADC) measurements at 3 time points before and during radiochemotherapy. The 7 different MRI contrasts (input channels) and manually defined gross tumor volumes (primary tumor and lymph node metastases) were used to train CNNs for lesion segmentation. A reference CNN with all input channels was compared to individually trained CNNs where one of the input channels was left out to identify which MRI contrast contributes the most to the tumor segmentation task. A statistical analysis was employed to account for random fluctuations in the segmentation performance. RESULTS: The CNN segmentation performance scored up to a Dice similarity coefficient (DSC) of 0.65. The network trained without T2* data generally yielded the worst results, with a drop of ΔDSC = 5.7% for the primary tumor (GTV-T) and ΔDSC = 5.8% for lymph node metastases (GTV-Ln) compared to the network containing all input channels. Overall, the ADC input channel showed the least impact on segmentation performance, with ΔDSC = 2.4% for the primary tumor and ΔDSC = 2.2% for lymph node metastases, respectively. CONCLUSIONS: We developed a method to reduce overall scan times in MRI protocols by prioritizing those sequences that add the most unique information for the task of automatic tumor segmentation. The optimized CNNs could be used to aid in the definition of the GTVs in radiotherapy planning, and the faster imaging protocols will reduce patient scan times, which can increase patient compliance.
TRIAL REGISTRATION: The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under register number DRKS00003830 on August 20th, 2015.
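The leave-one-channel-out analysis reduces to comparing each reduced network's DSC to the reference network's; a trivial sketch (names and the example values are illustrative, in the spirit of the reported figures):

```python
def channel_importance(dsc_reference, dsc_without_channel):
    """Delta-DSC per left-out MRI channel; a larger drop marks a channel
    that contributes more unique information to segmentation."""
    return {ch: dsc_reference - dsc for ch, dsc in dsc_without_channel.items()}

# Hypothetical example: networks trained without T2* or ADC, reference DSC 0.65.
deltas = channel_importance(0.65, {"T2*": 0.593, "ADC": 0.626})
most_informative = max(deltas, key=deltas.get)  # the channel with the largest drop
```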

Isolation of Prostate Gland in T1-Weighted Magnetic Resonance Images using Computer Vision

  • Bhattacharya, Sayantan
  • Sharma, Apoorv
  • Gupta, Rinki
  • Bhan, Anupama
2020 Conference Proceedings, cited 0 times

Deep-learning framework to detect lung abnormality – A study with chest X-Ray and lung CT scan images

  • Bhandary, Abhir
  • Prabhu, G. Ananth
  • Rajinikanth, V.
  • Thanaraj, K. Palani
  • Satapathy, Suresh Chandra
  • Robbins, David E.
  • Shasky, Charles
  • Zhang, Yu-Dong
  • Tavares, João Manuel R. S.
  • Raja, N. Sri Madhava
Pattern Recognition Letters 2020 Journal Article, cited 0 times
Lung abnormalities are highly risky conditions in humans. The early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work aims to propose a Deep-Learning (DL) framework to examine lung pneumonia and cancer. This work proposes two different DL techniques to assess the considered problem: (i) The first DL method, named a modified AlexNet (MAN), is proposed to classify chest X-Ray images into normal and pneumonia classes. In the MAN, the classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Further, its performance is validated against other pre-trained DL techniques, such as AlexNet, VGG16, VGG19 and ResNet50. (ii) The second DL work implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA) based feature selection to enhance the feature vector. The performance of this DL framework is tested using benchmark lung cancer CT images of LIDC-IDRI, and a classification accuracy of 97.27% is attained.
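Serial fusion followed by PCA can be sketched as concatenation of the two feature vectors and projection onto the leading principal components; a minimal NumPy stand-in, not the authors' implementation:

```python
import numpy as np

def serial_fuse(learned, handcrafted):
    """Serial fusion: concatenate learned (CNN) and handcrafted feature
    matrices along the feature axis (rows are samples)."""
    return np.concatenate([learned, handcrafted], axis=1)

def pca_reduce(X, n_components):
    """Project onto the top principal components via SVD of the
    mean-centred feature matrix (a minimal PCA stand-in)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```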

Fuzzy volumetric delineation of brain tumor and survival prediction

  • Bhadani, Saumya
  • Mitra, Sushmita
  • Banerjee, Subhashis
Soft Computing 2020 Journal Article, cited 0 times
A novel three-dimensional detailed delineation algorithm is introduced for Glioblastoma multiforme tumors in MRI. It efficiently delineates the whole tumor, enhancing core, edema and necrosis volumes using fuzzy connectivity and multi-thresholding, based on a single seed voxel. While the whole tumor volume delineation uses FLAIR and T2 MRI channels, the outlining of the enhancing core, necrosis and edema volumes employs the T1C channel. Discrete curve evolution is initially applied for multi-thresholding, to determine intervals around significant (visually critical) points, and a threshold is determined in each interval using bi-level Otsu’s method or Li and Lee’s entropy. This is followed by an interactive whole tumor volume delineation using FLAIR and T2 MRI sequences, requiring a single user-defined seed. An efficient and robust whole tumor extraction is executed using fuzzy connectedness and dynamic thresholding. Finally, the segmented whole tumor volume in T1C MRI channel is again subjected to multi-level segmentation, to delineate its sub-parts, encompassing enhancing core, necrosis and edema. This was followed by survival prediction of patients using the concept of habitats. Qualitative and quantitative evaluation, on FLAIR, T2 and T1C MR sequences of 29 GBM patients, establish its superiority over related methods, visually as well as in terms of Dice scores, Sensitivity and Hausdorff distance.
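A bi-level Otsu threshold of the kind applied within each interval can be sketched as follows (a generic histogram-based implementation, not the authors' code):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Bi-level Otsu: pick the threshold that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # class-0 weight up to each candidate
    w1 = 1.0 - w0                # class-1 weight
    mu = np.cumsum(p * centers)  # cumulative class-0 mass
    mu_t = mu[-1]                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]
```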

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C Chad
Journal of Magnetic Resonance Imaging 2020 Journal Article, cited 0 times

Evaluating the Use of rCBV as a Tumor Grade and Treatment Response Classifier Across NCI Quantitative Imaging Network Sites: Part II of the DSC-MRI Digital Reference Object (DRO) Challenge

  • Bell, Laura C
  • Semmineh, Natenael
  • An, Hongyu
  • Eldeniz, Cihat
  • Wahl, Richard
  • Schmainda, Kathleen M
  • Prah, Melissa A
  • Erickson, Bradley J
  • Korfiatis, Panagiotis
  • Wu, Chengyue
  • Sorace, Anna G
  • Yankeelov, Thomas E
  • Rutledge, Neal
  • Chenevert, Thomas L
  • Malyarenko, Dariya
  • Liu, Yichu
  • Brenner, Andrew
  • Hu, Leland S
  • Zhou, Yuxiang
  • Boxerman, Jerrold L
  • Yen, Yi-Fen
  • Kalpathy-Cramer, Jayashree
  • Beers, Andrew L
  • Muzi, Mark
  • Madhuranthakam, Ananth J
  • Pinho, Marco
  • Johnson, Brian
  • Quarles, C Chad
Tomography 2020 Journal Article, cited 1 times
We have previously characterized the reproducibility of brain tumor relative cerebral blood volume (rCBV) using a dynamic susceptibility contrast magnetic resonance imaging digital reference object across 12 sites using a range of imaging protocols and software platforms. As expected, reproducibility was highest when imaging protocols and software were consistent, but decreased when they were variable. Our goal in this study was to determine the impact of rCBV reproducibility for tumor grade and treatment response classification. We found that varying imaging protocols and software platforms produced a range of optimal thresholds for both tumor grading and treatment response, but the performance of these thresholds was similar. These findings further underscore the importance of standardizing acquisition and analysis protocols across sites and software benchmarking.

Radiogenomic-Based Survival Risk Stratification of Tumor Habitat on Gd-T1w MRI Is Associated with Biological Processes in Glioblastoma

  • Beig, Niha
  • Bera, Kaustav
  • Prasanna, Prateek
  • Antunes, Jacob
  • Correa, Ramon
  • Singh, Salendra
  • Saeed Bamashmos, Anas
  • Ismail, Marwa
  • Braman, Nathaniel
  • Verma, Ruchika
  • Hill, Virginia B
  • Statsevych, Volodymyr
  • Ahluwalia, Manmeet S
  • Varadan, Vinay
  • Madabhushi, Anant
  • Tiwari, Pallavi
Clin Cancer Res 2020 Journal Article, cited 0 times
PURPOSE: To (i) create a survival risk score using radiomic features from the tumor habitat on routine MRI to predict progression-free survival (PFS) in glioblastoma and (ii) obtain a biological basis for these prognostic radiomic features, by studying their radiogenomic associations with molecular signaling pathways. EXPERIMENTAL DESIGN: Two hundred three patients with pretreatment Gd-T1w, T2w, T2w-FLAIR MRI were obtained from 3 cohorts: The Cancer Imaging Archive (TCIA; n = 130), Ivy GAP (n = 32), and Cleveland Clinic (n = 41). Gene-expression profiles of corresponding patients were obtained for TCIA cohort. For every study, following expert segmentation of tumor subcompartments (necrotic core, enhancing tumor, peritumoral edema), 936 3D radiomic features were extracted from each subcompartment across all MRI protocols. Using Cox regression model, radiomic risk score (RRS) was developed for every protocol to predict PFS on the training cohort (n = 130) and evaluated on the holdout cohort (n = 73). Further, Gene Ontology and single-sample gene set enrichment analysis were used to identify specific molecular signaling pathway networks associated with RRS features. RESULTS: Twenty-five radiomic features from the tumor habitat yielded the RRS. A combination of RRS with clinical (age and gender) and molecular features (MGMT and IDH status) resulted in a concordance index of 0.81 (P < 0.0001) on training and 0.84 (P = 0.03) on the test set. Radiogenomic analysis revealed associations of RRS features with signaling pathways for cell differentiation, cell adhesion, and angiogenesis, which contribute to chemoresistance in GBM. CONCLUSIONS: Our findings suggest that prognostic radiomic features from routine Gd-T1w MRI may also be significantly associated with key biological processes that affect response to chemotherapy in GBM.

Integration of proteomics with CT-based qualitative and radiomic features in high-grade serous ovarian cancer patients: an exploratory analysis

  • Beer, Lucian
  • Sahin, Hilal
  • Bateman, Nicholas W
  • Blazic, Ivana
  • Vargas, Hebert Alberto
  • Veeraraghavan, Harini
  • Kirby, Justin
  • Fevrier-Sullivan, Brenda
  • Freymann, John B
  • Jaffe, C Carl
European Radiology 2020 Journal Article, cited 1 times

A Heterogeneous and Multi-Range Soft-Tissue Deformation Model for Applications in Adaptive Radiotherapy

  • Bartelheimer, Kathrin
2020 Thesis, cited 0 times
During fractionated radiotherapy, anatomical changes result in uncertainties in the applied dose distribution. With increasing steepness of applied dose gradients, the relevance of patient deformations increases. Especially in proton therapy, small anatomical changes on the order of millimeters can result in large range uncertainties and therefore in substantial deviations from the planned dose. To quantify the anatomical changes, deformation models are required. With upcoming MR-guidance, soft-tissue deformations gain visibility, but so far only few soft-tissue models meeting the requirements of high-precision radiotherapy exist. Most state-of-the-art models either lack anatomical detail or exhibit long computation times. In this work, a fast soft-tissue deformation model is developed which is capable of considering tissue properties of heterogeneous tissue. The model is based on the chainmail (CM) concept, which is improved by three basic features. For the first time, rotational degrees of freedom are introduced into the CM concept to improve the characteristic deformation behavior. A novel concept for handling multiple deformation initiators is developed to cope with global deformation input. And finally, a concept for handling various shapes of deformation input is proposed to provide high flexibility concerning the design of deformation input. To demonstrate the model's flexibility, it was coupled to a kinematic skeleton model for the head and neck region, which provides anatomically correct deformation input for the bones. For exemplary patient CTs, the combined model was shown to be capable of generating artificially deformed CT images with realistic appearance. This was achieved for small-range deformations on the order of interfractional deformations, as well as for large-range deformations like an arms-up to arms-down deformation, as can occur between images of different modalities.
The deformation results showed a strong improvement in biofidelity, compared to the original chainmail-concept, as well as compared to clinically used image-based deformation methods. The computation times for the model are in the order of 30 min for single-threaded calculations; by simple code parallelization, times in the order of 1 min can be achieved. Applications that require realistic forward deformations of CT images will benefit from the improved biofidelity of the developed model. Envisioned applications are the generation of plan libraries and virtual phantoms, as well as data augmentation for deep learning approaches. Due to the low computation times, the model is also well suited for image registration applications. In this context, it will contribute to an improved calculation of accumulated dose, as is required in high-precision adaptive radiotherapy.

A novel fully automated MRI-based deep-learning method for classification of IDH mutation status in brain gliomas

  • Bangalore Yogananda, Chandan Ganesh
  • Shah, Bhavya R
  • Vejdani-Jahromi, Maryam
  • Nalawade, Sahil S
  • Murugesan, Gowtham K
  • Yu, Frank F
  • Pinho, Marco C
  • Wagner, Benjamin C
  • Mickey, Bruce
  • Patel, Toral R
Neuro-oncology 2020 Journal Article, cited 4 times

Glioma Classification Using Deep Radiomics

  • Banerjee, Subhashis
  • Mitra, Sushmita
  • Masulli, Francesco
  • Rovetta, Stefano
SN Computer Science 2020 Journal Article, cited 1 times
Glioma constitutes 80% of malignant primary brain tumors in adults, and is usually classified as high-grade glioma (HGG) and low-grade glioma (LGG). The LGG tumors are less aggressive, with slower growth rate as compared to HGG, and are responsive to therapy. Tumor biopsy being challenging for brain tumor patients, noninvasive imaging techniques like magnetic resonance imaging (MRI) have been extensively employed in diagnosing brain tumors. Therefore, development of automated systems for the detection and prediction of the grade of tumors based on MRI data becomes necessary for assisting doctors in the framework of augmented intelligence. In this paper, we thoroughly investigate the power of deep convolutional neural networks (ConvNets) for classification of brain tumors using multi-sequence MR images. We propose novel ConvNet models, which are trained from scratch, on MRI patches, slices, and multi-planar volumetric slices. The suitability of transfer learning for the task is next studied by applying two existing ConvNets models (VGGNet and ResNet) trained on ImageNet dataset, through fine-tuning of the last few layers. Leave-one-patient-out testing, and testing on the holdout dataset are used to evaluate the performance of the ConvNets. The results demonstrate that the proposed ConvNets achieve better accuracy in all cases where the model is trained on the multi-planar volumetric dataset. Unlike conventional models, it obtains a testing accuracy of 95% for the low/high grade glioma classification problem. A score of 97% is generated for classification of LGG with/without 1p/19q codeletion, without any additional effort toward extraction and selection of features. We study the properties of self-learned kernels/filters in different layers, through visualization of the intermediate layer outputs. We also compare the results with that of state-of-the-art methods, demonstrating a maximum improvement of 7% on the grading performance of ConvNets and 9% on the prediction of 1p/19q codeletion status.

Multimodal Brain Tumor Segmentation with Normal Appearance Autoencoder

  • Astaraki, Mehdi
  • Wang, Chunliang
  • Carrizo, Gabriel
  • Toma-Dasu, Iuliana
  • Smedby, Örjan
2020 Conference Paper, cited 0 times
We propose a hybrid segmentation pipeline based on the autoencoders’ capability of anomaly detection. To this end, we first introduce a new augmentation technique to generate synthetic paired images. Taking advantage of the paired images, we propose a Normal Appearance Autoencoder (NAA) that is able to remove tumors and thus reconstruct realistic-looking, tumor-free images. After estimating the regions where abnormalities potentially exist, a segmentation network is guided toward the candidate region. We tested the proposed pipeline on the BraTS 2019 database. The preliminary results indicate that the proposed model improved the segmentation accuracy of brain tumor subregions compared to the U-Net model.

Hand-Crafted and Deep Learning-Based Radiomics Models for Recurrence Prediction of Non-Small Cells Lung Cancers

  • Aonpong, Panyanat
  • Iwamoto, Yutaro
  • Wang, Weibin
  • Lin, Lanfen
  • Chen, Yen-Wei
Innovation in Medicine and Healthcare 2020 Journal Article, cited 0 times
This research examines the recurrence of non-small cell lung cancer (NSCLC) using computed tomography (CT) images, avoiding biopsy, since cancer cells may be unevenly distributed, which can lead to sampling error. This work compares two methods, a hand-crafted radiomics model and a deep learning-based radiomics model, using 88 patient samples from an open-access dataset of non-small cell lung cancer in The Cancer Imaging Archive (TCIA) Public Access. In the hand-crafted radiomics models, the patterns of NSCLC CT images were summarized by various statistics as radiomics features. The radiomics features associated with recurrence are selected through three statistical methods: LASSO, Chi-2, and ANOVA. Then, the selected radiomics features were processed using different models. In the deep learning-based radiomics model, the proposed artificial neural network was used to enhance recurrence prediction. The hand-crafted radiomics models with non-selected, LASSO, Chi-2, and ANOVA features give the following results: 76.56% (AUC 0.6361), 76.83% (AUC 0.6375), 78.64% (AUC 0.6778), and 78.17% (AUC 0.6556), respectively; the deep learning-based radiomics models, ResNet50 and DenseNet121, give 79.00% (AUC 0.6714) and 79.31% (AUC 0.6712), respectively.
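As an illustration of the ANOVA-based feature selection step mentioned in the abstract, the one-way F-statistic for a single radiomics feature between recurrence and non-recurrence groups can be sketched as follows (a minimal sketch with hypothetical feature values; the paper's actual pipeline is not reproduced here):

```python
def anova_f(group_a, group_b):
    """One-way ANOVA F-statistic for one feature across two groups
    (e.g. recurrence vs. non-recurrence); larger F -> stronger association."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / na, sum(group_b) / nb
    grand = (sum(group_a) + sum(group_b)) / (na + nb)
    # between-group sum of squares (1 degree of freedom for two groups)
    ss_between = na * (mean_a - grand) ** 2 + nb * (mean_b - grand) ** 2
    # within-group sum of squares (na + nb - 2 degrees of freedom)
    ss_within = (sum((x - mean_a) ** 2 for x in group_a) +
                 sum((x - mean_b) ** 2 for x in group_b))
    return ss_between / (ss_within / (na + nb - 2))

# Hypothetical feature values: well-separated groups yield a large F
print(anova_f([0.1, 0.2, 0.15], [0.8, 0.9, 0.85]))
# Identical groups yield F = 0
print(anova_f([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```

Features would then be ranked by F (or its p-value) and the top-ranked ones kept, analogous to what LASSO and Chi-2 do with their own scoring criteria.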

Automated apparent diffusion coefficient analysis for genotype prediction in lower grade glioma: association with the T2-FLAIR mismatch sign

  • Aliotta, E.
  • Dutta, S. W.
  • Feng, X.
  • Tustison, N. J.
  • Batchala, P. P.
  • Schiff, D.
  • Lopes, M. B.
  • Jain, R.
  • Druzgal, T. J.
  • Mukherjee, S.
  • Patel, S. H.
J Neurooncol 2020 Journal Article, cited 0 times
PURPOSE: The prognosis of lower grade glioma (LGG) patients depends (in large part) on both isocitrate dehydrogenase (IDH) gene mutation and chromosome 1p/19q codeletion status. IDH-mutant LGG without 1p/19q codeletion (IDHmut-Noncodel) often exhibit a unique imaging appearance that includes high apparent diffusion coefficient (ADC) values not observed in other subtypes. The purpose of this study was to develop an ADC analysis-based approach that can automatically identify IDHmut-Noncodel LGG. METHODS: Whole-tumor ADC metrics, including fractional tumor volume with ADC > 1.5 x 10(-3)mm(2)/s (VADC>1.5), were used to identify IDHmut-Noncodel LGG in a cohort of N = 134 patients. Optimal threshold values determined in this dataset were then validated using an external dataset containing N = 93 cases collected from The Cancer Imaging Archive. Classifications were also compared with radiologist-identified T2-FLAIR mismatch sign and evaluated concurrently to identify added value from a combined approach. RESULTS: VADC>1.5 classified IDHmut-Noncodel LGG in the internal cohort with an area under the curve (AUC) of 0.80. An optimal threshold value of 0.35 led to sensitivity/specificity = 0.57/0.93. Classification performance was similar in the validation cohort, with VADC>1.5 >/= 0.35 achieving sensitivity/specificity = 0.57/0.91 (AUC = 0.81). Across both groups, 37 cases exhibited positive T2-FLAIR mismatch sign-all of which were IDHmut-Noncodel. Of these, 32/37 (86%) also exhibited VADC>1.5 >/= 0.35, as did 23 additional IDHmut-Noncodel cases which were negative for T2-FLAIR mismatch sign. CONCLUSION: Tumor subregions with high ADC were a robust indicator of IDHmut-Noncodel LGG, with VADC>1.5 achieving > 90% classification specificity in both internal and validation cohorts. VADC>1.5 exhibited strong concordance with the T2-FLAIR mismatch sign and the combination of both parameters improved sensitivity in detecting IDHmut-Noncodel LGG.
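The thresholding rule described in the abstract lends itself to a short sketch. The two cutoffs (ADC > 1.5 x 10(-3) mm(2)/s per voxel, fractional volume >= 0.35) are taken from the abstract; the function names and toy voxel values are illustrative:

```python
ADC_CUTOFF = 1.5e-3   # mm^2/s, per-voxel ADC cutoff from the paper
V_THRESHOLD = 0.35    # optimal fractional-volume threshold from the paper

def fraction_high_adc(adc_values):
    """Fraction of tumor voxels whose ADC exceeds the cutoff (VADC>1.5)."""
    if not adc_values:
        raise ValueError("empty tumor mask")
    return sum(v > ADC_CUTOFF for v in adc_values) / len(adc_values)

def classify_idhmut_noncodel(adc_values):
    """True -> predicted IDH-mutant without 1p/19q codeletion."""
    return fraction_high_adc(adc_values) >= V_THRESHOLD

# Toy tumor: 40% of voxels above the cutoff -> positive call
voxels = [1.6e-3] * 4 + [1.0e-3] * 6
print(classify_idhmut_noncodel(voxels))  # True
```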

Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network

  • Aldoj, Nader
  • Lukas, Steffen
  • Dewey, Marc
  • Penzkofer, Tobias
European Radiology 2020 Journal Article, cited 1 times

A Novel Approach to Improving Brain Image Classification Using Mutual Information-Accelerated Singular Value Decomposition

  • Al-Saffar, Zahraa A
  • Yildirim, Tülay
IEEE Access 2020 Journal Article, cited 0 times

Pharmacokinetic modeling of dynamic contrast‐enhanced MRI using a reference region and input function tail

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2020 Journal Article, cited 0 times

3D-MCN: A 3D Multi-scale Capsule Network for Lung Nodule Malignancy Prediction

  • Afshar, Parnian
  • Oikonomou, Anastasia
  • Naderkhani, Farnoosh
  • Tyrrell, Pascal N
  • Plataniotis, Konstantinos N
  • Farahani, Keyvan
  • Mohammadi, Arash
Scientific Reports 2020 Journal Article, cited 1 times
Despite the advances in automatic lung cancer malignancy prediction, achieving high accuracy remains challenging. Existing solutions are mostly based on Convolutional Neural Networks (CNNs), which require a large amount of training data. Most of the developed CNN models are based only on the main nodule region, without considering the surrounding tissues. Obtaining high sensitivity is challenging in lung nodule malignancy prediction. Moreover, the interpretability of the proposed techniques should be a consideration when the end goal is to utilize the model in a clinical setting. Capsule networks (CapsNets) are new and revolutionary machine learning architectures proposed to overcome shortcomings of CNNs. Capitalizing on the success of CapsNet in biomedical domains, we propose a novel model for lung tumor malignancy prediction. The proposed framework, referred to as the 3D Multi-scale Capsule Network (3D-MCN), is uniquely designed to benefit from: (i) 3D inputs, providing information about the nodule in 3D; (ii) multi-scale input, capturing the nodule's local features as well as the characteristics of the surrounding tissues; and (iii) CapsNet-based design, being capable of dealing with a small number of training samples. The proposed 3D-MCN architecture predicted lung nodule malignancy with a high accuracy of 93.12%, sensitivity of 94.94%, area under the curve (AUC) of 0.9641, and specificity of 90% when tested on the LIDC-IDRI dataset. When classifying patients as having a malignant condition (i.e., at least one malignant nodule is detected) or not, the proposed model achieved an accuracy of 83%, and a sensitivity and specificity of 84% and 81%, respectively.
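The multi-scale 3D input in (i)-(ii) amounts to cropping nested patches of different sizes around the nodule centre, so the smallest patch captures local nodule features and the largest also covers surrounding tissue. A minimal sketch with toy sizes (the abstract does not state the actual patch sizes):

```python
def crop_3d(volume, center, size):
    """Extract a cubic patch of side `size` centred on `center` (z, y, x),
    clamping the start so the patch stays inside the volume."""
    dims = (len(volume), len(volume[0]), len(volume[0][0]))
    z0, y0, x0 = (max(0, min(c - size // 2, d - size))
                  for c, d in zip(center, dims))
    return [[row[x0:x0 + size] for row in plane[y0:y0 + size]]
            for plane in volume[z0:z0 + size]]

def multiscale_crops(volume, center, sizes=(2, 4)):
    """One patch per scale, all centred on the same nodule."""
    return [crop_3d(volume, center, s) for s in sizes]

# Toy 6x6x6 volume where voxel value encodes its (z, y, x) position
vol = [[[z * 100 + y * 10 + x for x in range(6)]
        for y in range(6)] for z in range(6)]
small, large = multiscale_crops(vol, (3, 3, 3))
print(len(small), len(large))  # 2 4
```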

A novel CAD system to automatically detect cancerous lung nodules using wavelet transform and SVM

  • Abu Baker, Ayman A.
  • Ghadi, Yazeed
International Journal of Electrical and Computer Engineering (IJECE) 2020 Journal Article, cited 0 times
A novel cancerous nodule detection algorithm for computed tomography (CT) images is presented in this paper. CT images are large, high-resolution images. In some cases, cancerous lung nodule lesions may be missed by the radiologist due to fatigue. The CAD system proposed in this paper can help the radiologist detect cancerous nodules in CT images. The proposed algorithm is divided into four stages. In the first stage, an enhancement algorithm is implemented to highlight the suspicious regions. Then, in the second stage, the regions of interest are detected. Adaptive SVM and wavelet transform techniques are used to reduce the detected false positive regions. This algorithm is evaluated using 60 cases (normal and cancerous), and it shows a high sensitivity in detecting cancerous lung nodules, with a TP ratio of 94.5% and an FP ratio of 7 clusters/image.

Three-dimensional visualization of brain tumor progression based accurate segmentation via comparative holographic projection

  • Abdelazeem, R. M.
  • Youssef, D.
  • El-Azab, J.
  • Hassab-Elnaby, S.
  • Agour, M.
PLoS One 2020 Journal Article, cited 0 times
We propose a new optical method based on comparative holographic projection for visual comparison between two abnormal follow-up magnetic resonance (MR) exams of glioblastoma patients to effectively visualize and assess tumor progression. First, the brain tissue and tumor areas are segmented from the MR exams using the fast marching method (FMM). The FMM approach is implemented on a computed pixel weight matrix based on an automated selection of a set of initialized target points. Thereafter, the associated phase holograms are calculated for the segmented structures based on an adaptive iterative Fourier transform algorithm (AIFTA). Within this approach, a spatial multiplexing is applied to reduce the speckle noise. Furthermore, hologram modulation is performed to represent two different reconstruction schemes. In both schemes, all calculated holograms are superimposed into a single two-dimensional (2D) hologram which is then displayed on a reflective phase-only spatial light modulator (SLM) for optical reconstruction. The optical reconstruction of the first scheme displays a 3D map of the tumor allowing to visualize the volume of the tumor after treatment and at the progression. Whereas, the second scheme displays the follow-up exams in a side-by-side mode highlighting tumor areas, so the assessment of each case can be fast achieved. The proposed system can be used as a valuable tool for interpretation and assessment of the tumor progression with respect to the treatment method providing an improvement in diagnosis and treatment planning.

Assessing robustness of radiomic features by image perturbation

  • Zwanenburg, Alex
  • Leger, Stefan
  • Agolli, Linda
  • Pilz, Karoline
  • Troost, Esther G C
  • Richter, Christian
  • Löck, Steffen
Scientific Reports 2019 Journal Article, cited 0 times
Image features need to be robust against differences in positioning, acquisition and segmentation to ensure reproducibility. Radiomic models that only include robust features can be used to analyse new images, whereas models with non-robust features may fail to predict the outcome of interest accurately. Test-retest imaging is recommended to assess robustness, but may not be available for the phenotype of interest. We therefore investigated 18 combinations of image perturbations to determine feature robustness, based on noise addition (N), translation (T), rotation (R), volume growth/shrinkage (V) and supervoxel-based contour randomisation (C). Test-retest and perturbation robustness were compared for a combined total of 4032 morphological, statistical and texture features that were computed from the gross tumour volume in two cohorts with computed tomography imaging: I) 31 non-small-cell lung cancer (NSCLC) patients; II) 19 head-and-neck squamous cell carcinoma (HNSCC) patients. Robustness was determined using the 95% confidence interval (CI) of the intraclass correlation coefficient ICC(1,1). Features with CI >= 0.90 were considered robust. The NTCV, TCV, RNCV and RCV perturbation chains produced similar results and identified the fewest false positive robust features (NSCLC: 0.2-0.9%; HNSCC: 1.7-1.9%). Thus, these perturbation chains may be used as an alternative to test-retest imaging to assess feature robustness.
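The robustness criterion above rests on the intraclass correlation coefficient ICC(1,1), computed for each feature over its repeated (test-retest or perturbed) measurements. A minimal point-estimate sketch with toy data (the paper additionally uses the lower bound of the 95% confidence interval, which is not computed here):

```python
def icc_1_1(measurements):
    """ICC(1,1), one-way random effects: rows = subjects (patients),
    columns = repeated measurements of one feature (e.g. perturbations)."""
    n, k = len(measurements), len(measurements[0])
    grand = sum(sum(row) for row in measurements) / (n * k)
    row_means = [sum(row) / k for row in measurements]
    # between-subject and within-subject mean squares
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - row_means[i]) ** 2
                    for i, row in enumerate(measurements)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# A feature that is identical across perturbations is perfectly robust
print(icc_1_1([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # 1.0
```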

Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network

  • Zuo, Wangxia
  • Zhou, Fuqiang
  • He, Yuzhu
  • Li, Xiaosong
Med Phys 2019 Journal Article, cited 0 times
OBJECTIVE: In the automatic lung nodule detection system, the authenticity of a large number of nodule candidates needs to be judged, which is a classification task. However, the variable shapes and sizes of the lung nodules have posed a great challenge to the classification of candidates. To solve this problem, we propose a method for classifying nodule candidates through three-dimensional (3D) convolution neural network (ConvNet) model which is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS: In this scheme, a novel 3D ConvNet model is preweighted with the weights of the trained 2D ConvNet model, and then the 3D ConvNet model is trained with 3D image volumes. In this way, the knowledge transfer method can make 3D network easier to converge and make full use of the spatial information of nodules with different sizes and shapes to improve the classification accuracy. RESULTS: The experimental results on 551 065 pulmonary nodule candidates in the LUNA16 dataset show that our method gains a competitive average score in the false-positive reduction track in lung nodule detection, with the sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS: The proposed method can maintain satisfactory classification accuracy even when the false-positive rate is extremely small in the face of nodules of different sizes and shapes. Moreover, as a transfer learning idea, the method to transfer knowledge from 2D ConvNet to 3D ConvNet is the first attempt to carry out full migration of parameters of various layers including convolution layers, full connection layers, and classifier between different dimensional models, which is more conducive to utilizing the existing 2D ConvNet resources and generalizing transfer learning schemes.
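The paper transfers weights from a trained 2D ConvNet into a 3D ConvNet across convolution, fully connected, and classifier layers. For a convolution layer, one common way to pre-weight a 3D kernel from a 2D one is to replicate the 2D kernel along the new depth axis and rescale (a sketch in the spirit of kernel "inflation"; the authors' exact mapping may differ):

```python
def inflate_2d_kernel(kernel_2d, depth):
    """Replicate a 2D kernel along a new depth axis, dividing by depth so
    the 3D response to a depth-constant input equals the 2D response."""
    return [[[w / depth for w in row] for row in kernel_2d]
            for _ in range(depth)]

k2d = [[1.0, 2.0],
       [3.0, 4.0]]
k3d = inflate_2d_kernel(k2d, depth=3)
print(len(k3d), len(k3d[0]), len(k3d[0][0]))  # 3 2 2
```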

Comparison of Active Learning Strategies Applied to Lung Nodule Segmentation in CT Scans

  • Zotova, Daria
  • Lisowska, Aneta
  • Anderson, Owen
  • Dilys, Vismantas
  • O’Neil, Alison
2019 Book Section, cited 0 times
Supervised machine learning techniques require large amounts of annotated training data to attain good performance. Active learning aims to ease the data collection process by automatically detecting which instances an expert should annotate in order to train a model as quickly and effectively as possible. Such strategies have been previously reported for medical imaging, but for tasks other than focal pathologies, where there is high class imbalance and heterogeneous background appearance. In this study we evaluate different data selection approaches (random, uncertain, and representative sampling) and a semi-supervised model training procedure (pseudo-labelling), in the context of lung nodule segmentation in CT volumes from the publicly available LIDC-IDRI dataset. We find that active learning strategies allow us to train a model with equal performance but less than half of the annotation effort; data selection by uncertainty sampling offers the most gain, with the incorporation of representativeness or the addition of pseudo-labelling giving further small improvements. We conclude that active learning is a valuable tool and that further development of these strategies can play a key role in making diagnostic algorithms viable.
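The uncertainty-sampling strategy evaluated above can be sketched as ranking unlabelled scans by how close the model's predicted probability is to 0.5, i.e. where the model is least decided (names and the toy probabilities are illustrative):

```python
def select_uncertain(probs, budget):
    """Return indices of the `budget` most uncertain unlabelled samples,
    i.e. those with predicted probability closest to 0.5."""
    ranked = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    return ranked[:budget]

# Hypothetical per-scan nodule probabilities from the current model
probs = [0.9, 0.55, 0.1, 0.48]
print(select_uncertain(probs, budget=2))  # [3, 1]
```

Representative sampling would instead (or additionally) score candidates by their similarity to the rest of the unlabelled pool, and pseudo-labelling would add confident predictions to the training set without expert annotation.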

The Utilization of Consignable Multi-Model in Detection and Classification of Pulmonary Nodules

  • Zia, Muhammad Bilal
  • Juan, Zhao Juan
  • Rehman, Zia Ur
  • Javed, Kamran
  • Rauf, Saad Abdul
  • Khan, Arooj
International Journal of Computer Applications 2019 Journal Article, cited 2 times
Early-stage detection and classification of pulmonary nodules from CT images is a complicated task. Risk assessment for malignancy is usually used to assist the physician in assessing the cancer stage and creating a follow-up prediction strategy. Due to differences in the size, structure, and location of nodules, their classification in computer-assisted diagnostic systems has been a great challenge. While deep learning is currently the most effective solution in terms of image detection and classification, it requires a large amount of training data, which is typically not readily accessible in most routine medical imaging frameworks. Moreover, the opacity of deep neural networks makes their decisions hard for radiologists to interpret. In this paper, a Consignable Multi-Model (CMM) is proposed for the detection and classification of lung nodules, which first detects lung nodules from CT images with different detection algorithms and then classifies them using a Multi-Output DenseNet (MOD). In order to enhance the interpretability of the proposed CMM, two inputs with multiple early outputs have been introduced in the dense blocks. MOD accepts the patches identified in the detection phase at its two inputs and then classifies them as benign or malignant, using early outputs to gain more knowledge of a tumor. Experimental results on the LIDC-IDRI dataset demonstrate a 92.10% accuracy of CMM for lung nodule classification. CMM made substantial progress in the diagnosis of nodules in contrast to existing methods.

Deep Learning for Automated Medical Image Analysis

  • Wentao Zhu
2019 Thesis, cited 0 times
Medical imaging is an essential tool in many areas of medical applications, used for both diagnosis and treatment. However, reading medical images and making diagnosis or treatment recommendations requires specially trained medical specialists. The current practice of reading medical images is labor-intensive, time-consuming, costly, and error-prone. It would be more desirable to have a computer-aided system that can automatically make diagnosis and treatment recommendations. Recent advances in deep learning enable us to rethink the ways of clinician diagnosis based on medical images. Early detection has proven to be critical to give patients the best chance of recovery and survival. Advanced computer-aided diagnosis systems are expected to have high sensitivities and low false positive rates. How to provide accurate diagnosis results and explore different types of clinical data is an important topic in current computer-aided diagnosis research. In this thesis, we will introduce 1) mammograms for detecting breast cancer, the most frequently diagnosed solid cancer in U.S. women; 2) lung computed tomography (CT) images for detecting lung cancer, the most frequently diagnosed malignant cancer; and 3) head and neck CT images for automated delineation of organs at risk in radiotherapy. First, we will show how to employ the adversarial concept to generate hard examples that improve mammogram mass segmentation. Second, we will demonstrate how to use weakly labelled data for mammogram breast cancer diagnosis by efficiently designing deep learning for multi-instance learning. Third, the thesis will walk through the DeepLung system, which combines deep 3D ConvNets and Gradient Boosting Machines (GBM) for automated lung nodule detection and classification. Fourth, we will show how to use weakly labelled data to improve an existing lung nodule detection system by integrating deep learning with a probabilistic graphical model. Lastly, we will demonstrate AnatomyNet, which is thousands of times faster and more accurate than previous methods at automated anatomy segmentation.

Preliminary Clinical Study of the Differences Between Interobserver Evaluation and Deep Convolutional Neural Network-Based Segmentation of Multiple Organs at Risk in CT Images of Lung Cancer

  • Zhu, Jinhan
  • Liu, Yimei
  • Zhang, Jun
  • Wang, Yixuan
  • Chen, Lixin
Frontiers in Oncology 2019 Journal Article, cited 0 times
Background: In this study, publicly available datasets with organs at risk (OAR) structures were used as reference data to compare the differences among several observers. Convolutional neural network (CNN)-based auto-contouring was also used in the analysis. We evaluated the variations among observers and the effect of CNN-based auto-contouring in clinical applications. Materials and methods: A total of 60 publicly available lung cancer CTs with structures were used; 48 cases were used for training, and the other 12 cases were used for testing. The structures of the datasets were used as reference data. Three observers and a CNN-based program performed contouring for the 12 testing cases, and the 3D dice similarity coefficient (DSC) and mean surface distance (MSD) were used to evaluate differences from the reference data. The three observers edited the CNN-based contours, and the results were compared to those of manual contouring. A value of P < 0.05 was considered statistically significant. Results: Compared to the reference data, no statistically significant differences were observed in the DSCs and MSDs among the manual contouring performed by the three observers at the same institution for the heart, esophagus, spinal cord, and left and right lungs. The 95% confidence intervals (CI) and P-values of the CNN-based auto-contouring results compared with the manual results for the heart, esophagus, spinal cord, and left and right lungs were as follows: the DSCs were CNN vs. A: 0.914~0.939 (P = 0.004), 0.746~0.808 (P = 0.002), 0.866~0.887 (P = 0.136), 0.952~0.966 (P = 0.158), and 0.960~0.972 (P = 0.136); CNN vs. B: 0.913~0.936 (P = 0.002), 0.745~0.807 (P = 0.005), 0.864~0.894 (P = 0.239), 0.952~0.964 (P = 0.308), and 0.959~0.971 (P = 0.272); and CNN vs. C: 0.912~0.933 (P = 0.004), 0.748~0.804 (P = 0.002), 0.867~0.890 (P = 0.530), 0.952~0.964 (P = 0.308), and 0.958~0.970 (P = 0.480), respectively. The P-values for the MSDs were similar to those for the DSCs. The P-values for the heart and esophagus were smaller than 0.05. No significant differences were found between the edited CNN-based auto-contouring results and the manual results. Conclusion: For the spinal cord and both lungs, no statistically significant differences were found between CNN-based auto-contouring and manual contouring. Further modifications to the contouring of the heart and esophagus are necessary. Overall, editing based on CNN-based auto-contouring can effectively shorten the contouring time without affecting the results. CNNs have considerable potential for automatic contouring applications.
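The 3D Dice similarity coefficient used above compares two contours by the overlap of their voxel sets, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch (the voxel coordinates are illustrative):

```python
def dice(contour_a, contour_b):
    """3D Dice similarity coefficient between two voxel-coordinate sets:
    1.0 = identical contours, 0.0 = disjoint contours."""
    if not contour_a and not contour_b:
        return 1.0  # two empty contours agree by convention
    return 2 * len(contour_a & contour_b) / (len(contour_a) + len(contour_b))

a = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}
b = {(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1)}
print(dice(a, b))  # 0.5
```

The MSD complements this overlap measure with the average surface-to-surface distance between the two contours.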

Prior-aware Neural Network for Partially-Supervised Multi-Organ Segmentation

  • Zhou, Yuyin
  • Li, Zhe
  • Bai, Song
  • Wang, Chong
  • Chen, Xinlei
  • Han, Mei
  • Fishman, Elliot
  • Yuille, Alan L.
2019 Conference Paper, cited 0 times
Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the “background” usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose Prior-aware Neural Network (PaNN) via explicitly incorporating anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to be directly optimized using stochastic gradient descent, we propose to reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI2015 challenge “Multi-Atlas Labeling Beyond the Cranial Vault”, a competition on organ segmentation in the abdomen. We report an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%.
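The anatomical prior in PaNN compares the average predicted organ-size distribution with empirical statistics from the fully-labeled set. A rough sketch of that idea as a divergence penalty between two size distributions (the organ names and counts are hypothetical, and the paper's actual objective is optimized in a min-max form via a stochastic primal-dual gradient method, which is not reproduced here):

```python
import math

def size_distribution(voxel_counts):
    """Normalize per-organ voxel counts into a size distribution."""
    total = sum(voxel_counts)
    return [c / total for c in voxel_counts]

def prior_penalty(predicted_counts, prior_counts):
    """KL divergence KL(prior || predicted) between organ-size distributions;
    0.0 when the average prediction matches the empirical prior exactly."""
    p = size_distribution(prior_counts)
    q = size_distribution(predicted_counts)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical voxel counts for (liver, pancreas, spleen, other)
prior = [500, 40, 120, 340]
print(prior_penalty(prior, prior))  # 0.0
```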

Improving Classification with CNNs using Wavelet Pooling with Nesterov-Accelerated Adam

  • Zhou, Wenjin
  • Rossetto, Allison
2019 Conference Proceedings, cited 0 times
Wavelet pooling methods can improve the classification accuracy of Convolutional Neural Networks (CNNs). Combining wavelet pooling with the Nesterov-accelerated Adam (NAdam) gradient calculation method can further improve the accuracy of the CNN. We have implemented wavelet pooling with NAdam in this work using both a Haar wavelet (WavPool-NH) and a Shannon wavelet (WavPool-NS). The WavPool-NH and WavPool-NS methods are the most accurate of the methods we considered for the MNIST and LIDC-IDRI lung tumor datasets. The WavPool-NH and WavPool-NS implementations have an accuracy of 95.92% and 95.52%, respectively, on the LIDC-IDRI dataset. This is an improvement from the 92.93% accuracy obtained on this dataset with the max pooling method. The WavPool methods also avoid overfitting, which is a concern with max pooling. We also found WavPool performed fairly well on the CIFAR-10 dataset; however, overfitting was an issue with all the methods we considered. Wavelet pooling, especially when combined with an adaptive gradient and wavelets chosen specifically for the data, has the potential to outperform current methods.
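For intuition: Haar wavelet pooling keeps the approximation (LL) subband of a one-level 2D Haar transform, collapsing each non-overlapping 2x2 block to a single coefficient, rather than taking the block maximum. A minimal sketch (we return the block average, which equals the orthonormal Haar LL coefficient up to a constant factor of 2; a trained layer would operate on feature maps, not raw images):

```python
def haar_pool(image):
    """Single-level Haar approximation pooling: each 2x2 block collapses to
    its average (the orthonormal Haar LL coefficient is (a+b+c+d)/2, the
    same subband up to a constant factor)."""
    h, w = len(image), len(image[0])
    assert h % 2 == 0 and w % 2 == 0, "expects even dimensions"
    return [[(image[i][j] + image[i][j + 1] +
              image[i + 1][j] + image[i + 1][j + 1]) / 4
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

print(haar_pool([[1.0, 2.0],
                 [3.0, 4.0]]))  # [[2.5]]
```

Unlike max pooling, every input value contributes to the output, which is one intuition for the reduced overfitting the paper reports.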

Machine learning reveals multimodal MRI patterns predictive of isocitrate dehydrogenase and 1p/19q status in diffuse low- and high-grade gliomas.

  • Zhou, H.
  • Chang, K.
  • Bai, H. X.
  • Xiao, B.
  • Su, C.
  • Bi, W. L.
  • Zhang, P. J.
  • Senders, J. T.
  • Vallieres, M.
  • Kavouridis, V. K.
  • Boaro, A.
  • Arnaout, O.
  • Yang, L.
  • Huang, R. Y.
Journal of neuro-oncology 2019 Journal Article, cited 0 times
PURPOSE: Isocitrate dehydrogenase (IDH) and 1p19q codeletion status are important in providing prognostic information as well as prediction of treatment response in gliomas. Accurate determination of the IDH mutation status and 1p19q codeletion prior to surgery may complement invasive tissue sampling and guide treatment decisions. METHODS: Preoperative MRIs of 538 glioma patients from three institutions were used as a training cohort. Histogram, shape, and texture features were extracted from preoperative MRIs of T1 contrast-enhanced and T2-FLAIR sequences. The extracted features were then integrated with age using a random forest algorithm to generate a model predictive of IDH mutation status and 1p19q codeletion. The model was then validated using MRIs from glioma patients in the Cancer Imaging Archive. RESULTS: Our model predictive of IDH achieved an area under the receiver operating characteristic curve (AUC) of 0.921 in the training cohort and 0.919 in the validation cohort. Age offered the highest predictive value, followed by shape features. Based on the top 15 features, the AUC was 0.917 and 0.916 for the training and validation cohort, respectively. The overall accuracy for 3-group prediction (IDH-wild type, IDH-mutant with 1p19q codeletion, IDH-mutant with 1p19q non-codeletion) was 78.2% (155 correctly predicted out of 198). CONCLUSION: Using machine-learning algorithms, high accuracy was achieved in the prediction of IDH genotype in gliomas and moderate accuracy in a three-group prediction including IDH genotype and 1p19q codeletion.

Bronchus Segmentation and Classification by Neural Networks and Linear Programming

  • Zhao, Tianyi
  • Yin, Zhaozheng
  • Wang, Jiao
  • Gao, Dashan
  • Chen, Yunqiang
  • Mao, Yunxiang
2019 Book Section, cited 0 times
Airway segmentation is a critical problem for lung disease analysis. However, building a complete airway tree is still a challenging problem because of the complex tree structure, and tracing the deep bronchi is not trivial in CT images because there are numerous small airways with various directions. In this paper, we develop two-stage 2D+3D neural networks and a linear-programming-based tracking algorithm for airway segmentation. Furthermore, we propose a bronchus classification algorithm based on the segmentation results. Our algorithm is evaluated on a dataset collected from four sources. We achieved a Dice coefficient of 0.94 and an F1 score of 0.86 under a centerline-based evaluation metric, compared to ground truth manually labeled by our radiologists.

A radiomics nomogram based on multiparametric MRI might stratify glioblastoma patients according to survival

  • Zhang, Xi
  • Lu, Hongbing
  • Tian, Qiang
  • Feng, Na
  • Yin, Lulu
  • Xu, Xiaopan
  • Du, Peng
  • Liu, Yang
European Radiology 2019 Journal Article, cited 0 times

Comparison of CT and MRI images for the prediction of soft-tissue sarcoma grading and lung metastasis via a convolutional neural networks model

  • Zhang, L.
  • Ren, Z.
Clin Radiol 2019 Journal Article, cited 0 times
AIM: To realise the automated prediction of soft-tissue sarcoma (STS) grading and lung metastasis based on computed tomography (CT), T1-weighted (T1W) magnetic resonance imaging (MRI), and fat-suppressed T2-weighted MRI (FST2W) via a convolutional neural network (CNN) model. MATERIALS AND METHODS: MRI and CT images of 51 patients diagnosed with STS were analysed retrospectively. The patients could be divided into three groups based on disease grading: high-grade group (n=28), intermediate-grade group (n=15), and low-grade group (n=8). Among these patients, 32 had lung metastasis, while the remaining 19 had no lung metastasis. The data were divided into training, validation, and testing groups according to the ratio of 5:2:3. Receiver operating characteristic (ROC) curves and accuracy values were acquired using the testing dataset to evaluate the performance of the CNN model. RESULTS: For STS grading, the accuracy of the T1W, FST2W, CT, and the fusion of T1W and FST2W testing data was 0.86, 0.89, 0.86, and 0.85, respectively. The corresponding area under the curve (AUC) values were 0.96, 0.97, 0.97, and 0.94, respectively. For the prediction of lung metastasis, the accuracy of the T1W, FST2W, CT, and the fusion of T1W and FST2W test data was 0.92, 0.93, 0.88, and 0.91, respectively. The corresponding AUC values were 0.97, 0.96, 0.95, and 0.95, respectively. FST2W MRI performed best for predicting STS grading and lung metastasis. CONCLUSION: MRI and CT images combined with the CNN model can be useful for making predictions regarding STS grading and lung metastasis, thus providing help for patient diagnosis and treatment.

Brain tumor detection based on Naïve Bayes Classification

  • Zaw, Hein Tun
  • Maneerat, Noppadol
  • Win, Khin Yadanar
2019 Conference Paper, cited 2 times
Brain cancer is caused by a population of abnormal glial cells growing in the brain. Over the years, the number of patients with brain cancer has been increasing as the population ages, making it a worldwide health problem. The objective of this paper is to develop a method to detect brain tissues affected by cancer, especially the grade-4 tumor glioblastoma multiforme (GBM). GBM is one of the most malignant cancerous brain tumors, as it is fast growing and more likely to spread to other parts of the brain. In this paper, Naïve Bayes classification is utilized to accurately recognize the tumor region containing all spreading cancerous tissues. A brain MRI database, preprocessing, morphological operations, pixel subtraction, maximum entropy thresholding, statistical feature extraction, and a Naïve Bayes classifier-based prediction algorithm are used in this research. The goal of this method is to detect the tumor area in different brain MRI images and to predict whether the detected area is a tumor. When compared to other methods, this method can properly detect tumors located in different regions of the brain, including the middle region (aligned with eye level), which is its significant advantage. When tested on 50 MRI images, this method achieves an 81.25% detection rate on tumor images and a 100% detection rate on non-tumor images, with an overall accuracy of 94%.

Prediction of pathologic stage in non-small cell lung cancer using machine learning algorithm based on CT image feature analysis

  • Yu, L.
  • Tao, G.
  • Zhu, L.
  • Wang, G.
  • Li, Z.
  • Ye, J.
  • Chen, Q.
BMC cancer 2019 Journal Article, cited 11 times
PURPOSE: To explore imaging biomarkers that can be used for diagnosis and prediction of pathologic stage in non-small cell lung cancer (NSCLC) using multiple machine learning algorithms based on CT image feature analysis. METHODS: Patients with stage IA to IV NSCLC were included, and the whole dataset was divided into training and testing sets and an external validation set. To tackle the class imbalance in NSCLC, we generated a new dataset with an equilibrated class distribution using the SMOTE algorithm, and randomly split it into training and testing sets. We calculated the importance value of CT image features by means of the mean decrease in Gini impurity generated by the random forest algorithm and selected optimal features according to feature importance (mean decrease in Gini impurity > 0.005). The performance of the prediction model in the training and testing sets was evaluated in terms of classification accuracy, average precision (AP) score, and precision-recall curve. The predictive accuracy of the model was externally validated using lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) samples from the TCGA database. RESULTS: The prediction model that incorporated nine image features exhibited high classification accuracy, precision, and recall scores in the training and testing sets. In the external validation, the predictive accuracy of the model in LUAD outperformed that in LUSC. CONCLUSIONS: The pathologic stage of patients with NSCLC can be accurately predicted based on CT image features, especially for LUAD. Our findings extend the application of machine learning algorithms in CT image feature prediction for pathologic staging and identify potential imaging biomarkers that can be used for diagnosis of pathologic stage in NSCLC patients.
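The SMOTE step above balances classes by synthesizing minority samples along line segments between a minority sample and one of its nearest minority neighbours. A minimal numpy sketch of that interpolation (the study presumably used a library implementation; `smote_oversample` is a hypothetical name):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority-class samples by SMOTE-style
    interpolation: pick a minority sample, pick one of its k nearest
    minority neighbours, and sample a random point on the segment.
    """
    rng = np.random.default_rng(rng)
    n = len(X_min)
    out = []
    for _ in range(n_new):
        i = rng.integers(n)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the sample itself
        j = rng.choice(nbrs)
        lam = rng.random()                  # interpolation fraction
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled data stay inside the minority class's feature envelope rather than duplicating existing rows.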

Correlative hierarchical clustering-based low-rank dimensionality reduction of radiomics-driven phenotype in non-small cell lung cancer

  • Bardia Yousefi
  • Nariman Jahani
  • Michael J. LaRiviere
  • Eric Cohen
  • Meng-Kang Hsieh
  • José Marcio Luna
  • Rhea D. Chitalia
  • Jeffrey C. Thompson
  • Erica L. Carpenter
  • Sharyn I. Katz
  • Despina Kontos
2019 Conference Paper, cited 0 times
Background: Lung cancer is one of the most common cancers in the United States and the most fatal, with 142,670 deaths in 2019. Accurately determining tumor response is critical to clinical treatment decisions, ultimately impacting patient survival. To better differentiate between non-small cell lung cancer (NSCLC) responders and non-responders to therapy, radiomic analysis is emerging as a promising approach to identify associated imaging features undetectable by the human eye. However, the plethora of variables extracted from an image may actually undermine the performance of computer-aided prognostic assessment, known as the curse of dimensionality. In the present study, we show that correlation-driven hierarchical clustering improves high-dimensional radiomics-based feature selection and dimensionality reduction, ultimately predicting overall survival in NSCLC patients. Methods: To select features from high-dimensional radiomics data, a correlation-incorporated hierarchical clustering algorithm automatically categorizes features into several groups. The truncation distance in the resulting dendrogram graph is used to control the categorization of the features, initiating low-rank dimensionality reduction in each cluster and providing descriptive features for Cox proportional hazards (CPH)-based survival analysis. Using a publicly available NSCLC radiogenomic dataset of 204 patients’ CT images, 429 established radiomics features were extracted. Low-rank dimensionality reduction via principal component analysis (PCA) was employed (k=1, n<1) to find the representative components of each cluster of features and calculate cluster robustness using the relative weighted consistency metric. Results: Hierarchical clustering categorized radiomic features into several groups without primary initialization of cluster numbers, using the correlation distance metric and truncating the resulting dendrogram at different distances. The dimensionality was reduced from 429 to 67 features (for a truncation distance of 0.1). The robustness of the features within clusters varied from -1.12 to -30.02 for truncation distances of 0.1 to 1.8, respectively, which indicated that robustness decreases with increasing truncation distance, i.e., when a smaller number of feature classes (clusters) is selected. The best multivariate CPH survival model had a C-statistic of 0.71 for a truncation distance of 0.1, outperforming conventional PCA approaches by 0.04, even when the same number of principal components was considered for feature dimensionality. Conclusions: The correlative hierarchical clustering truncation distance is directly associated with the robustness of the selected feature clusters and can effectively reduce feature dimensionality while improving outcome prediction.
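The pipeline above groups correlated features and keeps one rank-1 PCA component per group. A simplified numpy sketch, where a greedy correlation-distance grouping stands in for the paper's dendrogram truncation (all names here are illustrative, not the authors' code):

```python
import numpy as np

def cluster_reduce(X, max_dist=0.1):
    """Group features whose correlation distance (1 - |r|) to a seed
    feature is below max_dist, then represent each group by its first
    principal component score (rank-1 PCA per cluster).

    X: (n_samples, n_features) radiomics matrix.
    Returns the reduced matrix and the list of feature-index clusters.
    """
    p = X.shape[1]
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    unassigned = list(range(p))
    clusters = []
    while unassigned:  # greedy stand-in for dendrogram truncation
        seed = unassigned.pop(0)
        group = [seed] + [f for f in unassigned if dist[seed, f] < max_dist]
        unassigned = [f for f in unassigned if f not in group]
        clusters.append(group)
    reduced = []
    for g in clusters:
        Xg = X[:, g] - X[:, g].mean(axis=0)
        _, _, Vt = np.linalg.svd(Xg, full_matrices=False)
        reduced.append(Xg @ Vt[0])  # scores on the first PC
    return np.column_stack(reduced), clusters
```

Lowering `max_dist` yields more, tighter clusters (less aggressive reduction), mirroring the paper's observation that smaller truncation distances preserve robustness at the cost of a higher remaining dimensionality.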

A Novel Deep Learning Framework for Standardizing the Label of OARs in CT

  • Yang, Qiming
  • Chao, Hongyang
  • Nguyen, Dan
  • Jiang, Steve
2019 Conference Paper, cited 0 times
When organs at risk (OARs) are contoured in computed tomography (CT) images for radiotherapy treatment planning, the labels are often inconsistent, which severely hampers the collection and curation of clinical data for research purposes. Currently, data cleaning is mainly done manually, which is time-consuming. Existing methods for automatically relabeling OARs remain impractical with real patient data, due to inconsistent delineation and similar small-volume OARs. This paper proposes an improved data augmentation technique tailored to the characteristics of clinical data. In addition, a novel 3D non-local convolutional neural network is proposed, which includes a decision-making network with a voting strategy. The resulting model can automatically identify OARs and solve the problems of existing methods, achieving accurate OAR relabeling. We used partial data from a public head-and-neck dataset (HN_PETCT) for training, and then tested the model on datasets from three different medical institutions. We obtained state-of-the-art results for identifying 28 OARs in the head-and-neck region, and our model is capable of handling multi-center datasets, indicating strong generalization ability. Compared to the baseline, our final model achieved a significant improvement in average true positive rate (TPR) on the three test datasets (+8.27%, +2.39%, and +5.53%, respectively). More importantly, the F1 score of a small-volume OAR with only 9 training samples increased from 28.63% to 91.17%.

Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression

  • XU, Xiaoyang
2019 Thesis, cited 0 times
Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient’s pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, which aims to address the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluations of a patient’s condition with CRLM are conducted through quantifying different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level, and pixel level, to achieve the step-by-step segmentation of histopathology images. At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based approaches and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect the classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge the information from different magnification levels, including contextual information to support the final decision. For the segmentation-based method, edge information from the image is integrated with the proposed fully convolutional neural network to further enhance the segmentation results. At the cell level, nuclei-related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two aspects: a weakly supervised nuclei detection and classification method is presented to model the nuclei in the CRLM by integrating a traditional image processing method and a variational auto-encoder (VAE), and a novel nuclei instance segmentation framework is proposed to boost the accuracy of nuclei detection and segmentation using the idea of transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue-level segmentation results by leveraging the statistical and spatial properties of the cells. At the pixel level, the segmentation problem is tackled by introducing information from immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the problem of insufficient pixel-level annotation. Afterwards, with the paired images and masks having been obtained, an end-to-end model is trained to achieve pixel-level segmentation. Secondly, another novel weakly supervised approach based on the generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images to IHC stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel-level segmentation.

Prostate cancer detection using residual networks

  • Xu, Helen
  • Baxter, John S H
  • Akin, Oguz
  • Cantor-Rivera, Diego
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
PURPOSE: To automatically identify regions where prostate cancer is suspected on multi-parametric magnetic resonance images (mp-MRI). METHODS: A residual network was implemented based on segmentations from an expert radiologist on T2-weighted, apparent diffusion coefficient map, and high b-value diffusion-weighted images. Mp-MRIs from 346 patients were used in this study. RESULTS: The residual network achieved a hit or miss accuracy of 93% for lesion detection, with an average Jaccard score of 71% that compared the agreement between network and radiologist segmentations. CONCLUSION: This paper demonstrated the ability for residual networks to learn features for prostate lesion segmentation.

Semi-supervised Adversarial Model for Benign-Malignant Lung Nodule Classification on Chest CT

  • Xie, Yutong
  • Zhang, Jianpeng
  • Xia, Yong
Medical Image Analysis 2019 Journal Article, cited 0 times
Classification of benign-malignant lung nodules on chest CT is the most critical step in the early detection of lung cancer and prolongation of patient survival. Despite their success in image classification, deep convolutional neural networks (DCNNs) always require a large number of labeled training data, which are not available for most medical image analysis applications due to the work required in image acquisition and particularly image annotation. In this paper, we propose a semi-supervised adversarial classification (SSAC) model that can be trained using both labeled and unlabeled data for benign-malignant lung nodule classification. This model consists of an adversarial autoencoder-based unsupervised reconstruction network R, a supervised classification network C, and learnable transition layers that enable the adaptation of the image representation ability learned by R to C. The SSAC model has been extended to multi-view knowledge-based collaborative learning (MK-SSAC), which employs three SSACs to characterize each nodule’s overall appearance and its heterogeneity in shape and texture, respectively, and performs such characterization on nine planar views. The MK-SSAC model has been evaluated on the benchmark LIDC-IDRI dataset and achieves an accuracy of 92.53% and an AUC of 95.81%, which are superior to the performance of other lung nodule classification and semi-supervised learning approaches.

Efficient copyright protection for three CT images based on quaternion polar harmonic Fourier moments

  • Xia, Zhiqiu
  • Wang, Xingyuan
  • Li, Xiaoxiao
  • Wang, Chunpeng
  • Unar, Salahuddin
  • Wang, Mingxu
  • Zhao, Tingting
Signal Processing 2019 Journal Article, cited 0 times

Automatic glioma segmentation based on adaptive superpixel

  • Wu, Yaping
  • Zhao, Zhe
  • Wu, Weiguo
  • Lin, Yusong
  • Wang, Meiyun
BMC Med Imaging 2019 Journal Article, cited 0 times
BACKGROUND: Automatic glioma segmentation is of great significance for clinical practice. This study aims to propose an automatic superpixel-based method for glioma segmentation from T2-weighted magnetic resonance imaging. METHODS: The proposed method mainly includes three steps. First, we propose an adaptive superpixel generation algorithm based on zero-parameter simple linear iterative clustering (ASLIC0). This algorithm can acquire a superpixel image with fewer superpixels and better fit the boundary of the region of interest (ROI) by automatically selecting the optimal number of superpixels. Second, we compose a training set by calculating statistical, texture, curvature, and fractal features for each superpixel. Third, a Support Vector Machine (SVM) is used to train a classification model based on the features of the second step. RESULTS: The experimental results on the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) show that the proposed method has good segmentation performance. The average Dice, Hausdorff distance, sensitivity, and specificity for the segmented tumor against the ground truth are 0.8492, 3.4697 pixels, 81.47%, and 99.64%, respectively. The proposed method shows good stability on high- and low-grade glioma samples. Comparative experimental results show that the proposed method has superior performance. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a fast and reproducible method of glioma segmentation.

Development of a method for automating effective patient diameter estimation for digital radiography

  • Worrall, Mark
2019 Thesis, cited 0 times
National patient dose audit of paediatric radiographic examinations is complicated by a lack of data containing a direct measurement of the patient diameter in the examination orientation or height and weight. This has meant that National Diagnostic Reference Levels (NDRLs) for paediatric radiographic examinations have not been updated in the UK since 2000, despite significant changes in imaging technology over that period. This work is the first step in the development of a computational model intended to automate an estimate of paediatric patient diameter. Whilst the application is intended for a paediatric population, its development within this thesis uses an adult cohort. The computational model uses the radiographic image, the examination exposure factors and a priori information relating to the x-ray system and the digital detector. The computational model uses the Beer-Lambert law. A hypothesis was developed that this would work for clinical exposures despite its single energy photon basis. Values of initial air kerma are estimated from the examination exposure factors and measurements made on the x-ray system. Values of kerma at the image receptor are estimated from a measurement of pixel value made at the centre of the radiograph and the measured calibration between pixel value and kerma for the image receptor. Values of effective linear attenuation coefficient are estimated from Monte Carlo simulations. Monte Carlo simulations were created for two x-ray systems. The simulations were optimised and thoroughly validated to ensure that any result obtained is accurate. The validation process compared simulation results with measurements made on the x-ray units themselves, producing values for effective linear attenuation coefficient that were demonstrated to be accurate. Estimates of attenuator thickness can be made using the estimated values for each variable. 
The computational model was demonstrated to accurately estimate the thickness of single composition attenuators across a range of thicknesses and exposure factors on three different x-ray systems. The computational model was used in a clinical validation study of 20 adult patients undergoing AP abdominal x-ray examinations. For 19 of these examinations, it estimated the true patient thickness to within ±9%. This work presents a feasible computational model that could be used to automate the estimation of paediatric patient thickness during radiographic examinations allowing for automation of paediatric radiographic dose audit.
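The computational model's core step is inverting the Beer-Lambert law K = K0·exp(-μ_eff·t) for the attenuator thickness t, given the estimated incident air kerma, the kerma at the image receptor, and the Monte Carlo-derived effective linear attenuation coefficient. A minimal sketch of that inversion (`estimate_thickness` is an illustrative name, not from the thesis):

```python
import numpy as np

def estimate_thickness(k0, k_detector, mu_eff):
    """Estimate attenuator thickness t by inverting the Beer-Lambert
    law K = K0 * exp(-mu_eff * t).

    k0:         incident air kerma (from exposure factors)
    k_detector: kerma at the image receptor (from central pixel value
                and the pixel-value-to-kerma calibration)
    mu_eff:     effective linear attenuation coefficient (per unit
                length, from Monte Carlo simulation)
    """
    return np.log(k0 / k_detector) / mu_eff
```

As the thesis notes, this single-energy form is applied to polyenergetic clinical exposures, which is why μ_eff must be an *effective* coefficient matched to the beam quality rather than a monoenergetic tabulated value.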

Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning

  • Wong, Jordan
  • Fong, Allan
  • McVicar, Nevin
  • Smith, Sally
  • Giambattista, Joshua
  • Wells, Derek
  • Kolbeck, Carter
  • Giambattista, Jonathan
  • Gondara, Lovedeep
  • Alexander, Abraham
Radiother Oncol 2019 Journal Article, cited 0 times
BACKGROUND: Deep learning-based auto-segmented contours (DC) aim to alleviate labour intensive contouring of organs at risk (OAR) and clinical target volumes (CTV). Most previous DC validation studies have a limited number of expert observers for comparison and/or use a validation dataset related to the training dataset. We determine if DC models are comparable to Radiation Oncologist (RO) inter-observer variability on an independent dataset. METHODS: Expert contours (EC) were created by multiple ROs for central nervous system (CNS), head and neck (H&N), and prostate radiotherapy (RT) OARs and CTVs. DCs were generated using deep learning-based auto-segmentation software trained by a single RO on publicly available data. Contours were compared using Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). RESULTS: Sixty planning CT scans had 2-4 ECs, for a total of 60 CNS, 53 H&N, and 50 prostate RT contour sets. The mean DC and EC contouring times were 0.4 vs 7.7 min for CNS, 0.6 vs 26.6 min for H&N, and 0.4 vs 21.3 min for prostate RT contours. There were minimal differences in DSC and 95% HD involving DCs for OAR comparisons, but more noticeable differences for CTV comparisons. CONCLUSIONS: The accuracy of DCs trained by a single RO is comparable to expert inter-observer variability for the RT planning contours in this study. Use of deep learning-based auto-segmentation in clinical practice will likely lead to significant benefits to RT planning workflow and resources.
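The DSC used to compare auto-segmented and expert contours is a standard overlap measure on binary masks. A minimal sketch, assuming 2D/3D boolean arrays (the study also reports the 95% Hausdorff distance, omitted here):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks
    by convention.
    """
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

DSC ranges from 0 (no overlap) to 1 (identical contours), so "comparable to inter-observer variability" means DC-vs-EC scores fall in the same range as EC-vs-EC scores.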

General purpose radiomics for multi-modal clinical research

  • Wels, Michael G.
  • Suehling, Michael
  • Muehlberg, Alexander
  • Lades, Félix
2019 Conference Proceedings, cited 0 times
In this paper we present an integrated software solution targeting clinical researchers for discovering relevant radiomic biomarkers, covering the entire value chain of clinical radiomics research. Its intention is to make this kind of research possible even for less experienced scientists. The solution provides means to create, collect, manage, and statistically analyze patient cohorts consisting of potentially multimodal 3D medical imaging data, associated volume-of-interest annotations, and radiomic features. Volumes of interest can be created by an extensive set of semi-automatic segmentation tools. Radiomic feature computation relies on the de facto standard library PyRadiomics and ensures comparability and reproducibility of carried-out studies. Tabular cohort studies containing the radiomics of the volumes of interest can be managed directly within the software solution. The integrated statistical analysis capabilities introduce an additional layer of abstraction, allowing non-experts to benefit from radiomics research as well. There are ready-to-use methods for clustering, uni- and multivariate statistics, and machine learning to be applied to the collected cohorts. They are validated in two case studies: first, on a subset of the publicly available NSCLC-Radiomics data collection containing pretreatment CT scans of 317 non-small cell lung cancer (NSCLC) patients, and second, on the Lung Image Database Consortium imaging study with diagnostic and lung cancer screening CT scans including 2,753 distinct lesions from 870 patients. Integrated software solutions with optimized workflows like the one presented, and further developments thereof, may play an important role in making precision medicine come to life in clinical environments.

IILS: Intelligent imaging layout system for automatic imaging report standardization and intra-interdisciplinary clinical workflow optimization

  • Wang, Yang
  • Yan, Fangrong
  • Lu, Xiaofan
  • Zheng, Guanming
  • Zhang, Xin
  • Wang, Chen
  • Zhou, Kefeng
  • Zhang, Yingwei
  • Li, Hui
  • Zhao, Qi
  • Zhu, Hu
  • Chen, Fei
  • Gao, Cailiang
  • Qing, Zhao
  • Ye, Jing
  • Li, Aijing
  • Xin, Xiaoyan
  • Li, Danyan
  • Wang, Han
  • Yu, Hongming
  • Cao, Lu
  • Zhao, Chaowei
  • Deng, Rui
  • Tan, Libo
  • Chen, Yong
  • Yuan, Lihua
  • Zhou, Zhuping
  • Yang, Wen
  • Shao, Mingran
  • Dou, Xin
  • Zhou, Nan
  • Zhou, Fei
  • Zhu, Yue
  • Lu, Guangming
  • Zhang, Bing
EBioMedicine 2019 Journal Article, cited 1 times
BACKGROUND: To achieve imaging report standardization and improve the quality and efficiency of the intra-interdisciplinary clinical workflow, we proposed an intelligent imaging layout system (IILS) for a clinical decision support system-based ubiquitous healthcare service, which is a lung nodule management system using medical images. METHODS: We created a lung IILS based on deep learning for imaging report standardization and workflow optimization for the identification of nodules. Our IILS utilized a deep learning plus adaptive auto-layout tool, which trained and tested a neural network with imaging data from all the main CT manufacturers from 11,205 patients. Model performance was evaluated by the receiver operating characteristic (ROC) curve and the corresponding area under the curve (AUC). The clinical application value of our IILS was assessed by a comprehensive comparison of multiple aspects. FINDINGS: Our IILS is clinically applicable, achieving a highest consistency of 0.94 for detected nodules and an AUC of 90.6% for malignant versus benign pulmonary nodules, with a sensitivity of 76.5% and specificity of 89.1%. Applying this IILS to a dataset of chest CT images, we demonstrate performance comparable to that of human experts in providing a better layout and aiding in diagnosis, with 100% valid images and nodule display. The IILS was superior to the traditional manual system in performance, reducing the number of clicks from 14.45+/-0.38 to 2, time consumed from 16.87+/-0.38 s to 6.92+/-0.10 s, the number of invalid images from 7.06+/-0.24 to 0, and missed lung nodules from 46.8% to 0%. INTERPRETATION: This IILS might achieve imaging report standardization and improve the clinical workflow, therefore opening a new window for the clinical application of artificial intelligence. FUND: The National Natural Science Foundation of China.

An Appraisal of Lung Nodules Automatic Classification Algorithms for CT Images

  • Xinqi Wang
  • Keming Mao
  • Lizhe Wang
  • Peiyi Yang
  • Duo Lu
  • Ping He
Sensors (Basel) 2019 Journal Article, cited 0 times
Lung cancer is one of the most deadly diseases around the world, representing about 26% of all cancers in 2017. The five-year cure rate is only 18% despite great progress in recent diagnosis and treatment. Before diagnosis, lung nodule classification is a key step, especially since automatic classification can help clinicians by providing a valuable opinion. Modern computer vision and machine learning technologies allow very fast and reliable CT image classification. This research area has become highly active owing to its efficiency and labor savings. The paper aims to provide a systematic review of the state of the art of automatic classification of lung nodules. This review covers published works selected from the Web of Science, IEEE Xplore, and DBLP databases up to June 2018. Each paper is critically reviewed based on objective, methodology, research dataset, and performance evaluation. Mainstream algorithms are surveyed and generic structures are summarized. Our work reveals that lung nodule classification based on deep learning has become dominant for its excellent performance. It is concluded that the consistency of research objectives and the integration of data deserve more attention. Moreover, collaborative works among developers, clinicians, and other parties should be strengthened.

Deep Learning for Automatic Identification of Nodule Morphology Features and Prediction of Lung Cancer

  • Wang, Weilun
  • Chakraborty, Goutam
2019 Conference Paper, cited 0 times
Lung cancer is the most common and deadly cancer in the world. Correct prognosis affects the survival rate of patients. The most important sign for early diagnosis is the appearance of nodules in CT scans. Diagnosis performed in hospital is divided into 2 steps: (1) detect nodules from the CT scan; (2) evaluate the morphological features of the nodules and give the diagnostic result. In this work, we propose an automatic lung cancer prognosis system. The system has 3 steps: (1) In the first step, we trained two models, one based on a convolutional neural network (CNN) and the other on a recurrent neural network (RNN), to detect nodules in CT scans. (2) In the second step, convolutional neural networks (CNNs) are trained to evaluate the values of nine morphological features of the nodules. (3) In the final step, a logistic regression between feature values and cancer probability is trained using an XGBoost model. In addition, we give an analysis of which features are important for cancer prediction. Overall, we achieved 82.39% accuracy for lung cancer prediction. By logistic regression analysis, we find that the diameter, spiculation and lobulation features are useful for reducing false positives.
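The final step above maps morphological feature values to a cancer probability. As a hedged stand-in (the paper trains its regression with XGBoost; this sketch substitutes a plain logistic regression fitted by stochastic gradient descent, and the diameter/spiculation/lobulation values are hypothetical):

```python
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by per-sample gradient descent.
    X: list of feature vectors, y: list of 0/1 labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            g = p - yi                        # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Cancer probability for one feature vector."""
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

# Hypothetical [diameter, spiculation, lobulation] values (scaled 0-1 / binary)
X = [[0.9, 1, 1], [0.8, 1, 0], [0.7, 0, 1], [0.2, 0, 0], [0.3, 0, 0], [0.1, 0, 1]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
```

A boosted-tree model such as XGBoost would replace the linear score `z` with a sum of tree outputs, but the feature-to-probability mapping via the logistic link is the same idea.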

Evaluation of Malignancy of Lung Nodules from CT Image Using Recurrent Neural Network

  • Wang, Weilun
  • Chakraborty, Goutam
2019 Journal Article, cited 0 times
The efficacy of treatment of cancer depends largely on early detection and correct prognosis. It is more important in the case of pulmonary cancer, where the detection is based on identifying malignant nodules in the Computed Tomography (CT) scans of the lung. There are two problems in making a correct decision about malignancy: (1) At an early stage, the nodule size is small (length 5 to 10 mm). As the CT scan covers a volume of 30 cm × 30 cm × 40 cm, manually searching for nodules takes a very long time (approximately 10 minutes for an expert). (2) There are benign nodules and nodules due to other ailments like bronchitis, pneumonia, and tuberculosis. Identifying whether a nodule is carcinogenic needs long experience and expertise. In recent years, several works have been reported that classify lung cancer using not only the CT scan image but also other features causing or related to cancer. In all recent works, for CT image analysis, a 3D Convolutional Neural Network (CNN) is used to identify cancerous nodules. In spite of various preprocessing used to improve training efficiency, 3D CNNs are extremely slow. The aim of this work is to improve training efficiency by proposing a new deep NN model. It consists of a hierarchical (sliced) structure of recurrent neural networks (RNN), where different layers of the hierarchy can be trained simultaneously, decreasing training time. In addition, selective attention (alignment) during training improves the convergence rate. The result shows a 3-fold increase in training efficiency compared to recent state-of-the-art work using 3D CNNs.

Correlation between CT based radiomics features and gene expression data in non-small cell lung cancer

  • Wang, Ting
  • Gong, Jing
  • Duan, Hui-Hong
  • Wang, Li-Jia
  • Ye, Xiao-Dan
  • Nie, Sheng-Dong
Journal of X-ray science and technology 2019 Journal Article, cited 0 times

Inter-rater agreement in glioma segmentations on longitudinal MRI

  • Visser, M.
  • Muller, D. M. J.
  • van Duijn, R. J. M.
  • Smits, M.
  • Verburg, N.
  • Hendriks, E. J.
  • Nabuurs, R. J. A.
  • Bot, J. C. J.
  • Eijgelaar, R. S.
  • Witte, M.
  • van Herk, M. B.
  • Barkhof, F.
  • de Witt Hamer, P. C.
  • de Munck, J. C.
Neuroimage Clin 2019 Journal Article, cited 0 times
BACKGROUND: Tumor segmentation of glioma on MRI is a technique to monitor, quantify and report disease progression. Manual MRI segmentation is the gold standard but very labor intensive. At present the quality of this gold standard is not known for different stages of the disease, and prior work has mainly focused on treatment-naive glioblastoma. In this paper we studied the inter-rater agreement of manual MRI segmentation of glioblastoma and WHO grade II-III glioma for novices and experts at three stages of disease. We also studied the impact of inter-observer variation on extent of resection and growth rate. METHODS: In 20 patients with WHO grade IV glioblastoma and 20 patients with WHO grade II-III glioma (defined as non-glioblastoma), both the enhancing and non-enhancing tumor elements were segmented on MRI, using specialized software, by four novices and four experts before surgery, after surgery and at time of tumor progression. We used the generalized conformity index (GCI) and the intra-class correlation coefficient (ICC) of tumor volume as main outcome measures for inter-rater agreement. RESULTS: For glioblastoma, segmentations by experts and novices were comparable. The inter-rater agreement of enhancing tumor elements was excellent before surgery (GCI 0.79, ICC 0.99), poor after surgery (GCI 0.32, ICC 0.92), and good at progression (GCI 0.65, ICC 0.91). For non-glioblastoma, the inter-rater agreement was generally higher between experts than between novices. The inter-rater agreement between experts was excellent before surgery (GCI 0.77, ICC 0.92), reasonable after surgery (GCI 0.48, ICC 0.84), and good at progression (GCI 0.60, ICC 0.80). The inter-rater agreement between novices was good before surgery (GCI 0.66, ICC 0.73), poor after surgery (GCI 0.33, ICC 0.55), and poor at progression (GCI 0.36, ICC 0.73). Further analysis showed that the lower inter-rater agreement of segmentation on postoperative MRI could only partly be explained by the smaller volumes and fragmentation of residual tumor. The median interquartile range of extent of resection between raters was 8.3% and of growth rate was 0.22 mm/year. CONCLUSION: Manual tumor segmentations on MRI have reasonable agreement for use in spatial and volumetric analysis. Agreement in spatial overlap is of concern with segmentation after surgery for glioblastoma and with segmentation of non-glioblastoma by non-experts.
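The generalized conformity index used above extends pairwise spatial overlap to an arbitrary number of raters. One common formulation sums pairwise intersection volumes over pairwise union volumes; a minimal sketch over voxel-index sets (the masks are illustrative):

```python
from itertools import combinations

def generalized_conformity_index(segs):
    """GCI across multiple raters' segmentations (each a set of voxel indices):
    sum of all pairwise intersection volumes / sum of all pairwise union volumes."""
    inter = sum(len(a & b) for a, b in combinations(segs, 2))
    union = sum(len(a | b) for a, b in combinations(segs, 2))
    return inter / union
```

GCI equals 1 only when all raters agree voxel-for-voxel, and unlike volume-based ICC it is sensitive to spatial disagreement even when the volumes happen to match.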

An intelligent lung tumor diagnosis system using whale optimization algorithm and support vector machine

  • Vijh, Surbhi
  • Gaur, Deepak
  • Kumar, Sushil
International Journal of System Assurance Engineering and Management 2019 Journal Article, cited 0 times
Medical image processing techniques are widely used for tumor detection to increase the survival rate of patients. The development of computer-aided diagnosis systems shows improvement in observing medical images and determining treatment stages. Earlier detection of tumors reduces the mortality of lung cancer by increasing the probability of successful treatment. In this paper, an intelligent lung tumor diagnosis system is developed using various image processing techniques. The simulated steps involve image enhancement, image segmentation, post-processing, feature extraction, feature selection and classification using support vector machine (SVM) kernels. The gray-level co-occurrence matrix method is used for extracting 19 texture and statistical features from lung computed tomography (CT) images. The whale optimization algorithm (WOA) is used to select the best prominent feature subset. The contribution of this paper is the development of WOA_SVM to automate the aided diagnosis system for determining whether a lung CT image is normal or abnormal. An improved technique is developed using the whale optimization algorithm for optimal feature selection to obtain accurate results and construct a robust model. The performance of the proposed methodology is evaluated using accuracy, sensitivity and specificity, obtained as 95%, 100% and 92% with a radial basis function support vector kernel.
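The gray-level co-occurrence matrix features mentioned above are computed from joint frequencies of neighboring gray levels. A minimal sketch for a single pixel offset, with two of the texture features (contrast and energy); the tiny image and two-level quantization are toy values, not the paper's setup:

```python
def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy).
    img is a 2D list of integer gray levels in [0, levels)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    """Haralick contrast: weights co-occurrences by squared gray-level difference."""
    n = len(m)
    return sum((i - j) ** 2 * m[i][j] for i in range(n) for j in range(n))

def energy(m):
    """Angular second moment: high for homogeneous textures."""
    return sum(v * v for row in m for v in row)
```

In practice the GLCM is averaged over several offsets/angles and the image is quantized to more levels (e.g. 8 or 16), but each of the statistical descriptors is computed from the normalized matrix exactly as above.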

Identification and classification of DICOM files with burned-in text content

  • Vcelak, Petr
  • Kryl, Martin
  • Kratochvil, Michal
  • Kleckova, Jana
International Journal of Medical Informatics 2019 Journal Article, cited 0 times
Background Protected health information burned into pixel data is not indicated for various reasons in DICOM. It complicates the secondary use of such data. In recent years, there have been several attempts to anonymize or de-identify DICOM files. Existing approaches have different constraints. No completely reliable solution exists. Especially for large datasets, it is necessary to quickly analyse and identify files potentially violating privacy. Methods Classification is based on an adaptive-iterative algorithm designed to identify one of three classes. There are several image transformations, optical character recognition, and filters; then a local decision is made. A confirmed local decision is the final one. The classifier was trained on a dataset composed of 15,334 images of various modalities. Results The false positive rates are in all cases below 4.00%, and 1.81% in the mission-critical problem of detecting protected health information. The classifier's weighted average recall was 94.85%, the weighted average inverse recall was 97.42% and Cohen's Kappa coefficient was 0.920. Conclusion The proposed novel approach for classification of burned-in text is highly configurable and able to analyse images from different modalities with a noisy background. The solution was validated and is intended to identify DICOM files that need to have restricted access or be thoroughly de-identified due to privacy issues. Unlike with existing tools, the recognised text, including its coordinates, can be further used for de-identification.

Predicting the 1p/19q co-deletion status of presumed low grade glioma with an externally validated machine learning algorithm

  • van der Voort, Sebastian R
  • Incekara, Fatih
  • Wijnenga, Maarten MJ
  • Kapsas, Georgios
  • Gardeniers, Mayke
  • Schouten, Joost W
  • Starmans, Martijn PA
  • Tewarie, Rishie Nandoe
  • Lycklama, Geert J
  • French, Pim J
Clinical Cancer Research 2019 Journal Article, cited 0 times

Eliminating biasing signals in lung cancer images for prognosis predictions with deep learning.

  • van Amsterdam, W. A. C.
  • Verhoeff, J. J. C.
  • de Jong, P. A.
  • Leiner, T.
  • Eijkemans, M. J. C.
NPJ Digit Med 2019 Journal Article, cited 0 times
Deep learning has shown remarkable results for image analysis and is expected to aid individual treatment decisions in health care. Treatment recommendations are predictions with an inherently causal interpretation. To use deep learning for these applications in the setting of observational data, deep learning methods must be made compatible with the required causal assumptions. We present a scenario with real-world medical images (CT-scans of lung cancer) and simulated outcome data. Through the data simulation scheme, the images contain two distinct factors of variation that are associated with survival, but represent a collider (tumor size) and a prognostic factor (tumor heterogeneity), respectively. When a deep network would use all the information available in the image to predict survival, it would condition on the collider and thereby introduce bias in the estimation of the treatment effect. We show that when this collider can be quantified, unbiased individual prognosis predictions are attainable with deep learning. This is achieved by (1) setting a dual task for the network to predict both the outcome and the collider and (2) enforcing a form of linear independence of the activation distributions of the last layer. Our method provides an example of combining deep learning and structural causal models to achieve unbiased individual prognosis predictions. Extensions of machine learning methods for applications to causal questions are required to attain the long-standing goal of personalized medicine supported by artificial intelligence.

Novel approaches for glioblastoma treatment: Focus on tumor heterogeneity, treatment resistance, and computational tools

  • Valdebenito, Silvana
  • D'Amico, Daniela
  • Eugenin, Eliseo
Cancer Reports 2019 Journal Article, cited 0 times
Background Glioblastoma (GBM) is a highly aggressive primary brain tumor. Currently, the suggested line of action is surgical resection followed by radiotherapy and treatment with the adjuvant temozolomide, a DNA alkylating agent. However, the ability of tumor cells to deeply infiltrate the surrounding tissue makes complete resection nearly impossible; in consequence, the probability of tumor recurrence is high, and the prognosis is not positive. GBM is highly heterogeneous and adapts to treatment in most individuals. Nevertheless, these mechanisms of adaptation are unknown. Recent findings In this review, we will discuss the recent discoveries in molecular and cellular heterogeneity, mechanisms of therapeutic resistance, and new technological approaches to identify new treatments for GBM. The combination of biology and computer resources allows the use of algorithms to apply artificial intelligence and machine learning approaches to identify potential therapeutic pathways and new drug candidates. Conclusion These new approaches will generate a better understanding of GBM pathogenesis and will result in novel treatments to reduce or block the devastating consequences of brain cancers.

Enabling machine learning in X-ray-based procedures via realistic simulation of image formation

  • Unberath, Mathias
  • Zaech, Jan-Nico
  • Gao, Cong
  • Bier, Bastian
  • Goldmann, Florian
  • Lee, Sing Chun
  • Fotouhi, Javad
  • Taylor, Russell
  • Armand, Mehran
  • Navab, Nassir
International journal of computer assisted radiology and surgery 2019 Journal Article, cited 0 times

Impact of image preprocessing on the scanner dependence of multi-parametric MRI radiomic features and covariate shift in multi-institutional glioblastoma datasets

  • Um, Hyemin
  • Tixier, Florent
  • Bermudez, Dalton
  • Deasy, Joseph O
  • Young, Robert J
  • Veeraraghavan, Harini
Physics in Medicine & Biology 2019 Journal Article, cited 0 times
Recent advances in radiomics have enhanced the value of medical imaging in various aspects of clinical practice, but a crucial component that remains to be investigated further is the robustness of quantitative features to imaging variations and across multiple institutions. In the case of MRI, signal intensity values vary according to the acquisition parameters used, yet no consensus exists on which preprocessing techniques are favorable in reducing scanner-dependent variability of image-based features. Hence, the purpose of this study was to assess the impact of common image preprocessing methods on the scanner dependence of MRI radiomic features in multi-institutional glioblastoma multiforme (GBM) datasets. Two independent GBM cohorts were analyzed: 50 cases from the TCGA-GBM dataset and 111 cases acquired in our institution, and each case consisted of 3 MRI sequences viz. FLAIR, T1-weighted, and T1-weighted post-contrast. Five image preprocessing techniques were examined: 8-bit global rescaling, 8-bit local rescaling, bias field correction, histogram standardization, and isotropic resampling. A total of 420 features divided into 8 categories representing texture, shape, edge, and intensity histogram were extracted. Two distinct imaging parameters were considered: scanner manufacturer and scanner magnetic field strength. Wilcoxon tests identified features robust to the considered acquisition parameters under the selected image preprocessing techniques. A machine learning-based strategy was implemented to measure the covariate shift between the analyzed datasets using features computed using the aforementioned preprocessing methods. Finally, radiomic scores (rad-scores) were constructed by identifying features relevant to patients' overall survival after eliminating those impacted by scanner variability. These were then evaluated for their prognostic significance through Kaplan-Meier and Cox hazards regression analyses. 
Our results demonstrate that, overall, histogram standardization contributes the most to reducing radiomic feature variability, as it was the technique that reduced the covariate shift for 3 feature categories and successfully discriminated patients into groups of different survival risk.
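Of the preprocessing methods compared in this study, global 8-bit rescaling is the simplest to state: map the scan's full intensity range linearly onto 0-255. A minimal sketch (local rescaling would instead use the intensity range of a region of interest; this is a generic illustration, not the authors' implementation):

```python
def rescale_8bit(values):
    """Global 8-bit rescaling: linearly map the full intensity range of the
    input (e.g. all voxels of one scan) onto the integer range 0-255."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Degenerate scan with a single intensity value: map everything to 0.
        return [0 for _ in values]
    return [round(255 * (v - lo) / (hi - lo)) for v in values]
```

Because the mapping depends on the scan's own min/max, two scanners with different intensity scales end up on a common 0-255 range, which is the rationale for rescaling before feature extraction.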

Extraction of Tumor in Brain MRI using Support Vector Machine and Performance Evaluation

  • Tunga, Prakash
Visvesvaraya Technological University Journal of Engineering Sciences and Management 2019 Journal Article, cited 0 times
In this article, we discuss mainly the extraction of tumor in brain MRI (Magnetic Resonance Imaging) images based on the Support Vector Machine (SVM) technique. The work performs computer-assisted demarcation of tumor from brain MRI and aims to be part of a routine that would otherwise be performed manually by specialists. Here we focus on one of the common types of brain tumors, the gliomas. These tumors have proved to be life threatening in advanced stages. MRI, being a non-invasive procedure, can provide very good soft tissue contrast and so forms a suitable imaging method for processing, which leads to brain tumor detection and description. At first, we preprocess the given MRI image using the anisotropic diffusion method, and then the SVM technique is applied, which classifies the image into tumorous and non-tumorous regions. Next, we extract the tumor, referred to as the Region of Interest (ROI), and describe it by calculating its size and position in the image. The remaining part, i.e., the brain region with no tumor presence, is the Non Region of Interest (NROI). Separation of the ROI and NROI parts aids further processing such as ROI-based compression. We also calculate the parameters that reflect the performance of the approach.

Stability and reproducibility of computed tomography radiomic features extracted from peritumoral regions of lung cancer lesions

  • Tunali, Ilke
  • Hall, Lawrence O
  • Napel, Sandy
  • Cherezov, Dmitry
  • Guvenis, Albert
  • Gillies, Robert J
  • Schabath, Matthew B
Med Phys 2019 Journal Article, cited 0 times
PURPOSE: Recent efforts have demonstrated that radiomic features extracted from the peritumoral region, the area surrounding the tumor parenchyma, have clinical utility in various cancer types. However, as with any radiomic features, peritumoral features could also be unstable and/or nonreproducible. Hence, the purpose of this study was to assess the stability and reproducibility of computed tomography (CT) radiomic features extracted from the peritumoral regions of lung lesions, where stability was defined as the consistency of a feature across different segmentations, and reproducibility was defined as the consistency of a feature across different image acquisitions. METHODS: Stability was measured utilizing the "moist run" dataset and reproducibility was measured utilizing the Reference Image Database to Evaluate Therapy Response test-retest dataset. Peritumoral radiomic features were extracted from incremental distances of 3-12 mm outside the tumor segmentation. A total of 264 statistical, histogram, and texture radiomic features were assessed from the selected peritumoral regions-of-interest (ROIs). All features (except wavelet texture features) were extracted using standardized algorithms defined by the Image Biomarker Standardisation Initiative. Stability and reproducibility of features were assessed using the concordance correlation coefficient. The clinical utility of stable and reproducible peritumoral features was tested in three previously published lung cancer datasets using overall survival as the endpoint. RESULTS: Features found to be stable and reproducible, regardless of the peritumoral distances, included statistical, histogram, and a subset of texture features, suggesting that these features are less affected by changes (e.g., size or shape) of the peritumoral region due to different segmentations and image acquisitions. The stability and reproducibility of Laws and wavelet texture features were inconsistent across all peritumoral distances.
The analyses also revealed that a subset of features were consistently stable irrespective of the initial parameters (e.g., seed point) for a given segmentation algorithm. No significant differences were found in stability for features that were extracted from ROIs bounded by a lung parenchyma mask versus ROIs that were not bounded by a lung parenchyma mask (i.e., peritumoral regions that extended outside of lung parenchyma). After testing the clinical utility of peritumoral features, stable and reproducible features were shown to be more likely to create repeatable models than unstable and nonreproducible features. CONCLUSIONS: This study identified a subset of stable and reproducible CT radiomic features extracted from the peritumoral region of lung lesions. The stable and reproducible features identified in this study could be applied to a feature selection pipeline for CT radiomic analyses. According to our findings, top performing features in survival models were more likely to be stable and reproducible hence, it may be best practice to utilize them to achieve repeatable studies and reduce the chance of overfitting.
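Stability and reproducibility above are scored with the concordance correlation coefficient. Lin's CCC combines correlation with agreement in location and scale, so a feature that merely correlates between test and retest but is systematically shifted scores below 1. A minimal sketch using population moments (the feature values are illustrative):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement
    series: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

The mean-difference term in the denominator is what distinguishes CCC from Pearson correlation: a constant offset between acquisitions penalizes CCC even though Pearson r stays at 1.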

Detection of lung cancer on chest CT images using minimum redundancy maximum relevance feature selection method with convolutional neural networks

  • Toğaçar, Mesut
  • Ergen, Burhan
  • Cömert, Zafer
Biocybernetics and Biomedical Engineering 2019 Journal Article, cited 0 times
Lung cancer is a disease caused by the involuntary increase of cells in the lung tissue. Early detection of cancerous cells is of vital importance for the lungs, which supply oxygen to the human body and excrete the carbon dioxide produced by vital activities. In this study, the detection of lung cancers is realized using LeNet, AlexNet and VGG-16 deep learning models. The experiments were carried out on an open dataset composed of Computed Tomography (CT) images. In the experiments, convolutional neural networks (CNNs) were used for feature extraction and classification purposes. In order to increase the success rate of the classification, image augmentation techniques, such as cutting, zooming, horizontal turning and filling, were applied to the dataset during the training of the models. Because of the outstanding success of the AlexNet model, the features obtained from the last fully-connected layer of the model were separately applied as the input to linear regression (LR), linear discriminant analysis (LDA), decision tree (DT), support vector machine (SVM), k-nearest neighbor (kNN) and softmax classifiers. A combination of the AlexNet model and the kNN classifier achieved the most efficient classification accuracy of 98.74%. Then, the minimum redundancy maximum relevance (mRMR) feature selection method was applied to the deep feature set to choose the most efficient features. Consequently, a success rate of 99.51% was obtained by reclassifying the dataset with the selected features and the kNN model. The proposed model is a consistent diagnostic model for lung cancer detection using chest CT images.
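The mRMR step above greedily picks the feature most relevant to the target while least redundant with the features already selected. A sketch under the simplifying assumption that absolute Pearson correlation stands in for the mutual-information terms of canonical mRMR; the feature values are made up for illustration:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def mrmr(features, target, k):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance |corr(f, target)| minus mean redundancy |corr(f, selected)|."""
    remaining = list(features)
    selected = []
    while remaining and len(selected) < k:
        def score(name):
            rel = abs(pearson(features[name], target))
            red = (sum(abs(pearson(features[name], features[s])) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a feature `b` that duplicates `a`, mRMR skips the redundant copy in favor of a less correlated but still informative feature, which is the behavior that lets a 1000+ dimensional radiomic or deep feature set shrink without losing complementary information.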

Reliability of tumor segmentation in glioblastoma: impact on the robustness of MRI‐radiomic features

  • Tixier, Florent
  • Um, Hyemin
  • Young, Robert J
  • Veeraraghavan, Harini
Med Phys 2019 Journal Article, cited 0 times
Purpose The use of radiomic features as biomarkers of treatment response and outcome or as correlates to genomic variations requires that the computed features are robust and reproducible. Segmentation, a crucial step in radiomic analysis, is a major source of variability in the computed radiomic features. Therefore, we studied the impact of tumor segmentation variability on the robustness of MRI radiomic features. Method Fluid‐attenuated inversion recovery (FLAIR) and contrast‐enhanced T1‐weighted (T1WICE) MRI of 90 patients diagnosed with glioblastoma were segmented using a semi‐automatic algorithm and an interactive segmentation with two different raters. We analyzed the robustness of 108 radiomic features from 5 categories (intensity histogram, gray‐level co‐occurrence matrix, gray‐level size‐zone matrix (GLSZM), edge maps and shape) using intra‐class correlation coefficient (ICC) and Bland and Altman analysis. Results Our results show that both segmentation methods are reliable with ICC ≥ 0.96 and standard deviation (SD) of mean differences between the two raters (SDdiffs) ≤ 30%. Features computed from the histogram and co‐occurrence matrices were found to be the most robust (ICC ≥ 0.8 and SDdiffs ≤ 30% for most features in these groups). Features from GLSZM were shown to have mixed robustness. Edge, shape and GLSZM features were the most impacted by the choice of segmentation method with the interactive method resulting in more robust features than the semi‐automatic method. Finally, features computed from T1WICE and FLAIR images were found to have similar robustness when computed with the interactive segmentation method. Conclusion Semi‐automatic and interactive segmentation methods using two raters are both reliable. The interactive method produced more robust features than the semi‐automatic method. We also found that the robustness of radiomic features varied by categories. 
Therefore, this study could help motivate segmentation methods and feature selection in MRI radiomic studies.

Proton vs photon: A model-based approach to patient selection for reduction of cardiac toxicity in locally advanced lung cancer

  • Teoh, S.
  • Fiorini, F.
  • George, B.
  • Vallis, K. A.
  • Van den Heuvel, F.
Radiother Oncol 2019 Journal Article, cited 0 times
PURPOSE/OBJECTIVE: To use a model-based approach to identify a sub-group of patients with locally advanced lung cancer who would benefit from proton therapy compared to photon therapy for reduction of cardiac toxicity. MATERIAL/METHODS: Volumetric modulated arc photon therapy (VMAT) and robust-optimised intensity modulated proton therapy (IMPT) plans were generated for twenty patients with locally advanced lung cancer to give a dose of 70 Gy (relative biological effectiveness (RBE)) in 35 fractions. Cases were selected to represent a range of anatomical locations of disease. Contouring, treatment planning and organs-at-risk constraints followed the RTOG-1308 protocol. Whole heart and substructure doses were compared. Risk estimates of grade ≥3 cardiac toxicity were calculated based on normal tissue complication probability (NTCP) models which incorporated dose metrics and patients' baseline risk factors (pre-existing heart disease (HD)). RESULTS: There was no statistically significant difference in target coverage between VMAT and IMPT. IMPT delivered lower doses to the heart and cardiac substructures (mean, heart V5 and V30, P<.05). In VMAT plans, there were statistically significant positive correlations between heart dose and the thoracic vertebral level that corresponded to the most inferior limit of the disease. The median level at which the superior aspect of the heart contour began was the T7 vertebra. There was a statistically significant difference in dose (mean, V5 and V30) to the heart and all substructures (except mean dose to the left coronary artery and V30 to the sino-atrial node) when disease overlapped with or was inferior to the T7 vertebra. In the presence of pre-existing HD and disease overlapping with or inferior to the T7 vertebra, the mean estimated relative risk reduction of grade ≥3 toxicities was 24-59%. CONCLUSION: IMPT is expected to reduce cardiac toxicity compared to VMAT by reducing dose to the heart and substructures. Patients with both pre-existing heart disease and tumour and nodal spread overlapping with or inferior to the T7 vertebra are likely to benefit most from proton over photon therapy.

Is an analytical dose engine sufficient for intensity modulated proton therapy in lung cancer?

  • Teoh, S.
  • Fiorini, F.
  • George, B.
  • Vallis, K. A.
  • Van den Heuvel, F.
Br J Radiol 2019 Journal Article, cited 0 times
OBJECTIVE: To identify a subgroup of lung cancer plans where the analytical dose calculation (ADC) algorithm may be clinically acceptable compared to Monte Carlo (MC) dose calculation in intensity modulated proton therapy (IMPT). METHODS: Robust-optimised IMPT plans were generated for 20 patients to a dose of 70 Gy (relative biological effectiveness) in 35 fractions in RayStation. For each case, four plans were generated: three with ADC optimisation using the pencil beam (PB) algorithm followed by a final dose calculation with the following algorithms: PB (PB-PB), MC (PB-MC) and MC normalised to prescription dose (PB-MC scaled). A fourth plan was generated where MC optimisation and final dose calculation were performed (MC-MC). Dose comparison and gamma analysis (PB-PB vs PB-MC) at two dose thresholds were performed: 20% (D20) and 99% (D99), with PB-PB plans as reference. RESULTS: Overestimation of the dose to 99% and mean dose of the clinical target volume was observed in all PB-MC compared to PB-PB plans (median: 3.7 Gy(RBE) (5%) (range: 2.3 to 6.9 Gy(RBE)) and 1.8 Gy(RBE) (3%) (0.5 to 4.6 Gy(RBE))). PB-MC scaled plans resulted in significantly higher CTV D2 compared to PB-PB (median difference: -4 Gy(RBE) (-6%) (-5.3 to -2.4 Gy(RBE)), p </= .001). The overall median gamma pass rates (3%/3 mm) at D20 and D99 were 93.2% (range: 62.2-97.5%) and 71.3% (15.4-92.0%). On multivariate analysis, presence of mediastinal disease and absence of range shifters were significantly associated with high gamma pass rates. Median D20 and D99 pass rates with these predictors were 96.0% (95.3-97.5%) and 85.4% (75.1-92.0%). MC-MC achieved similar target coverage and doses to OARs compared to PB-PB plans. CONCLUSION: In the presence of mediastinal involvement and absence of range shifters, RayStation ADC may be clinically acceptable in lung IMPT. Otherwise, the MC algorithm would be recommended to ensure accuracy of treatment plans.
ADVANCES IN KNOWLEDGE: Although the MC algorithm is more accurate compared to ADC in lung IMPT, ADC may be clinically acceptable where there is mediastinal involvement and an absence of range shifters.

Automated Detection of Early Pulmonary Nodule in Computed Tomography Images

  • Tariq, Ahmed Usama
2019 Thesis, cited 0 times
Classification of lung cancer in CT scans involves two major steps: detecting all suspicious lesions, also known as pulmonary nodules, and evaluating their malignancy. Currently, many studies address nodule detection, but few address the evaluation of nodule malignancy. Since the presence of a nodule does not unquestionably indicate lung cancer, and the morphology of a nodule has a complex association with malignancy, the diagnosis of lung cancer demands careful examination of each suspicious nodule and integration of information across all nodules. We propose a 3D CNN CAD system to solve this problem. The system consists of two modules: a 3D CNN for nodule detection, which outputs all suspicious nodules for a subject, and a second module that trains an XGBoost classifier on selected data to obtain the probability of lung malignancy for the subject.

Clinically applicable deep learning framework for organs at risk delineation in CT images

  • Tang, Hao
  • Chen, Xuming
  • Liu, Yang
  • Lu, Zhipeng
  • You, Junhua
  • Yang, Mingzhou
  • Yao, Shengyu
  • Zhao, Guoqi
  • Xu, Yi
  • Chen, Tingfeng
  • Liu, Yong
  • Xie, Xiaohui
Nature Machine Intelligence 2019 Journal Article, cited 0 times
Radiation therapy is one of the most widely used therapies for cancer treatment. A critical step in radiation therapy planning is to accurately delineate all organs at risk (OARs) to minimize potential adverse effects to healthy surrounding organs. However, manually delineating OARs based on computed tomography images is time-consuming and error-prone. Here, we present a deep learning model to automatically delineate OARs in head and neck, trained on a dataset of 215 computed tomography scans with 28 OARs manually delineated by experienced radiation oncologists. On a hold-out dataset of 100 computed tomography scans, our model achieves an average Dice similarity coefficient of 78.34% across the 28 OARs, significantly outperforming human experts and the previous state-of-the-art method by 10.05% and 5.18%, respectively. Our model takes only a few seconds to delineate an entire scan, compared to over half an hour by human experts. These findings demonstrate the potential for deep learning to improve the quality and reduce the treatment planning time of radiation therapy.
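The Dice similarity coefficient reported above measures voxel-level overlap between a predicted and a reference delineation. A minimal sketch over voxel-index sets (the masks are illustrative):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks represented as
    sets of voxel indices: 2*|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))
```

Dice ranges from 0 (no overlap) to 1 (identical masks); per-organ Dice values are averaged across the 28 OARs to give the summary figure quoted in the abstract.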

Five Classifications of Mammography Images Based on Deep Cooperation Convolutional Neural Network

  • Tang, Chun-ming
  • Cui, Xiao-Mei
  • Yu, Xiang
  • Yang, Fan
American Scientific Research Journal for Engineering, Technology, and Sciences (ASRJETS) 2019 Journal Article, cited 0 times
Mammography is currently the preferred imaging method for breast cancer screening. Masses and calcification are the main positive signs of mammography. Due to the variable appearance of masses and calcification, a significant number of breast cancer cases are missed or misdiagnosed if diagnosis depends only on the radiologists’ subjective judgement. At present, most studies are based on classical Convolutional Neural Networks (CNN), using transfer learning to classify the benign and malignant masses in mammography images. However, the CNN is designed for natural images, which are substantially different from medical images. Therefore, we propose a Deep Cooperation CNN (DCCNN) to classify mammography images of a data set into five categories: benign calcification, benign mass, malignant calcification, malignant mass and normal breast. The data set consists of 695 normal cases from DDSM, and 753 calcification cases and 891 mass cases from CBIS-DDSM. Finally, DCCNN achieves 91% accuracy and 0.98 AUC on the test set, performance superior to the VGG16, GoogLeNet and InceptionV3 models. Therefore, DCCNN can aid radiologists in making more accurate judgments, greatly reducing the rate of missed and misdiagnosed cases.

Investigation of thoracic four-dimensional CT-based dimension reduction technique for extracting the robust radiomic features

  • Tanaka, S.
  • Kadoya, N.
  • Kajikawa, T.
  • Matsuda, S.
  • Dobashi, S.
  • Takeda, K.
  • Jingu, K.
Phys Med 2019 Journal Article, cited 0 times
Robust feature selection in radiomic analysis is often implemented using the RIDER test-retest datasets. However, the CT protocol of an individual facility differs from that of the test-retest datasets. Therefore, we investigated the possibility of selecting robust features using thoracic four-dimensional CT (4D-CT) scans that are available from patients receiving radiation therapy. In 4D-CT datasets of 14 lung cancer patients who underwent stereotactic body radiotherapy (SBRT) and 14 test-retest datasets of non-small cell lung cancer (NSCLC), 1170 radiomic features (shape: n = 16, statistics: n = 32, texture: n = 1122) were extracted. A concordance correlation coefficient (CCC) > 0.85 was used to select robust features. We compared the robust features in various 4D-CT groups with those in test-retest. The total number of robust features ranged between 846/1170 (72%) and 970/1170 (83%) in all 4D-CT groups with three breathing phases (40%–60%), but between 44/1170 (4%) and 476/1170 (41%) in all 4D-CT groups with 10 breathing phases. In test-retest, the total number of robust features was 967/1170 (83%); thus, the number of robust features in 4D-CT was almost equal to that in test-retest when using the 40%–60% breathing phases. In 4D-CT, respiratory motion is a factor that greatly affects the robustness of features; by using only the 40%–60% breathing phases, excessive dimension reduction can be prevented in any 4D-CT dataset, allowing selection of robust features suited to the CT protocol of one's own facility.
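The CCC > 0.85 criterion used above can be sketched as follows. This is a minimal illustration (not the authors' code) assuming each feature is a pair of per-patient value lists, one per repeated scan or breathing phase:

```python
import statistics

def lin_ccc(x, y):
    # Lin's concordance correlation coefficient between paired feature
    # measurements (e.g., the same feature computed on two phases/scans).
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def select_robust(features, threshold=0.85):
    # features: {name: (values_on_scan1, values_on_scan2)} across patients.
    # Keep a feature only if its CCC exceeds the study's 0.85 cutoff.
    return [name for name, (x, y) in features.items()
            if lin_ccc(x, y) > threshold]
```

A perfectly reproducible feature has CCC = 1; features whose values shift between phases fall below the threshold and are discarded.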

Automatic estimation of the aortic lumen geometry by ellipse tracking

  • Tahoces, Pablo G
  • Alvarez, Luis
  • González, Esther
  • Cuenca, Carmelo
  • Trujillo, Agustín
  • Santana-Cedrés, Daniel
  • Esclarín, Julio
  • Gomez, Luis
  • Mazorra, Luis
  • Alemán-Flores, Miguel
International journal of computer assisted radiology and surgery 2019 Journal Article, cited 0 times

Advancing Semantic Interoperability of Image Annotations: Automated Conversion of Non-standard Image Annotations in a Commercial PACS to the Annotation and Image Markup

  • Swinburne, Nathaniel C
  • Mendelson, David
  • Rubin, Daniel L
J Digit Imaging 2019 Journal Article, cited 0 times
Sharing radiologic image annotations among multiple institutions is important in many clinical scenarios; however, interoperability is prevented because different vendors’ PACS store annotations in non-standardized formats that lack semantic interoperability. Our goal was to develop software to automate the conversion of image annotations in a commercial PACS to the Annotation and Image Markup (AIM) standardized format and demonstrate the utility of this conversion for automated matching of lesion measurements across time points for cancer lesion tracking. We created a software module in Java to parse the DICOM presentation state (DICOM-PS) objects (that contain the image annotations) for imaging studies exported from a commercial PACS (GE Centricity v3.x). Our software identifies line annotations encoded within the DICOM-PS objects and exports the annotations in the AIM format. A separate Python script processes the AIM annotation files to match line measurements (on lesions) across time points by tracking the 3D coordinates of annotated lesions. To validate the interoperability of our approach, we exported annotations from Centricity PACS into ePAD (Rubin et al., Transl Oncol 7(1):23–35, 2014), a freely available AIM-compliant workstation, and the lesion measurement annotations were correctly linked by ePAD across sequential imaging studies. As quantitative imaging becomes more prevalent in radiology, interoperability of image annotations gains increasing importance. Our work demonstrates that image annotations in a vendor system lacking standard semantics can be automatically converted to a standardized metadata format such as AIM, enabling interoperability and potentially facilitating large-scale analysis of image annotations and the generation of high-quality labels for deep learning initiatives. This effort could be extended for use with other vendors’ PACS.

Image Correction in Emission Tomography Using Deep Convolution Neural Network

  • Suzuki, T
  • Kudo, H
2019 Conference Proceedings, cited 0 times
We propose a new approach using a Deep Convolution Neural Network (DCNN) to correct for image degradations due to statistical noise and photon attenuation in Emission Tomography (ET). The proposed approach first reconstructs an image by the standard Filtered Backprojection (FBP) without correcting for the degradations, and then inputs the degraded image into the DCNN to obtain an improved image. We consider two different scenarios. The first scenario inputs only an ET image into the DCNN, whereas the second scenario inputs a pair consisting of the degraded ET image and a CT/MRI image to improve the accuracy of the correction. The simulation result demonstrates that both scenarios can improve image quality compared to FBP without correction and, in particular, that the accuracy of the second scenario is comparable to that of standard iterative reconstructions such as the Maximum Likelihood Expectation Maximization (MLEM) and Ordered-Subsets EM (OSEM) methods. The proposed method is able to output an image in a very short time, because it does not rely on iterative computations.

Machine learning to predict lung nodule biopsy method using CT image features: A pilot study

  • Sumathipala, Yohan
  • Shafiq, Majid
  • Bongen, Erika
  • Brinton, Connor
  • Paik, David
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 0 times

Context Dependent Fuzzy Associated Statistical Model for Intensity Inhomogeneity Correction from Magnetic Resonance Images

  • Subudhi, BN
  • Veerakumar, T
  • Esakkirajan, S
  • Ghosh, A
IEEE Journal of Translational Engineering in Health and Medicine 2019 Journal Article, cited 0 times
In this article, a novel context dependent fuzzy set associated statistical model based intensity inhomogeneity correction technique for Magnetic Resonance Image (MRI) is proposed. The observed MRI is considered to be affected by intensity inhomogeneity, which is assumed to be a multiplicative quantity. In the proposed scheme, intensity inhomogeneity correction and MRI segmentation are considered as a combined task. The maximum a posteriori probability (MAP) estimation principle is explored to solve this problem. A fuzzy set associated Gibbs' Markov random field (MRF) is considered to model the spatio-contextual information of an MRI. It is observed that the MAP estimate of the MRF model does not yield good results with any local searching strategy, as it gets trapped in a local optimum. Hence, we have exploited the advantage of a variable neighborhood searching (VNS) based iterative global convergence criterion for MRF-MAP estimation. The effectiveness of the proposed scheme is established by testing it on different MRIs. Three performance evaluation measures are considered to evaluate the performance of the proposed scheme against existing state-of-the-art techniques. Simulation results establish the effectiveness of the proposed technique.

ALTIS: A fast and automatic lung and trachea CT-image segmentation method

  • Sousa, A. M.
  • Martins, S. B.
  • Falcão, A. X.
  • Reis, F.
  • Bagatin, E.
  • Irion, K.
Med Phys 2019 Journal Article, cited 0 times
PURPOSE: The automated segmentation of each lung and trachea in CT scans is commonly taken as a solved problem. Indeed, existing approaches may easily fail in the presence of some abnormalities caused by a disease, trauma, or previous surgery. For robustness, we present ALTIS (implementation is available at - a fast automatic lung and trachea CT-image segmentation method that relies on image features and relative shape- and intensity-based characteristics less affected by most appearance variations of abnormal lungs and trachea. METHODS: ALTIS consists of a sequence of image foresting transforms (IFTs) organized in three main steps: (a) lung-and-trachea extraction, (b) seed estimation inside background, trachea, left lung, and right lung, and (c) their delineation such that each object is defined by an optimum-path forest rooted at its internal seeds. We compare ALTIS with two methods based on shape models (SOSM-S and MALF), and one algorithm based on seeded region growing (PTK). RESULTS: The experiments involve the highest number of scans found in the literature - 1255 scans from multiple public data sets containing many anomalous cases, with only 50 normal scans used for training and 1205 scans used for testing the methods. Quantitative experiments are based on two metrics, DICE and ASSD. Furthermore, we also demonstrate the robustness of ALTIS in seed estimation. Considering the test set, the proposed method achieves an average DICE of 0.987 for both lungs and 0.898 for the trachea, and an average ASSD of 0.938 for the right lung, 0.856 for the left lung, and 1.316 for the trachea. These results indicate that ALTIS is statistically more accurate and considerably faster than the compared methods, being able to complete segmentation in a few seconds on modern PCs.
CONCLUSION: ALTIS is the most effective and efficient choice among the compared methods to segment left lung, right lung, and trachea in anomalous CT scans for subsequent detection, segmentation, and quantitative analysis of abnormal structures in the lung parenchyma and pleural space.
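For reference, the DICE metric reported above is the standard Dice similarity coefficient; a minimal sketch, assuming binary segmentations represented as Python sets of voxel indices:

```python
def dice(mask_a, mask_b):
    # Dice similarity coefficient between two binary segmentations:
    # 2 * |A ∩ B| / (|A| + |B|), here on sets of voxel indices.
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))
```

A value of 1.0 means perfect overlap with the reference delineation; 0.0 means no overlap at all.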

Dynamic Co-occurrence of Local Anisotropic Gradient Orientations (DyCoLIAGe) Descriptors from Pre-treatment Perfusion DSC-MRI to Predict Overall Survival in Glioblastoma

  • Song, Bolin
2019 Thesis, cited 0 times
A significant clinical challenge in glioblastoma is to risk-stratify patients for clinical trials, preferably using MRI scans. Radiomics involves mining of sub-visual features that could serve as surrogate markers of tumor heterogeneity from routine imaging. Previously our group had developed a new gradient-based radiomic descriptor, Co-occurrence of Local Anisotropic Gradient Orientations (CoLIAGe), to capture tumor heterogeneity on structural MRI. I present an extension of CoLIAGe to perfusion MRI, termed dynamic CoLIAGe (DyCoLIAGe), and demonstrate its application in predicting overall survival in glioblastoma. Following manual segmentation, 52 CoLIAGe features were extracted from edema and enhancing tumor at different time phases during contrast administration of perfusion MRI. Each feature was separately plotted across the different time points, and a 3rd-order polynomial was fit to each feature curve. The corresponding polynomial coefficients were evaluated in terms of their prognostic performance. My results suggest that DyCoLIAGe may be prognostic of overall survival in glioblastoma.

Recovering Physiological Changes in Nasal Anatomy with Confidence Estimates

  • Sinha, A.
  • Liu, X.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, Russell H
2019 Conference Proceedings, cited 0 times
Purpose: Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference preoperative image, like a computed tomography (CT) scan, to provide structural context to the clinician. The aim of this work is to provide structural context during clinical exploration without requiring additional CT acquisition. Methods: We present a method for registration during clinical endoscopy in the absence of CT scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm that uses these shape statistics along with dense point clouds from video, we simultaneously achieve two goals: (1) register the statistically mean shape of the target anatomy with the video point cloud, and (2) estimate patient shape by deforming the mean shape to fit the video point cloud. Finally, we use statistical tests to assign confidence to the computed registration. Results: We are able to achieve submillimeter errors in registrations and patient shape reconstructions using simulated data. We establish and evaluate the confidence criteria for our registrations using simulated data. Finally, we evaluate our registration method on in vivo clinical data and assign confidence to these registrations using the criteria established in simulation. All registrations that are not rejected by our criteria produce submillimeter residual errors. Conclusion: Our deformable registration method can produce submillimeter registrations and reconstructions as well as statistical scores that can be used to assign confidence to the registrations.

Endoscopic navigation in the clinic: registration in the absence of preoperative imaging

  • Sinha, A.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, R. H.
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
PURPOSE: Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference preoperative image, like a computed tomography (CT) scan, to provide structural context to the clinician. The aim of this work is to provide structural context during clinical exploration without requiring additional CT acquisition. METHODS: We present a method for registration during clinical endoscopy in the absence of CT scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm that uses these shape statistics along with dense point clouds from video, we simultaneously achieve two goals: (1) register the statistically mean shape of the target anatomy with the video point cloud, and (2) estimate patient shape by deforming the mean shape to fit the video point cloud. Finally, we use statistical tests to assign confidence to the computed registration. RESULTS: We are able to achieve submillimeter errors in registrations and patient shape reconstructions using simulated data. We establish and evaluate the confidence criteria for our registrations using simulated data. Finally, we evaluate our registration method on in vivo clinical data and assign confidence to these registrations using the criteria established in simulation. All registrations that are not rejected by our criteria produce submillimeter residual errors. CONCLUSION: Our deformable registration method can produce submillimeter registrations and reconstructions as well as statistical scores that can be used to assign confidence to the registrations.

The deformable most-likely-point paradigm

  • Sinha, A.
  • Billings, S. D.
  • Reiter, A.
  • Liu, X.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, R. H.
Med Image Anal 2019 Journal Article, cited 1 times
In this paper, we present three deformable registration algorithms designed within a paradigm that uses 3D statistical shape models to accomplish two tasks simultaneously: 1) register point features from previously unseen data to a statistically derived shape (e.g., mean shape), and 2) deform the statistically derived shape to estimate the shape represented by the point features. This paradigm, called the deformable most-likely-point paradigm, is motivated by the idea that generative shape models built from available data can be used to estimate previously unseen data. We developed three deformable registration algorithms within this paradigm using statistical shape models built from reliably segmented objects with correspondences. Results from several experiments show that our algorithms produce accurate registrations and reconstructions in a variety of applications with errors up to CT resolution on medical datasets. Our code is available at

Brain Tumor Extraction from MRI Using Clustering Methods and Evaluation of Their Performance

  • Singh, Vipula
  • Tunga, P. Prakash
2019 Conference Paper, cited 0 times
In this paper, we consider the extraction of brain tumors from MRI (Magnetic Resonance Imaging) images using K-means, Fuzzy c-means and Region growing clustering methods. After extraction, various parameters related to the performance of the clustering methods, as well as parameters describing the tumor, are calculated. MRI is a non-invasive method which provides a view of the structural features of tissues in the body at very high resolution (typically on a 100 μm scale). Therefore, it is advantageous to base the detection and segmentation of brain tumors on MRI. This work is a step toward replacing the manual identification and separation of tumor structures from brain MRI with computer-aided techniques, which adds great value with respect to accuracy, reproducibility, diagnosis and treatment planning. The brain tumor separated from the original image is referred to as the Region of Interest (ROI) and the remaining portion of the original image as the Non-region of Interest (NROI).

Tumor Heterogeneity and Genomics to Predict Radiation Therapy Outcome for Head-and-Neck Cancer: A Machine Learning Approach

  • Singh, A.
  • Goyal, S.
  • Rao, Y. J.
  • Loew, M.
International Journal of Radiation Oncology*Biology*Physics 2019 Journal Article, cited 0 times
Head and Neck Squamous Cell Carcinoma (HNSCC) is usually treated with Radiation Therapy (RT). Recurrence of the tumor occurs in some patients. The purpose of this study was to determine whether information present in the heterogeneity of tumor regions in the pre-treatment PET scans of HNSCC patients can be used to predict recurrence. We then extended our study to include gene mutation information of a patient group to assess its value as an additional feature to determine treatment efficacy. Materials/Methods: Pre-treatment PET scans of 20 patients from the first database (HNSCC), included in The Cancer Imaging Archive (TCIA), were analyzed. The follow-up duration for those patients varied between two and ten years. Accompanying clinical data were used to divide the patients into two categories according to whether they had a recurrence of the tumor. Radiation structures included in the database were overlain on the PET scans to delineate the tumor, whose heterogeneity is measured by texture analysis. The classification is carried out in two ways: making a decision for each image slice, and treating the collection of slices as a 3D volume. This approach was tested on an independent set of 53 patients from a second TCIA database (Head-Neck-PET-CT [HNPC]). The Cancer Genome Atlas (TCGA) identified frequent mutations in the expression of the PIK3CA, CDKN2A and TP53 genes in HNSCC patients. We combined gene expression features with texture features for 11 patients of the third database (TCGA-HNSC), and re-evaluated the classification accuracies.

A Novel Imaging-Genomic Approach to Predict Outcomes of Radiation Therapy

  • Singh, Apurva
  • Goyal, Sharad
  • Rao, Yuan James
  • Loew, Murray
2019 Thesis, cited 0 times
Introduction: Tumor regions are populated by various cellular species. Intra-tumor radiogenomic heterogeneity can be attributed to factors including variations in the blood flow to the different parts of the tumor and variations in the gene mutation frequencies. This heterogeneity is further propagated by cancer cells, which adopt an “evolutionarily enlightened” growth approach. This growth, which focuses on developing an adaptive mechanism to progressively build strong resistance to therapy, follows a unique pattern in each patient. This makes the development of a uniform treatment technique very challenging, and makes the concept of “precision medicine”, developed from information unique to each patient, crucial to the development of effective cancer treatment methods. Our study aims to determine whether information present in the heterogeneity of tumor regions in the pre-treatment PET scans of patients and in their gene mutation status can measure the efficacy of radiation therapy in their treatment. We wish to develop a scheme that could predict the effectiveness of therapy at the pre-treatment stage, reduce unnecessary exposure of the patient to radiation that would ultimately not be helpful in curing the patient, and thus help in choosing alternative cancer therapy measures for the patients under consideration. Materials and methods: Our radiomics analysis was developed using PET scans of 20 patients from the HNSCC database in TCIA (The Cancer Imaging Archive). Clinical data were used to divide the patients into two categories based on the recurrence status of the tumor. Radiation structures are overlain on the PET scans for tumor delineation. Texture features extracted from the tumor regions are reduced using a correlation matrix-based technique and are classified by methods including Weighted KNN, Linear SVM and Bagged Trees.
Slice-wise classification results are computed, treating each slice as a 2D image and treating the collection of slices as a 3D volume. Patient-wise results are computed by a voting scheme that assigns to each patient the class label possessed by more than half of its slices. After the voting is complete, the assigned labels are compared to the actual labels to compute the patient-wise classification accuracies. This workflow was tested on a group of 53 patients from the Head-Neck-PET-CT database. We further developed a radiogenomic workflow by combining gene expression features with tumor texture features for a group of 11 patients from our third database, TCGA-HNSC. We developed a geometric transform-based database augmentation method and used it to generate PET scans from images in the existing dataset. To evaluate our analysis, we tested our workflow on patients with tumors at different sites, using scans of different modalities. We included PET scans of 24 lung cancer patients (15 from the TCGA-LUSC (Lung Squamous Cell Carcinoma) and 9 from the TCGA-LUAD (Lung Adenocarcinoma) databases). We used wavelet features along with the existing group of texture features to improve the classification scores. Further, we used non-rigid transform-based techniques for database augmentation. We also included MR scans of 54 cervical cancer patients (from the TCGA-CESC (Cervical Squamous Cell Carcinoma and Endocervical Carcinoma) database) in our study and employed a Fisher-based selection technique to reduce the high-dimensional feature space. Results: The classification accuracy obtained by the 2D and 3D texture analysis is about 70% for slice-wise classification and 80% for patient-wise classification for the head and neck cancer patients (HNSCC and Head-Neck-PET-CT databases). The overall classification accuracies obtained from the transformed tumor slices are comparable to those from the original tumor slices.
Thus, geometric transformation is an effective method for database augmentation. The addition of binary genomic features to the texture features (TCGA-HNSC patients) increases the classification accuracies (from 80% to 100% for 2D and from 60% to 100% for 3D patient-wise classification). The classification accuracies increase from 58% to 84% (2D slice-wise) and from 58% to 70% (2D patient-wise) for the lung cancer patients with the inclusion of wavelet features in the existing texture feature group and by augmenting the database (non-rigid transformation) to include equal numbers of patients and slices in the recurrent and non-recurrent categories. The accuracies are about 64% for 2D slice-wise and patient-wise classification for the cervical cancer patients (using correlation matrix-based feature selection) and increase to about 72% using Fisher-based selection criteria. Conclusion: Our study has introduced the novel approach of fusing the information present in The Cancer Imaging Archive (TCIA) and TCGA to develop a combined imaging phenotype and genotype expression for therapy personalization. Texture measures provide a measure of tumor heterogeneity, which can be used to predict recurrence status. Information from the gene expression patterns of the patients, when combined with texture measures, provides a unique radiogenomic feature which substantially improves therapy response prediction scores.
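The patient-wise voting scheme described in this abstract (a patient receives the label predicted for more than half of its slices) can be sketched as follows; the function name and label strings are illustrative:

```python
from collections import Counter

def patient_label(slice_labels):
    # Assign the class predicted for more than half of a patient's slices;
    # return None when no strict majority exists.
    label, count = Counter(slice_labels).most_common(1)[0]
    return label if count > len(slice_labels) / 2 else None
```

Patient-wise accuracy is then the fraction of patients whose majority-voted label matches the clinical recurrence status.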

Predicting Lung Cancer Patients’ Survival Time via Logistic Regression-based Models in a Quantitative Radiomic Framework

  • Shayesteh, S. P.
  • Shiri, I.
  • Karami, A. H.
  • Hashemian, R.
  • Kooranifar, S.
  • Ghaznavi, H.
  • Shakeri-Zadeh, A.
Journal of Biomedical Physics and Engineering 2019 Journal Article, cited 0 times
Objectives: The aim of this study was to predict the survival time of lung cancer patients using the advantages of both radiomics and logistic regression-based classification models. Material and Methods: Fifty-nine patients with primary lung adenocarcinoma were included in this retrospective study, and pre-treatment contrast-enhanced CT images were acquired. Patients who lived more than 2 years were assigned to the ‘Alive’ class and otherwise to the ‘Dead’ class. In our proposed quantitative radiomic framework, we first extracted the associated regions of each lung lesion from the pre-treatment CT images of each patient via the GrowCut segmentation algorithm. Then, 40 radiomic features were extracted from the segmented lung lesions. In order to enhance the generalizability of the classification models, a mutual information-based feature selection method was applied to each feature vector. We investigated the performance of six logistic regression-based classification models with respect to acceptable evaluation measures such as F1 score and accuracy. Results: It was observed that the mutual information feature selection method can help the classifier achieve better predictive results. In our study, the Logistic Regression (LR) and Dual Coordinate Descent method for Logistic Regression (DCD-LR) models achieved the best results, indicating that these classification models have strong potential for classifying the more important class (i.e., the ‘Alive’ class). Conclusion: The proposed quantitative radiomic framework yielded promising results, which can guide physicians to make better and more precise decisions and increase the chance of treatment success.
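The mutual information-based feature selection step can be illustrated with a minimal sketch on discretized features; the function names and data layout are illustrative assumptions, not the authors' code:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) in nats for two discrete sequences (continuous radiomic
    # features are assumed to have been discretized beforehand).
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def rank_features(feature_cols, labels):
    # Rank features by mutual information with the Alive/Dead label,
    # most informative first; the top-ranked subset feeds the classifier.
    return sorted(feature_cols,
                  key=lambda f: mutual_information(feature_cols[f], labels),
                  reverse=True)
```

A feature identical to the labels scores log 2 nats for a balanced binary outcome; an independent feature scores 0 and would be dropped.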

A Block Adaptive Near-Lossless Compression Algorithm for Medical Image Sequences and Diagnostic Quality Assessment

  • Sharma, Urvashi
  • Sood, Meenakshi
  • Puthooran, Emjee
J Digit Imaging 2019 Journal Article, cited 0 times
The near-lossless compression technique has a better compression ratio than lossless compression while maintaining a maximum error limit for each pixel. It takes advantage of both the lossy and lossless compression methods, providing a high compression ratio that can be used for medical images while preserving diagnostic information. The proposed algorithm uses a resolution- and modality-independent threshold-based predictor, an optimal quantization (q) level, and adaptive block size encoding. The proposed method employs the resolution independent gradient edge detector (RIGED) to remove inter-pixel redundancy, and block adaptive arithmetic encoding (BAAE) is used after quantization to remove coding redundancy. A quantizer with an optimum q level is used to implement the proposed method for high compression efficiency and better quality of the recovered images. The proposed method is implemented on volumetric 8-bit and 16-bit standard medical images and also validated on real-time 16-bit-depth images collected from government hospitals. The results show the proposed algorithm yields high coding performance, with a BPP of 1.37 and a high peak signal-to-noise ratio (PSNR) of 51.35 dB for the 8-bit-depth image dataset, as compared with other near-lossless compression methods. Average BPP values of 3.411 and 2.609 are obtained by the proposed technique for the 16-bit standard medical image dataset and the real-time medical dataset, respectively, with maintained image quality. The improved near-lossless predictive coding technique achieves a high compression ratio without losing diagnostic information from the image.
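The q-level quantization at the heart of near-lossless predictive coding can be illustrated with the standard uniform residual quantizer; this is a generic sketch of the technique, not the paper's RIGED/BAAE implementation:

```python
def quantize_residual(e, q):
    # Uniform near-lossless quantizer for an integer prediction residual e:
    # maps e to a bin index so the reconstruction error is at most q.
    sign = 1 if e >= 0 else -1
    return sign * ((abs(e) + q) // (2 * q + 1))

def dequantize(index, q):
    # Reconstruct the residual from its bin index (bin width 2q + 1).
    return index * (2 * q + 1)
```

With q = 0 this degenerates to lossless coding; larger q widens the bins, shrinking the symbol alphabet (hence the bit rate) while bounding per-pixel error by q.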

Technical Note‐In silico imaging tools from the VICTRE clinical trial

  • Sharma, Diksha
  • Graff, Christian G.
  • Badal, Andreu
  • Zeng, Rongping
  • Sawant, Purva
  • Sengupta, Aunnasha
  • Dahal, Eshan
  • Badano, Aldo
Medical physics 2019 Journal Article, cited 0 times
PURPOSE: In silico imaging clinical trials are emerging alternative sources of evidence for regulatory evaluation and are typically cheaper and faster than human trials. In this Note, we describe the set of in silico imaging software tools used in the VICTRE (Virtual Clinical Trial for Regulatory Evaluation) which replicated a traditional trial using a computational pipeline. MATERIALS AND METHODS: We describe a complete imaging clinical trial software package for comparing two breast imaging modalities (digital mammography and digital breast tomosynthesis). First, digital breast models were developed based on procedural generation techniques for normal anatomy. Second, lesions were inserted in a subset of breast models. The breasts were imaged using GPU-accelerated Monte Carlo transport methods and read using image interpretation models for the presence of lesions. All in silico components were assembled into a computational pipeline. The VICTRE images were made available in DICOM format for ease of use and visualization. RESULTS: We describe an open-source collection of in silico tools for running imaging clinical trials. All tools and source codes have been made freely available. CONCLUSION: The open-source tools distributed as part of the VICTRE project facilitate the design and execution of other in silico imaging clinical trials. The entire pipeline can be run as a complete imaging chain, modified to match needs of other trial designs, or used as independent components to build additional pipelines.

Content based medical image retrieval using topic and location model

  • Shamna, P.
  • Govindan, V. K.
  • Abdul Nazeer, K. A.
Journal of biomedical informatics 2019 Journal Article, cited 0 times
Background and objective: Retrieval of medical images from an anatomically diverse dataset is a challenging task. The objective of our present study is to analyse an automated medical image retrieval system incorporating topic and location probabilities to enhance performance. Materials and methods: In this paper, we present an automated medical image retrieval system using a Topic and Location Model. The topic information is generated using the Guided Latent Dirichlet Allocation (GuidedLDA) method. A novel Location Model is proposed to incorporate the spatial information of visual words. We also introduce a new metric called position weighted Precision (wPrecision) to measure the rank order of the retrieved images. Results: Experiments on two large medical image datasets - IRMA 2009 and Multimodal dataset - revealed that the proposed method outperforms existing medical image retrieval systems in terms of Precision and Mean Average Precision. The proposed method achieved better Mean Average Precision (86.74%) compared to recent medical image retrieval systems using the Multimodal dataset with 7200 images. The proposed system achieved better Precision (97.5%) for the top ten images compared to recent medical image retrieval systems using the IRMA 2009 dataset with 14,410 images. Conclusion: Supplementing the Topic Model with spatial details of visual words enhances the retrieval efficiency of medical images from large repositories. Such automated medical image retrieval systems can be used to assist physicians in retrieving medical images with better precision compared to state-of-the-art retrieval systems.

Radiomics based likelihood functions for cancer diagnosis

  • Shakir, Hina
  • Deng, Yiming
  • Rasheed, Haroon
  • Khan, Tariq Mairaj Rasool
Scientific Reports 2019 Journal Article, cited 0 times
Radiomic features based classifiers and neural networks have shown promising results in tumor classification. The classification performance can be further improved greatly by exploring and incorporating the discriminative features towards cancer into mathematical models. In this research work, we have developed two radiomics driven likelihood models in Computed Tomography(CT) images to classify lung, colon, head and neck cancer. Initially, two diagnostic radiomic signatures were derived by extracting 105 3-D features from 200 lung nodules and by selecting the features with higher average scores from several supervised as well as unsupervised feature ranking algorithms. The signatures obtained from both the ranking approaches were integrated into two mathematical likelihood functions for tumor classification. Validation of the likelihood functions was performed on 265 public data sets of lung, colon, head and neck cancer with high classification rate. The achieved results show robustness of the models and suggest that diagnostic mathematical functions using general tumor phenotype can be successfully developed for cancer diagnosis.

A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network

  • Sert, Eser
  • Özyurt, Fatih
  • Doğantekin, Akif
Med Hypotheses 2019 Journal Article, cited 0 times
Magnetic resonance imaging (MRI) images can be used to diagnose brain tumors. Thanks to these images, some methods have so far been proposed in order to distinguish between benign and malignant brain tumors. Many systems attempting to define these tumors are based on tissue analysis methods. However, various factors such as the quality of an MRI device, noisy images and low image resolution may decrease the quality of MRI images. To eliminate these problems, super resolution approaches are preferred as a complementary source for brain tumor images. The proposed method benefits from single image super resolution (SISR) and maximum fuzzy entropy segmentation (MFES) for brain tumor segmentation on an MRI image. Later, pre-trained ResNet architecture, which is a convolutional neural network (CNN) architecture, and support vector machine (SVM) are used to perform feature extraction and classification, respectively. It was observed in experimental studies that SISR displayed a higher performance in terms of brain tumor segmentation. Similarly, it displayed a higher performance in terms of classifying brain tumor regions as well as benign and malignant brain tumors. As a result, the present study indicated that SISR yielded an accuracy rate of 95% in the diagnosis of segmented brain tumors, which exceeds brain tumor segmentation using MFES without SISR by 7.5%.

Deep Learning Architectures for Automated Image Segmentation

  • Sengupta, Debleena
2019 Thesis, cited 0 times
Image segmentation is widely used in a variety of computer vision tasks, such as object localization and recognition, boundary detection, and medical imaging. This thesis proposes deep learning architectures to improve automatic object localization and boundary delineation for salient object segmentation in natural images and for 2D medical image segmentation. First, we propose and evaluate a novel dilated dense encoder-decoder architecture with a custom dilated spatial pyramid pooling block to accurately localize and delineate boundaries for salient object segmentation. The dilation offers better spatial understanding and the dense connectivity preserves features learned at shallower levels of the network for better localization. Tested on three publicly available datasets, our architecture outperforms the state-of-the-art for one and is very competitive on the other two. Second, we propose and evaluate a custom 2D dilated dense UNet architecture for accurate lesion localization and segmentation in medical images. This architecture can be utilized as a stand alone segmentation framework or used as a rich feature extracting backbone to aid other models in medical image segmentation. Our architecture outperforms all baseline models for accurate lesion localization and segmentation on a new dataset. We furthermore explore the main considerations that should be taken into account for 3D medical image segmentation, among them preprocessing techniques and specialized loss functions.

Repeatability of Multiparametric Prostate MRI Radiomics Features

  • Schwier, Michael
  • van Griethuysen, Joost
  • Vangel, Mark G
  • Pieper, Steve
  • Peled, Sharon
  • Tempany, Clare
  • Aerts, Hugo J W L
  • Kikinis, Ron
  • Fennessy, Fiona M
  • Fedorov, Andriy
Scientific Reports 2019 Journal Article, cited 46 times
In this study we assessed the repeatability of radiomics features on small prostate tumors using test-retest Multiparametric Magnetic Resonance Imaging (mpMRI). The premise of radiomics is that quantitative image-based features can serve as biomarkers for detecting and characterizing disease. For such biomarkers to be useful, repeatability is a basic requirement, meaning its value must remain stable between two scans, if the conditions remain stable. We investigated repeatability of radiomics features under various preprocessing and extraction configurations including various image normalization schemes, different image pre-filtering, and different bin widths for image discretization. Although we found many radiomics features and preprocessing combinations with high repeatability (Intraclass Correlation Coefficient > 0.85), our results indicate that overall the repeatability is highly sensitive to the processing parameters. Neither image normalization, using a variety of approaches, nor the use of pre-filtering options resulted in consistent improvements in repeatability. We urge caution when interpreting radiomics features and advise paying close attention to the processing configuration details of reported results. Furthermore, we advocate reporting all processing details in radiomics studies and strongly recommend the use of open source implementations.

Predicting all-cause and lung cancer mortality using emphysema score progression rate between baseline and follow-up chest CT images: A comparison of risk model performances

  • Schreuder, Anton
  • Jacobs, Colin
  • Gallardo-Estrella, Leticia
  • Prokop, Mathias
  • Schaefer-Prokop, Cornelia M
  • van Ginneken, Bram
PLoS One 2019 Journal Article, cited 0 times

Quantitative Delta T1 (dT1) as a Replacement for Adjudicated Central Reader Analysis of Contrast-Enhancing Tumor Burden: A Subanalysis of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 Multicenter Brain Tumor Trial.

  • Schmainda, K M
  • Prah, M A
  • Zhang, Z
  • Snyder, B S
  • Rand, S D
  • Jensen, T R
  • Barboriak, D P
  • Boxerman, J L
AJNR Am J Neuroradiol 2019 Journal Article, cited 0 times
BACKGROUND AND PURPOSE: Brain tumor clinical trials requiring solid tumor assessment typically rely on the 2D manual delineation of enhancing tumors by ≥2 expert readers, a time-consuming step with poor interreader agreement. As a solution, we developed quantitative dT1 maps for the delineation of enhancing lesions. This retrospective analysis compares dT1 with 2D manual delineation of enhancing tumors acquired at 2 time points during the post-therapeutic surveillance period of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 (ACRIN 6677/RTOG 0625) clinical trial. MATERIALS AND METHODS: Patients enrolled in ACRIN 6677/RTOG 0625, a multicenter, randomized Phase II trial of bevacizumab in recurrent glioblastoma, underwent standard MR imaging before and after treatment initiation. For 123 patients from 23 institutions, both 2D manual delineation of enhancing tumors and dT1 datasets were evaluable at weeks 8 (n = 74) and 16 (n = 57). Using dT1, we assessed the radiologic response and progression at each time point. Percentage agreement with adjudicated 2D manual delineation of enhancing tumor reads and association between progression status and overall survival were determined. RESULTS: For identification of progression, dT1 and adjudicated 2D manual delineation of enhancing tumor reads were in perfect agreement at week 8, with 73.7% agreement at week 16. Both methods showed significant differences in overall survival at each time point. When nonprogressors were further divided into responders versus nonresponders/nonprogressors, the agreement decreased to 70.3% and 52.6%, yet dT1 showed a significant difference in overall survival at week 8 (P = .01), suggesting that dT1 may provide greater sensitivity for stratifying subpopulations. CONCLUSIONS: This study shows that dT1 can predict early progression comparably to the standard method but offers the potential for substantial time and cost savings for clinical trials.

Regression based overall survival prediction of glioblastoma multiforme patients using a single discovery cohort of multi-institutional multi-channel MR images

  • Sanghani, Parita
  • Ang, Beng Ti
  • King, Nicolas Kon Kam
  • Ren, Hongliang
Med Biol Eng Comput 2019 Journal Article, cited 0 times
Glioblastoma multiforme (GBM) are malignant brain tumors, associated with poor overall survival (OS). This study aims to predict OS of GBM patients (in days) using a regression framework and assess the impact of tumor shape features on OS prediction. Multi-channel MR image derived texture features, tumor shape, and volumetric features, and patient age were obtained for 163 GBM patients. In order to assess the impact of tumor shape features on OS prediction, two feature sets, with and without tumor shape features, were created. For the feature set with tumor shape features, the mean prediction error (MPE) was 14.6 days and its 95% confidence interval (CI) was 195.8 days. For the feature set excluding shape features, the MPE was 17.1 days and its 95% CI was observed to be 212.7 days. The coefficient of determination (R2) value obtained for the feature set with shape features was 0.92, while it was 0.90 for the feature set excluding shape features. Although marginal, inclusion of shape features improves OS prediction in GBM patients. The proposed OS prediction method using regression provides good accuracy and overcomes the limitations of GBM OS classification, like choosing data-derived or pre-decided thresholds to define the OS groups.

Real-time interactive holographic 3D display with a 360 degrees horizontal viewing zone

  • Sando, Yusuke
  • Satoh, Kazuo
  • Barada, Daisuke
  • Yatagai, Toyohiko
Appl Opt 2019 Journal Article, cited 0 times
To realize a real-time interactive holographic three-dimensional (3D) display system, we synthesize a set of 24 full high-definition (HD) binary computer-generated holograms (CGHs) based on a 3D fast-Fourier-transform-based approach. These 24 CGHs are streamed into a digital micromirror device (DMD) as a single 24-bit image at 60 Hz: 1440 CGHs are synthesized in less than a second. Continual updates of the CGHs displayed on the DMD and synchronization with a rotating mirror enlarges the horizontal viewing zone to 360 degrees using a time-division approach. We successfully demonstrate interactive manipulation, such as object rotation, rendering mode switching, and threshold value alteration, for a medical dataset of a human head obtained by X-ray computed tomography.

Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks

  • Sandfort, Veit
  • Yan, Ke
  • Pickhardt, Perry J
  • Summers, Ronald M
Scientific Reports 2019 Journal Article, cited 0 times
Labeled medical imaging data is scarce and expensive to generate. To achieve generalizable deep learning models large amounts of data are needed. Standard data augmentation is a method to increase generalizability and is routinely performed. Generative adversarial networks offer a novel method for data augmentation. We evaluate the use of CycleGAN for data augmentation in CT segmentation tasks. Using a large image database we trained a CycleGAN to transform contrast CT images into non-contrast images. We then used the trained CycleGAN to augment our training using these synthetic non-contrast images. We compared the segmentation performance of a U-Net trained on the original dataset compared to a U-Net trained on the combined dataset of original data and synthetic non-contrast images. We further evaluated the U-Net segmentation performance on two separate datasets: The original contrast CT dataset on which segmentations were created and a second dataset from a different hospital containing only non-contrast CTs. We refer to these 2 separate datasets as the in-distribution and out-of-distribution datasets, respectively. We show that in several CT segmentation tasks performance is improved significantly, especially in out-of-distribution (noncontrast CT) data. For example, when training the model with standard augmentation techniques, performance of segmentation of the kidneys on out-of-distribution non-contrast images was dramatically lower than for in-distribution data (Dice score of 0.09 vs. 0.94 for out-of-distribution vs. in-distribution data, respectively, p < 0.001). When the kidney model was trained with CycleGAN augmentation techniques, the out-of-distribution (non-contrast) performance increased dramatically (from a Dice score of 0.09 to 0.66, p < 0.001). Improvements for the liver and spleen were smaller, from 0.86 to 0.89 and 0.65 to 0.69, respectively. 
We believe this method will be valuable to medical imaging researchers to reduce manual segmentation effort and cost in CT imaging.

Resolving the molecular complexity of brain tumors through machine learning approaches for precision medicine

  • Sandanaraj, Edwin
2019 Thesis, cited 0 times
Glioblastoma (GBM) tumors are highly aggressive malignant brain tumors and are resistant to conventional therapies. The Cancer Genome Atlas (TCGA) efforts distinguished histologically similar GBM tumors into unique molecular subtypes. The World Health Organization (WHO) has also since incorporated key molecular indicators such as IDH mutations and 1p/19q co-deletions in the clinical classification scheme. The National Neuroscience Institute (NNI) Brain Tumor Resource distinguishes itself as the exclusive collection of patient tumors with corresponding live cells capable of re-creating the full spectrum of the original patient tumor molecular heterogeneity. These cells are thus important to re-create “mouse-patient tumor replicas” that can be prospectively tested with novel compounds, yet have retrospective clinical history, transcriptomic data and tissue paraffin blocks for data mining. My thesis aims to establish a computational framework for the molecular subtyping of brain tumors using machine learning approaches. The applicability of the empirical Bayes model has been demonstrated in the integration of various transcriptomic databases. We utilize predictive algorithms such as template-based, centroid-based, connectivity map (CMAP) and recursive feature elimination combined with random forest approaches to stratify primary tumors and GBM cells. These subtyping approaches serve as key factors for the development of predictive models and eventually, improving precision medicine strategies. We validate the robustness and clinical relevance of our Brain Tumor Resource by evaluating two critical pathways for GBM maintenance. We identify a sialyltransferase enzyme (ST3Gal1) transcriptomic program contributing to tumorigenicity and tumor cell invasiveness. Further, we generate a STAT3 functionally-tuned signature and demonstrate its pivotal role in patient prognosis and chemoresistance. We show that IGF1-R mediates resistance in non-responders to STAT3 inhibitors. 
Taken together, our studies demonstrate the application of machine learning approaches in revealing molecular insights into brain tumors and subsequently, the translation of these integrative analyses into more effective targeted therapies in the clinics.

Classification of Lung CT Images using BRISK Features

  • Sambasivarao, B.
  • Prathiba, G.
International Journal of Engineering and Advanced Technology (IJEAT) 2019 Journal Article, cited 0 times
Lung cancer is a leading cause of cancer death. To increase patients' survival rates, early detection of cancer is required. Lung cancer that starts in the cells of the lung is mainly of two types, i.e., cancerous (malignant) and non-cancerous (benign). In this paper, work is done on lung images obtained from the Society of Photographic Instrumentation Engineers (SPIE) database, which contains normal, benign and malignant images. In this work, 300 images from the database are used, of which 150 are benign and 150 are malignant. Feature points of lung tumor images are extracted using Binary Robust Invariant Scalable Keypoints (BRISK). BRISK attains matching quality comparable to state-of-the-art algorithms at much lower computation time. BRISK divides the pairs of pixels surrounding the keypoint into two subsets: short-distance and long-distance pairs. The orientation of the feature point is estimated from the local intensity gradients of the long-distance pairs, and the short-distance pairs are rotated using this orientation. These BRISK features are then used by a classifier to label lung tumors as either benign or malignant, and performance is evaluated in terms of accuracy.
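The short-/long-distance pair partitioning described in this abstract can be sketched as follows. This is an illustrative reconstruction of that one step from the original BRISK descriptor (the thresholds are the scale-normalized defaults from the BRISK paper), not code from this study:

```python
import math
from itertools import combinations

def split_pairs(points, d_short=9.75, d_long=13.67):
    """Partition sampling-point pairs around a keypoint into BRISK's
    short-distance subset (used to build the binary descriptor) and
    long-distance subset (used to estimate keypoint orientation)."""
    short_pairs, long_pairs = [], []
    for p, q in combinations(points, 2):
        d = math.dist(p, q)
        if d < d_short:
            short_pairs.append((p, q))
        if d > d_long:
            long_pairs.append((p, q))
    return short_pairs, long_pairs

# Three collinear sampling points: one close pair, two distant pairs
short_pairs, long_pairs = split_pairs([(0, 0), (5, 0), (20, 0)])
print(len(short_pairs), len(long_pairs))  # → 1 2
```

Note that the two subsets can overlap in principle if the thresholds are chosen with d_short > d_long; with the paper's defaults they are disjoint.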

Automated delineation of non‐small cell lung cancer: A step toward quantitative reasoning in medical decision science

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae‐Sun
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Quantitative reasoning in medical decision science relies on the delineation of pathological objects. For example, evidence‐based clinical decisions regarding lung diseases require the segmentation of nodules, tumors, or cancers. Non‐small cell lung cancer (NSCLC) tends to be large sized, irregularly shaped, and grows against surrounding structures imposing challenges in the segmentation, even for expert clinicians. An automated delineation tool based on spatial analysis was developed and studied on 25 sets of computed tomography scans of NSCLC. Manual and automated delineations were compared, and the proposed method exhibited robustness in terms of the tumor size (5.32–18.24 mm), shape (spherical or irregular), contouring (lobulated, spiculated, or cavitated), localization (solitary, pleural, mediastinal, endobronchial, or tagging), and laterality (left or right lobe) with accuracy between 80% and 99%. Small discrepancies observed between the manual and automated delineations may arise from the variability in the practitioners' definitions of region of interest or imaging artifacts that reduced the tissue resolution.

Are shape morphologies associated with survival? A potential shape-based biomarker predicting survival in lung cancer

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae-Sun
J Cancer Res Clin Oncol 2019 Journal Article, cited 0 times
PURPOSE: Imaging biomarkers (IBMs) are increasingly investigated as prognostic indicators. IBMs might be capable of assisting treatment selection by providing useful insights into tumor-specific factors in a non-invasive manner. METHODS: We investigated six three-dimensional shape-based IBMs: eccentricities between (I) intermediate-major axis (E_imaj), (II) intermediate-minor axis (E_imin), (III) major-minor axis (E_mj-mn) and volumetric index of (I) sphericity (VioS), (II) flattening (VioF), (III) elongating (VioE). Additionally, we investigated previously established two-dimensional shape IBMs: eccentricity (E), index of sphericity (IoS), and minor-to-major axis length (Mn_Mj). IBMs were compared in terms of their predictive performance for 5-year overall survival in two independent cohorts of patients with lung cancer. Cohort 1 received surgical excision, while cohort 2 received radiation therapy alone or chemo-radiation therapy. Univariate and multivariate survival analyses were performed. Correlations with clinical parameters were evaluated using analysis of variance. IBM reproducibility was assessed using concordance correlation coefficients (CCCs). RESULTS: E was associated with reduced survival in cohort 1 (hazard ratio [HR]: 0.664). E_imin and VioF were associated with reduced survival in cohort 2 (HR 1.477 and 1.701). VioS was associated with reduced survival in cohorts 1 and 2 (HR 1.758 and 1.472). Spherical tumors correlated with shorter survival durations than did irregular tumors (median survival difference: 1.21 and 0.35 years in cohorts 1 and 2, respectively). VioS was a significant predictor of survival in multivariate analyses of both cohorts. All IBMs showed good reproducibility (CCCs ranged from 0.86 to 0.98). CONCLUSIONS: In both investigated cohorts, VioS successfully linked shape morphology to patient survival.
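As a rough illustration of a volumetric sphericity index like the VioS feature above, Wadell's classical sphericity compares the surface area of a volume-equivalent sphere to the object's actual surface area. This generic formula is an assumption for illustration; the paper's exact IBM definitions may differ:

```python
import math

def sphericity(volume, surface_area):
    """Wadell sphericity: surface area of a sphere with the same volume
    as the object, divided by the object's actual surface area.
    Equals 1.0 for a perfect sphere and decreases for irregular shapes."""
    return (math.pi ** (1 / 3)) * ((6 * volume) ** (2 / 3)) / surface_area

# Unit sphere: V = 4/3*pi, A = 4*pi -> sphericity is exactly 1
v = 4 / 3 * math.pi
a = 4 * math.pi
print(round(sphericity(v, a), 6))  # → 1.0
```

An irregular tumor with the same volume but a larger surface area would score below 1, which is the direction of the "spherical vs. irregular" contrast reported in the abstract.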

Multi-Disease Segmentation of Gliomas and White Matter Hyperintensities in the BraTS Data Using a 3D Convolutional Neural Network

  • Rudie, Jeffrey D.
  • Weiss, David A.
  • Saluja, Rachit
  • Rauschecker, Andreas M.
  • Wang, Jiancong
  • Sugrue, Leo
  • Bakas, Spyridon
  • Colby, John B.
Frontiers in Computational Neuroscience 2019 Journal Article, cited 0 times
An important challenge in segmenting real-world biomedical imaging data is the presence of multiple disease processes within individual subjects. Most adults above age 60 exhibit a variable degree of small vessel ischemic disease, as well as chronic infarcts, which will manifest as white matter hyperintensities (WMH) on brain MRIs. Subjects diagnosed with gliomas will also typically exhibit some degree of abnormal T2 signal due to WMH, rather than just due to tumor. We sought to develop a fully automated algorithm to distinguish and quantify these distinct disease processes within individual subjects’ brain MRIs. To address this multi-disease problem, we trained a 3D U-Net to distinguish between abnormal signal arising from tumors vs. WMH in the 3D multi-parametric MRI (mpMRI, i.e., native T1-weighted, T1-post-contrast, T2, T2-FLAIR) scans of the International Brain Tumor Segmentation (BraTS) 2018 dataset (n_training = 285, n_validation = 66). Our trained neuroradiologist manually annotated WMH on the BraTS training subjects, finding that 69% of subjects had WMH. Our 3D U-Net model had a 4-channel 3D input patch (80 × 80 × 80) from mpMRI, four encoding and decoding layers, and an output of either four [background, active tumor (AT), necrotic core (NCR), peritumoral edematous/infiltrated tissue (ED)] or five classes (adding WMH as the fifth class). For both the four- and five-class output models, the median Dice for whole tumor (WT) extent (i.e., union of AT, ED, NCR) was 0.92 in both training and validation sets. Notably, the five-class model achieved significantly (p = 0.002) lower/better Hausdorff distances for WT extent in the training subjects. There was strong positive correlation between manually segmented and predicted volumes for WT (r = 0.96) and WMH (r = 0.89). Larger lesion volumes were positively correlated with higher/better Dice scores for WT (r = 0.33), WMH (r = 0.34), and across all lesions (r = 0.89) on a log10-transformed scale.
While the median Dice for WMH was 0.42 across training subjects with WMH, the median Dice was 0.62 for those with at least 5 cm3 of WMH. We anticipate the development of computational algorithms that are able to model multiple diseases within a single subject will be a critical step toward translating and integrating artificial intelligence systems into the heterogeneous real-world clinical workflow.
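The Dice scores quoted throughout this abstract measure the overlap between a predicted mask and a manual annotation; a minimal sketch of the metric on flattened binary masks (not this study's implementation):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flat binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect agreement
    return 1.0 if denom == 0 else 2.0 * inter / denom

# Two 6-voxel masks with 3 foreground voxels each, agreeing on 2
print(dice([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))  # → 0.6666666666666666
```

The abstract's observation that small WMH lesions score lower (median 0.42 vs. 0.62 above 5 cm3) reflects a known property of Dice: a fixed boundary error is a larger fraction of a small structure.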

Conditional Generative Adversarial Refinement Networks for Unbalanced Medical Image Semantic Segmentation

  • Rezaei, Mina
  • Yang, Haojin
  • Harmuth, Konstantin
  • Meinel, Christoph
2019 Conference Proceedings, cited 0 times

Optimizing deep belief network parameters using grasshopper algorithm for liver disease classification

  • Renukadevi, Thangavel
  • Karunakaran, Saminathan
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Image processing plays a vital role in many areas such as healthcare, military, scientific and business applications due to its wide variety of advantages. Detecting liver disease in computed tomography (CT) images is one of the difficult tasks in the medical field. Previous approaches classify liver disease using hand-crafted features followed by a separate classification stage, but their classification results are not optimal. In this article, we propose a novel method utilizing a deep belief network (DBN) with the grasshopper optimization algorithm (GOA) for liver disease classification. Initially, the image quality is enhanced by preprocessing techniques, and then features like texture, color and shape are extracted. The extracted features are reduced by utilizing a dimensionality reduction method, principal component analysis (PCA). Here, the DBN parameters are optimized using GOA for recognizing liver disease. The experiments are performed on real-time and open-source CT image datasets comprising normal, cyst, hepatoma, cavernous hemangioma, fatty liver, metastasis, cirrhosis, and tumor samples. The proposed method yields 98% accuracy, 95.82% sensitivity, 97.52% specificity, 98.53% precision, and a 96.8% F1 score in the simulation process when compared with other existing techniques.

Accelerating Machine Learning with Training Data Management

  • Ratner, Alexander Jason
2019 Thesis, cited 1 times
One of the biggest bottlenecks in developing machine learning applications today is the need for large hand-labeled training datasets. Even at the world's most sophisticated technology companies, and especially at other organizations across science, medicine, industry, and government, the time and monetary cost of labeling and managing large training datasets is often the blocking factor in using machine learning. In this thesis, we describe work on training data management systems that enable users to programmatically build and manage training datasets, rather than labeling and managing them by hand, and present algorithms and supporting theory for automatically modeling this noisier process of training set specification in order to improve the resulting training set quality. We then describe extensive empirical results and real-world deployments demonstrating that programmatically building, managing, and modeling training sets in this way can lead to radically faster, more flexible, and more accessible ways of developing machine learning applications. We start by describing data programming, a paradigm for labeling training datasets programmatically rather than by hand, and Snorkel, an open source training data management system built around data programming that has been used by major technology companies, academic labs, and government agencies to build machine learning applications in days or weeks rather than months or years. In Snorkel, rather than hand-labeling training data, users write programmatic operators called labeling functions, which label data using various heuristic or weak supervision strategies such as pattern matching, distant supervision, and other models. These labeling functions can have noisy, conflicting, and correlated outputs, which Snorkel models and combines into clean training labels without requiring any ground truth using theoretically consistent modeling approaches we develop. 
We then report on extensive empirical validations, user studies, and real-world applications of Snorkel in industrial, scientific, medical, and other use cases ranging from knowledge base construction from text data to medical monitoring over image and video data. Next, we will describe two other approaches for enabling users to programmatically build and manage training datasets, both currently integrated into the Snorkel open source framework: Snorkel MeTaL, an extension of data programming and Snorkel to the setting where users have multiple related classification tasks, in particular focusing on multi-task learning; and TANDA, a system for optimizing and managing strategies for data augmentation, a critical training dataset management technique wherein a labeled dataset is artificially expanded by transforming data points. Finally, we will conclude by outlining future research directions for further accelerating and democratizing machine learning workflows, such as higher-level programmatic interfaces and massively multi-task frameworks.

Multivariate Analysis of Preoperative Magnetic Resonance Imaging Reveals Transcriptomic Classification of de novo Glioblastoma Patients

  • Rathore, Saima
  • Akbari, Hamed
  • Bakas, Spyridon
  • Pisapia, Jared M
  • Shukla, Gaurav
  • Rudie, Jeffrey D
  • Da, Xiao
  • Davuluri, Ramana V
  • Dahmane, Nadia
  • O'Rourke, Donald M
Frontiers in Computational Neuroscience 2019 Journal Article, cited 0 times

Reg R-CNN: Lesion Detection and Grading Under Noisy Labels

  • Ramien, Gregor N.
  • Jaeger, Paul F.
  • Kohl, Simon A. A.
  • Maier-Hein, Klaus H.
2019 Conference Proceedings, cited 0 times
For the task of concurrently detecting and categorizing objects, the medical imaging community commonly adopts methods developed on natural images. Current state-of-the-art object detectors are comprised of two stages: the first stage generates region proposals, the second stage subsequently categorizes them. Unlike in natural images, however, for anatomical structures of interest such as tumors, the appearance in the image (e.g., scale or intensity) links to a malignancy grade that lies on a continuous ordinal scale. While classification models discard this ordinal relation between grades by discretizing the continuous scale to an unordered bag of categories, regression models are trained with distance metrics, which preserve the relation. This advantage becomes all the more important in the setting of label confusions on ambiguous data sets, which is the usual case with medical images. To this end, we propose Reg R-CNN, which replaces the second-stage classification model of a current object detector with a regression model. We show the superiority of our approach on a public data set with 1026 patients and a series of toy experiments. Code will be available at

Brain Tumor Classification Using MRI Images with K-Nearest Neighbor Method

  • Ramdlon, Rafi Haidar
  • Martiana Kusumaningtyas, Entin
  • Karlita, Tita
2019 Conference Proceedings, cited 0 times
Accurate diagnosis of tumor type from MRI results is required to establish appropriate medical treatment. MRI results can be examined computationally using the K-Nearest Neighbor method, a basic classification technique in image processing. The tumor classification system is designed to detect tumor and edema in T1 and T2 image sequences, as well as to label and classify the tumor type. The system interprets only the axial sections of the MRI results, which are classified into three classes: Astrocytoma, Glioblastoma, and Oligodendroglioma. To detect the tumor area, basic image processing techniques are employed, comprising image enhancement, image binarization, morphological image processing, and watershed. Tumor classification is applied after a shape-feature-extraction segmentation step is undertaken. The tumor classification accuracy obtained was 89.5 percent, which provides clearer and more specific information regarding tumor detection.
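The K-Nearest Neighbor vote at the core of such a classifier can be sketched as follows. The two-dimensional feature vectors, labels, and k value here are hypothetical placeholders for illustration, not the system's actual shape features:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training
    samples under Euclidean distance. `train` is a list of
    (feature_vector, label) pairs."""
    nearest = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D shape descriptors per class
train = [
    ((1.0, 1.2), "Astrocytoma"),
    ((1.1, 0.9), "Astrocytoma"),
    ((5.0, 4.8), "Glioblastoma"),
    ((5.2, 5.1), "Glioblastoma"),
    ((9.0, 1.0), "Oligodendroglioma"),
]
print(knn_classify(train, (1.05, 1.0)))  # → Astrocytoma
```

Choosing an odd k (3 here) avoids two-way ties in a binary vote; with three classes, ties are broken by `Counter.most_common` ordering.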

Texture Classification Study of MR Images for Hepatocellular Carcinoma

  • QIU, Jia-jun
  • WU, Yue
  • HUI, Bei
  • LIU, Yan-bo
Journal of University of Electronic Science and Technology of China 2019 Journal Article, cited 0 times
Combining wavelet multi-resolution analysis and statistical analysis, a composite texture classification model is proposed to evaluate its value in the computer-aided diagnosis of hepatocellular carcinoma (HCC) and normal liver tissue based on magnetic resonance (MR) images. First, training samples are divided into two groups by category, and statistics of the wavelet coefficients are calculated for each group. Second, two discretizations are performed on the wavelet coefficients of a new sample based on the two sets of statistical results, and two groups of features are extracted via histogram, co-occurrence matrix, and run-length matrix, etc. Finally, classification is performed twice based on the two groups of features to calculate the category attribute probabilities, and a decision is then made. The experimental results demonstrate that the proposed model obtains better classification performance than routine methods and is valuable for the computer-aided diagnosis of HCC and normal liver tissue based on MR images.
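The co-occurrence-matrix features this model relies on can be illustrated in plain NumPy; the following is a minimal sketch of a single-offset gray-level co-occurrence matrix (GLCM) with three common Haralick-style features, not the authors' wavelet-domain pipeline:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalised
    so that entries sum to 1 (a joint probability of gray-level pairs)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
    }

# Toy 4-level image patch standing in for a quantised MR texture.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
print(glcm_features(glcm(patch, levels=4)))
```

A full texture pipeline would average features over several offsets and directions; the single horizontal offset here keeps the sketch short.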

A Reversible and Imperceptible Watermarking Approach for Ensuring the Integrity and Authenticity of Brain MR Images

  • Qasim, Asaad Flayyih
2019 Thesis, cited 0 times
The digital medical workflow has many circumstances in which the image data can be manipulated both within the secured Hospital Information Systems (HIS) and outside, as images are viewed, extracted and exchanged. This raises ethical and legal concerns regarding modifying image details that are crucial in medical examinations. Digital watermarking is recognised as a robust technique for enhancing trust within medical imaging by detecting alterations applied to medical images. Despite its efficiency, digital watermarking has not been widely used in medical imaging. Existing watermarking approaches often lack validation of their appropriateness to medical domains. In particular, several research gaps have been identified: (i) essential requirements for the watermarking of medical images are not well defined; (ii) no standard approach can be found in the literature to evaluate the imperceptibility of watermarked images; and (iii) no study has been conducted before to test digital watermarking in a medical imaging workflow. This research aims to investigate digital watermarking by designing, analysing and applying it to medical images to confirm that manipulations can be detected and tracked. In addressing these gaps, a number of original contributions have been presented. A new reversible and imperceptible watermarking approach is presented to detect manipulations of brain Magnetic Resonance (MR) images based on the Difference Expansion (DE) technique. Experimental results show that the proposed method, whilst fully reversible, can also realise a watermarked image with low degradation for reasonable and controllable embedding capacity.
This is fulfilled by encoding the data into smooth regions (blocks that have the least differences between their pixel values) inside the Region of Interest (ROI) part of medical images and also through the elimination of the large location map (locations of pixels used for encoding the data) required at extraction to retrieve the encoded data. This compares favourably to outcomes reported for current state-of-the-art techniques in terms of visual image quality of watermarked images. This was also evaluated by conducting a novel visual assessment based on relative Visual Grading Analysis (relative VGA) to define a perceptual threshold at which modifications become noticeable to radiographers. The proposed approach is then integrated into medical systems to verify its validity and applicability in a real application scenario of medical imaging where medical images are generated, exchanged and archived. This enhanced security measure, therefore, enables the detection of image manipulations by an imperceptible and reversible watermarking approach that may establish increased trust in the digital medical imaging workflow.

Unpaired Synthetic Image Generation in Radiology Using GANs

  • Prokopenko, Denis
  • Stadelmann, Joël Valentin
  • Schulz, Heinrich
  • Renisch, Steffen
  • Dylov, Dmitry V.
2019 Journal Article, cited 1 times
In this work, we investigate approaches to generating synthetic Computed Tomography (CT) images from real Magnetic Resonance Imaging (MRI) data. Generating synthetic radiological scans has grown in popularity in recent years due to its promise to enable single-modality radiotherapy planning in clinical oncology, where the co-registration of the radiological modalities is cumbersome. We rely on Generative Adversarial Network (GAN) models with cycle consistency, which permit unpaired image-to-image translation between the modalities. We also introduce a perceptual loss function term and a coordinate convolutional layer to further enhance the quality of translated images. Unsharp masking and the Super-Resolution GAN (SRGAN) were considered to improve the quality of synthetic images. The proposed architectures were trained on the unpaired MRI-CT data and then evaluated on a paired brain dataset. The resulting CT scans were generated with mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) scores of 60.83 HU, 17.21 dB, and 0.8, respectively. DualGAN with the perceptual loss term and coordinate convolutional layer performed best. The MRI-CT translation approach holds potential to eliminate the need for patients to undergo both examinations and to be clinically accepted as a new tool for radiotherapy planning.
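The reported image-quality metrics (MAE in Hounsfield units, PSNR in dB) can be computed as below; the "scans" are random arrays standing in for a reference CT and a synthetic CT, and the 2000 HU data range is an assumption made for this toy example:

```python
import numpy as np

def mae(ref, test):
    """Mean absolute error between two images (same units as the input)."""
    return float(np.mean(np.abs(ref - test)))

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((ref - test) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Toy "CT slices": a reference and a noisy synthetic counterpart.
rng = np.random.default_rng(0)
ref = rng.uniform(-1000, 1000, size=(64, 64))
synth = ref + rng.normal(0, 50, size=ref.shape)

print(f"MAE  = {mae(ref, synth):.1f} HU")
print(f"PSNR = {psnr(ref, synth, data_range=2000):.2f} dB")
```

SSIM is structurally more involved (local means, variances and covariances over a sliding window) and is usually taken from a library such as scikit-image rather than re-implemented.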

Disorder in Pixel-Level Edge Directions on T1WI Is Associated with the Degree of Radiation Necrosis in Primary and Metastatic Brain Tumors: Preliminary Findings

  • Prasanna, P
  • Rogers, L
  • Lam, TC
  • Cohen, M
  • Siddalingappa, A
  • Wolansky, L
  • Pinho, M
  • Gupta, A
  • Hatanpaa, KJ
  • Madabhushi, A
American Journal of Neuroradiology 2019 Journal Article, cited 0 times

Deep multi-modality collaborative learning for distant metastases predication in PET-CT soft-tissue sarcoma studies

  • Peng, Yige
  • Bi, Lei
  • Guo, Yuyu
  • Feng, Dagan
  • Fulham, Michael
  • Kim, Jinman
2019 Conference Proceedings, cited 0 times

CT-based radiomic features predict tumor grading and have prognostic value in patients with soft tissue sarcomas treated with neoadjuvant radiation therapy

  • Peeken, J. C.
  • Bernhofer, M.
  • Spraker, M. B.
  • Pfeiffer, D.
  • Devecka, M.
  • Thamer, A.
  • Shouman, M. A.
  • Ott, A.
  • Nusslin, F.
  • Mayr, N. A.
  • Rost, B.
  • Nyflot, M. J.
  • Combs, S. E.
Radiother Oncol 2019 Journal Article, cited 0 times
PURPOSE: In soft tissue sarcoma (STS) patients, survival and freedom from systemic progression remain comparatively poor despite low local recurrence rates. In this work, we investigated whether quantitative imaging features ("radiomics") of radiotherapy planning CT-scans carry a prognostic value for pre-therapeutic risk assessment. METHODS: CT-scans, tumor grade, and clinical information were collected from three independent retrospective cohorts of 83 (TUM), 87 (UW) and 51 (McGill) STS patients, respectively. After manual segmentation and preprocessing, 1358 radiomic features were extracted. Feature reduction and machine learning modeling for the prediction of grading, overall survival (OS), distant (DPFS) and local (LPFS) progression free survival were performed, followed by external validation. RESULTS: Radiomic models were able to differentiate grade 3 from non-grade 3 STS (area under the receiver operator characteristic curve (AUC): 0.64). The radiomic models were able to predict OS (C-index: 0.73), DPFS (C-index: 0.68) and LPFS (C-index: 0.77) in the validation cohort. A combined clinical-radiomics model showed the best prediction for OS (C-index: 0.76). The radiomic scores were significantly associated in univariate and multivariate cox regression and allowed for significant risk stratification for all three endpoints. CONCLUSION: This is the first report demonstrating a prognostic potential and tumor grading differentiation by CT-based radiomics.
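The concordance index (C-index) used to evaluate these survival models can be sketched as Harrell's C with right censoring; the survival times, event indicators, and risk scores below are hypothetical, and ties are handled with the usual 0.5 credit:

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C: fraction of comparable pairs in which the higher-risk
    subject experiences the event earlier. event=1 means the event was
    observed; event=0 means the subject was censored at that time."""
    n_conc, n_comp = 0.0, 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i has an observed event
            # strictly before subject j's (possibly censored) time.
            if event[i] == 1 and time[i] < time[j]:
                n_comp += 1
                if risk[i] > risk[j]:
                    n_conc += 1
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_comp

time  = np.array([5.0, 8.0, 12.0, 20.0])   # months, hypothetical
event = np.array([1, 1, 0, 1])             # 0 = censored
risk  = np.array([0.9, 0.7, 0.5, 0.2])     # e.g. a radiomic score
print(concordance_index(time, event, risk))
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; production code would typically use `lifelines.utils.concordance_index` instead of this quadratic loop.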

Decorin Expression Is Associated With Diffusion MR Phenotypes in Glioblastoma

  • Patel, Kunal S.
  • Raymond, Catalina
  • Yao, Jingwen
  • Tsung, Joseph
  • Liau, Linda M.
  • Everson, Richard
  • Cloughesy, Timothy F.
  • Ellingson, Benjamin
Neurosurgery 2019 Journal Article, cited 0 times
Abstract INTRODUCTION Significant evidence from multiple phase II trials has suggested that diffusion-weighted imaging estimates of the apparent diffusion coefficient (ADC) are a predictive imaging biomarker for survival benefit in recurrent glioblastoma when treated with anti-VEGF therapies, including bevacizumab, cediranib, and cabozantinib. Despite this observation, the underlying mechanism linking anti-VEGF therapeutic efficacy with diffusion MR characteristics remains unknown. We hypothesized that a high expression of decorin, a small proteoglycan that has been associated with sequestration of pro-angiogenic signaling as well as reduction in the viscosity of the extracellular environment, may be associated with elevated ADC. METHODS A differential gene expression analysis was carried out in human glioblastoma samples from patients in whom preoperative diffusion imaging was obtained. ADC histogram analysis was carried out to calculate preoperative ADCL values, the average ADC of the lower distribution, using a double Gaussian mixture model. The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA) databases were queried to identify diffusion imaging and levels of decorin protein expression. Patients with recurrent glioblastoma undergoing resection prospectively had targeted biopsies collected based on the ADC analysis. These samples were stained for decorin and quantified using whole-slide image analysis software. RESULTS Differential gene expression analysis between tumors associated with high and low preoperative ADCL showed that patients with high ADCL had increased decorin gene expression. Patients from the TCGA database with elevated ADCL had a significantly higher level of decorin gene expression (P = .01). These patients had a survival advantage with a log-rank analysis (P = .002). Patients with preoperative diffusion imaging had multiple targeted intraoperative biopsies stained for decorin.
Patients with high ADCL had increased decorin expression on immunohistochemistry (P = .002). CONCLUSION Increased ADCL on diffusion MR imaging is associated with high decorin expression as well as increased survival in glioblastoma. Decorin may play an important role in the imaging features on diffusion MR and in anti-VEGF treatment efficacy. Decorin expression may serve as a future therapeutic target in patients with favorable diffusion MR characteristics.

Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy

  • Özyurt, Fatih
  • Sert, Eser
  • Avci, Engin
  • Dogantekin, Esin
Measurement 2019 Journal Article, cited 0 times
Brain tumor classification is a challenging task in the field of medical image processing. The present study proposes a hybrid method using Neutrosophy and a Convolutional Neural Network (NS-CNN). It aims to classify tumor regions segmented from brain images as benign or malignant. In the first stage, MRI images were segmented using the neutrosophic set - expert maximum fuzzy-sure entropy (NS-EMFSE) approach. The features of the segmented brain images in the classification stage were obtained by CNN and classified using SVM and KNN classifiers. Experimental evaluation was carried out based on 5-fold cross-validation on 80 benign and 80 malignant tumors. The findings demonstrated that the CNN features delivered high classification performance with different classifiers. Experimental results indicate that the CNN features performed best with SVM, with an average accuracy of 95.62%.

Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence

  • Owais, Muhammad
  • Arsalan, Muhammad
  • Choi, Jiho
  • Park, Kang Ryoung
J Clin Med 2019 Journal Article, cited 0 times
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in a previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. A medical doctor now typically refers to several imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on massive collections of multimodal databases. Although a few previous studies use deep features for classification, the number of classes is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities, using an artificial intelligence technique named the enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
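The retrieval half of a CBMIR system is commonly a nearest-neighbour search over deep feature vectors; the sketch below uses cosine similarity on hypothetical 128-D embeddings (the paper's ResNet features and class structure are not reproduced here):

```python
import numpy as np

def retrieve(query, database, k=3):
    """Return the indices of the k database embeddings most similar to
    the query, ranked by cosine similarity."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                     # cosine similarity to every entry
    return np.argsort(-sims)[:k]      # indices of the k best matches

rng = np.random.default_rng(1)
# Hypothetical 128-D deep features for 100 archived images.
features = rng.normal(size=(100, 128))
# A query that is a near-duplicate of archived image 42.
query = features[42] + 0.01 * rng.normal(size=128)

print(retrieve(query, features, k=3))
```

In a classification-based system, the predicted class would first narrow the database to one modality or organ before this similarity ranking is applied.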

Autocorrection of lung boundary on 3D CT lung cancer images

  • Nurfauzi, R.
  • Nugroho, H. A.
  • Ardiyanto, I.
  • Frannita, E. L.
Journal of King Saud University - Computer and Information Sciences 2019 Journal Article, cited 0 times
Lung cancer in men has the highest mortality rate among all types of cancer. Juxta-pleural and juxta-vascular nodules are the most common nodules located on the lung surface. A computer-aided detection (CADe) system is effective for assisting radiologists in diagnosing lung nodules. However, the lung segmentation step requires sophisticated methods when juxta-pleural and juxta-vascular nodules are present. Fast computational time and low error in covering nodule areas are the aims of this study. The proposed method consists of five stages, namely ground truth (GT) extraction, data preparation, tracheal extraction, separation of lung fusion, and lung border correction. The data used consist of 57 3D CT lung cancer images selected from the LIDC-IDRI dataset. Nodules are defined as the outer areas labeled by four radiologists. The proposed method achieves the fastest computational time of 0.32 s per slice, or 60 times faster than conventional adaptive border marching (ABM). Moreover, it produces a nodule under-segmentation value as low as 14.6%. This indicates that the proposed method has the potential to be embedded in a lung CADe system to cover juxta-pleural and juxta-vascular nodule areas in lung segmentation.

Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network

  • Nomura, Yusuke
  • Xu, Qiong
  • Shirato, Hiroki
  • Shimizu, Shinichi
  • Xing, Lei
Med Phys 2019 Journal Article, cited 0 times
PURPOSE: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN). METHODS: A U-net based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consists of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of x-ray projection and the corresponding scatter-only distribution in nonanthropomorphic phantoms taken in full-fan scan were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. An end-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets during training. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared to a conventional projection-domain scatter correction method named fast adaptive scatter kernel superposition (fASKS) method using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied for the same CNN to evaluate the impact of loss functions on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scan by using transfer learning with additional 360 half-fan projection pairs of nonanthropomorphic phantoms. The tuned-CNN model for half-fan scan was compared with the fASKS method as well as the CNN-based method without the fine-tuning using additional lung phantom projections. 
RESULTS: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield Units (HUs) than that of the fASKS-based method. Root mean squared error of the CNN-corrected projections was improved to 0.0862 compared to 0.278 for uncorrected projections or 0.117 for the fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near the air or bone interfaces. All four image quality measures, which include mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than that of the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to be applicable to remove scatters in half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. SSIM value of the tuned-CNN-corrected images was 0.9993 compared to 0.9984 for the non-tuned-CNN-corrected images or 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient - the correction time for the 360 projections only took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core-i7 CPU) with a single NVIDIA GTX 1070 GPU. CONCLUSIONS: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.

Classification of brain tumor isocitrate dehydrogenase status using MRI and deep learning

  • Nalawade, S.
  • Murugesan, G. K.
  • Vejdani-Jahromi, M.
  • Fisicaro, R. A.
  • Bangalore Yogananda, C. G.
  • Wagner, B.
  • Mickey, B.
  • Maher, E.
  • Pinho, M. C.
  • Fei, B.
  • Madhuranthakam, A. J.
  • Maldjian, J. A.
J Med Imaging (Bellingham) 2019 Journal Article, cited 0 times
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects using fivefold cross validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. Mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. Test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
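The subject-wise splitting that prevents the data leakage discussed above can be sketched with scikit-learn's GroupKFold, grouping axial slices by a subject ID so that no subject appears on both sides of a split (the slice counts and labels below are made up):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical: 12 axial slices drawn from 4 subjects (3 slices each).
slices = np.arange(12).reshape(-1, 1)          # stand-in feature rows
labels = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1])
subjects = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])

gkf = GroupKFold(n_splits=4)
for fold, (train_idx, test_idx) in enumerate(
        gkf.split(slices, labels, groups=subjects)):
    train_subj = set(subjects[train_idx])
    test_subj = set(subjects[test_idx])
    # No subject contributes slices to both sides of the split.
    assert train_subj.isdisjoint(test_subj)
    print(f"fold {fold}: test subjects {sorted(test_subj)}")
```

A plain slice-wise shuffle, by contrast, would scatter one subject's nearly identical slices across train and test folds and inflate the reported accuracy.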

Prediction of malignant glioma grades using contrast-enhanced T1-weighted and T2-weighted magnetic resonance images based on a radiomic analysis

  • Nakamoto, Takahiro
  • Takahashi, Wataru
  • Haga, Akihiro
  • Takahashi, Satoshi
  • Kiryu, Shigeru
  • Nawa, Kanabu
  • Ohta, Takeshi
  • Ozaki, Sho
  • Nozawa, Yuki
  • Tanaka, Shota
  • Mukasa, Akitake
  • Nakagawa, Keiichi
Scientific Reports 2019 Journal Article, cited 0 times
We conducted a feasibility study to predict malignant glioma grades via radiomic analysis using contrast-enhanced T1-weighted magnetic resonance images (CE-T1WIs) and T2-weighted magnetic resonance images (T2WIs). We proposed a framework and applied it to CE-T1WIs and T2WIs (with tumor region data) acquired preoperatively from 157 patients with malignant glioma (grade III: 55, grade IV: 102) as the primary dataset and 67 patients with malignant glioma (grade III: 22, grade IV: 45) as the validation dataset. Radiomic features such as size/shape, intensity, histogram, and texture features were extracted from the tumor regions on the CE-T1WIs and T2WIs. The Wilcoxon-Mann-Whitney (WMW) test and least absolute shrinkage and selection operator logistic regression (LASSO-LR) were employed to select the radiomic features. Various machine learning (ML) algorithms were used to construct prediction models for the malignant glioma grades using the selected radiomic features. Leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of the prediction models in the primary dataset. The selected radiomic features for all folds in the LOOCV of the primary dataset were used to perform an independent validation. As evaluation indices, accuracies, sensitivities, specificities, and values for the area under receiver operating characteristic curve (or simply the area under the curve (AUC)) for all prediction models were calculated. The mean AUC value for all prediction models constructed by the ML algorithms in the LOOCV of the primary dataset was 0.902 +/- 0.024 (95% CI (confidence interval), 0.873-0.932). In the independent validation, the mean AUC value for all prediction models was 0.747 +/- 0.034 (95% CI, 0.705-0.790). The results of this study suggest that the malignant glioma grades could be sufficiently and easily predicted by preparing the CE-T1WIs, T2WIs, and tumor delineations for each patient. 
Our proposed framework may be an effective tool for preoperatively grading malignant gliomas.

Quantitative and Qualitative Evaluation of Convolutional Neural Networks with a Deeper U-Net for Sparse-View Computed Tomography Reconstruction

  • Nakai, H.
  • Nishio, M.
  • Yamashita, R.
  • Ono, A.
  • Nakao, K. K.
  • Fujimoto, K.
  • Togashi, K.
Acad Radiol 2019 Journal Article, cited 0 times
Rationale and Objectives: To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths of U-net for sparse-view CT reconstruction. Materials and Methods: This study used 60 anonymized chest CT cases from a public database called "The Cancer Imaging Archive". 8000 images from 40 cases were used for training, and 800 and 80 images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning with four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN) both quantitatively (peak signal-to-noise ratio, structural similarity index) and qualitatively (the scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality) using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. Results: The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0–3.5 versus 1.0–1.0 for the preceding CNN; p < 0.001). However, only 2 of 22 cases used for emphysematous evaluation (2 CNNs for every 11 cases with emphysema) had an average score of ≥ 2 (on a 3-point scale). Conclusion: Increasing contracting and expanding paths may be useful for sparse-view CT reconstruction with CNN. However, poor reproducibility of emphysema appearance should also be noted.
Key Words: convolutional neural network (CNN); sparse-view CT; deep learning. Abbreviations: BN, batch normalization; CNN, convolutional neural network; CT, computed tomography; dB, decibel; GGO, ground glass opacity; GPU, graphics processing unit; MSE, mean squared error; PSNR, peak signal-to-noise ratio; ReLU, rectified linear unit; SSIM, structural similarity index; TCIA, The Cancer Imaging Archive.
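The paired comparison of the two CNNs' radiologist scores uses the Wilcoxon signed-rank test, which can be run with SciPy; the score vectors below are fabricated to illustrate the call, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired overall-image-quality scores for 20 cases: the
# same cases reconstructed by the 4-path and the 8-path U-net.
rng = np.random.default_rng(0)
scores_4path = rng.integers(1, 3, size=20)                  # low scores
scores_8path = scores_4path + rng.integers(1, 3, size=20)   # consistently higher

# Wilcoxon signed-rank test on the paired differences: appropriate for
# ordinal ratings where a paired t-test's normality assumption fails.
stat, p = wilcoxon(scores_8path, scores_4path)
print(f"Wilcoxon statistic = {stat}, p = {p:.4g}")
```

Because every fabricated difference is positive, the test rejects the null of no difference at any conventional level; with tied ranks SciPy falls back to a normal approximation for the p-value.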

Advanced 3D printed model of middle cerebral artery aneurysms for neurosurgery simulation

  • Nagassa, Ruth G
  • McMenamin, Paul G
  • Adams, Justin W
  • Quayle, Michelle R
  • Rosenfeld, Jeffrey V
3D Print Med 2019 Journal Article, cited 0 times
BACKGROUND: Neurosurgical residents are finding it more difficult to obtain experience as the primary operator in aneurysm surgery. The present study aimed to replicate patient-derived cranial anatomy, pathology and human tissue properties relevant to cerebral aneurysm intervention through 3D printing and 3D print-driven casting techniques. The final simulator was designed to provide accurate simulation of a human head with a middle cerebral artery (MCA) aneurysm. METHODS: This study utilized living human and cadaver-derived medical imaging data including CT angiography and MRI scans. Computer-aided design (CAD) models and pre-existing computational 3D models were also incorporated in the development of the simulator. The design was based on including anatomical components vital to the surgery of MCA aneurysms while focusing on reproducibility, adaptability and functionality of the simulator. Various methods of 3D printing were utilized for the direct development of anatomical replicas and moulds for casting components that optimized the bio-mimicry and mechanical properties of human tissues. Synthetic materials including various types of silicone and ballistics gelatin were cast in these moulds. A novel technique utilizing water-soluble wax and silicone was used to establish hollow patient-derived cerebrovascular models. RESULTS: A patient-derived 3D aneurysm model was constructed for a MCA aneurysm. Multiple cerebral aneurysm models, patient-derived and CAD, were replicated as hollow high-fidelity models. The final assembled simulator integrated six anatomical components relevant to the treatment of cerebral aneurysms of the Circle of Willis in the left cerebral hemisphere. These included models of the cerebral vasculature, cranial nerves, brain, meninges, skull and skin. The cerebral circulation was modeled through the patient-derived vasculature within the brain model. 
Linear and volumetric measurements of specific physical modular components were repeated, averaged and compared to the original 3D meshes generated from the medical imaging data. Calculation of the concordance correlation coefficient (rhoc: 90.2%-99.0%) and percentage difference (<=0.4%) confirmed the accuracy of the models. CONCLUSIONS: A multi-disciplinary approach involving 3D printing and casting techniques was used to successfully construct a multi-component cerebral aneurysm surgery simulator. Further study is planned to demonstrate the educational value of the proposed simulator for neurosurgery residents.

Recommendations for Processing Head CT Data

  • Muschelli, J.
Frontiers in Neuroinformatics 2019 Journal Article, cited 0 times
Many research applications of neuroimaging use magnetic resonance imaging (MRI), and recommendations for image analysis and standardized imaging pipelines exist accordingly. Clinical imaging, however, relies heavily on X-ray computed tomography (CT) scans for diagnosis and prognosis. Currently, there is only one image processing pipeline for head CT, which focuses mainly on head CT data with lesions. We present tools and a complete pipeline for processing CT data, focusing on open-source solutions; the pipeline targets head CT but is applicable to most CT analyses. We describe going from raw DICOM data to a spatially normalized brain within CT, presenting a full example with code. Overall, we recommend anonymizing data with Clinical Trials Processor, converting DICOM data to NIfTI using dcm2niix, using BET for brain extraction, and registering to a publicly available CT template for analysis.

Impact of image preprocessing methods on reproducibility of radiomic features in multimodal magnetic resonance imaging in glioblastoma

  • Moradmand, Hajar
  • Aghamiri, Seyed Mahmoud Reza
  • Ghaderi, Reza
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
To investigate the effect of image preprocessing, with respect to intensity inhomogeneity correction and noise filtering, on the robustness and reproducibility of radiomics features extracted from the glioblastoma (GBM) tumor in multimodal MR images (mMRI). In this study, for each patient 1461 radiomics features were extracted from GBM subregions (i.e., edema, necrosis, enhancement, and tumor) of mMRI (i.e., FLAIR, T1, T1C, and T2) volumes for five preprocessing combinations (in total 116 880 radiomics features). The robustness and reproducibility of the radiomics features were assessed under four comparisons: (a) baseline versus modified bias field; (b) baseline versus modified bias field followed by noise filtering; (c) baseline versus modified noise; and (d) baseline versus modified noise followed by bias field correction. The concordance correlation coefficient (CCC), dynamic range (DR), and interclass correlation coefficient (ICC) were used as metrics. Shape features and, subsequently, local binary pattern (LBP) filtered images were highly stable and reproducible against bias field correction and noise filtering in all measurements. In all MRI modalities, necrosis regions (NC: n ~449/1461, 30%) had the highest number of highly robust features, with CCC and DR >= 0.9, in comparison with edema (ED: n ~296/1461, 20%), enhanced (EN: n ~281/1461, 19%) and active-tumor regions (TM: n ~254/1461, 17%). Furthermore, our results identified that the percentage of highly reproducible features with ICC >= 0.9 was greater after bias field correction (23.2%) and after bias field correction followed by noise filtering (22.4%) than after noise smoothing or noise smoothing followed by bias correction.
These preliminary findings imply that preprocessing sequences can also have a significant impact on the robustness and reproducibility of mMRI-based radiomics features and identification of generalizable and consistent preprocessing algorithms is a pivotal step before imposing radiomics biomarkers into the clinic for GBM patients.

Evaluation of TP53/PIK3CA mutations using texture and morphology analysis on breast MRI

  • Moon, W. K.
  • Chen, H. H.
  • Shin, S. U.
  • Han, W.
  • Chang, R. F.
Magn Reson Imaging 2019 Journal Article, cited 0 times
PURPOSE: Somatic mutations in the TP53 and PIK3CA genes, the two most frequent genetic alterations in breast cancer, are associated with prognosis and therapeutic response. This study predicted the presence of TP53 and PIK3CA mutations in breast cancer by using texture and morphology analyses on breast MRI. MATERIALS AND METHODS: A total of 107 breast cancers (dataset A) from The Cancer Imaging Archive (TCIA), comprising 40 cancers with and 67 without TP53 mutation, and 35 with and 72 without PIK3CA mutation, together with 122 breast cancers (dataset B) from Seoul National University Hospital, comprising 54 with TP53 mutation and 68 without, were used in this study. At first, the tumor area was segmented by a region growing method. Subsequently, gray level co-occurrence matrix (GLCM) texture features were extracted after ranklet transform, and a series of features including compactness, margin, and an ellipsoid fitting model were used to describe the morphological characteristics of tumors. Lastly, logistic regression was used to identify the presence of TP53 and PIK3CA mutations. The classification performances were evaluated by accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Taking into account the trade-off between sensitivity and specificity, the overall performances were evaluated by using receiver operating characteristic (ROC) curve analysis. RESULTS: The GLCM texture features based on ranklet transform were more capable of recognizing TP53 and PIK3CA mutations than the morphological features, reaching statistical significance for the TP53 mutation. The area under the ROC curve (AUC) for TP53 mutation reached 0.78 on dataset A and 0.81 on dataset B. For PIK3CA mutation, the AUC of the ranklet texture features was 0.70.
CONCLUSION: Texture analysis of the segmented tumor on breast MRI based on ranklet transform shows potential for recognizing the presence of TP53 and PIK3CA mutations.
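The core texture step in the abstract above, GLCM statistics computed from a quantized gray-level image, can be sketched as follows (the ranklet transform that precedes it in the paper is omitted; the helper names `glcm` and `glcm_features` are illustrative, not from the paper):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalized gray level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    mat = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                mat[image[r, c], image[r2, c2]] += 1
    return mat / mat.sum()

def glcm_features(p):
    """A few classic Haralick-style statistics from a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
    }
```

In practice such statistics would be computed over several offsets and orientations and fed, together with morphological features, into the logistic regression described above.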

Automated AJCC (7th edition) staging of non-small cell lung cancer (NSCLC) using deep convolutional neural network (CNN) and recurrent neural network (RNN)

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Health Inf Sci Syst 2019 Journal Article, cited 0 times
Purpose: A large proportion of lung cancers are of the type non-small cell lung cancer (NSCLC). Both treatment planning and the patient's prognosis depend greatly on factors like AJCC staging, which is an abstraction over TNM staging. Many significant efforts have so far been made towards automated staging of NSCLC, but a groundbreaking application of deep neural networks (DNNs) is yet to be observed in this domain of study. A DNN is capable of achieving a higher level of accuracy than traditional artificial neural networks (ANNs) as it uses the deeper layers of a convolutional neural network (CNN). The objective of the present study is to propose a simple yet fast CNN model combined with a recurrent neural network (RNN) for automated AJCC staging of NSCLC and to compare the outcome with a few standard machine learning algorithms as well as a few similar studies. Methods: The NSCLC Radiogenomics collection from The Cancer Imaging Archive (TCIA) was considered for the study. The tumor images were refined and filtered by resizing, enhancing, de-noising, etc. The initial image processing phase was followed by texture-based image segmentation. The segmented images were fed into a hybrid feature detection and extraction model comprising two sequential phases: maximally stable extremal regions (MSER) and speeded up robust features (SURF). After a prolonged experiment, the desired CNN-RNN model was derived and the extracted features were fed into the model. Results: The proposed CNN-RNN model outperformed almost all of the other machine learning algorithms under consideration. The accuracy remained steadily higher than that of other contemporary studies. Conclusion: The proposed CNN-RNN model performed commendably during the study. Further studies may be carried out to refine the model and develop an improved auxiliary decision support system for oncologists and radiologists.

Automated grading of non-small cell lung cancer by fuzzy rough nearest neighbour method

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Network Modeling Analysis in Health Informatics and Bioinformatics 2019 Journal Article, cited 0 times
Lung cancer is one of the most lethal diseases across the world. Most lung cancers belong to the category of non-small cell lung cancer (NSCLC). Many studies have so far been carried out to avoid the hazards and bias of manual classification of NSCLC tumors. A few such studies were aimed at automated nodal staging using standard machine learning algorithms. Many others tried to classify tumors as either benign or malignant. None of these studies considered the pathological grading of NSCLC. Automated grading may perfectly depict the dissimilarity between normal tissue and cancer-affected tissue. Such automation may save patients from undergoing a painful biopsy and may also help radiologists or oncologists in grading the tumor or lesion correctly. The present study aims at the automated grading of NSCLC tumors using the fuzzy rough nearest neighbour (FRNN) method. The dataset was extracted from The Cancer Imaging Archive and comprised PET/CT images of NSCLC tumors of 211 patients. The features from accelerated segment test (FAST) and histogram of oriented gradients (HOG) methods were used to detect and extract features from the segmented images. Gray level co-occurrence matrix (GLCM) features were also considered in the study. The features, along with the clinical grading information, were fed into four machine learning algorithms: FRNN, logistic regression, multi-layer perceptron, and support vector machine. The results were thoroughly compared in the light of various evaluation metrics. The confusion matrix was found to be balanced, and the outcome was more cost-effective for FRNN. Results were also compared with various other leading studies done earlier in this field. The proposed FRNN model performed satisfactorily during the experiment. Further exploration of FRNN may be very helpful for radiologists and oncologists in planning the treatment for NSCLC. More varieties of cancers may be considered while conducting similar studies.


  • Mohana, P
  • Venkatesan, P
Uncontrolled cell growth in the lungs is the main cause of lung cancer, which reduces the ability to breathe. In this study, fusion of computed tomography (CT) and positron emission tomography (PET) lung images using their structural similarity is presented. The fused image contains more information than the individual CT and PET images, which helps radiologists make decisions quickly. Initially, the CT and PET images are divided into blocks of predefined size in an overlapping manner. The structural similarity between each pair of CT and PET blocks is computed for fusion. Image fusion is performed using a combination of structural similarity and the MAX rule: if the structural similarity between a CT and PET block is greater than a particular threshold, the MAX rule is applied; otherwise the pixel intensities of the CT image are used. A simple thresholding approach is employed to detect lung nodules from the fused image. The qualitative analyses show that the fusion approach provides more information with accurate detection of lung nodules.
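The block-wise fusion rule described above can be sketched as follows, assuming a simplified single-window SSIM; the block size, threshold, and function names are illustrative choices rather than the paper's exact parameters:

```python
import numpy as np

def block_ssim(x, y, c1=1e-4, c2=9e-4):
    """Simplified single-window SSIM between two equal-size blocks."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def fuse_ct_pet(ct, pet, block=8, thresh=0.5):
    """Fuse blockwise: MAX rule where structure agrees, else keep CT."""
    fused = ct.copy()
    for r in range(0, ct.shape[0], block):
        for c in range(0, ct.shape[1], block):
            cb = ct[r:r + block, c:c + block]
            pb = pet[r:r + block, c:c + block]
            if block_ssim(cb, pb) >= thresh:
                fused[r:r + block, c:c + block] = np.maximum(cb, pb)
    return fused
```

For example, a PET image that is everywhere slightly brighter than a structurally identical CT image is fused by the MAX rule, while a structurally dissimilar PET image leaves the CT intensities untouched.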

Database Acquisition for the Lung Cancer Computer Aided Diagnostic Systems

  • Meldo, Anna
  • Utkin, Lev
  • Lukashin, Aleksey
  • Muliukha, Vladimir
  • Zaborovsky, Vladimir
2019 Conference Paper, cited 0 times
Most computer aided diagnostic (CAD) systems based on deep learning algorithms are similar from the point of view of data processing stages. The main typical stages are training data acquisition, pre-processing, segmentation, and classification. Homogeneity of a training dataset's structure and its completeness are very important for minimizing inaccuracies in the development of CAD systems. The main difficulties in medical training data acquisition concern their heterogeneity and incompleteness. Another problem is the lack of a sufficiently large amount of data for training the deep neural networks that form the basis of CAD systems. In order to overcome these problems in lung cancer CAD systems, a new methodology of dataset acquisition is proposed, using as an example the database called LIRA, which has been applied to training the intellectual lung cancer CAD system called Dr. AIzimov. One important peculiarity of the LIRA dataset is the morphological confirmation of diseases. Another peculiarity is taking into account and including “atypical” cases from the point of view of radiographic features. The database development is carried out in interdisciplinary collaboration between the radiologists and data scientists developing the CAD system.

Bone Marrow and Tumor Radiomics at (18)F-FDG PET/CT: Impact on Outcome Prediction in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A
  • Davidzon, Guido A
  • Benson, Jalen
  • Leung, Ann N C
  • Vasanawala, Minal
  • Horng, George
  • Shrager, Joseph B
  • Napel, Sandy
  • Nair, Viswam S.
Radiology 2019 Journal Article, cited 0 times
Background Primary tumor maximum standardized uptake value is a prognostic marker for non-small cell lung cancer. In the setting of malignancy, bone marrow activity from fluorine 18-fluorodeoxyglucose (FDG) PET may be informative for clinical risk stratification. Purpose To determine whether integrating FDG PET radiomic features of the primary tumor, tumor penumbra, and bone marrow identifies lung cancer disease-free survival more accurately than clinical features alone. Materials and Methods Patients were retrospectively analyzed from two distinct cohorts collected between 2008 and 2016. Each tumor, its surrounding penumbra, and bone marrow from the L3-L5 vertebral bodies was contoured on pretreatment FDG PET/CT images. There were 156 bone marrow and 512 tumor and penumbra radiomic features computed from the PET series. Randomized sparse Cox regression by least absolute shrinkage and selection operator identified features that predicted disease-free survival in the training cohort. Cox proportional hazards models were built and locked in the training cohort, then evaluated in an independent cohort for temporal validation. Results There were 227 patients analyzed; 136 for training (mean age, 69 years +/- 9 [standard deviation]; 101 men) and 91 for temporal validation (mean age, 72 years +/- 10; 91 men). The top clinical model included stage; adding tumor region features alone improved outcome prediction (log likelihood, -158 vs -152; P = .007). Adding bone marrow features continued to improve performance (log likelihood, -158 vs -145; P = .001). The top model integrated stage, two bone marrow texture features, one tumor with penumbra texture feature, and two penumbra texture features (concordance, 0.78; 95% confidence interval: 0.70, 0.85; P < .001). 
This fully integrated model was a predictor of poor outcome in the independent cohort (concordance, 0.72; 95% confidence interval: 0.64, 0.80; P < .001) and a binary score stratified patients into high and low risk of poor outcome (P < .001). Conclusion A model that includes pretreatment fluorine 18-fluorodeoxyglucose PET texture features from the primary tumor, tumor penumbra, and bone marrow predicts disease-free survival of patients with non-small cell lung cancer more accurately than clinical features alone. (c) RSNA, 2019 Online supplemental material is available for this article.

[18F] FDG Positron Emission Tomography (PET) Tumor and Penumbra Imaging Features Predict Recurrence in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A.
  • Davidzon, Guido A.
  • Bakr, Shaimaa
  • Echegaray, Sebastian
  • Leung, Ann N. C.
  • Vasanawala, Minal
  • Horng, George
  • Napel, Sandy
  • Nair, Viswam S.
Tomography (Ann Arbor, Mich.) 2019 Journal Article, cited 0 times
We identified computational imaging features on 18F-fluorodeoxyglucose positron emission tomography (PET) that predict recurrence/progression in non-small cell lung cancer (NSCLC). We retrospectively identified 291 patients with NSCLC from 2 prospectively acquired cohorts (training, n = 145; validation, n = 146). We contoured the metabolic tumor volume (MTV) on all pretreatment PET images and added a 3-dimensional penumbra region that extended outward 1 cm from the tumor surface. We generated 512 radiomics features, selected 435 features based on robustness to contour variations, and then applied randomized sparse regression (LASSO) to identify features that predicted time to recurrence in the training cohort. We built Cox proportional hazards models in the training cohort and independently evaluated the models in the validation cohort. Two features, stage and an MTV-plus-penumbra texture feature, were selected by LASSO. Both features were significant univariate predictors, with stage being the best predictor (hazard ratio [HR] = 2.15 [95% confidence interval (CI): 1.56-2.95], P < .001). However, adding the MTV-plus-penumbra texture feature to stage significantly improved prediction (P = .006). This multivariate model was a significant predictor of time to recurrence in the training cohort (concordance = 0.74 [95% CI: 0.66-0.81], P < .001) that was validated in a separate validation cohort (concordance = 0.74 [95% CI: 0.67-0.81], P < .001). A combined radiomics and clinical model improved NSCLC recurrence prediction. FDG PET radiomic features may be useful biomarkers for lung cancer prognosis and add clinical utility for risk stratification.
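The paper's selection step is a randomized sparse Cox regression on time-to-event data; as a hedged stand-in (survival-specific libraries vary by installation), the sketch below shows the analogous LASSO sparsity-driven feature selection with scikit-learn on synthetic "radiomic" features. All data and parameter values are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic design: 200 samples, 10 candidate radiomic features,
# of which only the first two actually drive the outcome.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)

# The l1 penalty shrinks uninformative coefficients to exactly zero.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 0.05)
```

In the study, the surviving features (here, the indices in `selected`) would then enter a Cox proportional hazards model alongside stage.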

Bone suppression for chest X-ray image using a convolutional neural filter

  • Matsubara, N.
  • Teramoto, A.
  • Saito, K.
  • Fujita, H.
Australas Phys Eng Sci Med 2019 Journal Article, cited 0 times
Chest X-rays are used in mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem, but bone suppression accuracy still needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network, which is frequently used in the medical field and has excellent performance in image processing. The CNF outputs a value for the bone component of the target pixel from the pixel values in the neighborhood of the target pixel. By processing all positions in the input image, a bone-extracted image is generated. Finally, the bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using a CNF with six convolutional layers, yielding a bone suppression rate of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only bone components and maintaining soft tissue. These results suggest that the chances of missing abnormalities may be reduced by using the proposed method, which is useful for bone suppression in chest X-ray images.
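The pipeline above (neighborhood in, bone component out, then subtraction) can be sketched with a pluggable per-patch predictor; in the paper this predictor is a trained six-layer CNN, while here a simple patch mean stands in purely to show the surrounding mechanics:

```python
import numpy as np

def suppress_bone(image, predict_bone, k=3):
    """Slide a k x k window; estimate bone per pixel; subtract the estimate."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    bone = np.empty_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            patch = padded[r:r + k, c:c + k]
            bone[r, c] = predict_bone(patch)   # trained CNN in the paper
    return image - bone                        # bone-suppressed output

# Stand-in predictor: patch mean (not the paper's CNN).
flat = np.full((5, 5), 7.0)
out = suppress_bone(flat, lambda p: p.mean())
```

On a constant image the stand-in predictor attributes everything to "bone", so the suppressed output is all zeros; a trained CNF would instead isolate only the bone component.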

Domain-Based Analysis of Colon Polyp in CT Colonography Using Image-Processing Techniques

  • Manjunath, K N
  • Siddalingaswamy, PC
  • Prabhu, GK
Asian Pacific Journal of Cancer Prevention 2019 Journal Article, cited 0 times
Background: The purpose of the research was to improve polyp detection accuracy in CT Colonography (CTC) through effective colon segmentation, removal of tagged fecal matter through Electronic Cleansing (EC), and measurement of smaller polyps. Methods: An improved method of boundary-based semi-automatic colon segmentation with the knowledge of colon distension, an adaptive multistep method for the virtual cleansing of the segmented colon based on the knowledge of Hounsfield Units, and an automated method of smaller polyp measurement using a skeletonization technique have been implemented. Results: The techniques were evaluated on 40 CTC datasets. The segmentation method was able to delineate the colon wall accurately. The submerged colonic structures were preserved without soft tissue erosion, pseudo-enhanced voxels were corrected, and the air-contrast layer was removed without losing the adjacent tissues. Smaller polyps were validated qualitatively and quantitatively. Segmented colons were validated through volumetric overlap computation, and an accuracy of 95.826±0.6854% was achieved. In polyp measurement, the paired t-test method was applied to compare the difference with ground truth, and at α=5%, t=0.9937 and p=0.098 were achieved. The statistical values of TPR=90%, TNR=82.3% and accuracy=88.31% were achieved. Conclusion: An automated system of polyp measurement, starting from colon segmentation, has been developed to improve existing CTC solutions. The analysis of the domain-based approach to polyps has given good results. A prototype software application, which can be used as a low-cost polyp diagnosis tool, has been developed.

Scale-Space DCE-MRI Radiomics Analysis Based on Gabor Filters for Predicting Breast Cancer Therapy Response

  • Manikis, Georgios C.
  • Venianaki, Maria
  • Skepasianos, Iraklis
  • Papadakis, Georgios Z.
  • Maris, Thomas G.
  • Agelaki, Sofia
  • Karantanas, Apostolos
  • Marias, Kostas
2019 Conference Paper, cited 0 times
Radiomics-based studies have created an unprecedented momentum in computational medical imaging over the last years by significantly advancing and empowering correlational and predictive quantitative studies in numerous clinical applications. An important element of this exciting field of research, especially in oncology, is multi-scale texture analysis, since it can effectively describe tissue heterogeneity, which is highly informative for clinical diagnosis and prognosis. There are, however, several concerns regarding the plethora of radiomics features used in the literature, especially regarding their performance consistency across studies. Since many studies use software packages that yield multi-scale texture features, it makes sense to investigate the scale-space performance of candidate texture biomarkers under the hypothesis that significant texture markers may have a more persistent scale-space performance. To this end, this study proposes a methodology for the extraction of Gabor multi-scale and orientation texture DCE-MRI radiomics for predicting breast cancer complete response to neoadjuvant therapy. More specifically, a Gabor filter bank was created using four different orientations and ten different scales, and then first-order and second-order texture features were extracted for each scale-orientation data representation. The performance of all these features was evaluated under a generalized repeated cross-validation framework in a scale-space fashion using extreme gradient boosting classifiers.
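A Gabor bank of the kind described (four orientations, ten scales) can be sketched as follows; the kernel size, scale range, wavelength rule, and aspect ratio `gamma` are illustrative assumptions, not the study's settings:

```python
import numpy as np

def gabor_kernel(sigma, theta, lam, gamma=0.5, ksize=21):
    """Real (cosine) Gabor kernel: oriented Gaussian times a sinusoid."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lam)

# 4 orientations x 10 scales, as in the abstract.
bank = [gabor_kernel(sigma=s, theta=t, lam=2 * s)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for s in np.linspace(1.0, 5.5, 10)]
```

Each image would be convolved with every kernel in `bank`, and first- and second-order statistics computed per scale-orientation response.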

Study on Prognosis Factors of Non-Small Cell Lung Cancer Based on CT Image Features

  • Lu, Xiaoteng
  • Gong, Jing
  • Nie, Shengdong
Journal of Medical Imaging and Health Informatics 2019 Journal Article, cited 0 times
This study aims to investigate the prognosis factors of non-small cell lung cancer (NSCLC) based on CT image features and to develop a new quantitative image feature prognosis approach using CT images. Firstly, lung tumors were segmented and image features were extracted. Secondly, the Kaplan-Meier method was used to perform univariate survival analysis. Multivariate survival analysis was carried out with a COX regression model. Thirdly, the SMOTE algorithm was used to balance the feature data. Finally, classifiers based on WEKA were established to test the prognostic ability of the independent prognosis factors. Univariate analysis results showed that six features had a significant influence on patients' prognosis. After multivariate analysis, angular second moment, srhge and volume were significantly related to the survival of NSCLC patients (P < 0.05). According to the classifier results, these three features could predict NSCLC prognosis well. The best classification accuracy was 78.4%. The results of our study suggest that angular second moment, srhge and volume are high-potential independent prognosis factors of NSCLC.

A mathematical-descriptor of tumor-mesoscopic-structure from computed-tomography images annotates prognostic- and molecular-phenotypes of epithelial ovarian cancer

  • Lu, Haonan
  • Arshad, Mubarik
  • Thornton, Andrew
  • Avesani, Giacomo
  • Cunnea, Paula
  • Curry, Ed
  • Kanavati, Fahdi
  • Liang, Jack
  • Nixon, Katherine
  • Williams, Sophie T.
  • Hassan, Mona Ali
  • Bowtell, David D. L.
  • Gabra, Hani
  • Fotopoulou, Christina
  • Rockall, Andrea
  • Aboagye, Eric O.
Nature Communications 2019 Journal Article, cited 0 times
The five-year survival rate of epithelial ovarian cancer (EOC) is approximately 35-40% despite maximal treatment efforts, highlighting a need for stratification biomarkers for personalized treatment. Here we extract 657 quantitative mathematical descriptors from the preoperative CT images of 364 EOC patients at their initial presentation. Using machine learning, we derive a non-invasive summary-statistic of the primary ovarian tumor based on 4 descriptors, which we name "Radiomic Prognostic Vector" (RPV). RPV reliably identifies the 5% of patients with median overall survival less than 2 years, significantly improves established prognostic methods, and is validated in two independent, multi-center cohorts. Furthermore, genetic, transcriptomic and proteomic analysis from two independent datasets elucidate that stromal phenotype and DNA damage response pathways are activated in RPV-stratified tumors. RPV and its associated analysis platform could be exploited to guide personalized therapy of EOC and is potentially transferrable to other cancer types.

A Weighted Voting Ensemble Self-Labeled Algorithm for the Detection of Lung Abnormalities from X-Rays

  • Livieris, Ioannis
  • Kanavos, Andreas
  • Tampakas, Vassilis
  • Pintelas, Panagiotis
Algorithms 2019 Journal Article, cited 0 times
During the last decades, intensive efforts have been devoted to the extraction of useful knowledge from large volumes of medical data employing advanced machine learning and data mining techniques. Advances in digital chest radiography have enabled research and medical centers to accumulate large repositories of classified (labeled) images and mostly of unclassified (unlabeled) images from human experts. Machine learning methods such as semi-supervised learning algorithms have been proposed as a new direction to address the problem of shortage of available labeled data, by exploiting the explicit classification information of labeled data with the information hidden in the unlabeled data. In the present work, we propose a new ensemble semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays based on a new weighted voting scheme. The proposed algorithm assigns a vector of weights on each component classifier of the ensemble based on its accuracy on each class. Our numerical experiments illustrate the efficiency of the proposed ensemble methodology against other state-of-the-art classification methods.
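The per-class weighted voting scheme described above can be sketched as follows, assuming each ensemble member contributes its class-specific accuracy as the weight for the class it predicts; the array shapes and values are illustrative:

```python
import numpy as np

def weighted_vote(predictions, class_weights):
    """Combine ensemble predictions for one sample.

    predictions:   (n_classifiers,) predicted labels for the sample.
    class_weights: (n_classifiers, n_classes) per-class accuracy weights.
    """
    n_classes = class_weights.shape[1]
    scores = np.zeros(n_classes)
    for clf, label in enumerate(predictions):
        # Each classifier votes with its accuracy on the class it predicts.
        scores[label] += class_weights[clf, label]
    return int(np.argmax(scores))
```

A classifier that is highly accurate on one class can thus outvote several classifiers that are weak on the classes they predict.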

Detecting Lung Abnormalities From X-rays Using an Improved SSL Algorithm

  • Livieris, Ioannis
  • Kanavos, Andreas
  • Pintelas, Panagiotis
Electronic Notes in Theoretical Computer Science 2019 Journal Article, cited 0 times

Oligodendroglial tumours: subventricular zone involvement and seizure history are associated with CIC mutation status

  • Liu, Zhenyin
  • Liu, Hongsheng
  • Liu, Zhenqing
  • Zhang, Jing
BMC Neurol 2019 Journal Article, cited 1 times
BACKGROUND: CIC-mutant oligodendroglial tumours are linked to better prognosis. We aimed to investigate associations between CIC gene mutation status, MR characteristics and clinical features. METHODS: Imaging and genomic data from the Cancer Genome Atlas and the Cancer Imaging Archive (TCGA/TCIA) for 59 patients with oligodendroglial tumours were used. Differences between CIC mutation and CIC wild-type were tested using the Chi-square test and binary logistic regression analysis. RESULTS: In univariate analysis, the clinical variables and MR features, which consisted of 3 selected features (subventricular zone [SVZ] involvement, volume and seizure history), were associated with CIC mutation status (all p < 0.05). A multivariate logistic regression analysis identified that seizure history (no vs. yes, odds ratio [OR]: 28.960, 95% confidence interval [CI]: 2.625-319.49, p = 0.006) and SVZ involvement (SVZ- vs. SVZ+, OR: 77.092, p = 0.003; 95% CI: 4.578-1298.334) were associated with a higher incidence of CIC mutation. The nomogram showed good discrimination, with a C-index of 0.906 (95% CI: 0.812-1.000), and was well calibrated. The SVZ- group had increased (SVZ- vs. SVZ+, hazard ratio [HR]: 4.500, p = 0.04; 95% CI: 1.069-18.945) overall survival. CONCLUSIONS: Absence of seizure history and SVZ involvement (-) were associated with a higher incidence of CIC mutation.

Cross-Modality Knowledge Transfer for Prostate Segmentation from CT Scans

  • Liu, Yucheng
  • Khosravan, Naji
  • Liu, Yulin
  • Stember, Joseph
  • Shoag, Jonathan
  • Bagci, Ulas
  • Jambawalikar, Sachin
2019 Book Section, cited 0 times

Deep learning for magnetic resonance imaging-genomic mapping of invasive breast carcinoma

  • Liu, Qian
2019 Thesis, cited 0 times
To identify MRI-based radiomic features that could be obtained automatically by a deep learning (DL) model and could predict the clinical characteristics of breast cancer (BC), and to explain the potential underlying genomic mechanisms of the predictive radiomic features. A denoising autoencoder (DA) was developed to retrospectively extract 4,096 phenotypes from the MRI of 110 BC patients collected by The Cancer Imaging Archive (TCIA). The associations of these phenotypes with genomic features (commercialized gene signatures, expression of risk genes, and biological pathway activities extracted from the same patients' mRNA expression collected by The Cancer Genome Atlas (TCGA)) were tested with linear mixed effect (LME) models. A least absolute shrinkage and selection operator (LASSO) model was used to identify the most predictive MRI phenotypes for each clinical phenotype (tumor size (T), lymph node metastasis (N), and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2)). More than 1,000 of the 4,096 MRI phenotypes were associated with the activities of risk genes, gene signatures, and biological pathways (adjusted P-value < 0.05). High performance was obtained in predicting the status of T, N, ER, PR and HER2 (AUC > 0.9). The identified MRI phenotypes also showed significant power to stratify BC tumors. DL-based automatic MRI features performed very well in predicting the clinical characteristics of BC, and these phenotypes were identified to have genomic significance.

Multi-subtype classification model for non-small cell lung cancer based on radiomics: SLS model

  • Liu, J.
  • Cui, J.
  • Liu, F.
  • Yuan, Y.
  • Guo, F.
  • Zhang, G.
Med Phys 2019 Journal Article, cited 0 times
PURPOSE: Histological subtypes of non-small cell lung cancer (NSCLC) are crucial for systematic treatment decisions. However, current studies using non-invasive radiomic methods to classify NSCLC histology subtypes have mainly focused on the two main subtypes, squamous cell carcinoma (SCC) and adenocarcinoma (ADC), while multi-subtype classifications that also include the other two subtypes of NSCLC, large cell carcinoma (LCC) and not otherwise specified (NOS), have been very few. The aim of this work is to establish a multi-subtype classification model for the four main subtypes of NSCLC and to improve classification performance and generalization ability compared with previous studies. METHODS: In this work, we extracted 1029 features from regions of interest in computed tomography (CT) images of 349 patients from two different datasets using radiomic methods. Based on a 'three-in-one' concept, we proposed a model called SLS, wrapping three algorithms, synthetic minority oversampling technique, l2,1-norm minimization, and support vector machines, into one hybrid technique to classify the four main subtypes of NSCLC: SCC, ADC, LCC and NOS, which cover the whole range of NSCLC. RESULTS: We analyzed the 247 features obtained by dimension reduction and found that the features extracted by three methods, first order statistics, gray level co-occurrence matrix, and gray level size zone matrix, were more conducive to the classification of NSCLC subtypes. The proposed SLS model achieved an average classification accuracy of 0.89 on the training set (95% confidence interval [CI]: 0.846 to 0.912) and a classification accuracy of 0.86 on the test set (95% CI: 0.779 to 0.941). CONCLUSIONS: The experimental results showed that the subtypes of NSCLC can be well classified by the radiomic method. Our SLS model can accurately classify and diagnose the four subtypes of NSCLC based on CT images, and thus has the potential to be used in clinical practice to provide valuable information for lung cancer treatment and further promote personalized medicine.

Machine Learning Models on Prognostic Outcome Prediction for Cancer Images with Multiple Modalities

  • Liu, Gengbo
2019 Thesis, cited 0 times
Machine learning algorithms have been applied to predict different prognostic outcomes for many different diseases by directly using medical images. However, the higher resolution of various medical imaging modalities and new imaging feature extraction frameworks bring new challenges for predicting prognostic outcomes. Compared to traditional radiology practice, which is based only on visual interpretation and simple quantitative measurements, medical imaging features can dig deeper within medical images and potentially provide further objective support for clinical decisions. In this dissertation, we cover three projects applying or designing machine learning models to predict prognostic outcomes using various types of medical images.

A radiogenomics signature for predicting the clinical outcome of bladder urothelial carcinoma

  • Lin, Peng
  • Wen, Dong-Yue
  • Chen, Ling
  • Li, Xin
  • Li, Sheng-Hua
  • Yan, Hai-Biao
  • He, Rong-Quan
  • Chen, Gang
  • He, Yun
  • Yang, Hong
Eur Radiol 2019 Journal Article, cited 0 times
OBJECTIVES: To determine the integrative value of contrast-enhanced computed tomography (CECT), transcriptomics data and clinicopathological data for predicting the survival of bladder urothelial carcinoma (BLCA) patients. METHODS: RNA sequencing data, radiomics features and clinical parameters of 62 BLCA patients were included in the study. Then, prognostic signatures based on radiomics features and gene expression profiles were constructed by using least absolute shrinkage and selection operator (LASSO) Cox analysis. A multi-omics nomogram was developed by integrating radiomics, transcriptomics and clinicopathological data. More importantly, radiomics risk score-related genes were identified via weighted correlation network analysis and submitted to functional enrichment analysis. RESULTS: The radiomics and transcriptomics signatures significantly stratified BLCA patients into high- and low-risk groups in terms of the progression-free interval (PFI). The two risk models remained independent prognostic factors in multivariate analyses after adjusting for clinical parameters. A nomogram was developed and showed an excellent predictive ability for the PFI in BLCA patients. Functional enrichment analysis suggested that the radiomics signature we developed could reflect the angiogenesis status of BLCA patients. CONCLUSIONS: The integrative nomogram incorporating CECT radiomics, transcriptomics and clinical features improved the PFI prediction in BLCA patients and is a feasible and practical reference for oncological precision medicine. KEY POINTS: * Our radiomics and transcriptomics models have proved robust for survival prediction in bladder urothelial carcinoma patients. * A multi-omics nomogram model which integrates radiomics, transcriptomics and clinical features for prediction of the progression-free interval in bladder urothelial carcinoma is established.
* Molecular functional enrichment analysis is used to reveal the potential molecular function of radiomics signature.

Volumetric and Voxel-Wise Analysis of Dominant Intraprostatic Lesions on Multiparametric MRI

  • Lee, Joon
  • Carver, Eric
  • Feldman, Aharon
  • Pantelic, Milan V
  • Elshaikh, Mohamed
  • Wen, Ning
Front Oncol 2019 Journal Article, cited 0 times
Introduction: Multiparametric MR imaging (mpMRI) has shown promising results in the diagnosis and localization of prostate cancer. Furthermore, mpMRI may play an important role in identifying the dominant intraprostatic lesion (DIL) for radiotherapy boost. We sought to investigate the level of correlation between dominant tumor foci contoured on various mpMRI sequences. Methods: mpMRI data from 90 patients with MR-guided biopsy-proven prostate cancer were obtained from the SPIE-AAPM-NCI Prostate MR Classification Challenge. Each case consisted of T2-weighted (T2W), apparent diffusion coefficient (ADC), and K(trans) images computed from dynamic contrast-enhanced sequences. All image sets were rigidly co-registered, and the dominant tumor foci were identified and contoured for each MRI sequence. Hausdorff distance (HD), mean distance to agreement (MDA), and Dice and Jaccard coefficients were calculated between the contours for each pair of MRI sequences (i.e., T2 vs. ADC, T2 vs. K(trans), and ADC vs. K(trans)). The voxel-wise Spearman correlation was also obtained between these image pairs. Results: The DILs were located in the anterior fibromuscular stroma, central zone, peripheral zone, and transition zone in 35.2, 5.6, 32.4, and 25.4% of patients, respectively. Gleason grade groups 1-5 represented 29.6, 40.8, 15.5, and 14.1% of the study population, respectively (with grade groups 4 and 5 analyzed together). The mean contour volumes for the T2W images, and the ADC and K(trans) maps were 2.14 +/- 2.1, 2.22 +/- 2.2, and 1.84 +/- 1.5 mL, respectively. K(trans) values were indistinguishable between cancerous regions and the rest of the prostatic regions for 19 patients. The Dice coefficient and Jaccard index were 0.74 +/- 0.13, 0.60 +/- 0.15 for T2W-ADC and 0.61 +/- 0.16, 0.46 +/- 0.16 for T2W-K(trans). The voxel-based Spearman correlations were 0.20 +/- 0.20 for T2W-ADC and 0.13 +/- 0.25 for T2W-K(trans). 
Conclusions: The DIL contoured on T2W images had a high level of agreement with those contoured on ADC maps, but there was little to no quantitative correlation of these results with tumor location and Gleason grade group. Technical hurdles are yet to be solved for precision radiotherapy to target the DILs based on physiological imaging. A Boolean sum volume (BSV) incorporating all available MR sequences may be reasonable in delineating the DIL boost volume.
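The overlap metrics compared in this study can be computed directly from binary contour masks. A minimal NumPy sketch of the Dice coefficient and Jaccard index, using toy 10x10 masks rather than the study's data:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard_index(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Toy "contours" as binary masks: two partially overlapping 5x5 squares
m1 = np.zeros((10, 10)); m1[2:7, 2:7] = 1
m2 = np.zeros((10, 10)); m2[3:8, 3:8] = 1
print(dice_coefficient(m1, m2))  # overlap 16 voxels -> 2*16/50 = 0.64
print(jaccard_index(m1, m2))     # 16/34 ≈ 0.47
```

Note that Dice is always at least as large as Jaccard for the same pair of masks (D = 2J/(1+J)), which is consistent with the T2W-ADC values reported above (0.74 vs. 0.60).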

Automatic GPU memory management for large neural models in TensorFlow

  • Le, Tung D.
  • Imai, Haruki
  • Negishi, Yasushi
  • Kawachiya, Kiyokuni
2019 Conference Proceedings, cited 0 times
Deep learning models are becoming larger and will not fit in the limited memory of accelerators such as GPUs for training. Though many methods have been proposed to solve this problem, they are rather ad-hoc in nature and difficult to extend and integrate with other techniques. In this paper, we tackle the problem in a formal way to provide a strong foundation for supporting large models. We propose a method of formally rewriting the computational graph of a model where swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. By introducing a categorized topological ordering for simulating graph execution, the memory consumption of a model can be easily analyzed by using operation distances in the ordering. As a result, the problem of fitting a large model into a memory-limited accelerator is reduced to the problem of reducing operation distances in a categorized topological ordering. We then show how to formally derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. Finally, we propose a simulation-based auto-tuning to automatically find suitable graph-rewriting parameters for the best performance. We developed a module in TensorFlow, called LMS, by which we successfully trained ResNet-50 with a 4.9x larger mini-batch size and 3D U-Net with a 5.6x larger image resolution.
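The core idea here — a tensor whose producer and consumer are far apart in a topological ordering is a good candidate for swapping out to CPU memory — can be illustrated with a toy planner. This is a simplified sketch of the concept, not the LMS implementation; the graph, threshold, and node names are invented for illustration:

```python
from collections import deque

def topological_order(graph):
    """Kahn's algorithm; `graph` maps each op to its list of consumer ops."""
    indeg = {n: 0 for n in graph}
    for n in graph:
        for c in graph[n]:
            indeg[c] += 1
    q = deque(n for n in graph if indeg[n] == 0)
    order = []
    while q:
        n = q.popleft()
        order.append(n)
        for c in graph[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                q.append(c)
    return order

def plan_swaps(graph, threshold):
    """Flag producer->consumer edges whose distance in the ordering exceeds
    `threshold`: candidates for a swap-out to CPU after the producer and a
    swap-in just before the consumer runs."""
    order = topological_order(graph)
    pos = {n: i for i, n in enumerate(order)}
    swaps = []
    for prod in graph:
        for cons in graph[prod]:
            if pos[cons] - pos[prod] > threshold:
                swaps.append((prod, cons, pos[cons] - pos[prod]))
    return swaps

# Toy U-Net-like graph: a long skip connection from conv1 to the decoder
g = {"conv1": ["conv2", "decode"], "conv2": ["conv3"],
     "conv3": ["decode"], "decode": []}
print(plan_swaps(g, threshold=2))  # only the long skip edge conv1 -> decode
```

Skip connections in encoder-decoder architectures like 3D U-Net are exactly the long-distance edges this heuristic surfaces, which is why such models benefit from swapping.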

Conditional random fields improve the CNN-based prostate cancer classification performance

  • Lapa, Paulo Alberto Fernandes
2019 Thesis, cited 0 times
Prostate cancer is a condition with life-threatening implications but without clearly identified causes. Several diagnostic procedures can be used, ranging from very invasive, human-dependent techniques to state-of-the-art non-invasive medical imaging. With recent academic and industry focus on the deep learning field, novel research has been performed on how to improve prostate cancer diagnosis using Convolutional Neural Networks to interpret Magnetic Resonance images. Conditional Random Fields have achieved outstanding results in the image segmentation task by promoting homogeneous classification at the pixel level. A new implementation, CRF-RNN, defines Conditional Random Fields by means of convolutional layers, allowing end-to-end training of the feature extractor and classifier models. This work tries to repurpose CRFs for the image classification task, a more traditional sub-field of imaging analysis, in a way that, to the best of the author's knowledge, has not been implemented before. To achieve this, a purpose-built architecture was refitted, adding a CRF layer as a feature extractor step. To serve as the implementation's benchmark, a multi-parametric Magnetic Resonance Imaging dataset was used, initially provided for the PROSTATEx Challenge 2017 and collected by Radboud University. The results are very promising, showing an increase in the network's classification quality.

Semantic learning machine improves the CNN-Based detection of prostate cancer in non-contrast-enhanced MRI

  • Lapa, Paulo
  • Gonçalves, Ivo
  • Rundo, Leonardo
  • Castelli, Mauro
2019 Conference Proceedings, cited 0 times
Considering that Prostate Cancer (PCa) is the most frequently diagnosed tumor in Western men, considerable attention has been devoted to computer-assisted PCa detection approaches. However, this task still represents an open research question. In clinical practice, multiparametric Magnetic Resonance Imaging (MRI) is becoming the most used modality, aiming at defining biomarkers for PCa. In recent years, deep learning techniques have boosted the performance in prostate MR image analysis and classification. This work explores the use of the Semantic Learning Machine (SLM) neuroevolution algorithm to replace the backpropagation algorithm commonly used in the last fully-connected layers of Convolutional Neural Networks (CNNs). We analyzed the non-contrast-enhanced multispectral MRI sequences included in the PROSTATEx dataset, namely: T2-weighted, Proton Density weighted, and Diffusion Weighted Imaging. The experimental results show that the SLM significantly outperforms XmasNet, a state-of-the-art CNN. In particular, with respect to XmasNet, the SLM achieves higher classification accuracy (without pre-training the underlying CNN or relying on backpropagation) as well as a speed-up of one order of magnitude.

A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop

  • Langlotz, Curtis P
  • Allen, Bibb
  • Erickson, Bradley J
  • Kalpathy-Cramer, Jayashree
  • Bigelow, Keith
  • Cook, Tessa S
  • Flanders, Adam E
  • Lungren, Matthew P
  • Mendelson, David S
  • Rudie, Jeffrey D
  • Wang, Ge
  • Kandarpa, Krishna
Radiology 2019 Journal Article, cited 1 times
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: 1, new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; 2, automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; 3, new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures, and federated machine learning methods; 4, machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and 5, validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.

Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation

  • Lai, Ying-Chieh
  • Yeh, Ta-Sen
  • Wu, Ren-Chin
  • Tsai, Cheng-Kun
  • Yang, Lan-Yan
  • Lin, Gigin
  • Kuo, Michael D
Cancers 2019 Journal Article, cited 0 times
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors for the CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic curve (ROC) analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predicted CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and an area under the ROC curve of 0.89. In conclusion, this pilot study showed acute tumor transition angle on CT images may predict the CIN status of gastric cancer.

Computer-Aided Diagnosis of Life-Threatening Diseases

  • Kumar, Pramod
  • Ambekar, Sameer
  • Roy, Subarna
  • Kunchur, Pavan
2019 Book Section, cited 0 times
According to the WHO, the incidence of life-threatening diseases like cancer, diabetes, and Alzheimer's disease is escalating globally. In the past few decades, traditional methods have been used to diagnose such diseases. These traditional methods often have limitations such as lack of accuracy, expense, and time-consuming procedures. Computer-aided diagnosis (CAD) aims to overcome these limitations by personalizing healthcare. Machine learning is a promising CAD method, offering effective solutions for these diseases. It is being used for early detection of cancer, diabetic retinopathy, and Alzheimer's disease, and also to identify diseases in plants. Machine learning can increase efficiency, making the process more cost effective, with quicker delivery of results. There are several CAD algorithms (ANN, SVM, etc.) that can be used to train a disease dataset and eventually make significant predictions. CAD algorithms have also shown potential for the diagnosis and early detection of life-threatening diseases.

Analysis of CT DICOM Image Segmentation for Abnormality Detection

  • Kulkarni, Rashmi
  • Bhavani, K.
International Journal of Engineering and Manufacturing 2019 Journal Article, cited 0 times
Cancer is a menacing disease, and great care is required in its diagnosis. CT is the modality most often used in cancer therapy. Image processing techniques [1] can help doctors diagnose more easily and accurately. Image pre-processing [2] and segmentation methods [3] are used to extract cancerous nodules from CT images. Much research has been done on segmentation of CT images with different algorithms, but none has reached 100% accuracy. This work proposes a model for analyzing the segmentation of CT images with and without filtering, and brings out the importance of pre-processing CT images.

Medical (CT) image generation with style

  • Krishna, Arjun
  • Mueller, Klaus
2019 Conference Proceedings, cited 0 times

Investigating the role of model-based and model-free imaging biomarkers as early predictors of neoadjuvant breast cancer therapy outcome

  • Kontopodis, Eleftherios
  • Venianaki, Maria
  • Manikis, George C
  • Nikiforaki, Katerina
  • Salvetti, Ovidio
  • Papadaki, Efrosini
  • Papadakis, Georgios Z
  • Karantanas, Apostolos H
  • Marias, Kostas
IEEE J Biomed Health Inform 2019 Journal Article, cited 0 times
Imaging biomarkers (IBs) play a critical role in the clinical management of breast cancer (BRCA) patients throughout the cancer continuum for screening, diagnosis and therapy assessment especially in the neoadjuvant setting. However, certain model-based IBs suffer from significant variability due to the complex workflows involved in their computation, whereas model-free IBs have not been properly studied regarding clinical outcome. In the present study, IBs from 35 BRCA patients who received neoadjuvant chemotherapy (NAC) were extracted from dynamic contrast enhanced MR imaging (DCE-MRI) data with two different approaches, a model-free approach based on pattern recognition (PR), and a model-based one using pharmacokinetic compartmental modeling. Our analysis found that both model-free and model-based biomarkers can predict pathological complete response (pCR) after the first cycle of NAC. Overall, 8 biomarkers predicted the treatment response after the first cycle of NAC, with statistical significance (p-value<0.05), and 3 at the baseline. The best pCR predictors at first follow-up, achieving high AUC and sensitivity and specificity more than 50%, were the hypoxic component with threshold2 (AUC 90.4%) from the PR method, and the median value of kep (AUC 73.4%) from the model-based approach. Moreover, the 80th percentile of ve achieved the highest pCR prediction at baseline with AUC 78.5%. The results suggest that model-free DCE-MRI IBs could be a more robust alternative to complex, model-based ones such as kep and favor the hypothesis that the PR image-derived hypoxic image component captures actual tumor hypoxia information able to predict BRCA NAC outcome.
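The model-based biomarkers discussed here (kep, ve, and Ktrans, related by ve = Ktrans/kep) come from fitting a pharmacokinetic compartmental model to the DCE-MRI signal. A forward simulation of the standard Tofts model, sketched in NumPy with an invented arterial input function, shows how those parameters enter; the actual study fits such a model in the inverse direction:

```python
import numpy as np

def tofts_ct(t, cp, ktrans, kep):
    """Standard Tofts model: tissue concentration Ct(t) is the plasma input
    cp(t) convolved with the kernel Ktrans * exp(-kep * t).
    Discrete approximation on a uniform time grid."""
    dt = t[1] - t[0]
    kernel = ktrans * np.exp(-kep * t)
    return np.convolve(cp, kernel)[: len(t)] * dt

t = np.arange(0, 5, 0.01)            # minutes
cp = 5.0 * t * np.exp(-1.5 * t)      # toy arterial input function (assumed)
ct = tofts_ct(t, cp, ktrans=0.25, kep=0.5)   # here ve = Ktrans/kep = 0.5
```

Fitting ktrans and kep per voxel to measured Ct(t) curves is what makes these model-based biomarkers sensitive to the workflow choices (AIF selection, temporal resolution) the study contrasts with model-free, pattern-recognition biomarkers.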

Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy

  • Koike, Yuhei
  • Akino, Yuichi
  • Sumida, Iori
  • Shiomi, Hiroya
  • Mizuno, Hirokazu
  • Yagi, Masashi
  • Isohashi, Fumiaki
  • Seo, Yuji
  • Suzuki, Osamu
  • Ogawa, Kazuhiko
J Radiat Res 2019 Journal Article, cited 0 times
The aim of this work is to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose-volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue and bone region were 108.1 +/- 24.0, 38.9 +/- 10.7 and 366.2 +/- 62.0 Hounsfield units, respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 +/- 1.9 mm. A treatment planning study using generated sCT detected only small, clinically negligible differences. These findings demonstrated the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using cGAN.

Machine learning-based unenhanced CT texture analysis for predicting BAP1 mutation status of clear cell renal cell carcinomas

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
Acta Radiol 2019 Journal Article, cited 0 times
BACKGROUND: BRCA1-associated protein 1 (BAP1) mutation is an unfavorable factor for overall survival in patients with clear cell renal cell carcinoma (ccRCC). Radiomics literature about BAP1 mutation lacks papers that consider the reliability of texture features in their workflow. PURPOSE: Using texture features with a high inter-observer agreement, we aimed to develop and internally validate a machine learning-based radiomic model for predicting the BAP1 mutation status of ccRCCs. MATERIALS AND METHODS: For this retrospective study, 65 ccRCCs were included from a public database. Texture features were extracted from unenhanced computed tomography (CT) images, using two-dimensional manual segmentation. Dimension reduction was done in three steps: (i) inter-observer agreement analysis; (ii) collinearity analysis; and (iii) feature selection. The machine learning classifier was random forest. The model was validated using 10-fold nested cross-validation. The reference standard was the BAP1 mutation status. RESULTS: Out of 744 features, 468 had an excellent inter-observer agreement. After the collinearity analysis, the number of features decreased to 17. Finally, the wrapper-based algorithm selected six features. Using selected features, the random forest correctly classified 84.6% of the labelled slices regarding BAP1 mutation status with an area under the receiver operating characteristic curve of 0.897. For predicting ccRCCs with BAP1 mutation, the sensitivity, specificity, and precision were 90.4%, 78.8%, and 81%, respectively. For predicting ccRCCs without BAP1 mutation, the sensitivity, specificity, and precision were 78.8%, 90.4%, and 89.1%, respectively. CONCLUSION: Machine learning-based unenhanced CT texture analysis might be a potential method for predicting the BAP1 mutation status of ccRCCs.
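Step (ii) of the dimension-reduction workflow, collinearity analysis, is commonly implemented by dropping any feature that is highly correlated with a feature already kept. A minimal NumPy sketch of that greedy filter; the 0.9 threshold, toy data, and feature names are assumptions for illustration, not the paper's values:

```python
import numpy as np

def drop_collinear(X, names, r_thresh=0.9):
    """Greedy collinearity filter: walk the features in order and keep one
    only if its |Pearson r| with every already-kept feature is <= r_thresh."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= r_thresh for k in kept):
            kept.append(j)
    return [names[k] for k in kept]

rng = np.random.default_rng(0)
a = rng.normal(size=100)
b = a + 0.01 * rng.normal(size=100)   # nearly collinear with a
c = rng.normal(size=100)
X = np.column_stack([a, b, c])
print(drop_collinear(X, ["f_a", "f_b", "f_c"]))  # f_b is dropped
```

In the paper's pipeline this step sits between the inter-observer agreement filter and the wrapper-based feature selection, shrinking 468 reliable features down to 17.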

Reliability of Single-Slice–Based 2D CT Texture Analysis of Renal Masses: Influence of Intra- and Interobserver Manual Segmentation Variability on Radiomic Feature Reproducibility

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Ates, Ece
  • Kilickesmez, Ozgur
AJR Am J Roentgenol 2019 Journal Article, cited 0 times
OBJECTIVE. The objective of our study was to investigate the potential influence of intra- and interobserver manual segmentation variability on the reliability of single-slice-based 2D CT texture analysis of renal masses. MATERIALS AND METHODS. For this retrospective study, 30 patients with clear cell renal cell carcinoma were included from a public database. For intra- and interobserver analyses, three radiologists with varying degrees of experience segmented the tumors from unenhanced CT and corticomedullary phase contrast-enhanced CT (CECT) in different sessions. Each radiologist was blind to the image slices selected by other radiologists and him- or herself in the previous session. A total of 744 texture features were extracted from original, filtered, and transformed images. The intraclass correlation coefficient was used for reliability analysis. RESULTS. In the intraobserver analysis, the rates of features with good to excellent reliability were 84.4-92.2% for unenhanced CT and 85.5-93.1% for CECT. Considering the mean rates of unenhanced CT and CECT, having high experience resulted in better reliability rates in terms of the intraobserver analysis. In the interobserver analysis, the rates were 76.7% for unenhanced CT and 84.9% for CECT. The gray-level cooccurrence matrix and first-order feature groups yielded higher good to excellent reliability rates on both unenhanced CT and CECT. Filtered and transformed images resulted in more features with good to excellent reliability than the original images did on both unenhanced CT and CECT. CONCLUSION. Single-slice-based 2D CT texture analysis of renal masses is sensitive to intra- and interobserver manual segmentation variability. Therefore, it may lead to nonreproducible results in radiomic analysis unless a reliability analysis is considered in the workflow.
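The reliability analysis here rests on the intraclass correlation coefficient. A NumPy implementation of one common form, ICC(2,1) (two-way random effects, absolute agreement, single rater), checked against the classic Shrout and Fleiss worked example; whether this exact ICC form matches the paper's choice is an assumption:

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y is an (n subjects x k raters) matrix of measurements."""
    n, k = Y.shape
    mean_r = Y.mean(axis=0)                  # per-rater means
    mean_s = Y.mean(axis=1)                  # per-subject means
    grand = Y.mean()
    SSR = n * ((mean_r - grand) ** 2).sum()  # between-rater sum of squares
    SSS = k * ((mean_s - grand) ** 2).sum()  # between-subject sum of squares
    SST = ((Y - grand) ** 2).sum()
    SSE = SST - SSR - SSS                    # residual
    MSB = SSS / (n - 1)                      # between-subject mean square
    MSC = SSR / (k - 1)                      # between-rater mean square
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSB - MSE) / (MSB + (k - 1) * MSE + k * (MSC - MSE) / n)

# Shrout & Fleiss (1979) worked example: 6 subjects rated by 4 raters
Y = np.array([[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
              [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]], dtype=float)
print(round(icc2_1(Y), 2))  # 0.29
```

A texture feature would be kept as "good to excellent" only if its ICC across the repeated segmentations clears a chosen cutoff (commonly 0.75 or higher).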

Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status

  • Kocak, B.
  • Durmaz, E. S.
  • Ates, E.
  • Sel, I.
  • Turgut Gunes, S.
  • Kaya, O. K.
  • Zeynalova, A.
  • Kilickesmez, O.
Eur Radiol 2019 Journal Article, cited 0 times
OBJECTIVE: To evaluate the potential value of the machine learning (ML)-based MRI texture analysis for predicting 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms. MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images, using LIFEx software. Training and unseen validation splits were created using stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine. Friedman test and pairwise post hoc analyses were used for comparison of classification performances based on the area under the curve (AUC). RESULTS: Overall, the predictive performance of the ML algorithms were statistically significantly different, chi2(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1 to 84%, respectively. The neural network had the highest mean rank with mean AUC and accuracy values of 0.869 and 83.8%, respectively. CONCLUSIONS: The ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified. KEY POINTS: * More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. 
* Satisfying classification outcomes are not limited to a single algorithm. * A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. * Feature selection is sensitive to different patient data set samples so that each sampling leads to the selection of different feature subsets, which needs to be considered in future works.

Unenhanced CT Texture Analysis of Clear Cell Renal Cell Carcinomas: A Machine Learning-Based Study for Predicting Histopathologic Nuclear Grade

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Ates, Ece
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
American Journal of Roentgenology 2019 Journal Article, cited 0 times
OBJECTIVE: The purpose of this study is to investigate the predictive performance of machine learning (ML)-based unenhanced CT texture analysis in distinguishing low (grades I and II) and high (grades III and IV) nuclear grade clear cell renal cell carcinomas (RCCs). MATERIALS AND METHODS: For this retrospective study, 81 patients with clear cell RCC (56 high and 25 low nuclear grade) were included from a public database. Using 2D manual segmentation, 744 texture features were extracted from unenhanced CT images. Dimension reduction was done in three consecutive steps: reproducibility analysis by two radiologists, collinearity analysis, and feature selection. Models were created using artificial neural network (ANN) and binary logistic regression, with and without synthetic minority oversampling technique (SMOTE), and were validated using 10-fold cross-validation. The reference standard was histopathologic nuclear grade (low vs high). RESULTS: Dimension reduction steps yielded five texture features for the ANN and six for the logistic regression algorithm. None of the clinical variables were selected. ANN alone and ANN with SMOTE correctly classified 81.5% and 70.5%, respectively, of clear cell RCCs, with AUC values of 0.714 and 0.702, respectively. The logistic regression algorithm alone and with SMOTE correctly classified 75.3% and 62.5%, respectively, of the tumors, with AUC values of 0.656 and 0.666, respectively. The ANN performed better than the logistic regression (p < 0.05). No statistically significant difference was present between the model performances created with and without SMOTE (p > 0.05). CONCLUSION: ML-based unenhanced CT texture analysis using ANN can be a promising noninvasive method in predicting the nuclear grade of clear cell RCCs.
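SMOTE balances a skewed class distribution (here 56 high vs. 25 low grade) by interpolating synthetic minority samples between nearest minority neighbours. A self-contained NumPy sketch of that idea on toy 2D points; the real study applied it to selected texture features:

```python
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    """SMOTE-style oversampling sketch: each synthetic sample lies on the
    segment between a random minority sample and one of its k nearest
    minority-class neighbours."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                   # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Toy minority class: four corners of the unit square
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synth = smote_like(minority, n_new=5)
```

Because each synthetic point is a convex combination of two existing minority samples, the oversampled set stays inside the minority class's convex hull, which is why SMOTE can help (or, as in this study, fail to help) without fabricating outliers.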

Influence of segmentation margin on machine learning–based high-dimensional quantitative CT texture analysis: a reproducibility study on renal clear cell carcinomas

  • Kocak, Burak
  • Ates, Ece
  • Durmaz, Emine Sebnem
  • Ulusan, Melis Baykara
  • Kilickesmez, Ozgur
European Radiology 2019 Journal Article, cited 0 times

Training of deep convolutional neural nets to extract radiomic signatures of tumors

  • Kim, J.
  • Seo, S.
  • Ashrafinia, S.
  • Rahmim, A.
  • Sossi, V.
  • Klyuzhin, I.
Journal of Nuclear Medicine 2019 Journal Article, cited 0 times
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are “hand-crafted” and are explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features may, or have the ability to include radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train the CNNs to estimate the values of RFs from the images without the explicit computation. We then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 x morphology, 4 x intensity histogram, 3 x texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers, and a total of 164 filters, was implemented in Python using the Keras library with TensorFlow backend. The mean absolute error was the optimized loss function. 
The CNN was trained to automatically estimate the values of each of the 10 RFs for each image; 1900 images were used for training, and 100 were used for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprised of 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at the Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield a similar image size to the simulated image set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. With all features, the differences between the CNN-estimated and EC feature values were statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, with all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs. While the accuracy of CNN-based estimates varied between the features, in general, the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and training data, features can be estimated more accurately. 
While a greater number of RFs need to be similarly tested in the future, these initial experiments provide first evidence that, given sufficient quality and quantity of training data, CNNs indeed represent a more general approach to feature extraction and may potentially replace radiomics-based analyses without compromising descriptive thoroughness.
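As an aside for readers of this entry, the first-order intensity-histogram features named in the abstract (mean, variance, skewness, kurtosis) can be sketched in a few lines. This is an illustrative reimplementation using the usual population-moment formulas, not the IBSI-compliant SERA library the study actually used:

```python
import numpy as np

def intensity_features(volume, mask):
    """First-order intensity features over a tumor mask.

    Sketch only: follows the standard moment definitions; the study
    computed these with the SERA library.
    """
    x = volume[mask > 0].astype(float)
    mu = x.mean()
    var = x.var()                               # population variance
    sd = np.sqrt(var)
    skew = np.mean((x - mu) ** 3) / sd ** 3
    kurt = np.mean((x - mu) ** 4) / var ** 2    # non-excess kurtosis
    return {"mean": mu, "variance": var, "skewness": skew, "kurtosis": kurt}
```

A CNN trained to regress these values, as in the study, would take the same masked volume as input and the dictionary values as targets.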

Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities

  • Kim, Incheol
  • Rajaraman, Sivaramakrishnan
  • Antani, Sameer
Diagnostics (Basel) 2019 Journal Article, cited 0 times
Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of DL models hinder their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer improved explanation of the convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on a linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer leading to correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better at detecting and localizing the discriminative ROIs than other state-of-the-art class-activation methods. Further, to visualize its effectiveness, we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanation through their different size, shape, and location for our multi-modality CNN model, which achieved over 98% performance on a dataset constructed from publicly available images.
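The core CRM idea described in this abstract, scoring each spatial element of the last feature maps by the incremental output-layer MSE incurred when that element is removed, can be sketched on a toy model. The global-average-pooled linear "output layer" here is an assumption for illustration; the paper's multi-modality CNN is far larger:

```python
import numpy as np

def crm(feature_maps, weights, target):
    """Toy Class-selective Relevance Mapping sketch.

    Relevance of spatial cell (i, j) = increase in output-layer MSE when
    the activations at (i, j) are zeroed across all channels.
    Assumes a toy model: global average pooling + a linear output layer.
    """
    C, H, W = feature_maps.shape

    def output(fm):
        pooled = fm.reshape(C, -1).mean(axis=1)  # global average pooling
        return weights @ pooled                  # linear output layer

    base_mse = np.mean((output(feature_maps) - target) ** 2)
    relevance = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            fm = feature_maps.copy()
            fm[:, i, j] = 0.0
            relevance[i, j] = np.mean((output(fm) - target) ** 2) - base_mse
    return relevance
```

Cells whose removal raises the error most are the ones the (toy) model relies on, which is the intuition behind the class-specific ROI maps in the paper.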

Prediction of 1p/19q Codeletion in Diffuse Glioma Patients Using Preoperative Multiparametric Magnetic Resonance Imaging

  • Kim, Donnie
  • Wang, Nicholas C
  • Ravikumar, Visweswaran
  • Raghuram, DR
  • Li, Jinju
  • Patel, Ankit
  • Wendt, Richard E
  • Rao, Ganesh
  • Rao, Arvind
Frontiers in Computational Neuroscience 2019 Journal Article, cited 0 times

3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme

  • Khened, Mahendra
  • Anand, Vikas Kumar
  • Acharya, Gagan
  • Shah, Nameeta
  • Krishnamurthi, Ganapathy
2019 Conference Proceedings, cited 0 times

Zonal Segmentation of Prostate T2W-MRI using Atrous Convolutional Neural Network

  • Khan, Zia
  • Yahya, Norashikin
  • Alsaih, Khaled
  • Meriaudeau, Fabrice
2019 Conference Paper, cited 0 times
The number of prostate cancer cases is steadily increasing, especially with the growing ageing population. The reported 5-year relative survival rate for men with stage 1 prostate cancer is almost 99%; hence, early detection will significantly improve treatment planning and increase the survival rate. Magnetic resonance imaging (MRI) is a common imaging modality for the diagnosis of prostate cancer: it provides good visualization of soft tissue and enables better lesion detection and staging. The main challenge of prostate whole-gland segmentation is the blurry boundary between the central gland (CG) and peripheral zone (PZ), which complicates differential diagnosis, since the occurrence and characteristics of cancer differ substantially between the two zones. To enhance the diagnosis of the prostate gland, we implemented the DeepLabV3+ semantic segmentation approach to segment the prostate into zones. DeepLabV3+ achieved significant results in segmentation of prostate MRI by applying several parallel atrous convolutions with different rates. The CNN-based semantic segmentation approach was trained and tested on the NCI-ISBI 1.5T and 3T MRI dataset consisting of 40 patients. Performance evaluation based on the Dice similarity coefficient (DSC) of the DeepLab-based segmentation is compared with two other CNN-based semantic segmentation techniques: FCN and PSNet. Results show that prostate segmentation using DeepLabV3+ can perform better than FCN and PSNet, with an average DSC of 70.3% in the PZ and 88% in the CG zone. This indicates the significant contribution made by the atrous convolution layers in producing better prostate segmentation results.
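The Dice similarity coefficient used as the evaluation metric in this study has a compact definition, sketched here for two binary masks (illustrative code, not the authors' evaluation pipeline):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0
```

In zonal segmentation, the DSC would be computed separately for the CG and PZ label masks, matching the per-zone percentages reported above.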

ECM-CSD: An Efficient Classification Model for Cancer Stage Diagnosis in CT Lung Images Using FCM and SVM Techniques

  • Kavitha, MS
  • Shanthini, J
  • Sabitha, R
Journal of Medical Systems 2019 Journal Article, cited 0 times

Neurosense: deep sensing of full or near-full coverage head/brain scans in human magnetic resonance imaging

  • Kanber, B.
  • Ruffle, J.
  • Cardoso, J.
  • Ourselin, S.
  • Ciccarelli, O.
Neuroinformatics 2019 Journal Article, cited 0 times
The application of automated algorithms to imaging requires knowledge of its content, a curatorial task, for which we ordinarily rely on the Digital Imaging and Communications in Medicine (DICOM) header as the only source of image meta-data. However, identifying brain MRI scans that have full or near-full coverage among a large number (e.g. >5000) of scans comprising both head/brain and other body parts is a time-consuming task that cannot be automated with the use of the information stored in the DICOM header attributes alone. Depending on the clinical scenario, an entire set of scans acquired in a single visit may often be labelled “BRAIN” in the DICOM field 0018,0015 (Body Part Examined), while the individual scans will often not only include brain scans with full coverage, but also others with partial brain coverage, scans of the spinal cord, and in some cases other body parts.

Multicenter CT phantoms public dataset for radiomics reproducibility tests

  • Kalendralis, Petros
  • Traverso, Alberto
  • Shi, Zhenwei
  • Zhovannik, Ivan
  • Monshouwer, Rene
  • Starmans, Martijn P A
  • Klein, Stefan
  • Pfaehler, Elisabeth
  • Boellaard, Ronald
  • Dekker, Andre
  • Wee, Leonard
Med Phys 2019 Journal Article, cited 0 times
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful to test radiomic feature reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL), with scanners from two different manufacturers, Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernels, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of "Extensible Neuroimaging Archive Toolkit-XNAT". The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features in models, to be more assured of external validity across hitherto unseen contexts. In this view, phantom data from different centers represent a valuable source of information to exclude CT radiomic features that may already be unstable with respect to simplified structures and tightly controlled scan settings.
The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.
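One simple screen for the reproducibility testing this dataset supports is the percent coefficient of variation of a feature measured on the same phantom across centers or settings. This is an illustrative sketch (studies typically also report ICC or concordance measures):

```python
def coefficient_of_variation(values):
    """Percent coefficient of variation (CV) of one radiomic feature
    measured repeatedly on the same phantom: 100 * sd / mean, using the
    sample standard deviation. Low CV suggests a reproducible feature."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 100.0 * (var ** 0.5) / mean
```

Features whose CV across scanners exceeds a chosen threshold (e.g. 10-15% in some radiomics studies) would be candidates for exclusion from prediction models.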

Enhancement of Deep Learning in Image Classification Performance Using Xception with the Swish Activation Function for Colorectal Polyp Preliminary Screening

  • Jinsakul, Natinai
  • Tsai, Cheng-Fa
  • Tsai, Chia-En
  • Wu, Pensee
Mathematics 2019 Journal Article, cited 0 times
One of the leading forms of cancer is colorectal cancer (CRC), which is responsible for increasing mortality in young people. The aim of this paper is to provide an experimental modification of the Xception deep learning model with the Swish activation function and to assess the possibility of developing a preliminary colorectal polyp screening system by training the proposed model on a colorectal topogram dataset with two and three classes. The results indicate that the proposed model can enhance the original convolutional neural network model, achieving classification accuracy of up to 98.99% for two classes and 91.48% for three classes. When testing the model on external images, the proposed method also improves prediction compared to the traditional method, with 99.63% accuracy for true prediction of two classes and 80.95% accuracy for true prediction of three classes.
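The Swish activation swapped into Xception in this study is itself one line: swish(x) = x · sigmoid(βx), with β = 1 as the common default. A minimal scalar sketch:

```python
import math

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x).
    Smooth and non-monotonic, unlike ReLU; beta=1 is the usual default."""
    return x / (1.0 + math.exp(-beta * x))
```

In a framework, the same function would be applied element-wise in place of ReLU inside each Xception block.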

Fusion Radiomics Features from Conventional MRI Predict MGMT Promoter Methylation Status in Lower Grade Gliomas

  • Jiang, Chendan
  • Kong, Ziren
  • Liu, Sirui
  • Feng, Shi
  • Zhang, Yiwei
  • Zhu, Ruizhe
  • Chen, Wenlin
  • Wang, Yuekun
  • Lyu, Yuelei
  • You, Hui
  • Zhao, Dachun
  • Wang, Renzhi
  • Wang, Yu
  • Ma, Wenbin
  • Feng, Feng
Eur J Radiol 2019 Journal Article, cited 0 times
PURPOSE: The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter has been proven to be a prognostic and predictive biomarker for lower grade glioma (LGG). This study aims to build a radiomics model to preoperatively predict the MGMT promoter methylation status in LGG. METHOD: 122 pathology-confirmed LGG patients were retrospectively reviewed, with 87 local patients as the training dataset and 35 from The Cancer Imaging Archive as independent validation. A total of 1702 radiomics features were extracted from three-dimensional contrast-enhanced T1 (3D-CE-T1)-weighted and T2-weighted MRI images, including 14 shape, 18 first order, 75 texture, and 744 wavelet features respectively. The radiomics features were selected with the least absolute shrinkage and selection operator (LASSO) algorithm, and prediction models were constructed with multiple classifiers. Models were evaluated using receiver operating characteristic (ROC) analysis. RESULTS: Five radiomics prediction models, namely, a 3D-CE-T1-weighted single radiomics model, a T2-weighted single radiomics model, a fusion radiomics model, a linear combination radiomics model, and a clinical integrated model, were built. The fusion radiomics model, which was constructed from the concatenation of both series, displayed the best performance, with an accuracy of 0.849 and an area under the curve (AUC) of 0.970 (0.939-1.000) in the training dataset, and an accuracy of 0.886 and an AUC of 0.898 (0.786-1.000) in the validation dataset. Linear combination of the single radiomics models and integration of clinical factors did not improve performance. CONCLUSIONS: Conventional MRI radiomics models are reliable for predicting the MGMT promoter methylation status in LGG patients. The fusion of radiomics features from different series may increase the prediction performance.
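The LASSO feature selection step used in this study shrinks most of the 1702 coefficients exactly to zero, keeping only features with nonzero weights. A minimal coordinate-descent sketch (illustrative; the study used an off-the-shelf implementation, and the soft-threshold form below assumes the objective 0.5‖y − Xw‖² + λ‖w‖₁):

```python
import numpy as np

def lasso_select(X, y, lam, n_iter=500):
    """LASSO via cyclic coordinate descent with soft-thresholding.
    Returns (indices of nonzero-coefficient features, full weight vector)."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]          # partial residual
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return np.nonzero(w)[0], w
```

Raising λ prunes more features; the surviving subset would then feed the downstream classifiers, as in the study's pipeline.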

Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier

  • Jensen, C.
  • Carl, J.
  • Boesen, L.
  • Langkilde, N. C.
  • Ostergaard, L. R.
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Region of interest was extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the center of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUC of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for transitional zone and anterior fibromuscular stroma were AUC of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GG indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various aggressiveness.
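The k-nearest-neighbor classifier at the core of this study reduces to a distance sort and a majority vote. A minimal sketch (illustrative only; the full pipeline adds zonal feature extraction, semi-exhaustive feature search, and threefold stratified cross-validation):

```python
from collections import Counter

import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify one sample by majority vote among its k nearest
    training samples (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
```

A one-versus-rest setup, as used for the Grade Groups above, would train one such binary vote per class and compare the resulting scores.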

Deep Neural Network Based Classifier Model for Lung Cancer Diagnosis and Prediction System in Healthcare Informatics

  • Jayaraj, D.
  • Sathiamoorthy, S.
2019 Conference Paper, cited 0 times
Lung cancer is a deadly disease caused by unmanageable cell growth and is a leading cause of mortality. This has increased the need among physicians and academicians to develop efficient diagnosis models. Therefore, a novel method for automated identification of lung nodules becomes essential, and it forms the motivation of this study. This paper presents a new deep learning classification model for lung cancer diagnosis. The presented model involves four main steps, namely preprocessing, feature extraction, segmentation, and classification. A particle swarm optimization (PSO) algorithm is used for segmentation, and a deep neural network (DNN) is applied for classification. The presented PSO-DNN model is tested against a set of sample lung images, and the results verified the effectiveness of the proposed model on all the applied images.
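Particle swarm optimization, used here for the segmentation step, iteratively pulls candidate solutions toward each particle's best-known position and the swarm's global best. A minimal sketch minimizing a generic objective (illustrative; the study applies PSO to image segmentation, not to a test function, and the inertia/acceleration constants below are common textbook defaults):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, bounds=(-5, 5), seed=0):
    """Minimal PSO: velocity update blends inertia, a pull toward each
    particle's personal best, and a pull toward the global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For segmentation, `f` would instead score candidate thresholds or cluster centers against the image, with the same update rules.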

Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration

  • Jahani, Nariman
  • Cohen, Eric
  • Hsieh, Meng-Kang
  • Weinstein, Susan P
  • Pantalone, Lauren
  • Hylton, Nola
  • Newitt, David
  • Davatzikos, Christos
  • Kontos, Despina
Scientific Reports 2019 Journal Article, cited 0 times
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 (±0.03) vs 0.71 (±0.04), p < 0.05) and RFS (C-statistic = 0.76 (±0.05) vs 0.63 (±0.01), p < 0.05), while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.
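The AUC used as the pCR performance measure in this study equals the Mann-Whitney probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch of that rank-based computation (illustrative, not the authors' evaluation code):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) score pairs won by the positive, counting
    ties as half a win."""
    wins = ties = 0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

Here the scores would be the logistic-regression probabilities of pCR; the C-statistic for RFS generalizes the same pairwise idea to censored survival times.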

NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization: Explanation of the software platform developed for 3D models visualization related with medical images using Augmented and Virtual Reality technology

  • Izard, Santiago González
  • Plaza, Óscar Alonso
  • Torres, Ramiro Sánchez
  • Méndez, Juan Antonio Juanes
  • García-Peñalvo, Francisco José
2019 Conference Proceedings, cited 0 times