The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping

  • Zwanenburg, Alex
  • Vallières, Martin
  • Abdalah, Mahmoud A
  • Aerts, Hugo JWL
  • Andrearczyk, Vincent
  • Apte, Aditya
  • Ashrafinia, Saeed
  • Bakas, Spyridon
  • Beukinga, Roelof J
  • Boellaard, Ronald
Radiology 2020 Journal Article, cited 247 times

Spline curve deformation model with prior shapes for identifying adhesion boundaries between large lung tumors and tissues around lungs in CT images

  • Zhang, Xin
  • Wang, Jie
  • Yang, Ying
  • Wang, Bing
  • Gu, Lixu
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: Automated segmentation of lung tumors attached to anatomic structures such as the chest wall or mediastinum remains a technical challenge because of the similar Hounsfield units of these structures. To address this challenge, we propose herein a spline curve deformation model that combines prior shapes to correct large spatially contiguous errors (LSCEs) in input shapes derived from image-appearance cues. The model is then used to identify the adhesion boundaries between large lung tumors and tissue around the lungs. METHODS: The deformation of the whole curve is driven by the transformation of the control points (CPs) of the spline curve, which are influenced by external and internal forces. The external force drives the model to fit the positions of the non-LSCEs of the input shapes while the internal force ensures the local similarity of the displacements of the neighboring CPs. The proposed model corrects the gross errors in the lung input shape caused by large lung tumors, where the initial lung shape for the model is inferred from the training shapes by shape group-based sparse prior information and the input lung shape is inferred by adaptive-thresholding-based segmentation followed by morphological refinement. RESULTS: The accuracy of the proposed model is verified by applying it to images of lungs with either moderate large-sized (ML) tumors or giant large-sized (GL) tumors. The quantitative results in terms of the averages of the Dice similarity coefficient (DSC) and the Jaccard similarity index (SI) are 0.982 +/- 0.006 and 0.965 +/- 0.012 for segmentation of lungs adhered by ML tumors, and 0.952 +/- 0.048 and 0.926 +/- 0.059 for segmentation of lungs adhered by GL tumors, which give 0.943 +/- 0.021 and 0.897 +/- 0.041 for segmentation of the ML tumors, and 0.907 +/- 0.057 and 0.888 +/- 0.091 for segmentation of the GL tumors, respectively.
In addition, the bidirectional Hausdorff distances are 5.7 +/- 1.4 and 11.3 +/- 2.5 mm for segmentation of lungs with ML and GL tumors, respectively. CONCLUSIONS: When combined with prior shapes, the proposed spline curve deformation can deal with large spatially consecutive errors in object shapes obtained from image-appearance information. We verified this method by applying it to the segmentation of lungs with large tumors adhered to the tissue around the lungs and the large tumors. Both the qualitative and quantitative results are more accurate and repeatable than results obtained with current state-of-the-art techniques.
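The DSC and Jaccard index reported above follow their standard set-overlap definitions; a minimal numpy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice similarity coefficient and Jaccard index for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / union
    return dice, jaccard

# toy 1-D "masks": overlap of 2 voxels, sizes 3 and 2
a = np.array([1, 1, 1, 0, 0])
b = np.array([1, 1, 0, 0, 0])
d, j = dice_jaccard(a, b)   # d = 2*2/(3+2) = 0.8, j = 2/3
```

The same functions apply unchanged to 3-D mask arrays, since the reductions operate over all elements.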

Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation

  • Zhang, Ling
  • Xu, Daguang
  • Xu, Ziyue
  • Wang, Xiaosong
  • Yang, Dong
  • Sanford, Thomas
  • Harmon, Stephanie
  • Turkbey, Baris
  • Wood, Bradford J
  • Roth, Holger
  • Myronenko, Andriy
IEEE Trans Med Imaging 2020 Journal Article, cited 0 times
Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, application of these models in clinically realistic environments can result in poor generalization and decreased accuracy, mainly due to the domain shift across different hospitals, scanner vendors, imaging protocols, patient populations, etc. Common transfer learning and domain adaptation techniques have been proposed to address this bottleneck. However, these solutions require data (and annotations) from the target domain to retrain the model, and are therefore restrictive in practice for widespread model deployment. Ideally, we wish to have a trained (locked) model that can work uniformly well across unseen domains without further training. In this paper, we propose a deep stacked transformation approach for domain generalization. Specifically, a series of n stacked transformations are applied to each image during network training. The underlying assumption is that the “expected” domain shift for a specific medical imaging modality could be simulated by applying extensive data augmentation on a single source domain, and consequently, a deep model trained on the augmented “big” data (BigAug) could generalize well on unseen domains. We exploit four surprisingly effective, but previously understudied, image-based characteristics for data augmentation to overcome the domain generalization problem. We train and evaluate the BigAug model (with n = 9 transformations) on three different 3D segmentation tasks (prostate gland, left atrial, left ventricle) covering two medical imaging modalities (MRI and ultrasound) involving eight publicly available challenge datasets.
The results show that when training on a relatively small dataset (n=10~32 volumes, depending on the size of the available datasets) from a single source domain: (i) BigAug models degrade an average of 11% (Dice score change) from source to unseen domain, substantially better than conventional augmentation (degrading 39%) and a CycleGAN-based domain adaptation method (degrading 25%), (ii) BigAug is better than “shallower” stacked transforms (i.e. those with fewer transforms) on unseen domains and demonstrates modest improvement over conventional augmentation on the source domain, (iii) after training with BigAug on one source domain, performance on an unseen domain is similar to training a model from scratch on that domain when using the same number of training samples. When training on large datasets (n=465 volumes) with BigAug, (iv) application to unseen domains reaches the performance of state-of-the-art fully supervised models that are trained and tested on their source domains. These findings establish a strong benchmark for the study of domain generalization in medical imaging, and can be generalized to the design of highly robust deep segmentation models for clinical deployment.
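The stacked-transformation idea can be sketched as a chain of randomly fired augmentations applied in sequence during training. The three toy transforms below are illustrative stand-ins for the image-quality, image-appearance, and spatial augmentation categories described above, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img):
    """Image-quality perturbation: additive Gaussian noise."""
    return img + rng.normal(0.0, 0.05, img.shape)

def scale_intensity(img):
    """Image-appearance perturbation: random intensity scaling."""
    return img * rng.uniform(0.9, 1.1)

def flip(img):
    """Spatial perturbation: random flip along the first axis."""
    return img[::-1] if rng.random() < 0.5 else img

def stacked_augment(img, transforms, p=0.5):
    """Apply a series of stacked transformations, each fired with probability p."""
    for t in transforms:
        if rng.random() < p:
            img = t(img)
    return img

volume = rng.random((4, 4))
augmented = stacked_augment(volume, [add_noise, scale_intensity, flip])
```

In the paper's setting the chain would contain n = 9 transforms and operate on 3D volumes; the control flow is the same.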

Convection enhanced delivery of anti-angiogenic and cytotoxic agents in combination therapy against brain tumour

  • Zhan, W.
Eur J Pharm Sci 2020 Journal Article, cited 0 times
Convection enhanced delivery is an effective alternative to routine delivery methods to overcome the blood brain barrier. However, its treatment efficacy remains disappointing in the clinic owing to rapid drug elimination in tumour tissue. In this study, multiphysics modelling is employed to investigate the combination delivery of anti-angiogenic and cytotoxic drugs from the perspective of intratumoural transport. Simulations are based on a 3-D realistic brain tumour model that is reconstructed from patient magnetic resonance images. The tumour microvasculature is targeted by bevacizumab, and six cytotoxic drugs are included: doxorubicin, carmustine, cisplatin, fluorouracil, methotrexate and paclitaxel. The treatment efficacy is evaluated in terms of the distribution volume where the drug concentration is above the corresponding LD90. Results demonstrate that the infusion of bevacizumab can slightly improve interstitial fluid flow, but is highly effective in reducing fluid loss from the blood circulatory system, thereby inhibiting concentration dilution. As the transport of bevacizumab is dominated by convection, its spatial distribution and anti-angiogenic effectiveness are highly sensitive to the direction of interstitial fluid flow. Infusing bevacizumab could enhance the delivery outcomes of all six drugs; however, the degree of enhancement differs. The delivery of doxorubicin is improved the most, whereas the impacts on methotrexate and paclitaxel are limited. Fluorouracil could cover a distribution volume comparable to that of paclitaxel in the combination therapy for effective cell killing. The results obtained in this study could guide the design of this co-delivery treatment.

Effects of Focused-Ultrasound-and-Microbubble-Induced Blood-Brain Barrier Disruption on Drug Transport under Liposome-Mediated Delivery in Brain Tumour: A Pilot Numerical Simulation Study

  • Zhan, Wenbo
Pharmaceutics 2020 Journal Article, cited 0 times

The prognostic value of CT radiomic features from primary tumours and pathological lymph nodes in head and neck cancer patients

  • Zhai, Tiantian
2020 Thesis, cited 0 times
Head and neck cancer (HNC) is responsible for about 0.83 million new cancer cases and 0.43 million cancer deaths worldwide every year. Around 30%-50% of patients with locally advanced HNC experience treatment failures, predominantly occurring at the site of the primary tumor, followed by regional failures and distant metastases. In order to optimize treatment strategy, the overall aim of this thesis is to identify the patients who are at high risk of treatment failures. We developed and externally validated a series of models on the different patterns of failure to predict the risk of local failures, regional failures, distant metastases and individual nodal failures in HNC patients. A new type of radiomic feature based on CT images was included in our modelling analysis, and we showed for the first time that these radiomic features significantly improved the prognostic performance of models containing clinical factors. Our studies provide clinicians with new tools to predict the risk of treatment failures. This may support optimization of the treatment strategy for this disease, and subsequently improve the patient survival rate.

CT images with expert manual contours of thoracic cancer for benchmarking auto-segmentation accuracy

  • Yang, J.
  • Veeraraghavan, H.
  • van Elmpt, W.
  • Dekker, A.
  • Gooding, M.
  • Sharp, G.
Med Phys 2020 Journal Article, cited 0 times
PURPOSE: Automatic segmentation offers many benefits for radiotherapy treatment planning; however, the lack of publicly available benchmark datasets limits the clinical use of automatic segmentation. In this work, we present a well-curated computed tomography (CT) dataset of high-quality manually drawn contours from patients with thoracic cancer that can be used to evaluate the accuracy of thoracic normal tissue auto-segmentation systems. ACQUISITION AND VALIDATION METHODS: Computed tomography scans of 60 patients undergoing treatment simulation for thoracic radiotherapy were acquired from three institutions: MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, and the MAASTRO clinic. Each institution provided CT scans from 20 patients, including mean intensity projection four-dimensional CT (4D CT), exhale phase (4D CT), or free-breathing CT scans depending on their clinical practice. All CT scans covered the entire thoracic region with a 50-cm field of view and slice spacing of 1, 2.5, or 3 mm. Manual contours of left/right lungs, esophagus, heart, and spinal cord were retrieved from the clinical treatment plans. These contours were checked for quality and edited if necessary to ensure adherence to RTOG 1106 contouring guidelines. DATA FORMAT AND USAGE NOTES: The CT images and RTSTRUCT files are available in DICOM format. The regions of interest were named according to the nomenclature recommended by American Association of Physicists in Medicine Task Group 263 as Lung_L, Lung_R, Esophagus, Heart, and SpinalCord. This dataset is available on The Cancer Imaging Archive (funded by the National Cancer Institute) under Lung CT Segmentation Challenge 2017 (http://doi.org/10.7937/K9/TCIA.2017.3r3fvz08). POTENTIAL APPLICATIONS: This dataset provides CT scans with well-delineated manually drawn contours from patients with thoracic cancer that can be used to evaluate auto-segmentation systems. 
Additional anatomies could be supplied in the future to enhance the existing library of contours.

Research of Multimodal Medical Image Fusion Based on Parameter-Adaptive Pulse-Coupled Neural Network and Convolutional Sparse Representation

  • Xia, J.
  • Lu, Y.
  • Tan, L.
Comput Math Methods Med 2020 Journal Article, cited 0 times
The visual quality of medical images has a great impact on clinical assistant diagnosis. At present, medical image fusion has become a powerful tool in clinical applications. Traditional medical image fusion methods produce poor fusion results because detailed feature information is lost during fusion. To deal with this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, which adapts its parameters and optimizes the connection strength beta to improve performance. The low-frequency coefficients are merged by a convolutional sparse representation (CSR) model. The experimental results show that the proposed method solves the problems of difficult parameter setting in traditional PCNN algorithms and poor detail preservation of sparse representation during image fusion, and it has significant advantages in visual effect and objective indices compared with existing mainstream fusion algorithms.

Three-Plane–assembled Deep Learning Segmentation of Gliomas

  • Wu, Shaocheng
  • Li, Hongyang
  • Quang, Daniel
  • Guan, Yuanfang
Radiology: Artificial Intelligence 2020 Journal Article, cited 0 times
An accurate and fast deep learning approach developed for automatic segmentation of brain glioma on multimodal MRI scans achieved Sørensen–Dice scores of 0.80, 0.83, and 0.91 for enhancing tumor, tumor core, and whole tumor, respectively. Purpose To design a computational method for automatic brain glioma segmentation of multimodal MRI scans with high efficiency and accuracy. Materials and Methods The 2018 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset was used in this study, consisting of routine clinically acquired preoperative multimodal MRI scans. Three subregions of glioma—the necrotic and nonenhancing tumor core, the peritumoral edema, and the contrast-enhancing tumor—were manually labeled by experienced radiologists. Two-dimensional U-Net models were built using a three-plane–assembled approach to segment three subregions individually (three-region model) or to segment only the whole tumor (WT) region (WT-only model). The term three-plane–assembled means that coronal and sagittal images were generated by reformatting the original axial images. The model performance for each case was evaluated in three classes: enhancing tumor (ET), tumor core (TC), and WT. Results On the internal unseen testing dataset split from the 2018 BraTS training dataset, the proposed models achieved mean Sørensen–Dice scores of 0.80, 0.84, and 0.91, respectively, for ET, TC, and WT. On the BraTS validation dataset, the proposed models achieved mean 95% Hausdorff distances of 3.1 mm, 7.0 mm, and 5.0 mm, respectively, for ET, TC, and WT and mean Sørensen–Dice scores of 0.80, 0.83, and 0.91, respectively, for ET, TC, and WT. On the BraTS testing dataset, the proposed models ranked fourth out of 61 teams. The source code is available at https://github.com/GuanLab/Brain_Glioma. 
Conclusion This deep learning method consistently segmented subregions of brain glioma with high accuracy, efficiency, reliability, and generalization ability on screening images from a large population, and it can be efficiently implemented in clinical practice to assist neuro-oncologists or radiologists. Supplemental material is available for this article.
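The 95% Hausdorff distance used to evaluate these models has a standard definition over the surface points of two segmentations; a minimal numpy sketch over small point sets (names are illustrative, not from the paper):

```python
import numpy as np

def hausdorff_95(a, b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets.

    a, b: arrays of shape (n_points, n_dims).
    """
    # pairwise distances between every point in a and every point in b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # 95th percentile of nearest-neighbour distances, in both directions
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0]])
hd = hausdorff_95(a, b)   # every point lies at distance 1 from the other set
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier boundary voxels, which is why it is the usual choice in the BraTS evaluation.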

Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning

  • Wu, Panpan
  • Sun, Xuanchao
  • Zhao, Ziping
  • Wang, Haishuai
  • Pan, Shirui
  • Schuller, Bjorn
Comput Intell Neurosci 2020 Journal Article, cited 0 times
The classification process of lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result is heavily dependent on the performance of each step in lung nodule detection, causing low classification accuracy and a high false positive rate. In order to alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet network structure as the initial model, the deep residual network is constructed by combining residual learning and migration learning. The proposed approach is verified by conducting experiments on the lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained based on the ten-fold cross-validation method. Compared with the conventional support vector machine (SVM)-based CAD system, the accuracy of our method improved by 9.96% and the false positive rate decreased by 6.95%; compared with the VGG19 model and the InceptionV3 convolutional neural network, the accuracy improved by 1.75% and 2.42% and the false positive rate decreased by 2.07% and 2.22%, respectively. The experimental results demonstrate the effectiveness of our proposed method in lung nodule classification for CT images.

Dosiomics improves prediction of locoregional recurrence for intensity modulated radiotherapy treated head and neck cancer cases

  • Wu, A.
  • Li, Y.
  • Qi, M.
  • Lu, X.
  • Jia, Q.
  • Guo, F.
  • Dai, Z.
  • Liu, Y.
  • Chen, C.
  • Zhou, L.
  • Song, T.
Oral Oncol 2020 Journal Article, cited 0 times
OBJECTIVES: To investigate whether dosiomics can improve locoregional recurrence (LR) prediction for IMRT-treated head and neck cancer patients, through a comparison of the prediction performance of radiomics-only methods with that of methods integrating dosiomics. MATERIALS AND METHODS: A cohort of 237 patients with head and neck cancer from four different institutions was obtained from The Cancer Imaging Archive and utilized to train and validate the radiomics-only prognostic model and the dosiomics-integrated prognostic model. For radiomics, features were initially extracted from images, including CTs and PETs, selected on the basis of their concordance index (CI) values, and then condensed via principal component analysis. Lastly, multivariate Cox proportional hazards regression models were constructed with class-imbalance adjustment as the LR prediction models by inputting those condensed features. For the dosiomics-integrated model, the initial features were similar but additionally included the 3-dimensional dose distribution from the radiation treatment plans. The CI and Kaplan-Meier curves with log-rank analysis were used to assess and compare these models. RESULTS: On the independent validation dataset, the CI of the dosiomics-integrated model (0.66) was significantly different from that of the radiomics model (0.59) (Wilcoxon test, p=5.9x10(-31)). The integrated model successfully classified the patients into high- and low-risk groups (log-rank test, p=2.5x10(-02)), whereas the radiomics model was not able to provide such classification (log-rank test, p=0.37). CONCLUSION: Dosiomics is beneficial for predicting LR in IMRT-treated patients and should not be neglected in related investigations.
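The concordance index used both for feature selection and model comparison is conventionally Harrell's C: the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times. A minimal sketch under that standard definition (function and variable names are illustrative):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when i had the event and died before j;
    it is concordant when the model assigns i the higher risk score.
    """
    n_concordant, n_comparable = 0.0, 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                n_comparable += 1
                if risk[i] > risk[j]:
                    n_concordant += 1
                elif risk[i] == risk[j]:
                    n_concordant += 0.5   # ties count half
    return n_concordant / n_comparable

t = np.array([2.0, 4.0, 6.0, 8.0])        # survival times
e = np.array([1, 1, 1, 1])                # all events observed
r = np.array([0.9, 0.7, 0.5, 0.1])        # risks perfectly anti-ordered with time
ci = concordance_index(t, e, r)           # 1.0: every comparable pair concordant
```

A CI of 0.5 corresponds to random ordering, which is the baseline against which the reported 0.59 and 0.66 should be read.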

Determining patient abdomen thickness from a single digital radiograph with a computational model: clinical results from a proof of concept study

  • Worrall, M.
  • Vinnicombe, S.
  • Sutton, D.
Br J Radiol 2020 Journal Article, cited 0 times
OBJECTIVE: A computational model has been created to estimate the abdominal thickness of a patient following an X-ray examination; its intended application is assisting with patient dose audit of paediatric X-ray examinations. This work evaluates the accuracy of the computational model in a clinical setting for adult patients undergoing anteroposterior (AP) abdomen X-ray examinations. METHODS: The model estimates patient thickness using the radiographic image, the exposure factors with which the image was acquired, a priori knowledge of the characteristics of the X-ray unit and detector and the results of extensive Monte Carlo simulation of patient examinations. For 20 patients undergoing AP abdominal X-ray examinations, the model was used to estimate the patient thickness; these estimates were compared against a direct measurement made at the time of the examination. RESULTS: Estimates of patient thickness made using the model were on average within +/-5.8% of the measured thickness. CONCLUSION: The model can be used to accurately estimate the thickness of a patient undergoing an AP abdominal X-ray examination where the patient's size falls within the range of the size of patients used to create the computational model. ADVANCES IN KNOWLEDGE: This work demonstrates that it is possible to accurately estimate the AP abdominal thickness of an adult patient using the digital X-ray image and a computational model.

Quantifying the incremental value of deep learning: Application to lung nodule detection

  • Warsavage, Theodore Jr
  • Xing, Fuyong
  • Baron, Anna E
  • Feser, William J
  • Hirsch, Erin
  • Miller, York E
  • Malkoski, Stephen
  • Wolf, Holly J
  • Wilson, David O
  • Ghosh, Debashis
PLoS One 2020 Journal Article, cited 0 times
We present a case study for implementing a machine learning algorithm with an incremental value framework in the domain of lung cancer research. Machine learning methods have often been shown to be competitive with prediction models in some domains; however, implementation of these methods is in early development. Often these methods are only directly compared to existing methods; here we present a framework for assessing the value of a machine learning model by assessing the incremental value. We developed a machine learning model to identify and classify lung nodules and assessed the incremental value added to existing risk prediction models. Multiple external datasets were used for validation. We found that our image model, trained on a dataset from The Cancer Imaging Archive (TCIA), improves upon existing models that are restricted to patient characteristics, but it was inconclusive about whether it improves on models that consider nodule features. Another interesting finding is the variable performance on different datasets, suggesting population generalization with machine learning models may be more challenging than is often considered.

Semi-supervised mp-MRI data synthesis with StitchLayer and auxiliary distance maximization

  • Wang, Zhiwei
  • Lin, Yi
  • Cheng, Kwang-Ting Tim
  • Yang, Xin
Medical Image Analysis 2020 Journal Article, cited 0 times

Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography

  • Wang, Yi
  • Zhang, Hao
  • Chae, Kum Ju
  • Choi, Younhee
  • Jin, Gong Yong
  • Ko, Seok-Bum
Multidimensional Systems and Signal Processing 2020 Journal Article, cited 0 times
Computed tomography (CT) is widely used to locate pulmonary nodules for preliminary diagnosis of lung cancer. However, due to high visual similarities between malignant (cancer) and benign (non-cancer) nodules, distinguishing malignant from benign nodules is not an easy task for a thoracic radiologist. In this paper, a novel convolutional neural network (ConvNet) architecture is proposed to classify pulmonary nodules as either benign or malignant. Due to the high variance of nodule characteristics in CT scans, such as size and shape, a multi-path, multi-scale architecture is proposed and applied in the proposed ConvNet to improve the classification performance. The multi-scale method utilizes filters with different sizes to more effectively extract nodule features from local regions, and the multi-path architecture combines features extracted from different ConvNet layers, thereby enhancing the nodule features with respect to global regions. The proposed ConvNet is trained and evaluated on the LUNGx Challenge database, and achieves a sensitivity of 0.887 and a specificity of 0.924 with an area under the curve (AUC) of 0.948. The proposed ConvNet achieves a 14% AUC improvement compared to the state-of-the-art unsupervised learning approach. The proposed ConvNet also outperforms the other state-of-the-art ConvNets explicitly designed for pulmonary nodule classification. For clinical usage, the proposed ConvNet could potentially assist the radiologists to make diagnostic decisions in CT screening.

A prognostic analysis method for non-small cell lung cancer based on the computed tomography radiomics

  • Wang, Xu
  • Duan, Huihong
  • Li, Xiaobing
  • Ye, Xiaodan
  • Huang, Gang
  • Nie, Shengdong
Phys Med Biol 2020 Journal Article, cited 0 times
In order to assist doctors in arranging the postoperative treatments and re-examinations for non-small cell lung cancer (NSCLC) patients, this study was initiated to explore a prognostic analysis method for NSCLC based on computed tomography (CT) radiomics. The data of 173 NSCLC patients were collected retrospectively and the clinically meaningful 3-year survival was used as the predictive limit to predict the patient's prognosis survival time range. Firstly, lung tumors were segmented and the radiomics features were extracted. Secondly, a feature weighting algorithm was used to screen and optimize the extracted original feature data. Then, the selected feature data combined with the prognosis survival of patients were used to train machine learning classification models. Finally, a prognostic survival prediction model and radiomics prognostic factors were obtained to predict the prognosis survival time range of NSCLC patients. The classification accuracy rate under cross-validation was up to 88.7% in the prognosis survival analysis model. When verified on an independent dataset, the model also yielded a high prediction accuracy of 79.6%. Inverse difference moment, lobulation sign and angular second moment were NSCLC prognostic factors based on radiomics. This study proved that CT radiomics features could effectively assist doctors to make more accurate prognosis survival predictions for NSCLC patients, so as to help doctors optimize treatment and re-examination for NSCLC patients to extend their survival time.

Deep learning based image reconstruction algorithm for limited-angle translational computed tomography

  • Wang, Jiaxi
  • Liang, Jun
  • Cheng, Jingye
  • Guo, Yumeng
  • Zeng, Li
PLoS One 2020 Journal Article, cited 0 times

Is an analytical dose engine sufficient for intensity modulated proton therapy in lung cancer?

  • Teoh, Suliana
  • Fiorini, Francesca
  • George, Ben
  • Vallis, Katherine A
  • Van den Heuvel, Frank
Br J Radiol 2020 Journal Article, cited 0 times
OBJECTIVE: To identify a subgroup of lung cancer plans where the analytical dose calculation (ADC) algorithm may be clinically acceptable compared to Monte Carlo (MC) dose calculation in intensity modulated proton therapy (IMPT). METHODS: Robust-optimised IMPT plans were generated for 20 patients to a dose of 70 Gy (relative biological effectiveness) in 35 fractions in Raystation. For each case, four plans were generated: three with ADC optimisation using the pencil beam (PB) algorithm followed by a final dose calculation with the following algorithms: PB (PB-PB), MC (PB-MC) and MC normalised to prescription dose (PB-MC scaled). A fourth plan was generated where MC optimisation and final dose calculation were performed (MC-MC). Dose comparison and gamma analysis (PB-PB vs PB-MC) at two dose thresholds were performed: 20% (D20) and 99% (D99), with PB-PB plans as reference. RESULTS: Overestimation of the dose to 99% and mean dose of the clinical target volume was observed in all PB-MC compared to PB-PB plans (median: 3.7 Gy(RBE) (5%) (range: 2.3 to 6.9 Gy(RBE)) and 1.8 Gy(RBE) (3%) (0.5 to 4.6 Gy(RBE))). PB-MC scaled plans resulted in significantly higher CTVD2 compared to PB-PB (median difference: -4 Gy(RBE) (-6%) (-5.3 to -2.4 Gy(RBE)), p <= 0.001). The overall median gamma pass rates (3%-3 mm) at D20 and D99 were 93.2% (range: 62.2-97.5%) and 71.3% (15.4-92.0%). On multivariate analysis, presence of mediastinal disease and absence of range shifters were significantly associated with high gamma pass rates. Median D20 and D99 pass rates with these predictors were 96.0% (95.3-97.5%) and 85.4% (75.1-92.0%). MC-MC achieved similar target coverage and doses to OAR compared to PB-PB plans. CONCLUSION: In the presence of mediastinal involvement and absence of range shifters, Raystation ADC may be clinically acceptable in lung IMPT. Otherwise, the MC algorithm would be recommended to ensure accuracy of treatment plans.
ADVANCES IN KNOWLEDGE: Although MC algorithm is more accurate compared to ADC in lung IMPT, ADC may be clinically acceptable where there is mediastinal involvement and absence of range shifters.
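The gamma analysis above compares two dose distributions by combining a dose-difference criterion (3%) with a distance-to-agreement criterion (3 mm). A simplified 1-D global-gamma sketch, intended only to illustrate the metric rather than reproduce any treatment planning system's implementation (names are illustrative):

```python
import numpy as np

def gamma_pass_rate(ref, eva, x, dd=0.03, dta=3.0):
    """Simplified 1-D global gamma analysis (default 3%/3 mm).

    ref, eva: reference and evaluated dose profiles on positions x (mm).
    Returns the percentage of reference points with gamma <= 1.
    """
    norm = dd * ref.max()                 # global dose-difference criterion
    gamma = np.empty_like(ref, dtype=float)
    for i, (xi, di) in enumerate(zip(x, ref)):
        # search every evaluated point for the best combined agreement
        g2 = ((x - xi) / dta) ** 2 + ((eva - di) / norm) ** 2
        gamma[i] = np.sqrt(g2.min())
    return 100.0 * np.mean(gamma <= 1.0)

x = np.linspace(0.0, 10.0, 11)            # positions in mm
ref = np.linspace(10.0, 20.0, 11)         # toy reference dose profile
pass_rate = gamma_pass_rate(ref, ref.copy(), x)   # identical doses: 100% pass
```

Clinical gamma tools additionally interpolate the evaluated distribution and work in 3-D, but the pass-rate statistic reported in the abstract has this form.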

Staging of clear cell renal cell carcinoma using random forest and support vector machine

  • Talaat, D.
  • Zada, F.
  • Kadry, R.
2020 Conference Paper, cited 0 times
Kidney cancer is one of the deadliest types of cancer affecting the human body. It is regarded as the seventh most common type of cancer in men and the ninth in women. Early diagnosis of kidney cancer can improve the survival rates of many patients. Clear cell renal cell carcinoma (ccRCC) accounts for 90% of renal cancers. Although the exact cause of kidney cancer is still unknown, early diagnosis can help patients get the proper treatment at the proper time. In this paper, a novel semi-automated model is proposed for early detection and staging of clear cell renal cell carcinoma. The proposed model consists of three phases: segmentation, feature extraction, and classification. In the first phase, image segmentation, images were masked to segment the kidney lobes, and the masked images were fed into a watershed algorithm to extract the tumor from the kidney. In the second phase, feature extraction, the gray level co-occurrence matrix (GLCM) method was integrated with normal statistical methods to extract feature vectors from the segmented images. In the last phase, classification, the resulting feature vectors were introduced to random forest (RF) and support vector machine (SVM) classifiers. Experiments have been carried out to validate the effectiveness of the proposed model using the TCGA-KIRC dataset, which contains 228 CT scans of ccRCC patients, of which 150 scans were used for learning and 78 for validation. The proposed model showed an improvement of 15.12% in accuracy over previous work.
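The GLCM texture features used here (and the inverse difference moment and angular second moment named in the NSCLC radiomics abstract above) have standard definitions over a normalised co-occurrence matrix; a minimal numpy sketch for a single pixel offset (names are illustrative, not the authors' code):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset, normalised to sum 1."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def asm(p):
    """Angular second moment (energy): sum of squared co-occurrence probabilities."""
    return np.sum(p ** 2)

def idm(p):
    """Inverse difference moment (homogeneity)."""
    i, j = np.indices(p.shape)
    return np.sum(p / (1.0 + (i - j) ** 2))

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
p = glcm(img, levels=3)           # horizontal offset (dx=1, dy=0)
features = [asm(p), idm(p)]
```

In practice GLCMs are accumulated over several offsets/angles and the intensities are quantised to a fixed number of gray levels before computing the matrix.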

Development and Validation of a Modified Three-Dimensional U-Net Deep-Learning Model for Automated Detection of Lung Nodules on Chest CT Images From the Lung Image Database Consortium and Japanese Datasets

  • Suzuki, K.
  • Otsuka, Y.
  • Nomura, Y.
  • Kumamaru, K. K.
  • Kuwatsuru, R.
  • Aoki, S.
Acad Radiol 2020 Journal Article, cited 0 times
RATIONALE AND OBJECTIVES: A more accurate lung nodule detection algorithm is needed. We developed a modified three-dimensional (3D) U-net deep-learning model for the automated detection of lung nodules on chest CT images. The purpose of this study was to evaluate the accuracy of the developed modified 3D U-net deep-learning model. MATERIALS AND METHODS: In this Health Insurance Portability and Accountability Act-compliant, Institutional Review Board-approved retrospective study, the 3D U-net based deep-learning model was trained using the Lung Image Database Consortium and Image Database Resource Initiative dataset. For internal model validation, we used 89 chest CT scans that were not used for model training. For external model validation, we used 450 chest CT scans taken at an urban university hospital in Japan. Each case included at least one nodule of >5 mm identified by an experienced radiologist. We evaluated model accuracy using the competition performance metric (CPM) (average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false-positives per scan). The 95% confidence interval (CI) was computed by bootstrapping 1000 times. RESULTS: In the internal validation, the CPM was 94.7% (95% CI: 89.1%-98.6%). In the external validation, the CPM was 83.3% (95% CI: 79.4%-86.1%). CONCLUSION: The modified 3D U-net deep-learning model showed high performance in both internal and external validation.
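The competition performance metric defined above is the mean FROC sensitivity at seven fixed false-positive rates per scan; a minimal sketch assuming a precomputed FROC curve (names are illustrative):

```python
import numpy as np

def cpm(fp_per_scan, sensitivity):
    """Competition performance metric: mean sensitivity at
    1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives per scan,
    linearly interpolated from an FROC curve."""
    points = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    return np.interp(points, fp_per_scan, sensitivity).mean()

# toy FROC curve: sensitivity rises as more false positives are allowed
fps = np.array([0.0, 0.125, 0.25, 0.5, 1, 2, 4, 8])
sens = np.array([0.0, 0.60, 0.70, 0.80, 0.85, 0.90, 0.95, 1.0])
score = cpm(fps, sens)
```

The confidence intervals reported above would come from recomputing this score on bootstrap resamples of the scans (1000 times in the study).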

Radiomics for glioblastoma survival analysis in pre-operative MRI: exploring feature robustness, class boundaries, and machine learning techniques

  • Suter, Y.
  • Knecht, U.
  • Alao, M.
  • Valenzuela, W.
  • Hewer, E.
  • Schucht, P.
  • Wiest, R.
  • Reyes, M.
Cancer Imaging 2020 Journal Article, cited 0 times
Website
BACKGROUND: This study aims to identify robust radiomic features for Magnetic Resonance Imaging (MRI), assess feature selection and machine learning methods for overall survival classification of Glioblastoma multiforme patients, and to robustify models trained on single-center data when applied to multi-center data. METHODS: Tumor regions were automatically segmented on MRI data, and 8327 radiomic features extracted from these regions. Single-center data was perturbed to assess radiomic feature robustness, with over 16 million tests of typical perturbations. Robust features were selected based on the Intraclass Correlation Coefficient to measure agreement across perturbations. Feature selectors and machine learning methods were compared to classify overall survival. Models trained on single-center data (63 patients) were tested on multi-center data (76 patients). Priors using feature robustness and clinical knowledge were evaluated. RESULTS: We observed a very large performance drop when applying models trained on single-center data to unseen multi-center data, e.g. a decrease of the area under the receiver operating curve (AUC) of 0.56 for the overall survival classification boundary at 1 year. By using robust features alongside priors for two overall survival classes, the AUC drop could be reduced by 21.2%. In contrast, sensitivity was 12.19% lower when applying a prior. CONCLUSIONS: Our experiments show that it is possible to attain improved levels of robustness and accuracy when models need to be applied to unseen multi-center data. The performance on multi-center data of models trained on single-center data can be increased by using robust features and introducing prior knowledge. For successful model robustification, tailoring perturbations for robustness testing to the target dataset is key.
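Robustness screening via the intraclass correlation coefficient can be illustrated with the two-way random-effects ICC(2,1) for absolute agreement; the abstract does not specify which ICC variant was used, so this form is an assumption.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): agreement of n feature values (rows) across k
    perturbations (columns); two-way random effects, absolute agreement."""
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # perturbations
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                             # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Features whose ICC across the perturbation columns exceeds a chosen cutoff (e.g. 0.9) would be kept as robust.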

ROI-based feature learning for efficient true positive prediction using convolutional neural network for lung cancer diagnosis

  • Suresh, Supriya
  • Mohan, Subaji
Neural Computing and Applications 2020 Journal Article, cited 0 times

Multisite Technical and Clinical Performance Evaluation of Quantitative Imaging Biomarkers from 3D FDG PET Segmentations of Head and Neck Cancer Images

  • Smith, Brian J
  • Buatti, John M
  • Bauer, Christian
  • Ulrich, Ethan J
  • Ahmadvand, Payam
  • Budzevich, Mikalai M
  • Gillies, Robert J
  • Goldgof, Dmitry
  • Grkovski, Milan
  • Hamarneh, Ghassan
  • Kinahan, Paul E
  • Muzi, John P
  • Muzi, Mark
  • Laymon, Charles M
  • Mountz, James M
  • Nehmeh, Sadek
  • Oborski, Matthew J
  • Zhao, Binsheng
  • Sunderland, John J
  • Beichel, Reinhard R
Tomography 2020 Journal Article, cited 1 times
Website
Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.

Brain tumor segmentation approach based on the extreme learning machine and significantly fast and robust fuzzy C-means clustering algorithms running on Raspberry Pi hardware

  • ŞİŞİK, Fatih
  • Sert, Eser
Medical Hypotheses 2020 Journal Article, cited 0 times
Automatic decision support systems have gained importance in the health sector in recent years. In parallel with recent developments in the fields of artificial intelligence and image processing, embedded systems are also used in decision support systems for tumor diagnosis. The extreme learning machine (ELM) is a recently developed, quick and efficient algorithm which can quickly and flawlessly diagnose tumors using machine learning techniques. Similarly, the significantly fast and robust fuzzy C-means clustering algorithm (FRFCM) is a novel and fast algorithm which can display a high performance. In the present study, a brain tumor segmentation approach is proposed based on extreme learning machine and significantly fast and robust fuzzy C-means clustering algorithms (BTS-ELM-FRFCM) running on Raspberry Pi (PRI) hardware. The present study mainly aims to introduce a new segmentation system hardware containing new algorithms and offering a high level of accuracy to the health sector. PRIs are useful mobile devices due to their cost-effectiveness and capable hardware. 3200 training images were used to train the ELM in the present study, and 20 MRI images were used for the testing process. Figure of merit (FOM), Jaccard similarity coefficient (JSC) and Dice indexes were used in order to evaluate the performance of the proposed approach. In addition, the proposed method was compared with brain tumor segmentation based on support vector machine (BTS-SVM), brain tumor segmentation based on fuzzy C-means (BTS-FCM) and brain tumor segmentation based on self-organizing maps and k-means (BTS-SOM). The statistical analysis on FOM, JSC and Dice results obtained using the four different approaches indicated that BTS-ELM-FRFCM displayed the highest performance. Thus, it can be concluded that the embedded system designed in the present study can perform brain tumor segmentation with a high accuracy rate.
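The fuzzy C-means core that FRFCM accelerates alternates two updates: centroids from membership-weighted means, and memberships from inverse relative distances. A minimal 1-D sketch of standard FCM (not the FRFCM algorithm itself, which adds histogram-based speedups and morphological filtering):

```python
import numpy as np

def fcm(x, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means on a 1-D intensity vector: alternate the
    centroid and membership updates for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                # memberships sum to 1
    for _ in range(n_iter):
        v = (u ** m @ x) / (u ** m).sum(axis=1)       # fuzzy centroids
        d = np.abs(x[None, :] - v[:, None]) + 1e-12   # point-centroid distances
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                         # membership update
    return v, u
```

On an image, `x` would be the flattened voxel intensities and the defuzzified memberships (argmax over clusters) would give the segmentation labels.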

Unsupervised domain adaptation with adversarial learning for mass detection in mammogram

  • Shen, Rongbo
  • Yao, Jianhua
  • Yan, Kezhou
  • Tian, Kuan
  • Jiang, Cheng
  • Zhou, Ke
Neurocomputing 2020 Journal Article, cited 0 times
Website
Many medical image datasets have been collected without proper annotations for deep learning training. In this paper, we propose a novel unsupervised domain adaptation framework with adversarial learning to minimize the annotation efforts. Our framework employs a task specific network, i.e., fully convolutional network (FCN), for spatial density prediction. Moreover, we employ a domain discriminator, in which adversarial learning is adopted to align the less-annotated target domain features with the well-annotated source domain features in the feature space. We further propose a novel training strategy for the adversarial learning by coupling data from source and target domains and alternating the subnet updates. We employ the public CBIS-DDSM dataset as the source domain, and perform two sets of experiments on two target domains (i.e., the public INbreast dataset and a self-collected dataset), respectively. Experimental results suggest consistent and comparable performance improvement over the state-of-the-art methods. Our proposed training strategy is also proved to converge much faster.

An efficient denoising of impulse noise from MRI using adaptive switching modified decision based unsymmetric trimmed median filter

  • Sheela, C. Jaspin Jeba
  • Suganthi, G.
Biomedical Signal Processing and Control 2020 Journal Article, cited 0 times

Prediction of Molecular Mutations in Diffuse Low-Grade Gliomas using MR Imaging Features

  • Shboul, Zeina A
  • Chen, James
  • M Iftekharuddin, Khan
Sci Rep 2020 Journal Article, cited 0 times
Website
Diffuse low-grade gliomas (LGG) have been reclassified based on molecular mutations, which require invasive tumor tissue sampling. Tissue sampling by biopsy may be limited by sampling error, whereas non-invasive imaging can evaluate the entirety of a tumor. This study presents a non-invasive analysis of low-grade gliomas using imaging features based on the updated classification. We introduce molecular (MGMT methylation, IDH mutation, 1p/19q co-deletion, ATRX mutation, and TERT mutations) prediction methods of low-grade gliomas with imaging. Imaging features are extracted from magnetic resonance imaging data and include texture features, fractal and multi-resolution fractal texture features, and volumetric features. Training models include nested leave-one-out cross-validation to select features, train the model, and estimate model performance. The prediction models of MGMT methylation, IDH mutations, 1p/19q co-deletion, ATRX mutation, and TERT mutations achieve a test performance AUC of 0.83 +/- 0.04, 0.84 +/- 0.03, 0.80 +/- 0.04, 0.70 +/- 0.09, and 0.82 +/- 0.04, respectively. Furthermore, our analysis shows that the fractal features have a significant effect on the predictive performance of MGMT methylation, IDH mutations, 1p/19q co-deletion, and ATRX mutations. The performance of our prediction methods indicates the potential of correlating computed imaging features with LGG molecular mutation types and identifies candidates that may be considered potential predictive biomarkers of LGG molecular classification.

An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor

  • Sharif, Muhammad
  • Amin, Javaria
  • Raza, Mudassar
  • Yasmin, Mussarat
  • Satapathy, Suresh Chandra
Pattern Recognition Letters 2020 Journal Article, cited 0 times
Tumor in the brain is a major cause of death in human beings. If not treated properly and timely, it has a high chance of becoming malignant. Therefore, brain tumor detection at an initial stage is a significant requirement. In this work, the skull is first removed through the brain surface extraction (BSE) method. The skull-removed image is then fed to particle swarm optimization (PSO) to achieve better segmentation. In the next step, local binary patterns (LBP) and deep features of the segmented images are extracted, and a genetic algorithm (GA) is applied for best feature selection. Finally, an artificial neural network (ANN) and other classifiers are utilized to classify the tumor grades. The publicly available complex brain datasets RIDER and BRATS 2018 Challenge are utilized for evaluation of the method, which attained a maximum accuracy of 99%. The results are also compared with existing methods, showing that the presented technique provides improved outcomes, which is clear evidence of its effectiveness and novelty.

A manifold learning regularization approach to enhance 3D CT image-based lung nodule classification

  • Ren, Y.
  • Tsai, M. Y.
  • Chen, L.
  • Wang, J.
  • Li, S.
  • Liu, Y.
  • Jia, X.
  • Shen, C.
Int J Comput Assist Radiol Surg 2020 Journal Article, cited 2 times
Website
PURPOSE: Diagnosis of lung cancer requires radiologists to review every lung nodule in CT images. Such a process can be very time-consuming, and the accuracy is affected by many factors, such as experience of radiologists and available diagnosis time. To address this problem, we proposed to develop a deep learning-based system to automatically classify benign and malignant lung nodules. METHODS: The proposed method automatically determines benignity or malignancy given the 3D CT image patch of a lung nodule to assist diagnosis process. Motivated by the fact that real structure among data is often embedded on a low-dimensional manifold, we developed a novel manifold regularized classification deep neural network (MRC-DNN) to perform classification directly based on the manifold representation of lung nodule images. The concise manifold representation revealing important data structure is expected to benefit the classification, while the manifold regularization enforces strong, but natural constraints on network training, preventing over-fitting. RESULTS: The proposed method achieves accurate manifold learning with reconstruction error of ~ 30 HU on real lung nodule CT image data. In addition, the classification accuracy on testing data is 0.90 with sensitivity of 0.81 and specificity of 0.95, which outperforms state-of-the-art deep learning methods. CONCLUSION: The proposed MRC-DNN facilitates an accurate manifold learning approach for lung nodule classification based on 3D CT images. More importantly, MRC-DNN suggests a new and effective idea of enforcing regularization for network training, with potential impact on a broad range of applications.

An unsupervised semi-automated pulmonary nodule segmentation method based on enhanced region growing

  • Ren, He
  • Zhou, Lingxiao
  • Liu, Gang
  • Peng, Xueqing
  • Shi, Weiya
  • Xu, Huilin
  • Shan, Fei
  • Liu, Lei
Quantitative Imaging in Medicine and Surgery 2020 Journal Article, cited 0 times
Website

Imaging Signature of 1p/19q Co-deletion Status Derived via Machine Learning in Lower Grade Glioma

  • Rathore, Saima
  • Chaddad, Ahmad
  • Bukhari, Nadeem Haider
  • Niazi, Tamim
2020 Book Section, cited 0 times
Website
We present a new approach to quantify the co-deletion of chromosomal arms 1p/19q status in lower grade glioma (LGG). Though the surgical biopsy followed by fluorescence in-situ hybridization test is the gold standard currently to identify mutational status for diagnosis and treatment planning, there are several imaging studies to predict the same. Our study aims to determine the 1p/19q mutational status of LGG non-invasively by advanced pattern analysis using multi-parametric MRI. The publicly available dataset at TCIA was used. T1-W and T2-W MRIs of a total 159 patients with grade-II and grade-III glioma, who had biopsy proven 1p/19q status consisting either no deletion (n = 57) or co-deletion (n = 102), were used in our study. We quantified the imaging profile of these tumors by extracting diverse imaging features, including the tumor’s spatial distribution pattern, volumetric, texture, and intensity distribution measures. We integrated these diverse features via support vector machines, to construct an imaging signature of 1p/19q, which was evaluated in independent discovery (n = 85) and validation (n = 74) cohorts, and compared with the 1p/19q status obtained through fluorescence in-situ hybridization test. The classification accuracy on complete, discovery and replication cohorts was 86.16%, 88.24%, and 85.14%, respectively. The classification accuracy when the model developed on training cohort was applied on unseen replication set was 82.43%. Non-invasive prediction of 1p/19q status from MRIs would allow improved treatment planning for LGG patients without the need of surgical biopsies and would also help in potentially monitoring the dynamic mutation changes during the course of the treatment.

Comparison of iterative parametric and indirect deep learning-based reconstruction methods in highly undersampled DCE-MR Imaging of the breast

  • Rastogi, A.
  • Yalavarthy, P. K.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: To compare the performance of iterative direct and indirect parametric reconstruction methods with indirect deep learning-based reconstruction methods in estimating tracer-kinetic parameters from highly undersampled DCE-MR imaging breast data, and to provide a systematic comparison of the same. METHODS: Estimating tracer-kinetic parameters from undersampled data with indirect methods requires first reconstructing the anatomical images by solving an inverse problem; the reconstructed images are then used to estimate the tracer-kinetic parameters. In direct estimation, the parameters are estimated without reconstructing the anatomical images. Both problems are ill-posed and are typically solved using prior-based regularization or using deep learning. In this study, for indirect estimation, two deep learning-based reconstruction frameworks, ISTA-Net(+) and MODL, were utilized. For direct and indirect parametric estimation, sparsity-inducing priors (L1 and total variation) were deployed with the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm as the solver. The performance of these techniques was compared systematically in estimating vascular permeability ( K trans ) from undersampled DCE-MRI breast data using Patlak as the pharmacokinetic model. The experiments involved retrospective undersampling of the data at 20x, 50x, and 100x, and the results were compared using the PSNR, nRMSE, SSIM, and Xydeas metrics. The K trans maps estimated from fully sampled data were utilized as ground truth. The developed code was made available open-source at https://github.com/Medical-Imaging-Group/DCE-MRI-Compare for interested users. RESULTS: Reconstruction performance was evaluated using ten patients' breast data (five patients each for training and testing). Consistent with other studies, the results indicate that direct parametric reconstruction methods provide improved performance compared to the indirect parametric reconstruction methods. The results also indicate that for 20x undersampling, deep learning-based methods perform better than or on par with direct estimation in terms of PSNR, SSIM, and nRMSE. However, for higher undersampling rates (50x and 100x), direct estimation performs better on all metrics. For all undersampling rates, direct reconstruction performed better in terms of the Xydeas metric, which indicates fidelity in the magnitude and orientation of edges. CONCLUSION: Deep learning-based indirect techniques perform on par with direct estimation techniques at lower undersampling rates in breast DCE-MR imaging. At higher undersampling rates, they are not able to provide the much-needed generalization. Direct estimation techniques provide more accurate results than both deep learning-based and parametric indirect methods in these high-undersampling scenarios.
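The Patlak pharmacokinetic model used above is linear in its two parameters, so Ktrans and vp can be estimated per voxel by ordinary least squares once the arterial input function Cp(t) is known. A minimal sketch under that assumption (trapezoidal integration of Cp, not the authors' solver):

```python
import numpy as np

def patlak_fit(t, cp, ct):
    """Estimate (Ktrans, vp) by linear least squares on the Patlak model
    Ct(t) = Ktrans * integral(Cp) + vp * Cp(t)."""
    # Running trapezoidal integral of the arterial input function Cp.
    icp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    A = np.stack([icp, cp], axis=1)                  # design matrix
    ktrans, vp = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ktrans, vp
```

In a full pipeline this fit runs independently at every voxel of the reconstructed dynamic series.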

A Clinical System for Non-invasive Blood-Brain Barrier Opening Using a Neuronavigation-Guided Single-Element Focused Ultrasound Transducer

  • Pouliopoulos, Antonios N
  • Wu, Shih-Ying
  • Burgess, Mark T
  • Karakatsani, Maria Eleni
  • Kamimura, Hermes A S
  • Konofagou, Elisa E
Ultrasound Med Biol 2020 Journal Article, cited 3 times
Website
Focused ultrasound (FUS)-mediated blood-brain barrier (BBB) opening is currently being investigated in clinical trials. Here, we describe a portable clinical system with a therapeutic transducer suitable for humans, which eliminates the need for in-line magnetic resonance imaging (MRI) guidance. A neuronavigation-guided 0.25-MHz single-element FUS transducer was developed for non-invasive clinical BBB opening. Numerical simulations and experiments were performed to determine the characteristics of the FUS beam within a human skull. We also validated the feasibility of BBB opening obtained with this system in two non-human primates using U.S. Food and Drug Administration (FDA)-approved treatment parameters. Ultrasound propagation through a human skull fragment caused 44.4 +/- 1% pressure attenuation at a normal incidence angle, while the focal size decreased by 3.3 +/- 1.4% and 3.9 +/- 1.8% along the lateral and axial dimension, respectively. Measured lateral and axial shifts were 0.5 +/- 0.4 mm and 2.1 +/- 1.1 mm, while simulated shifts were 0.1 +/- 0.2 mm and 6.1 +/- 2.4 mm, respectively. A 1.5-MHz passive cavitation detector transcranially detected cavitation signals of Definity microbubbles flowing through a vessel-mimicking phantom. T1-weighted MRI confirmed a 153 +/- 5.5 mm(3) BBB opening in two non-human primates at a mechanical index of 0.4, using Definity microbubbles at the FDA-approved dose for imaging applications, without edema or hemorrhage. In conclusion, we developed a portable system for non-invasive BBB opening in humans, which can be achieved at clinically relevant ultrasound exposures without the need for in-line MRI guidance. The proposed FUS system may accelerate the adoption of non-invasive FUS-mediated therapies due to its fast application, low cost and portability.

Peritumoral and intratumoral radiomic features predict survival outcomes among patients diagnosed in lung cancer screening

  • Perez-Morales, J.
  • Tunali, I.
  • Stringfield, O.
  • Eschrich, S. A.
  • Balagurunathan, Y.
  • Gillies, R. J.
  • Schabath, M. B.
Sci Rep 2020 Journal Article, cited 0 times
Website
The National Lung Screening Trial (NLST) demonstrated that screening with low-dose computed tomography (LDCT) is associated with a 20% reduction in lung cancer mortality. One potential limitation of LDCT screening is overdiagnosis of slow growing and indolent cancers. In this study, peritumoral and intratumoral radiomics was used to identify a vulnerable subset of lung cancer patients associated with poor survival outcomes. Incident lung cancer patients from the NLST were split into training and test cohorts and an external cohort of non-screen detected adenocarcinomas was used for further validation. After removing redundant and non-reproducible radiomics features, backward elimination analyses identified a single model which was subjected to Classification and Regression Tree to stratify patients into three risk-groups based on two radiomics features (NGTDM Busyness and Statistical Root Mean Square [RMS]). The final model was validated in the test cohort and the cohort of non-screen detected adenocarcinomas. Using a radio-genomics dataset, Statistical RMS was significantly associated with the FOXF2 gene by both correlation and two-group analyses. Our rigorous approach generated a novel radiomics model that identified a vulnerable high-risk group of early stage patients associated with poor outcomes. These patients may require aggressive follow-up and/or adjuvant therapy to mitigate their poor outcomes.

Efficient CT Image Reconstruction in a GPU Parallel Environment

  • Pérez, Tomás A Valencia
  • López, Javier M Hernández
  • Moreno-Barbosa, Eduardo
  • de Celis Alonso, Benito
  • Merino, Martín R Palomino
  • Meneses, Victor M Castaño
Tomography 2020 Journal Article, cited 0 times

A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing

  • Peng, Z.
  • Fang, X.
  • Yan, P.
  • Shan, H.
  • Liu, T.
  • Pei, X.
  • Wang, G.
  • Liu, B.
  • Kalra, M. K.
  • Xu, X. G.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: One technical barrier to patient-specific computed tomography (CT) dosimetry has been the lack of computational tools for the automatic patient-specific multi-organ segmentation of CT images and rapid organ dose quantification. When previous CT images are available for the same body region of the patient, the ability to obtain patient-specific organ doses for CT - in a similar manner as radiation therapy treatment planning - will open the door to personalized and prospective CT scan protocols. This study aims to demonstrate the feasibility of combining deep-learning algorithms for automatic segmentation of multiple radiosensitive organs from CT images with the GPU-based Monte Carlo rapid organ dose calculation. METHODS: A deep convolutional neural network (CNN) based on the U-Net for organ segmentation is developed and trained to automatically delineate multiple radiosensitive organs from CT images. Two databases are used: The lung CT segmentation challenge 2017 (LCTSC) dataset that contains 60 thoracic CT scan patients, each consisting of five segmented organs, and the Pancreas-CT (PCT) dataset, which contains 43 abdominal CT scan patients each consisting of eight segmented organs. A fivefold cross-validation method is performed on both sets of data. Dice similarity coefficients (DSCs) are used to evaluate the segmentation performance against the ground truth. A GPU-based Monte Carlo dose code, ARCHER, is used to calculate patient-specific CT organ doses. The proposed method is evaluated in terms of relative dose errors (RDEs). To demonstrate the potential improvement of the new method, organ dose results are compared against those obtained for population-average patient phantoms used in an off-line dose reporting software, VirtualDose, at Massachusetts General Hospital. 
RESULTS: The median DSCs are found to be 0.97 (right lung), 0.96 (left lung), 0.92 (heart), 0.86 (spinal cord), 0.76 (esophagus) for the LCTSC dataset, along with 0.96 (spleen), 0.96 (liver), 0.95 (left kidney), 0.90 (stomach), 0.87 (gall bladder), 0.80 (pancreas), 0.75 (esophagus), and 0.61 (duodenum) for the PCT dataset. Comparing with organ dose results from population-averaged phantoms, the new patient-specific method achieved smaller absolute RDEs (mean +/- standard deviation) for all organs: 1.8% +/- 1.4% (vs 16.0% +/- 11.8%) for the lung, 0.8% +/- 0.7% (vs 34.0% +/- 31.1%) for the heart, 1.6% +/- 1.7% (vs 45.7% +/- 29.3%) for the esophagus, 0.6% +/- 1.2% (vs 15.8% +/- 12.7%) for the spleen, 1.2% +/- 1.0% (vs 18.1% +/- 15.7%) for the pancreas, 0.9% +/- 0.6% (vs 20.0% +/- 15.2%) for the left kidney, 1.7% +/- 3.1% (vs 19.1% +/- 9.8%) for the gallbladder, 0.3% +/- 0.3% (vs 24.2% +/- 18.7%) for the liver, and 1.6% +/- 1.7% (vs 19.3% +/- 13.6%) for the stomach. The trained automatic segmentation tool takes <5 s per patient for all 103 patients in the dataset. The Monte Carlo radiation dose calculations performed in parallel to the segmentation process using the GPU-accelerated ARCHER code take <4 s per patient to achieve <0.5% statistical uncertainty in all organ doses for all 103 patients in the database. CONCLUSION: This work shows the feasibility to perform combined automatic patient-specific multi-organ segmentation of CT images and rapid GPU-based Monte Carlo dose quantification with clinically acceptable accuracy and efficiency.
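The two evaluation quantities reported above, the Dice similarity coefficient for segmentation quality and the relative dose error for dosimetry, are both short computations:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def relative_dose_error(d_est, d_ref):
    """Relative dose error (%) of an estimated organ dose vs a reference."""
    return 100.0 * abs(d_est - d_ref) / d_ref
```

Here the reference for the RDE is the patient-specific Monte Carlo organ dose, and the estimate is either the new method's or the population-averaged phantom's value.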

MRI and CT Identify Isocitrate Dehydrogenase (IDH)-Mutant Lower-Grade Gliomas Misclassified to 1p/19q Codeletion Status with Fluorescence in Situ Hybridization

  • Patel, Sohil H
  • Batchala, Prem P
  • Mrachek, E Kelly S
  • Lopes, Maria-Beatriz S
  • Schiff, David
  • Fadul, Camilo E
  • Patrie, James T
  • Jain, Rajan
  • Druzgal, T Jason
  • Williams, Eli S
Radiology 2020 Journal Article, cited 0 times
Website
Background Fluorescence in situ hybridization (FISH) is a standard method for 1p/19q codeletion testing in diffuse gliomas but occasionally renders erroneous results. Purpose To determine whether MRI/CT analysis identifies isocitrate dehydrogenase (IDH)-mutant gliomas misassigned to 1p/19q codeletion status with FISH. Materials and Methods Data in patients with IDH-mutant lower-grade gliomas (World Health Organization grade II/III) and 1p/19q codeletion status determined with FISH that were accrued from January 1, 2010 to October 1, 2017, were included in this retrospective study. Two neuroradiologist readers analyzed the pre-resection MRI findings (and CT findings, when available) to predict 1p/19q status (codeleted or noncodeleted) and provided a prediction confidence score (1 = low, 2 = moderate, 3 = high). Percentage concordance between the consensus neuroradiologist 1p/19q prediction and the FISH result was calculated. For gliomas where (a) consensus neuroradiologist 1p/19q prediction differed from the FISH result and (b) consensus neuroradiologist confidence score was 2 or greater, further 1p/19q testing was performed with chromosomal microarray analysis (CMA). Nine control specimens were randomly chosen from the remaining study sample for CMA. Percentage concordance between FISH and CMA among the CMA-tested cases was calculated. Results A total of 112 patients (median age, 38 years [interquartile range, 31–51 years]; 57 men) were evaluated (112 gliomas). Percentage concordance between the consensus neuroradiologist 1p/19q prediction and the FISH result was 84.8% (95 of 112; 95% confidence interval: 76.8%, 90.9%). Among the 17 neuroradiologist-FISH discordances, there were nine gliomas associated with a consensus neuroradiologist confidence score of 2 or greater. In six (66.7%) of these nine gliomas, the 1p/19q codeletion status as determined with CMA disagreed with the FISH result and agreed with the consensus neuroradiologist prediction. 
For the nine control specimens, there was 100% agreement between CMA and FISH for 1p/19q determination. Conclusion MRI and CT analysis can identify diffuse gliomas misassigned to 1p/19q codeletion status with fluorescence in situ hybridization (FISH). Further molecular testing should be considered for gliomas with discordant neuroimaging and FISH results.

An expert system for brain tumor detection: Fuzzy C-means with super resolution and convolutional neural network with extreme learning machine

  • Ozyurt, F.
  • Sert, E.
  • Avci, D.
Med Hypotheses 2020 Journal Article, cited 10 times
Website
Super-resolution, one of the trending topics of recent times, increases the resolution of images to higher levels. Increasing the resolution of an image that is vital in terms of the information it contains, such as a brain magnetic resonance image (MRI), makes the important information in the image more visible and clearer, so that the borders of tumors in the image are found more successfully. In this study, a brain tumor detection approach based on fuzzy C-means with super-resolution and convolutional neural networks with extreme learning machine algorithms (SR-FCM-CNN) is proposed. The aim of this study was to segment tumors with high performance using the super-resolution fuzzy C-means (SR-FCM) approach for tumor detection from brain MR images. Afterward, features were extracted with the pretrained SqueezeNet architecture from among convolutional neural network (CNN) architectures, and classification was performed with an extreme learning machine (ELM). In the experimental studies, brain tumors were better segmented and extracted using the SR-FCM method. Using the SqueezeNet architecture, features were extracted from a smaller neural network model with fewer parameters. With the proposed method, a 98.33% accuracy rate was achieved in the diagnosis of brain tumors segmented using SR-FCM. This rate is 10% greater than the recognition rate for brain tumors segmented with fuzzy C-means (FCM) without SR.

Optothermal tissue response for laser-based investigation of thyroid cancer

  • Okebiorun, Michael O.
  • ElGohary, Sherif H.
Informatics in Medicine Unlocked 2020 Journal Article, cited 0 times
Website
To characterize imaging-based detection of thyroid cancer, we simulated the optical and thermal response of tissue during an optical investigation of the thyroid. We employed the 3D Monte Carlo method and the bio-heat equation to determine the fluence and temperature distribution via the Molecular Optical Simulation Environment (MOSE) with a finite element (FE) simulator. The optothermal effect of a neck-surface source is also compared to a trachea-based source. Results show the fluence and temperature distribution in a realistic 3D neck model with both endogenous and hypothetical tissue-specific exogenous contrast agents. They also reveal that trachea illumination yields a factor of ten greater absorption and temperature change than neck-surface illumination, and that tumor-specific exogenous contrast agents produce relatively higher absorption and temperature change in the tumors, which could help clinicians and researchers improve and better understand the region's response to laser-based diagnosis.
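The thermal side of such a simulation solves the Pennes bio-heat equation. A minimal explicit finite-difference step in 1-D, with illustrative placeholder tissue constants (the study itself used a 3-D finite-element solver):

```python
import numpy as np

def pennes_step(T, dt, dx, k=0.5, rho_c=3.7e6, w_term=2400.0, Ta=37.0, Q=None):
    """One explicit finite-difference step of the 1-D Pennes bio-heat
    equation: rho*c dT/dt = k d2T/dx2 + w_term*(Ta - T) + Q.
    All tissue constants here are illustrative placeholders, not values
    from the paper. Q is the absorbed laser power density (W/m^3)."""
    if Q is None:
        Q = np.zeros_like(T)
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2  # second derivative
    Tn = T + dt / rho_c * (k * lap + w_term * (Ta - T) + Q)
    Tn[0], Tn[-1] = T[0], T[-1]  # fixed-temperature boundaries
    return Tn
```

The optical Monte Carlo step supplies Q (the absorbed fluence), and repeated application of this update propagates the resulting temperature rise.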

Modified fast adaptive scatter kernel superposition (mfASKS) correction and its dosimetric impact on CBCT‐based proton therapy dose calculation

  • Nomura, Yusuke
  • Xu, Qiong
  • Peng, Hao
  • Takao, Seishin
  • Shimizu, Shinichi
  • Xing, Lei
  • Shirato, Hiroki
Medical physics 2020 Journal Article, cited 0 times
Website

Homological radiomics analysis for prognostic prediction in lung cancer patients

  • Ninomiya, Kenta
  • Arimura, Hidetaka
Physica Medica 2020 Journal Article, cited 0 times
Website

Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi

  • Nemoto, Takafumi
  • Futakami, Natsumi
  • Yagi, Masamichi
  • Kumabe, Atsuhiro
  • Takeda, Atsuya
  • Kunieda, Etsuo
  • Shigematsu, Naoyuki
Journal of Radiation Research 2020 Journal Article, cited 0 times
Website
This study aimed to examine the efficacy of semantic segmentation implemented by deep learning and to confirm whether this method is more effective than a commercially dominant auto-segmentation tool with regard to delineating normal lung excluding the trachea and main bronchi. A total of 232 non-small-cell lung cancer cases were examined. The computed tomography (CT) images of these cases were converted from Digital Imaging and Communications in Medicine (DICOM) Radiation Therapy (RT) formats to arrays of 32 x 128 x 128 voxels and input into both 2D and 3D U-Net, which are deep learning networks for semantic segmentation. The numbers of training, validation and test cases were 160, 40 and 32, respectively. Dice similarity coefficients (DSCs) on the test set were evaluated for Smart Segmentation Knowledge Based Contouring (an atlas-based segmentation tool), as well as for the 2D and 3D U-Nets. The mean DSCs on the test set were 0.964 [95% confidence interval (CI), 0.960-0.968], 0.990 (95% CI, 0.989-0.992) and 0.990 (95% CI, 0.989-0.991) with Smart segmentation, 2D and 3D U-Net, respectively. Compared with Smart segmentation, both U-Nets presented significantly higher DSCs by the Wilcoxon signed-rank test (P < 0.01). There was no difference in mean DSC between the 2D and 3D U-Net systems. The newly devised 2D and 3D U-Net approaches were found to be more effective than the commercial auto-segmentation tool. Even the relatively shallow 2D U-Net, which does not require high-performance computational resources, was effective enough for lung segmentation. Semantic segmentation using deep learning was useful in radiation treatment planning for lung cancers.
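The Dice similarity coefficient used to compare the segmentations above has a simple set-based definition, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch over voxel coordinate sets (an illustrative simplification of the usual mask-array computation):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets.

    a, b: iterables of voxel coordinates (e.g. (z, y, x) tuples).
    Returns 1.0 for two empty sets by convention.
    """
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))
```

Two 4-voxel masks that agree on 3 voxels give DSC = 2*3/8 = 0.75; identical masks give 1.0.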

Regularized Three-Dimensional Generative Adversarial Nets for Unsupervised Metal Artifact Reduction in Head and Neck CT Images

  • Nakao, Megumi
  • Imanishi, Keiho
  • Ueda, Nobuhiro
  • Imai, Yuichiro
  • Kirita, Tadaaki
  • Matsuda, Tetsuya
IEEE Access 2020 Journal Article, cited 1 times
Website
The reduction of metal artifacts in computed tomography (CT) images, specifically for strong artifacts generated from multiple metal objects, is a challenging issue in medical imaging research. Although there have been some studies on supervised metal artifact reduction through the learning of synthesized artifacts, it is difficult for simulated artifacts to cover the complexity of the real physical phenomena that may be observed in X-ray propagation. In this paper, we introduce metal artifact reduction methods based on an unsupervised volume-to-volume translation learned from clinical CT images. We construct three-dimensional adversarial nets with a regularized loss function designed for metal artifacts from multiple dental fillings. The results of experiments using a CT volume database of 361 patients demonstrate that the proposed framework has an outstanding capacity to reduce strong artifacts and to recover underlying missing voxels, while preserving the anatomical features of soft tissues and tooth structures from the original images.

A shallow convolutional neural network predicts prognosis of lung cancer patients in multi-institutional computed tomography image datasets

  • Mukherjee, Pritam
  • Zhou, Mu
  • Lee, Edward
  • Schicht, Anne
  • Balagurunathan, Yoganand
  • Napel, Sandy
  • Gillies, Robert
  • Wong, Simon
  • Thieme, Alexander
  • Leung, Ann
  • Gevaert, Olivier
Nature Machine Intelligence 2020 Journal Article, cited 0 times
Website
Lung cancer is the most common fatal malignancy in adults worldwide, and non-small-cell lung cancer (NSCLC) accounts for 85% of lung cancer diagnoses. Computed tomography is routinely used in clinical practice to determine lung cancer treatment and assess prognosis. Here, we developed LungNet, a shallow convolutional neural network for predicting outcomes of patients with NSCLC. We trained and evaluated LungNet on four independent cohorts of patients with NSCLC from four medical centres: Stanford Hospital (n = 129), H. Lee Moffitt Cancer Center and Research Institute (n = 185), MAASTRO Clinic (n = 311) and Charité – Universitätsmedizin, Berlin (n = 84). We show that outcomes from LungNet are predictive of overall survival in all four independent survival cohorts as measured by concordance indices of 0.62, 0.62, 0.62 and 0.58 on cohorts 1, 2, 3 and 4, respectively. Furthermore, the survival model can be used, via transfer learning, for classifying benign versus malignant nodules on the Lung Image Database Consortium (n = 1,010), with improved performance (AUC = 0.85) versus training from scratch (AUC = 0.82). LungNet can be used as a non-invasive predictor for prognosis in patients with NSCLC and can facilitate interpretation of computed tomography images for lung cancer stratification and prognostication.
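The concordance indices reported for LungNet measure how often the model ranks a patient with shorter survival as higher risk. A minimal sketch of Harrell's C-index (all-event data shown; the censoring rule — only pairs where the shorter survivor had the event count — is included):

```python
def concordance_index(times, events, risk):
    """Harrell's C-index: fraction of comparable pairs ordered correctly.

    times: survival/follow-up times; events: 1 if event observed, 0 if censored;
    risk: model risk scores (higher risk should mean shorter survival).
    Tied risk scores count as half-concordant.
    """
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable if subject i had the event and a shorter time
            if events[i] and times[i] < times[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

A perfectly anti-ordered model scores 1.0, a constant model 0.5, which is why the reported 0.58-0.62 values represent modest but real predictive signal.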

Prediction of Non-small Cell Lung Cancer Histology by a Deep Ensemble of Convolutional and Bidirectional Recurrent Neural Network

  • Moitra, Dipanjan
  • Mandal, Rakesh Kumar
Journal of Digital Imaging 2020 Journal Article, cited 0 times

Brain image classification by the combination of different wavelet transforms and support vector machine classification

  • Mishra, Shailendra Kumar
  • Deepthi, V. Hima
Journal of Ambient Intelligence and Humanized Computing 2020 Journal Article, cited 0 times
Website
The human brain is the primary organ of the nervous system, located at its centre in the human body. Abnormal cell growth in the brain is known as a brain tumor; such tumors do not spread to other parts of the body. Early diagnosis of brain tumors is therefore required. In this work, an efficient technique is presented for magnetic resonance imaging (MRI) brain image classification, using different wavelet transforms, namely the discrete wavelet transform (DWT), stationary wavelet transform (SWT) and dual-tree M-band wavelet transform (DMWT), for feature extraction and coefficient selection, with a support vector machine classifier used for classification. The features of normal and abnormal MRI brain images are decomposed by DWT, SWT and DMWT. The sub-band coefficients are selected by feature ranking for classification. Results show that DWT, SWT and DMWT each produce 98% accuracy for the MRI brain classification system.
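The wavelet decomposition step can be sketched with the simplest case, a single level of the 1-D Haar DWT (the paper uses 2-D transforms and several wavelet families; this is an illustrative reduction). The low-pass approximation and high-pass detail coefficients it produces are the kind of sub-band values from which classifier features are drawn:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (even-length signal).

    Returns (approximation, detail): pairwise scaled sums capture the low-pass
    trend, pairwise scaled differences capture the high-pass detail. The 1/sqrt(2)
    scaling makes the transform orthonormal (energy-preserving).
    """
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail
```

Because the transform is orthonormal, the energy of the coefficients equals the energy of the input, so sub-band statistics (means, energies) are stable features for an SVM.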

“One Stop Shop” for Prostate Cancer Staging using Imaging Biomarkers and Spatially Registered Multi-Parametric MRI

  • Mayer, Rulon
2020 Patent, cited 0 times
Website

A quantitative validation of segmented colon in virtual colonoscopy using image moments

  • Manjunath, K. N.
  • Prabhu, G. K.
  • Siddalingaswamy, P. C.
Biomedical Journal 2020 Journal Article, cited 1 times
Website
Background: Evaluation of the segmented colon is one of the challenges in computed tomography colonography (CTC). The objective of the study was to measure the segmented colon accurately using image processing techniques. Methods: This was a retrospective study, and institutional ethical clearance was obtained for the secondary dataset. The technique was tested on 85 CTC datasets, acquired at 100-120 kVp, 100 mA, and slice thicknesses (ST) of 1.25 and 2.5 mm, which were used for empirical testing. The initial results of the work appear in the conference proceedings. After colon segmentation, three distance measurement techniques and one volumetric overlap computation were applied in Euclidean space: the distances were measured on MPR views of the segmented and unsegmented colons, and the volumetric overlap was calculated between these two volumes. Results: The key finding was that the measurements on both the segmented and the unsegmented volumes remained the same, with no appreciable difference; this was confirmed statistically. The results were validated quantitatively on 2D MPR images. An accuracy of 95.265 ± 0.4551% was achieved through volumetric overlap computation. A paired t-test at alpha = 5% gave p = 0.6769 and t = 0.4169, indicating no significant difference. Conclusion: A combination of different validation techniques was applied to check the robustness of the colon segmentation method, and good results were achieved with this approach. Through quantitative validation, the results were accepted at alpha = 5%.
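The paired t-test used above compares per-case differences between the two measurement sets. A minimal sketch of the test statistic, t = mean(d) / (sd(d)/sqrt(n)) with d = x - y (degrees of freedom n-1; the p-value lookup against the t-distribution is omitted):

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched samples x and y.

    t = mean(d) / (sd(d) / sqrt(n)), where d = x - y and sd uses the
    n-1 (sample) denominator. |t| near 0 means no systematic difference.
    """
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)
```

Paired measurements that differ only by noise give a small |t| (consistent with the study's t = 0.4169), while a systematic offset drives |t| well past typical critical values.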

MRI-based radiogenomics analysis for predicting genetic alterations in oncogenic signalling pathways in invasive breast carcinoma

  • Lin, P
  • Liu, WK
  • Li, X
  • Wan, D
  • Qin, H
  • Li, Q
  • Chen, G
  • He, Y
  • Yang, H
Clinical Radiology 2020 Journal Article, cited 0 times

The Impact of Obesity on Tumor Glucose Uptake in Breast and Lung Cancer

  • Leitner, Brooks P.
  • Perry, Rachel J.
JNCI Cancer Spectrum 2020 Journal Article, cited 0 times
Website
Obesity confers an increased incidence and poorer clinical prognosis in over ten cancer types. Paradoxically, obesity provides protection from poor outcomes in lung cancer. The mechanisms for the obesity-cancer links are not fully elucidated, with altered glucose metabolism being a promising candidate. Using 18F-fluorodeoxyglucose positron emission tomography/computed tomography images from The Cancer Imaging Archive, we explored the relationship between body mass index (BMI) and glucose metabolism in several cancers. In 188 patients (BMI: 27.7, SD = 5.1, range = 17.4-49.3 kg/m2), higher BMI was associated with greater tumor glucose uptake in obesity-associated breast cancer (r = 0.36, p = 0.02), and with lower tumor glucose uptake in non-small-cell lung cancer (r = -0.26, p = 0.048), using two-sided Pearson correlations. No relationship was observed in soft tissue sarcoma or squamous cell carcinoma. Harnessing The National Cancer Institute's open-access database, we demonstrate altered tumor glucose metabolism as a potential mechanism for the detrimental and protective effects of obesity on breast and lung cancer, respectively.
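The Pearson correlations reported above (r = 0.36, r = -0.26) come from the standard formula r = cov(x, y) / (sd(x) sd(y)). A minimal sketch (p-value computation against the t-distribution is omitted):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired samples x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly linear data gives r = ±1; values like 0.36 indicate a weak-to-moderate linear association, which is why the study pairs them with significance tests.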

Integrative Radiogenomics Approach for Risk Assessment of Post-Operative Metastasis in Pathological T1 Renal Cell Carcinoma: A Pilot Retrospective Cohort Study

  • Lee, H. W.
  • Cho, H. H.
  • Joung, J. G.
  • Jeon, H. G.
  • Jeong, B. C.
  • Jeon, S. S.
  • Lee, H. M.
  • Nam, D. H.
  • Park, W. Y.
  • Kim, C. K.
  • Seo, S. I.
  • Park, H.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Despite the increasing incidence of pathological stage T1 renal cell carcinoma (pT1 RCC), postoperative distant metastases develop in many surgically treated patients, causing death in certain cases. Therefore, this study aimed to create a radiomics model using imaging features from multiphase computed tomography (CT) to more accurately predict the postoperative metastasis of pT1 RCC and further investigate the possible link between radiomics parameters and gene expression profiles generated by whole transcriptome sequencing (WTS). Four radiomic features, including the minimum value of a histogram feature from inner regions of interest (ROIs) (INNER_Min_hist), the histogram of the energy feature from outer ROIs (OUTER_Energy_Hist), the maximum probability of the gray-level co-occurrence matrix (GLCM) feature from inner ROIs (INNER_MaxProb_GLCM), and the ratio of voxels under 80 Hounsfield units (HU) in the nephrographic phase of postcontrast CT (Under80HURatio), were found to predict the postsurgical metastasis of patients with pathological stage T1 RCC, and the clinical outcomes of patients could be successfully stratified based on their radiomic risk scores. Furthermore, we identified heterogeneous-trait-associated gene signatures correlated with these four radiomic features, which captured clinically relevant molecular pathways, the tumor immune microenvironment, and potential treatment strategies. These accurate radiogenomic surrogates could help identify pT1 RCC patients who may derive additional benefit from adjuvant therapy against postsurgical metastasis.

A Hybrid End-to-End Approach Integrating Conditional Random Fields into CNNs for Prostate Cancer Detection on MRI

  • Lapa, Paulo
  • Castelli, Mauro
  • Gonçalves, Ivo
  • Sala, Evis
  • Rundo, Leonardo
Applied Sciences 2020 Journal Article, cited 0 times

Medical image segmentation using modified fuzzy c mean based clustering

  • Kumar, Dharmendra
  • Solanki, Anil Kumar
  • Ahlawat, Anil
  • Malhotra, Sukhnandan
2020 Conference Proceedings, cited 0 times
Website
Locating diseased areas in medical images is one of the most challenging tasks in the field of image segmentation. This paper presents a new approach to image segmentation using modified fuzzy c-means (MFCM) clustering. Considering low-illumination medical images, the input image is first enhanced using the histogram equalization (HE) technique. The enhanced image is then segmented into various regions using the MFCM-based approach. Local information is employed in the objective function of MFCM to overcome the issue of noise sensitivity. After that, the membership partition is improved by fast membership filtering. The results of the proposed scheme are found suitable in terms of various evaluation parameters in experimentation.
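The histogram equalization preprocessing step mentioned above can be sketched for a flat list of integer gray levels; the mapping pushes each level through the normalized cumulative distribution so a low-contrast image spreads over the full range. This is an illustrative stand-in, not the paper's implementation:

```python
def equalize(pixels, levels=256):
    """Histogram equalization via the normalized CDF of gray levels.

    pixels: flat list of integer gray levels in [0, levels).
    Returns the remapped pixel list spanning [0, levels-1].
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    # lookup table: scale the CDF so occupied levels span the full range
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]
```

A dim image occupying only levels 50-53 is stretched to 0-255 while preserving the ordering of intensities, which is exactly what makes subsequent clustering easier on low-illumination scans.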

Impact of internal target volume definition for pencil beam scanned proton treatment planning in the presence of respiratory motion variability for lung cancer: A proof of concept

  • Krieger, Miriam
  • Giger, Alina
  • Salomir, Rares
  • Bieri, Oliver
  • Celicanin, Zarko
  • Cattin, Philippe C
  • Lomax, Antony J
  • Weber, Damien C
  • Zhang, Ye
Radiotherapy and Oncology 2020 Journal Article, cited 0 times
Website

Usefulness of gradient tree boosting for predicting histological subtype and EGFR mutation status of non-small cell lung cancer on (18)F FDG-PET/CT

  • Koyasu, S.
  • Nishio, M.
  • Isoda, H.
  • Nakamoto, Y.
  • Togashi, K.
Ann Nucl Med 2020 Journal Article, cited 3 times
Website
OBJECTIVE: To develop and evaluate a radiomics approach for classifying histological subtypes and epidermal growth factor receptor (EGFR) mutation status in lung cancer on PET/CT images. METHODS: PET/CT images of lung cancer patients were obtained from public databases and used to establish two datasets, respectively to classify histological subtypes (156 adenocarcinomas and 32 squamous cell carcinomas) and EGFR mutation status (38 mutant and 100 wild-type samples). Seven types of imaging features were obtained from PET/CT images of lung cancer. Two types of machine learning algorithms were used to predict histological subtypes and EGFR mutation status: random forest (RF) and gradient tree boosting (XGB). The classifiers used either a single type or multiple types of imaging features. In the latter case, the optimal combination of the seven types of imaging features was selected by Bayesian optimization. Receiver operating characteristic analysis, area under the curve (AUC), and tenfold cross validation were used to assess the performance of the approach. RESULTS: In the classification of histological subtypes, the AUC values of the various classifiers were as follows: RF, single type: 0.759; XGB, single type: 0.760; RF, multiple types: 0.720; XGB, multiple types: 0.843. In the classification of EGFR mutation status, the AUC values were: RF, single type: 0.625; XGB, single type: 0.617; RF, multiple types: 0.577; XGB, multiple types: 0.659. CONCLUSIONS: The radiomics approach to PET/CT images, together with XGB and Bayesian optimization, is useful for classifying histological subtypes and EGFR mutation status in lung cancer.
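The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case. A minimal rank-comparison sketch (ties count half):

```python
def auc(labels, scores):
    """AUC as the probability that a positive outranks a negative.

    labels: 1 for positive class, 0 for negative; scores: classifier outputs.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    num = 0.0
    for p in pos:
        for q in neg:
            num += 1.0 if p > q else (0.5 if p == q else 0.0)
    return num / (len(pos) * len(neg))
```

A perfect ranker scores 1.0 and a random one 0.5, which puts the study's best value (0.843 for histology with XGB on combined features) in context against the 0.659 obtained for EGFR status.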

Design and evaluation of an accurate CNR-guided small region iterative restoration-based tumor segmentation scheme for PET using both simulated and real heterogeneous tumors

  • Koç, Alpaslan
  • Güveniş, Albert
Med Biol Eng Comput 2020 Journal Article, cited 0 times
Website
Tumor delineation accuracy directly affects the effectiveness of radiotherapy. This study presents a methodology that minimizes potential errors during the automated segmentation of tumors in PET images. Iterative blind deconvolution was implemented in a region of interest encompassing the tumor, with the number of iterations determined from contrast-to-noise ratios. The active contour and random forest classification-based segmentation method was evaluated using three distinct image databases that included both synthetic and real heterogeneous tumors. Ground truths about tumor volumes were known precisely. The volumes of the tumors were in the ranges of 0.49-26.34 cm3, 0.64-1.52 cm3, and 40.38-203.84 cm3, respectively. Widely available software tools, namely MATLAB, MIPAV, and ITK-SNAP, were utilized. When using the active contour method, image restoration reduced mean errors in volume estimation from 95.85 to 3.37%, from 815.63 to 17.45%, and from 32.61 to 6.80% for the three datasets. The accuracy gains were higher for datasets that include smaller tumors, for which PVE is known to be more predominant. Computation time was reduced by a factor of about 10 in the smaller deconvolution region. Contrast-to-noise ratios were improved for all tumors in all data. The presented methodology has the potential to improve delineation accuracy, in particular for smaller tumors, at practically feasible computational times. Graphical abstract: Evaluation of accurate lesion volumes using the CNR-guided and ROI-based restoration method for PET images.
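The contrast-to-noise ratio that guides the iteration count above is commonly defined as |mean(ROI) - mean(background)| / std(background); the paper's exact definition may differ, so treat this as an assumed common form for illustration:

```python
import math

def cnr(roi, background):
    """Contrast-to-noise ratio between an ROI and a background region.

    Assumed common definition: |mean(ROI) - mean(bg)| / std(bg),
    with the sample (n-1) standard deviation of the background.
    """
    m_roi = sum(roi) / len(roi)
    m_bg = sum(background) / len(background)
    var_bg = sum((v - m_bg) ** 2 for v in background) / (len(background) - 1)
    return abs(m_roi - m_bg) / math.sqrt(var_bg)
```

Tracking CNR across deconvolution iterations gives a stopping criterion: iterate while CNR improves, stop when noise amplification starts to erode it.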

PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines

  • Kiser, Kendall J
  • Ahmed, Sara
  • Stieb, Sonja
  • Mohamed, Abdallah S R
  • Elhalawani, Hesham
  • Park, Peter Y S
  • Doyle, Nathan S
  • Wang, Brandon J
  • Barman, Arko
  • Li, Zhao
  • Zheng, W Jim
  • Fuller, Clifton D
  • Giancardo, Luca
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations we have annotated on 402 CT scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. ACQUISITION AND VALIDATION METHODS: Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four-hundred-two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist and radiologist corrections was acceptable. DATA FORMAT AND USAGE NOTES: All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at https://doi.org/10.7937/tcia.2020.6c7y-gq39. Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. POTENTIAL APPLICATIONS: Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs - where current automated algorithms struggle most. 
In conjunction with gross tumor volume segmentations already available from "NSCLC Radiomics," pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor or training algorithms to discriminate between them.

Correlation between MR Image-Based Radiomics Features and Risk Scores Associated with Gene Expression Profiles in Breast Cancer

  • Kim, Ga Ram
  • Ku, You Jin
  • Kim, Jun Ho
  • Kim, Eun-Kyung
Journal of the Korean Society of Radiology 2020 Journal Article, cited 0 times
Website

Arterial input function and tracer kinetic model-driven network for rapid inference of kinetic maps in Dynamic Contrast-Enhanced MRI (AIF-TK-net)

  • Kettelkamp, Joseph
  • Lingala, Sajan Goud
2020 Conference Paper, cited 0 times
Website
We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. We term our network AIF-TK-net; it maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output image patch of the TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images, on the order of 0.34 sec/slice for a 256x256x65 time-series dataset on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high-time-resolution DCE-MRI datasets where significant variability in AIFs across patients exists. We demonstrate that the proposed AIF-TK-net considerably improves the TK parameter estimation accuracy in comparison to a network that does not utilize the patient AIF.
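The forward model the network inverts is the extended Tofts-Kety equation, Ct(t) = vp·Cp(t) + Ktrans ∫₀ᵗ Cp(τ) e^(-kep(t-τ)) dτ with kep = Ktrans/ve. A minimal numeric sketch by discrete convolution (a left-Riemann approximation on a uniform time grid, for illustration only — not the paper's network):

```python
import math

def extended_tofts(t, cp, ktrans, ve, vp):
    """Extended Tofts-Kety tissue concentration by discrete convolution.

    t: uniformly spaced time points; cp: plasma concentration (AIF) samples;
    ktrans, ve, vp: transfer constant, EES volume fraction, plasma volume fraction.
    Ct(t) = vp*Cp(t) + Ktrans * sum_j Cp(t_j) * exp(-kep*(t - t_j)) * dt.
    """
    kep = ktrans / ve
    dt = t[1] - t[0]
    ct = []
    for i, ti in enumerate(t):
        integral = sum(cp[j] * math.exp(-kep * (ti - t[j]))
                       for j in range(i + 1)) * dt
        ct.append(vp * cp[i] + ktrans * integral)
    return ct
```

For a constant AIF the curve rises monotonically toward the steady-state value vp + ve, a useful sanity check when fitting the three parameters per voxel.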

The Combination of Low Skeletal Muscle Mass and High Tumor Interleukin-6 Associates with Decreased Survival in Clear Cell Renal Cell Carcinoma

  • Kays, J. K.
  • Koniaris, L. G.
  • Cooper, C. A.
  • Pili, R.
  • Jiang, G.
  • Liu, Y.
  • Zimmers, T. A.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Clear cell renal carcinoma (ccRCC) is frequently associated with cachexia which is itself associated with decreased survival and quality of life. We examined relationships among body phenotype, tumor gene expression, and survival. Demographic, clinical, computed tomography (CT) scans and tumor RNASeq for 217 ccRCC patients were acquired from the Cancer Imaging Archive and The Cancer Genome Atlas (TCGA). Skeletal muscle and fat masses measured from CT scans and tumor cytokine gene expression were compared with survival by univariate and multivariate analysis. Patients in the lowest skeletal muscle mass (SKM) quartile had significantly shorter overall survival versus the top three SKM quartiles. Patients who fell into the lowest quartiles for visceral adipose mass (VAT) and subcutaneous adipose mass (SCAT) also demonstrated significantly shorter overall survival. Multiple tumor cytokines correlated with mortality, most strongly interleukin-6 (IL-6); high IL-6 expression was associated with significantly decreased survival. The combination of low SKM/high IL-6 was associated with significantly lower overall survival compared to high SKM/low IL-6 expression (26.1 months vs. not reached; p < 0.001) and an increased risk of mortality (HR = 5.95; 95% CI = 2.86-12.38). In conclusion, tumor cytokine expression, body composition, and survival are closely related, with low SKM/high IL-6 expression portending worse prognosis in ccRCC.
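The survival comparisons above rest on the Kaplan-Meier estimator, S(t) = Π (1 - d_i/n_i) over event times, where d_i is the number of deaths and n_i the number still at risk. A minimal sketch handling censoring and tied times:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times: follow-up times; events: 1 if the event occurred, 0 if censored.
    Returns a list of (event_time, survival_probability) pairs.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    s, curve = 1.0, []
    at_risk = len(times)
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = n_here = 0
        # group all subjects sharing this time point
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_here += 1
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk
            curve.append((t, s))
        at_risk -= n_here  # deaths and censored both leave the risk set
    return curve
```

Censored subjects reduce the risk set without dropping the curve, which is what distinguishes KM from a naive survival fraction.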

ECIDS-Enhanced Cancer Image Diagnosis and Segmentation Using Artificial Neural Networks and Active Contour Modelling

  • Kavitha, M. S.
  • Shanthini, J.
  • Bhavadharini, R. M.
Journal of Medical Imaging and Health Informatics 2020 Journal Article, cited 0 times
In the present decade, image processing techniques have been extensively utilized in various medical image diagnoses, specifically in dealing with cancer images for early detection and treatment. Image quality and accuracy are the significant factors to be considered while analyzing images for cancer diagnosis. With that in mind, this paper develops an Enhanced Cancer Image Diagnosis and Segmentation (ECIDS) framework for effective detection and segmentation of lung cancer cells. Initially, the computed tomography lung image (CT image) is denoised by employing a kernel-based global denoising function. The noise-free lung images are then passed to feature extraction, and the images are classified into normal and abnormal classes using feed-forward artificial neural network classification. The classified lung cancer images are then segmented using active contour modelling with reduced gradient, and the segmented cancer images are passed on for further medical processing. The framework was evaluated in MATLAB using the clinical LIDC-IDRI lung CT dataset. The results are analyzed and discussed based on performance evaluation metrics such as energy, entropy, correlation, and homogeneity, which are involved in effective classification.

Multi-Institutional Validation of Deep Learning for Pretreatment Identification of Extranodal Extension in Head and Neck Squamous Cell Carcinoma

  • Kann, B. H.
  • Hicks, D. F.
  • Payabvash, S.
  • Mahajan, A.
  • Du, J.
  • Gupta, V.
  • Park, H. S.
  • Yu, J. B.
  • Yarbrough, W. G.
  • Burtness, B. A.
  • Husain, Z. A.
  • Aneja, S.
J Clin Oncol 2020 Journal Article, cited 5 times
Website
PURPOSE: Extranodal extension (ENE) is a well-established poor prognosticator and an indication for adjuvant treatment escalation in patients with head and neck squamous cell carcinoma (HNSCC). Identification of ENE on pretreatment imaging represents a diagnostic challenge that limits its clinical utility. We previously developed a deep learning algorithm that identifies ENE on pretreatment computed tomography (CT) imaging in patients with HNSCC. We sought to validate our algorithm performance for patients from a diverse set of institutions and compare its diagnostic ability to that of expert diagnosticians. METHODS: We obtained preoperative, contrast-enhanced CT scans and corresponding pathology results from two external data sets of patients with HNSCC: an external institution and The Cancer Genome Atlas (TCGA) HNSCC imaging data. Lymph nodes were segmented and annotated as ENE-positive or ENE-negative on the basis of pathologic confirmation. Deep learning algorithm performance was evaluated and compared directly to two board-certified neuroradiologists. RESULTS: A total of 200 lymph nodes were examined in the external validation data sets. For lymph nodes from the external institution, the algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.84 (83.1% accuracy), outperforming radiologists' AUCs of 0.70 and 0.71 (P = .02 and P = .01). Similarly, for lymph nodes from the TCGA, the algorithm achieved an AUC of 0.90 (88.6% accuracy), outperforming radiologist AUCs of 0.60 and 0.82 (P < .0001 and P = .16). Radiologist diagnostic accuracy improved when receiving deep learning assistance. CONCLUSION: Deep learning successfully identified ENE on pretreatment imaging across multiple institutions, exceeding the diagnostic ability of radiologists with specialized head and neck experience. 
Our findings suggest that deep learning has utility in the identification of ENE in patients with HNSCC and has the potential to be integrated into clinical decision making.

The contribution of axillary lymph node volume to recurrence-free survival status in breast cancer patients with sub-stratification by molecular subtypes and pathological complete response

  • Kang, James
  • Li, Haifang
  • Cattell, Renee
  • Talanki, Varsha
  • Cohen, Jules A.
  • Bernstein, Clifford S.
  • Duong, Tim
Breast Cancer Research 2020 Journal Article, cited 0 times
Website
Purpose: This study sought to examine the contribution of axillary lymph node (LN) volume to recurrence-free survival (RFS) in breast cancer patients, with sub-stratification by molecular subtypes and full or nodal PCR. Methods: The largest LN volumes per patient at pre-neoadjuvant chemotherapy on standard clinical breast 1.5-Tesla MRI, three molecular subtypes, full, breast, and nodal PCR, and 10-year RFS were tabulated (N = 110 patients from MRIs of the I-SPY-1 TRIAL). A volume threshold of two standard deviations was used to categorize large versus small LNs for sub-stratification. In addition, "normal" node volumes were determined from a different cohort of 218 axillary LNs. Results: LN volumes (4.07 ± 5.45 cm3) were significantly larger than normal axillary LN volumes (0.646 ± 0.657 cm3, P = 10^-16). Full and nodal pathologic complete response (PCR) was not dependent on pre-neoadjuvant chemotherapy nodal volume (P > .05). The HR+/HER2- group had smaller axillary LN volumes than the HER2+ and triple-negative groups (P < .05). Survival was not dependent on pre-treatment axillary LN volumes alone (P = .29). However, when sub-stratified by PCR, the large LN group with full (P = .011) or nodal PCR (P = .0026) showed better recurrence-free survival than the small LN group. There was a significant difference in RFS when the small node group was separated by the three molecular subtypes (P = .036) but not the large node group (P = .97). Conclusions: This study found an interaction of axillary lymph node volume, pathological complete response, and molecular subtype that informs recurrence-free survival status. Improved characterization of the axillary lymph nodes has the potential to improve the management of breast cancer patients.

Homology-based radiomic features for prediction of the prognosis of lung cancer based on CT-based radiomics

  • Kadoya, Noriyuki
  • Tanaka, Shohei
  • Kajikawa, Tomohiro
  • Tanabe, Shunpei
  • Abe, Kota
  • Nakajima, Yujiro
  • Yamamoto, Takaya
  • Takahashi, Noriyoshi
  • Takeda, Kazuya
  • Dobashi, Suguru
  • Takeda, Ken
  • Nakane, Kazuaki
  • Jingu, Keiichi
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Radiomics is a new technique that enables noninvasive prognostic prediction by extracting features from medical images. Homology is a concept used in many branches of algebra and topology that can quantify the contact degree. In the present study, we developed homology-based radiomic features to predict the prognosis of non-small-cell lung cancer (NSCLC) patients and then evaluated the accuracy of this prediction method. METHODS: Four data sets were used: two to provide training and test data and two for the selection of robust radiomic features. All the data sets were downloaded from The Cancer Imaging Archive (TCIA). In two-dimensional cases, the Betti numbers consist of two values: b0 (zero-dimensional Betti number), which is the number of isolated components, and b1 (one-dimensional Betti number), which is the number of one-dimensional or "circular" holes. For homology-based evaluation, CT images must be converted to binarized images in which each pixel has two possible values: 0 or 1. All CT slices of the gross tumor volume were used for calculating the homology histogram. First, by changing the threshold of the CT value (range: -150 to 300 HU) for all slices, we developed homology-based histograms for b0, b1, and b1/b0 using binarized images. All histograms were then summed, and the summed histogram was normalized by the number of slices. A total of 144 homology-based radiomic features were defined from the histogram. For comparison with standard radiomic features, 107 radiomic features were calculated using the standard radiomics technique. To clarify the prognostic power, the relationship between the values of the homology-based radiomic features and overall survival was evaluated using a LASSO Cox regression model and the Kaplan-Meier method. The retained features with non-zero coefficients calculated by the LASSO Cox regression model were used for fitting the regression model. Moreover, these features were then integrated into a radiomics signature.
An individualized rad score was calculated from a linear combination of the selected features, which were weighted by their respective coefficients. RESULTS: When the patients in the training and test data sets were stratified into high-risk and low-risk groups according to the rad scores, the overall survival of the groups was significantly different. The C-index values for the homology-based features (rad score), standard features (rad score), and tumor size were 0.625, 0.603, and 0.607, respectively, for the training data sets and 0.689, 0.668, and 0.667 for the test data sets. This result showed that homology-based radiomic features had slightly higher prediction power than the standard radiomic features. CONCLUSIONS: Prediction performance using homology-based radiomic features had a comparable or slightly higher prediction power than standard radiomic features. These findings suggest that homology-based radiomic features may have great potential for improving the prognostic prediction accuracy of CT-based radiomics. It should be noted, however, that this study has some limitations.
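The Betti-number computation described in this abstract can be sketched in plain Python. This is an illustrative reimplementation, not the authors' code: each binarized slice is treated as a cubical complex (pixels are unit squares, foreground is 4-connected), b0 is counted by flood fill, and b1 is recovered from the Euler characteristic via b1 = b0 - chi:

```python
def betti_numbers(img):
    """Betti numbers (b0, b1) of a binary 2-D image viewed as a cubical
    complex: pixels are unit squares, foreground is 4-connected."""
    pixels = {(i, j) for i, row in enumerate(img)
              for j, v in enumerate(row) if v}
    # b0: count 4-connected components with an iterative flood fill
    seen, b0 = set(), 0
    for start in pixels:
        if start in seen:
            continue
        b0 += 1
        stack = [start]
        while stack:
            i, j = stack.pop()
            if (i, j) in seen or (i, j) not in pixels:
                continue
            seen.add((i, j))
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    # Euler characteristic chi = V - E + F of the union of unit squares
    verts, edges = set(), set()
    for i, j in pixels:
        c = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
        verts.update(c)
        edges.update({frozenset((c[0], c[1])), frozenset((c[0], c[2])),
                      frozenset((c[1], c[3])), frozenset((c[2], c[3]))})
    chi = len(verts) - len(edges) + len(pixels)
    return b0, b0 - chi  # in 2-D: chi = b0 - b1

# A 3x3 ring of foreground pixels: one component, one hole
assert betti_numbers([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) == (1, 1)
```

To build the histograms of the paper, one would repeat this over binarizations at CT thresholds swept from -150 to 300 HU and accumulate b0, b1, and b1/b0 per threshold.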

Evaluation of Feature Robustness Against Technical Parameters in CT Radiomics: Verification of Phantom Study with Patient Dataset

  • Jin, Hyeongmin
  • Kim, Jong Hyo
Journal of Signal Processing Systems 2020 Journal Article, cited 1 times
Website
Recent advances in radiomics have shown promising results in prognostic and diagnostic studies with high-dimensional imaging feature analysis. However, radiomic features are known to be affected by technical parameters and feature extraction methodology. We evaluate the robustness of CT radiomic features against the technical parameters involved in CT acquisition and feature extraction procedures using a standardized phantom and verify the feature robustness by using patient cases. An ACR phantom was scanned with two tube currents, two reconstruction kernels, and two field-of-view sizes. A total of 47 radiomic features of textures and first-order statistics were extracted from the homogeneous region of all scans. Intrinsic variability was measured to identify unstable features vulnerable to inherent CT noise and texture. A susceptibility index was defined to represent the susceptibility to the variation of a given technical parameter. Eighteen radiomic features were shown to be intrinsically unstable under the reference condition. The features were more susceptible to variation of the reconstruction kernel than to other sources of variation. The feature robustness evaluated on the phantom CT correlated with that evaluated on clinical CT scans. We revealed that a number of scan parameters can significantly affect the radiomic features. These characteristics should be considered in a radiomic study when different scan parameters are used in a clinical dataset.
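One simple way to quantify how a feature varies across acquisition settings is the coefficient of variation of the feature value over repeated scans. This is a simplified stand-in, not the susceptibility index defined in the paper (which is measured relative to intrinsic variability):

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CoV (%) of one radiomic feature measured under several scan settings.
    Features with a large CoV are candidates for exclusion as non-robust."""
    m = mean(values)
    if m == 0:
        raise ValueError("CoV is undefined for zero-mean features")
    return 100.0 * stdev(values) / abs(m)

# Example: one feature measured under 4 parameter combinations
cov = coefficient_of_variation([10.1, 10.3, 9.9, 10.2])  # small -> robust
```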

Retina U-Net: Embarrassingly Simple Exploitation of Segmentation Supervision for Medical Object Detection

  • Paul F. Jaeger
  • Simon A. A. Kohl
  • Sebastian Bickelhaupt
  • Fabian Isensee
  • Tristan Anselm Kuder
  • Heinz-Peter Schlemmer
  • Klaus H. Maier-Hein
2020 Conference Paper, cited 33 times
Website
The task of localizing and categorizing objects in medical images often remains formulated as a semantic segmentation problem. This approach, however, only indirectly solves the coarse localization task by predicting pixel-level scores, requiring ad-hoc heuristics when mapping back to object-level scores. State-of-the-art object detectors on the other hand, allow for individual object scoring in an end-to-end fashion, while ironically trading in the ability to exploit the full pixel-wise supervision signal. This can be particularly disadvantageous in the setting of medical image analysis, where data sets are notoriously small. In this paper, we propose Retina U-Net, a simple architecture, which naturally fuses the Retina Net one-stage detector with the U-Net architecture widely used for semantic segmentation in medical images. The proposed architecture recaptures discarded supervision signals by complementing object detection with an auxiliary task in the form of semantic segmentation without introducing the additional complexity of previously proposed two-stage detectors. We evaluate the importance of full segmentation supervision on two medical data sets, provide an in-depth analysis on a series of toy experiments and show how the corresponding performance gain grows in the limit of small data sets. Retina U-Net yields strong detection performance only reached by its more complex two-staged counterparts. Our framework including all methods implemented for operation on 2D and 3D images is available at github.com/pfjaeger/medicaldetectiontoolkit.

Feasibility study of a multi-criteria decision-making based hierarchical model for multi-modality feature and multi-classifier fusion: Applications in medical prognosis prediction

  • He, Qiang
  • Li, Xin
  • Kim, DW Nathan
  • Jia, Xun
  • Gu, Xuejun
  • Zhen, Xin
  • Zhou, Linghong
Information Fusion 2020 Journal Article, cited 0 times
Website

Descriptions and evaluations of methods for determining surface curvature in volumetric data

  • Hauenstein, Jacob D.
  • Newman, Timothy S.
Computers & Graphics 2020 Journal Article, cited 0 times
Website
Highlights • Methods using convolution or fitting are often the most accurate. • The existing TE method is fast and accurate on noise-free data. • The OP method is faster than existing, similarly accurate methods on real data. • Even modest errors in curvature notably impact curvature-based renderings. • On real data, GSTH, GSTI, and OP produce the best curvature-based renderings. Abstract Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.

Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features

  • Hasan, Ali M.
  • Al-Jawad, Mohammed M.
  • Jalab, Hamid A.
  • Shaiba, Hadil
  • Ibrahim, Rabha W.
  • Al-Shamasneh, Ala’a R.
Entropy 2020 Journal Article, cited 0 times
Website
Many health systems around the world have collapsed due to limited capacity and a dramatic increase of suspected COVID-19 cases. What has emerged is the need for an efficient, quick, and accurate method to mitigate the overloading of radiologists’ efforts to diagnose the suspected cases. This study presents the combination of deep-learned features with Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia, and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices. Then histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan undergoes feature extraction involving deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Subsequently, combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia, and healthy cases. The maximum achieved accuracy for classifying the collected dataset comprising 321 patients is 99.68%.
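The abstract does not give the exact definition of the Q-deformed entropy used; the Tsallis q-entropy below is a well-known member of the same family of deformed entropies, S_q = (1 - sum p_i^q) / (q - 1), which recovers the Shannon entropy as q approaches 1. It is shown purely as an illustration of the idea, and the paper's deformation may differ:

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis q-entropy of a probability distribution (a q-deformed
    generalization of Shannon entropy; q -> 1 recovers Shannon)."""
    if abs(q - 1.0) < 1e-9:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs if p > 0)) / (q - 1.0)
```

In a radiomics pipeline, `probs` would be the normalized gray-level histogram of a CT region, and the entropy value would serve as one handcrafted feature.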

PET/CT radiomics signature of human papilloma virus association in oropharyngeal squamous cell carcinoma

  • Haider, S. P.
  • Mahajan, A.
  • Zeevi, T.
  • Baumeister, P.
  • Reichel, C.
  • Sharaf, K.
  • Forghani, R.
  • Kucukkaya, A. S.
  • Kann, B. H.
  • Judson, B. L.
  • Prasad, M. L.
  • Burtness, B.
  • Payabvash, S.
Eur J Nucl Med Mol Imaging 2020 Journal Article, cited 1 times
Website
PURPOSE: To devise, validate, and externally test PET/CT radiomics signatures for human papillomavirus (HPV) association in primary tumors and metastatic cervical lymph nodes of oropharyngeal squamous cell carcinoma (OPSCC). METHODS: We analyzed 435 primary tumors (326 for training, 109 for validation) and 741 metastatic cervical lymph nodes (518 for training, 223 for validation) using FDG-PET and non-contrast CT from a multi-institutional and multi-national cohort. Utilizing 1037 radiomics features per imaging modality and per lesion, we trained, optimized, and independently validated machine-learning classifiers for prediction of HPV association in primary tumors, lymph nodes, and combined "virtual" volumes of interest (VOI). PET-based models were additionally validated in an external cohort. RESULTS: Single-modality PET and CT final models yielded similar classification performance without significant difference in independent validation; however, models combining PET and CT features outperformed single-modality PET- or CT-based models, with receiver operating characteristic area under the curve (AUC) of 0.78, and 0.77 for prediction of HPV association using primary tumor lesion features, in cross-validation and independent validation, respectively. In the external PET-only validation dataset, final models achieved an AUC of 0.83 for a virtual VOI combining primary tumor and lymph nodes, and an AUC of 0.73 for a virtual VOI combining all lymph nodes. CONCLUSION: We found that PET-based radiomics signatures yielded similar classification performance to CT-based models, with potential added value from combining PET- and CT-based radiomics for prediction of HPV status. While our results are promising, radiomics signatures may not yet substitute tissue sampling for clinical decision-making.

Radiomics feature reproducibility under inter-rater variability in segmentations of CT images

  • Haarburger, C.
  • Muller-Franzes, G.
  • Weninger, L.
  • Kuhl, C.
  • Truhn, D.
  • Merhof, D.
Sci Rep 2020 Journal Article, cited 0 times
Website
Identifying image features that are robust with respect to segmentation variability is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyse radiomics feature reproducibility in two phases: first with manual segmentations provided by four expert readers and second with probabilistic automated segmentations using a recently developed neural network (PHiseg). We test feature reproducibility on three publicly available datasets of lung, kidney and liver lesions. We find consistent results both over manual and automated segmentations in all three datasets and show that there are subsets of radiomic features which are robust against segmentation variability and other radiomic features which are prone to poor reproducibility under differing segmentations. By providing a detailed analysis of robustness of the most common radiomics features across several datasets, we envision that more reliable and reproducible radiomic models can be built in the future based on this work.
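Reproducibility screening of this kind is commonly done with an intraclass correlation coefficient across raters. As a sketch (the abstract does not specify the ICC form, so a one-way random-effects ICC(1,1) is assumed here):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1).
    ratings: n targets (e.g., lesions), each a list of k values of one
    radiomic feature computed from k raters' segmentations."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfect agreement between two raters -> ICC = 1
assert abs(icc_oneway([[1, 1], [2, 2], [3, 3]]) - 1.0) < 1e-9
```

A feature would then be flagged as robust when its ICC exceeds some threshold (0.75 is a common choice in radiomics studies).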

Optimal Statistical Incorporation of Independent Feature Stability Information into Radiomics Studies

  • Götz, Michael
  • Maier-Hein, Klaus H
Sci Rep 2020 Journal Article, cited 0 times
Website

T2-FDL: A robust sparse representation method using adaptive type-2 fuzzy dictionary learning for medical image classification

  • Ghasemi, Majid
  • Kelarestaghi, Manoochehr
  • Eshghi, Farshad
  • Sharifi, Arash
Expert Systems with Applications 2020 Journal Article, cited 0 times
Website
In this paper, a robust sparse representation for medical image classification is proposed based on an adaptive type-2 fuzzy dictionary learning (T2-FDL) system. In the proposed method, sparse coding and dictionary learning processes are executed iteratively until a near-optimal dictionary is obtained. The sparse coding step aims to find a combination of dictionary atoms that represents the input data efficiently, while the dictionary learning step rigorously adjusts a minimum set of dictionary items. This two-step operation helps create an adaptive sparse representation algorithm by involving type-2 fuzzy sets in the design process of image classification. Since the existing image measurements are not made under the same conditions and with the same accuracy, the performance of medical diagnosis is always affected by noise and uncertainty. By introducing an adaptive type-2 fuzzy learning method, a better approximation is achieved in environments with higher degrees of uncertainty and noise. The experiments are executed over two open-access brain tumor magnetic resonance image databases, REMBRANDT and TCGA-LGG, from The Cancer Imaging Archive (TCIA). The experimental results of a brain tumor classification task show that the proposed T2-FDL method can adequately minimize the negative effects of uncertainty in the input images. The results demonstrate that T2-FDL outperforms other important classification methods in the literature in terms of accuracy, specificity, and sensitivity.

Imaging-AMARETTO: An Imaging Genomics Software Tool to Interrogate Multiomics Networks for Relevance to Radiography and Histopathology Imaging Biomarkers of Clinical Outcomes

  • Gevaert, O.
  • Nabian, M.
  • Bakr, S.
  • Everaert, C.
  • Shinde, J.
  • Manukyan, A.
  • Liefeld, T.
  • Tabor, T.
  • Xu, J.
  • Lupberger, J.
  • Haas, B. J.
  • Baumert, T. F.
  • Hernaez, M.
  • Reich, M.
  • Quintana, F. J.
  • Uhlmann, E. J.
  • Krichevsky, A. M.
  • Mesirov, J. P.
  • Carey, V.
  • Pochet, N.
JCO Clin Cancer Inform 2020 Journal Article, cited 1 times
Website
PURPOSE: The availability of increasing volumes of multiomics, imaging, and clinical data in complex diseases such as cancer opens opportunities for the formulation and development of computational imaging genomics methods that can link multiomics, imaging, and clinical data. METHODS: Here, we present the Imaging-AMARETTO algorithms and software tools to systematically interrogate regulatory networks derived from multiomics data within and across related patient studies for their relevance to radiography and histopathology imaging features predicting clinical outcomes. RESULTS: To demonstrate its utility, we applied Imaging-AMARETTO to integrate three patient studies of brain tumors, specifically, multiomics with radiography imaging data from The Cancer Genome Atlas (TCGA) glioblastoma multiforme (GBM) and low-grade glioma (LGG) cohorts and transcriptomics with histopathology imaging data from the Ivy Glioblastoma Atlas Project (IvyGAP) GBM cohort. Our results show that Imaging-AMARETTO recapitulates known key drivers of tumor-associated microglia and macrophage mechanisms, mediated by STAT3, AHR, and CCR2, and neurodevelopmental and stemness mechanisms, mediated by OLIG2. Imaging-AMARETTO provides interpretation of their underlying molecular mechanisms in light of imaging biomarkers of clinical outcomes and uncovers novel master drivers, THBS1 and MAP2, that establish relationships across these distinct mechanisms. CONCLUSION: Our network-based imaging genomics tools serve as hypothesis generators that facilitate the interrogation of known and uncovering of novel hypotheses for follow-up with experimental validation studies. We anticipate that our Imaging-AMARETTO imaging genomics tools will be useful to the community of biomedical researchers for applications to similar studies of cancer and other complex diseases with available multiomics, imaging, and clinical data.

Simultaneous emission and attenuation reconstruction in time-of-flight PET using a reference object

  • Garcia-Perez, P.
  • Espana, S.
EJNMMI Phys 2020 Journal Article, cited 0 times
Website
BACKGROUND: Simultaneous reconstruction of emission and attenuation images in time-of-flight (TOF) positron emission tomography (PET) does not provide a unique solution. In this study, we propose to solve this limitation by including additional information given by a reference object with known attenuation placed outside the patient. Different configurations of the reference object were studied including geometry, material composition, and activity, and an optimal configuration was defined. In addition, this configuration was tested for different timing resolutions and noise levels. RESULTS: The proposed strategy was tested in 2D simulations obtained by forward projection of available PET/CT data and noise was included using Monte Carlo techniques. Obtained results suggest that the optimal configuration corresponds to a water cylinder inserted in the patient table and filled with activity. In that case, mean differences between reconstructed and true images were below 10%. However, better results can be obtained by increasing the activity of the reference object. CONCLUSION: This study shows promising results that might allow to obtain an accurate attenuation map from pure TOF-PET data without prior knowledge obtained from CT, MRI, or transmission scans.

A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks

  • Galib, Shaikat M
  • Lee, Hyoung K
  • Guy, Christopher L
  • Riblett, Matthew J
  • Hugo, Geoffrey D
Med Phys 2020 Journal Article, cited 1 times
Website
PURPOSE: To develop and evaluate a method to automatically identify and quantify deformable image registration (DIR) errors between lung computed tomography (CT) scans for quality assurance (QA) purposes. METHODS: We propose a deep learning method to flag registration errors. The method involves preparation of a dataset for machine learning model training and testing, design of a three-dimensional (3D) convolutional neural network architecture that classifies registrations into good or poor classes, and evaluation of a metric called registration error index (REI) which provides a quantitative measure of registration error. RESULTS: Our study shows that, despite having limited number of training images available (10 CT scan pairs for training and 17 CT scan pairs for testing), the method achieves 0.882 AUC-ROC on the test dataset. Furthermore, the combined standard uncertainty of the estimated REI by our model lies within +/- 0.11 (+/- 11% of true REI value), with a confidence level of approximately 68%. CONCLUSIONS: We have developed and evaluated our method using original clinical registrations without generating any synthetic/simulated data. Moreover, test data were acquired from a different environment than that of training data, so that the method was validated robustly. The results of this study showed that our algorithm performs reasonably well in challenging scenarios.

Identifying BAP1 Mutations in Clear-Cell Renal Cell Carcinoma by CT Radiomics: Preliminary Findings

  • Feng, Zhan
  • Zhang, Lixia
  • Qi, Zhong
  • Shen, Qijun
  • Hu, Zhengyu
  • Chen, Feng
Frontiers in Oncology 2020 Journal Article, cited 0 times
Website
To evaluate the potential application of computed tomography (CT) radiomics in the prediction of BRCA1-associated protein 1 (BAP1) mutation status in patients with clear-cell renal cell carcinoma (ccRCC). In this retrospective study, clinical and CT imaging data of 54 patients were retrieved from The Cancer Genome Atlas–Kidney Renal Clear Cell Carcinoma database. Among these, 45 patients had wild-type BAP1 and nine patients had BAP1 mutation. The texture features of tumor images were extracted using the Matlab-based IBEX package. To produce class-balanced data and improve the stability of prediction, we performed data augmentation for the BAP1 mutation group during cross-validation. A model to predict BAP1 mutation status was constructed using Random Forest classification algorithms and was evaluated using leave-one-out cross-validation. The Random Forest model for predicting BAP1 mutation status achieved an accuracy of 0.83, a sensitivity of 0.72, a specificity of 0.87, a precision of 0.65, an AUC of 0.77, and an F-score of 0.68. CT radiomics is a potential and feasible method for predicting BAP1 mutation status in patients with ccRCC.
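The leave-one-out cross-validation scheme used in this study can be sketched generically. For self-containment, a trivial nearest-centroid stand-in replaces the Random Forest of the paper; the CV loop itself is the point:

```python
def loocv_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation: for each sample, train on all the
    others and test on the held-out one; return the overall accuracy."""
    hits = 0
    for i in range(len(X)):
        model = fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += predict(model, X[i]) == y[i]
    return hits / len(X)

def fit_centroids(X, y):
    """Per-class mean of a scalar feature (toy stand-in classifier)."""
    return {c: sum(x for x, lab in zip(X, y) if lab == c) /
               sum(1 for lab in y if lab == c) for c in set(y)}

def predict_centroid(model, x):
    return min(model, key=lambda c: abs(x - model[c]))

# Two well-separated classes of a single feature
X = [1.0, 1.2, 0.9, 5.0, 5.2, 4.8]
y = [0, 0, 0, 1, 1, 1]
acc = loocv_accuracy(X, y, fit_centroids, predict_centroid)
```

In the study itself, `fit` would train a Random Forest on the texture features (with augmentation of the minority BAP1-mutant class applied inside each training fold).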

Quantitative Imaging Informatics for Cancer Research

  • Fedorov, Andrey
  • Beichel, Reinhard
  • Kalpathy-Cramer, Jayashree
  • Clunie, David
  • Onken, Michael
  • Riesmeier, Jorg
  • Herz, Christian
  • Bauer, Christian
  • Beers, Andrew
  • Fillion-Robin, Jean-Christophe
  • Lasso, Andras
  • Pinter, Csaba
  • Pieper, Steve
  • Nolden, Marco
  • Maier-Hein, Klaus
  • Herrmann, Markus D
  • Saltz, Joel
  • Prior, Fred
  • Fennessy, Fiona
  • Buatti, John
  • Kikinis, Ron
JCO Clin Cancer Inform 2020 Journal Article, cited 0 times
Website
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS: QIICR was motivated by the 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION: Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community. 
Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.

Recurrent Attention Network for False Positive Reduction in the Detection of Pulmonary Nodules in Thoracic CT Scans

  • M. Mehdi Farhangi
  • Nicholas Petrick
  • Berkman Sahiner
  • Hichem Frigui
  • Amir A. Amini
  • Aria Pezeshk
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: Multi-view 2-D Convolutional Neural Networks (CNNs) and 3-D CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives that are generated in Computer-Aided Detection (CADe) systems for pulmonary nodules in thoracic CT scans. METHODS: In our approach, a deep network consisting of 2-D CNNs first processes slices individually. The features extracted in this stage are then passed to a Recurrent Neural Network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighed before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the Lung Nodule Analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3-D CNNs. Our results show that the proposed approach can encode the 3-D information in volumetric data effectively by achieving a sensitivity > 0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2-D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2-D architectures are being developed at a much faster rate compared to 3-D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2-D architectures.

The Veterans Affairs Precision Oncology Data Repository, a Clinical, Genomic, and Imaging Research Database

  • Elbers, Danne C.
  • Fillmore, Nathanael R.
  • Sung, Feng-Chi
  • Ganas, Spyridon S.
  • Prokhorenkov, Andrew
  • Meyer, Christopher
  • Hall, Robert B.
  • Ajjarapu, Samuel J.
  • Chen, Daniel C.
  • Meng, Frank
  • Grossman, Robert L.
  • Brophy, Mary T.
  • Do, Nhan V.
Patterns 2020 Journal Article, cited 0 times
Website
The Veterans Affairs Precision Oncology Data Repository (VA-PODR) is a large, nationwide repository of de-identified data on patients diagnosed with cancer at the Department of Veterans Affairs (VA). Data include longitudinal clinical data from the VA's nationwide electronic health record system and the VA Central Cancer Registry, targeted tumor sequencing data, and medical imaging data including computed tomography (CT) scans and pathology slides. A subset of the repository is available at the Genomic Data Commons (GDC) and The Cancer Imaging Archive (TCIA), and the full repository is available through the Veterans Precision Oncology Data Commons (VPODC). By releasing this de-identified dataset, we aim to advance Veterans' health care through enabling translational research on the Veteran population by a wide variety of researchers.

Long short-term memory networks predict breast cancer recurrence in analysis of consecutive MRIs acquired during the course of neoadjuvant chemotherapy

  • Drukker, Karen
  • Edwards, Alexandra
  • Papaioannou, John
  • Giger, Maryellen
  • Hahn, Horst K.
  • Mazurowski, Maciej A.
2020 Conference Paper, cited 0 times
Website
The purpose of this study was to assess long short-term memory networks in the prediction of recurrence-free survival in breast cancer patients using features extracted from MRIs acquired during the course of neoadjuvant chemotherapy. In the I-SPY1 dataset, up to 4 MRI exams were available per patient, acquired at pre-treatment, early-treatment, inter-regimen, and pre-surgery time points. Breast cancers were automatically segmented and 8 features describing kinetic curve characteristics were extracted. We assessed performance of long short-term memory networks in the prediction of recurrence-free survival status at 2 years and at 5 years post-surgery. For these predictions, we analyzed MRIs from women who had at least 2 (or 5) years of recurrence-free follow-up or experienced recurrence or death within that timeframe: 157 women and 73 women, respectively. One approach used features extracted from all available exams and the other approach used features extracted only from exams prior to the second cycle of neoadjuvant chemotherapy. The areas under the ROC curve in the prediction of recurrence-free survival status at 2 years post-surgery were 0.80, 95% confidence interval [0.68; 0.88], and 0.75 [0.62; 0.83] for networks trained with all 4 available exams and only the ‘early’ exams, respectively. Hazard ratios at the lowest, median, and highest quartile cut-points were 6.29 [2.91; 13.62], 3.27 [1.77; 6.03], 1.65 [0.83; 3.27] and 2.56 [1.20; 5.48], 3.01 [1.61; 5.66], 2.30 [1.14; 4.67]. Long short-term memory networks were able to predict recurrence-free survival in breast cancer patients, also when analyzing only MRIs acquired ‘early on’ during neoadjuvant treatment.
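Survival analyses like the one above (and the Kaplan-Meier curves in several other entries in this list) rest on the product-limit estimator. A minimal pure-Python sketch, illustrative rather than the study's actual software:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (event_time, S(t)) pairs."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    s, curve = 1.0, []
    at_risk = len(times)
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]  # events at this time
            removed += 1                # leaves the risk set either way
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        at_risk -= removed
    return curve
```

Stratifying patients into high- and low-risk groups by a rad-score cut-point and comparing their two curves (e.g., with a log-rank test) is the usual next step.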

Investigation of inter-fraction target motion variations in the context of pencil beam scanned proton therapy in non-small cell lung cancer patients

  • den Otter, L. A.
  • Anakotta, R. M.
  • Weessies, M.
  • Roos, C. T. G.
  • Sijtsema, N. M.
  • Muijs, C. T.
  • Dieters, M.
  • Wijsman, R.
  • Troost, E. G. C.
  • Richter, C.
  • Meijers, A.
  • Langendijk, J. A.
  • Both, S.
  • Knopf, A. C.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: For locally advanced-stage non-small cell lung cancer (NSCLC), inter-fraction target motion variations during the whole time span of a fractionated treatment course are assessed in a large and representative patient cohort. The primary objective is to develop a suitable motion monitoring strategy for pencil beam scanning proton therapy (PBS-PT) treatments of NSCLC patients during free breathing. METHODS: Weekly 4D computed tomography (4DCT; 41 patients) and daily 4D cone beam computed tomography (4DCBCT; 10 of 41 patients) scans were analyzed for a fully fractionated treatment course. Gross tumor volumes (GTVs) were contoured and the 3D displacement vectors of the centroid positions were compared for all scans. Furthermore, motion amplitude variations in different lung segments were statistically analyzed. The dosimetric impact of target motion variations and target motion assessment was investigated in exemplary patient cases. RESULTS: The median observed centroid motion was 3.4 mm (range: 0.2-12.4 mm) with an average variation of 2.2 mm (range: 0.1-8.8 mm). Ten of 32 patients (31.3%) with an initial motion <5 mm increased beyond a 5-mm motion amplitude during the treatment course. Motion observed in the 4DCBCT scans deviated on average 1.5 mm (range: 0.0-6.0 mm) from the motion observed in the 4DCTs. Larger motion variations for one example patient compromised treatment plan robustness while no dosimetric influence was seen due to motion assessment biases in another example case. CONCLUSIONS: Target motion variations were investigated during the course of radiotherapy for NSCLC patients. Patients with initial GTV motion amplitudes of < 2 mm can be assumed to be stable in motion during the treatment course. For treatments of NSCLC patients who exhibit motion amplitudes of > 2 mm, 4DCBCT should be considered for motion monitoring due to substantial motion variations observed.

AI-based Prognostic Imaging Biomarkers for Precision Neurooncology: the ReSPOND Consortium

  • Davatzikos, C.
  • Barnholtz-Sloan, J. S.
  • Bakas, S.
  • Colen, R.
  • Mahajan, A.
  • Quintero, C. B.
  • Font, J. C.
  • Puig, J.
  • Jain, R.
  • Sloan, A. E.
  • Badve, C.
  • Marcus, D. S.
  • Choi, Y. S.
  • Lee, S. K.
  • Chang, J. H.
  • Poisson, L. M.
  • Griffith, B.
  • Dicker, A. P.
  • Flanders, A. E.
  • Booth, T. C.
  • Rathore, S.
  • Akbari, H.
  • Sako, C.
  • Bilello, M.
  • Shukla, G.
  • Kazerooni, A. F.
  • Brem, S.
  • Lustig, R.
  • Mohan, S.
  • Bagley, S.
  • Nasrallah, M.
  • O'Rourke, D. M.
Neuro-oncology 2020 Journal Article, cited 0 times
Website

Immunotherapy in Metastatic Colorectal Cancer: Could the Latest Developments Hold the Key to Improving Patient Survival?

  • Damilakis, E.
  • Mavroudis, D.
  • Sfakianaki, M.
  • Souglakos, J.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Immunotherapy has considerably increased the number of anticancer agents in many tumor types including metastatic colorectal cancer (mCRC). Anti-PD-1 (programmed death 1) and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) immune checkpoint inhibitors (ICI) have been shown to benefit the mCRC patients with mismatch repair deficiency (dMMR) or high microsatellite instability (MSI-H). However, ICI is not effective in mismatch repair proficient (pMMR) colorectal tumors, which constitute a large population of patients. Several clinical trials evaluating the efficacy of immunotherapy combined with chemotherapy, radiation therapy, or other agents are currently ongoing to extend the benefit of immunotherapy to pMMR mCRC cases. In dMMR patients, MSI testing through immunohistochemistry and/or polymerase chain reaction can be used to identify patients that will benefit from immunotherapy. Next-generation sequencing has the ability to detect MSI-H using a low amount of nucleic acids and its application in clinical practice is currently being explored. Preliminary data suggest that radiomics is capable of discriminating MSI from microsatellite stable mCRC and may play a role as an imaging biomarker in the future. Tumor mutational burden, neoantigen burden, tumor-infiltrating lymphocytes, immunoscore, and gastrointestinal microbiome are promising biomarkers that require further investigation and validation.

Predicting the ISUP grade of clear cell renal cell carcinoma with multiparametric MR and multiphase CT radiomics

  • Cui, Enming
  • Li, Zhuoyong
  • Ma, Changyi
  • Li, Qing
  • Lei, Yi
  • Lan, Yong
  • Yu, Juan
  • Zhou, Zhipeng
  • Li, Ronggang
  • Long, Wansheng
  • Lin, Fan
Eur Radiol 2020 Journal Article, cited 0 times
Website
OBJECTIVE: To investigate externally validated magnetic resonance (MR)-based and computed tomography (CT)-based machine learning (ML) models for grading clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients with pathologically proven ccRCC in 2009-2018 were retrospectively included for model development and internal validation; patients from another independent institution and The Cancer Imaging Archive dataset were included for external validation. Features were extracted from T1-weighted, T2-weighted, corticomedullary-phase (CMP), and nephrographic-phase (NP) MR as well as precontrast-phase (PCP), CMP, and NP CT. CatBoost was used for ML-model investigation. The reproducibility of texture features was assessed using intraclass correlation coefficient (ICC). Accuracy (ACC) was used for ML-model performance evaluation. RESULTS: Twenty external and 440 internal cases were included. Among 368 and 276 texture features from MR and CT, 322 and 250 features with good to excellent reproducibility (ICC >= 0.75) were included for ML-model development. The best MR- and CT-based ML models satisfactorily distinguished high- from low-grade ccRCCs in internal (MR-ACC = 73% and CT-ACC = 79%) and external (MR-ACC = 74% and CT-ACC = 69%) validation. Compared to single-sequence or single-phase images, the classifiers based on all-sequence MR (71% to 73% in internal and 64% to 74% in external validation) and all-phase CT (77% to 79% in internal and 61% to 69% in external validation) images had significant increases in ACC. CONCLUSIONS: MR- and CT-based ML models are valuable noninvasive techniques for discriminating high- from low-grade ccRCCs, and multiparameter MR- and multiphase CT-based classifiers are potentially superior to those based on single-sequence or single-phase imaging. KEY POINTS: * Both the MR- and CT-based machine learning models are reliable predictors for differentiating high- from low-grade ccRCCs. 
* ML models based on multiparameter MR sequences and multiphase CT images potentially outperform those based on single-sequence or single-phase images in ccRCC grading.
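The ICC >= 0.75 reproducibility filter described in the abstract above can be sketched as follows. This is a minimal illustration using a two-way random-effects, absolute-agreement, single-rater ICC(2,1), not the authors' implementation; the function and feature names are illustrative.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    ratings: (n_subjects, n_raters) array holding one feature's values,
    e.g. the same texture feature extracted from repeated segmentations.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    # Mean squares of the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def reproducible_features(feature_tables, threshold=0.75):
    """Keep names of features whose ICC meets the threshold.

    feature_tables: dict mapping feature name -> (n_subjects, n_raters) array.
    """
    return [name for name, table in feature_tables.items()
            if icc_2_1(table) >= threshold]
```

A feature that both raters reproduce exactly yields ICC 1.0 and survives the filter; a feature with rater-independent values falls below 0.75 and is dropped.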

Parallel Implementation of the DRLSE Algorithm

  • Coelho, Daniel Popp
  • Furuie, Sérgio Shiguemi
2020 Conference Paper, cited 0 times
Website

Machine learning and radiomic phenotyping of lower grade gliomas: improving survival prediction

  • Choi, Yoon Seong
  • Ahn, Sung Soo
  • Chang, Jong Hee
  • Kang, Seok-Gu
  • Kim, Eui Hyun
  • Kim, Se Hoon
  • Jain, Rajan
  • Lee, Seung-Koo
Eur Radiol 2020 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Recent studies have highlighted the importance of isocitrate dehydrogenase (IDH) mutational status in stratifying biologically distinct subgroups of gliomas. This study aimed to evaluate whether MRI-based radiomic features could improve the accuracy of survival predictions for lower grade gliomas over clinical and IDH status. MATERIALS AND METHODS: Radiomic features (n = 250) were extracted from preoperative MRI data of 296 lower grade glioma patients from our institutional (n = 205) and The Cancer Genome Atlas (TCGA)/The Cancer Imaging Archive (TCIA) (n = 91) datasets. For predicting overall survival, random survival forest (RSF) models were trained with radiomic features and non-imaging prognostic factors (age, resection extent, WHO grade, and IDH status) on the institutional dataset, and validated on the TCGA/TCIA dataset. The performance of the RSF model and the incremental value of radiomic features were assessed by time-dependent receiver operating characteristics. RESULTS: The radiomics RSF model identified 71 radiomic features to predict overall survival, which were successfully validated on the TCGA/TCIA dataset (iAUC, 0.620; 95% CI, 0.501-0.756). Relative to the RSF model built from the non-imaging prognostic parameters alone, the addition of radiomic features significantly improved the overall survival prediction accuracy (iAUC, 0.627 vs. 0.709; difference, 0.097; 95% CI, 0.003-0.209). CONCLUSION: Radiomic phenotyping with machine learning can improve survival prediction over clinical profile and genomic data for lower grade gliomas. KEY POINTS: * Radiomics analysis with machine learning can improve survival prediction over non-imaging factors (clinical and molecular profiles) for lower grade gliomas, across different institutions.

Evaluation of data augmentation via synthetic images for improved breast mass detection on mammograms using deep learning

  • Cha, K. H.
  • Petrick, N.
  • Pezeshk, A.
  • Graff, C. G.
  • Sharma, D.
  • Badal, A.
  • Sahiner, B.
J Med Imaging (Bellingham) 2020 Journal Article, cited 1 times
Website
We evaluated whether using synthetic mammograms for training data augmentation may reduce the effects of overfitting and increase the performance of a deep learning algorithm for breast mass detection. Synthetic mammograms were generated using in silico procedural analytic breast and breast mass modeling algorithms followed by simulated x-ray projections of the breast models into mammographic images. In silico breast phantoms containing masses were modeled across the four BI-RADS breast density categories, and the masses were modeled with different sizes, shapes, and margins. A Monte Carlo-based x-ray transport simulation code, MC-GPU, was used to project the three-dimensional phantoms into realistic synthetic mammograms. In total, 2000 mammograms with 2522 masses were generated to augment a real data set during training. From the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set, we used 1111 mammograms (1198 masses) for training, 120 mammograms (120 masses) for validation, and 361 mammograms (378 masses) for testing. We used Faster R-CNN for our deep learning network with pretraining from ImageNet using the ResNet-101 architecture. We compared the detection performance when the network was trained using different percentages of the real CBIS-DDSM training set (100%, 50%, and 25%), and when these subsets of the training set were augmented with 250, 500, 1000, and 2000 synthetic mammograms. Free-response receiver operating characteristic (FROC) analysis was performed to compare performance with and without the synthetic mammograms. We generally observed an improved test FROC curve when training with the synthetic images compared to training without them, and the amount of improvement depended on the number of real and synthetic images used in training. Our study shows that enlarging the training data with synthetic samples can increase the performance of deep learning systems.

The Impact of Normalization Approaches to Automatically Detect Radiogenomic Phenotypes Characterizing Breast Cancer Receptors Status

  • Castaldo, Rossana
  • Pane, Katia
  • Nicolai, Emanuele
  • Salvatore, Marco
  • Franzese, Monica
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
In breast cancer studies, combining quantitative radiomic with genomic signatures can help identify and characterize radiogenomic phenotypes as a function of molecular receptor status. Biomedical imaging processing lacks standards in radiomic feature normalization methods, and neglecting feature normalization can highly bias the overall analysis. This study evaluates the effect of several normalization techniques on the prediction of four clinical phenotypes, namely estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and triple negative (TN) status, from quantitative features. The Cancer Imaging Archive (TCIA) radiomic features from 91 T1-weighted Dynamic Contrast Enhancement MRI of invasive breast cancers were investigated in association with breast invasive carcinoma miRNA expression profiling from the Cancer Genome Atlas (TCGA). Three advanced machine learning techniques (Support Vector Machine, Random Forest, and Naive Bayesian) were investigated to distinguish between molecular prognostic indicators and achieved area under the ROC curve (AUC) values of 86%, 93%, 91%, and 91% for the prediction of ER+ versus ER-, PR+ versus PR-, HER2+ versus HER2-, and triple-negative, respectively. In conclusion, radiomic features enable discrimination of major breast cancer molecular subtypes and may yield a potential imaging biomarker for advancing precision medicine.
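The AUC values reported in the entry above can be computed nonparametrically as the probability that a classifier scores a randomly chosen positive case above a randomly chosen negative one (the normalized Mann-Whitney U statistic). A minimal numpy sketch, for illustration only and not tied to any of the cited implementations:

```python
import numpy as np

def roc_auc(scores, labels):
    """Nonparametric ROC AUC: P(score_pos > score_neg) + 0.5 * P(tie).

    scores: predicted probabilities or decision values.
    labels: 1 for the positive class (e.g. ER+), 0 for the negative class.
    Equivalent to the Mann-Whitney U statistic divided by n_pos * n_neg.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every (positive, negative) pair of scores via broadcasting
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A perfect ranking gives AUC 1.0, while a classifier that scores all cases equally gives the chance level of 0.5.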

Multimodal mixed reality visualisation for intraoperative surgical guidance

  • Cartucho, João
  • Shapira, David
  • Ashrafian, Hutan
  • Giannarou, Stamatia
International journal of computer assisted radiology and surgery 2020 Journal Article, cited 0 times
Website

Standardization of brain MR images across machines and protocols: bridging the gap for MRI-based radiomics

  • Carré, Alexandre
  • Klausner, Guillaume
  • Edjlali, Myriam
  • Lerousseau, Marvin
  • Briend-Diop, Jade
  • Sun, Roger
  • Ammari, Samy
  • Reuzé, Sylvain
  • Andres, Emilie Alvarez
  • Estienne, Théo
Sci Rep 2020 Journal Article, cited 0 times
Website

Head and neck cancer patient images for determining auto-segmentation accuracy in T2-weighted magnetic resonance imaging through expert manual segmentations

  • Cardenas, Carlos E
  • Mohamed, Abdallah S R
  • Yang, Jinzhong
  • Gooding, Mark
  • Veeraraghavan, Harini
  • Kalpathy-Cramer, Jayashree
  • Ng, Sweet Ping
  • Ding, Yao
  • Wang, Jihong
  • Lai, Stephen Y
  • Fuller, Clifton D
  • Sharp, Greg
Med Phys 2020 Dataset, cited 0 times
Website
PURPOSE: The use of magnetic resonance imaging (MRI) in radiotherapy treatment planning has rapidly increased due to its ability to evaluate patient's anatomy without the use of ionizing radiation and due to its high soft tissue contrast. For these reasons, MRI has become the modality of choice for longitudinal and adaptive treatment studies. Automatic segmentation could offer many benefits for these studies. In this work, we describe a T2-weighted MRI dataset of head and neck cancer patients that can be used to evaluate the accuracy of head and neck normal tissue auto-segmentation systems through comparisons to available expert manual segmentations. ACQUISITION AND VALIDATION METHODS: T2-weighted MRI images were acquired for 55 head and neck cancer patients. These scans were collected after radiotherapy computed tomography (CT) simulation scans using a thermoplastic mask to replicate patient treatment position. All scans were acquired on a single 1.5 T Siemens MAGNETOM Aera MRI with two large four-channel flex phased-array coils. The scans covered the region encompassing the nasopharynx region cranially and supraclavicular lymph node region caudally, when possible, in the superior-inferior direction. Manual contours were created for the left/right submandibular gland, left/right parotids, left/right lymph node level II, and left/right lymph node level III. These contours underwent quality assurance to ensure adherence to predefined guidelines, and were corrected if edits were necessary. DATA FORMAT AND USAGE NOTES: The T2-weighted images and RTSTRUCT files are available in DICOM format. The regions of interest are named based on AAPM's Task Group 263 nomenclature recommendations (Glnd_Submand_L, Glnd_Submand_R, LN_Neck_II_L, Parotid_L, Parotid_R, LN_Neck_II_R, LN_Neck_III_L, LN_Neck_III_R). 
This dataset is available on The Cancer Imaging Archive (TCIA) by the National Cancer Institute under the collection "AAPM RT-MAC Grand Challenge 2019" (https://doi.org/10.7937/tcia.2019.bcfjqfqb). POTENTIAL APPLICATIONS: This dataset provides head and neck patient MRI scans to evaluate auto-segmentation systems on T2-weighted images. Additional anatomies could be provided at a later time to enhance the existing library of contours.

Formal methods for prostate cancer gleason score and treatment prediction using radiomic biomarkers

  • Brunese, Luca
  • Mercaldo, Francesco
  • Reginelli, Alfonso
  • Santone, Antonella
Magnetic Resonance Imaging 2020 Journal Article, cited 11 times
Website

Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline

  • Bonavita, I.
  • Rafael-Palou, X.
  • Ceresa, M.
  • Piella, G.
  • Ribas, V.
  • Gonzalez Ballester, M. A.
Comput Methods Programs Biomed 2020 Journal Article, cited 3 times
Website
BACKGROUND AND OBJECTIVE: The early identification of malignant pulmonary nodules is critical for a better lung cancer prognosis and less invasive chemo or radio therapies. Nodule malignancy assessment done by radiologists is extremely useful for planning a preventive intervention but is, unfortunately, a complex, time-consuming and error-prone task. This explains the lack of large datasets containing radiologists' malignancy characterization of nodules; METHODS: In this article, we propose to assess nodule malignancy through 3D convolutional neural networks and to integrate it in an automated end-to-end existing pipeline of lung cancer detection. For training and testing purposes we used independent subsets of the LIDC dataset; RESULTS: Adding the probabilities of nodule malignancy to a baseline lung cancer pipeline improved its F1-weighted score by 14.7%, whereas integrating the malignancy model itself using transfer learning outperformed the baseline prediction by 11.8% in F1-weighted score; CONCLUSIONS: Despite the limited size of the lung cancer datasets, integrating predictive models of nodule malignancy improves prediction of lung cancer.

Multiparametric MRI and auto-fixed volume of interest-based radiomics signature for clinically significant peripheral zone prostate cancer

  • Bleker, J.
  • Kwee, T. C.
  • Dierckx, Rajo
  • de Jong, I. J.
  • Huisman, H.
  • Yakar, D.
Eur Radiol 2020 Journal Article, cited 2 times
Website
OBJECTIVES: To create a radiomics approach based on multiparametric magnetic resonance imaging (mpMRI) features extracted from an auto-fixed volume of interest (VOI) that quantifies the phenotype of clinically significant (CS) peripheral zone (PZ) prostate cancer (PCa). METHODS: This study included 206 patients with 262 prospectively called mpMRI prostate imaging reporting and data system 3-5 PZ lesions. Gleason scores > 6 were defined as CS PCa. Features were extracted with an auto-fixed 12-mm spherical VOI placed around a pin point in each lesion. The value of dynamic contrast-enhanced imaging (DCE), multivariate feature selection and extreme gradient boosting (XGB) vs. univariate feature selection and random forest (RF), expert-based feature pre-selection, and the addition of image filters was investigated using the training (171 lesions) and test (91 lesions) datasets. RESULTS: The best model with features from T2-weighted (T2-w) + diffusion-weighted imaging (DWI) + DCE had an area under the curve (AUC) of 0.870 (95% CI 0.754-0.980). Removal of DCE features decreased AUC to 0.816 (95% CI 0.710-0.920), although not significantly (p = 0.119). Multivariate and XGB outperformed univariate and RF (p = 0.028). Expert-based feature pre-selection and image filters had no significant contribution. CONCLUSIONS: The phenotype of CS PZ PCa lesions can be quantified using a radiomics approach based on features extracted from T2-w + DWI using an auto-fixed VOI. Although DCE features improve diagnostic performance, this is not statistically significant. Multivariate feature selection and XGB should be preferred over univariate feature selection and RF. The developed model may be a valuable addition to traditional visual assessment in diagnosing CS PZ PCa. 
KEY POINTS: * T2-weighted and diffusion-weighted imaging features are essential components of a radiomics model for clinically significant prostate cancer; addition of dynamic contrast-enhanced imaging does not significantly improve diagnostic performance. * Multivariate feature selection and extreme gradient boosting outperform univariate feature selection and random forest. * The developed radiomics model that extracts multiparametric MRI features with an auto-fixed volume of interest may be a valuable addition to visual assessment in diagnosing clinically significant prostate cancer.

Isolation of Prostate Gland in T1-Weighted Magnetic Resonance Images using Computer Vision

  • Bhattacharya, Sayantan
  • Sharma, Apoorv
  • Gupta, Rinki
  • Bhan, Anupama
2020 Conference Proceedings, cited 0 times
Website

Deep-learning framework to detect lung abnormality – A study with chest X-Ray and lung CT scan images

  • Bhandary, Abhir
  • Prabhu, G. Ananth
  • Rajinikanth, V.
  • Thanaraj, K. Palani
  • Satapathy, Suresh Chandra
  • Robbins, David E.
  • Shasky, Charles
  • Zhang, Yu-Dong
  • Tavares, João Manuel R. S.
  • Raja, N. Sri Madhava
Pattern Recognition Letters 2020 Journal Article, cited 0 times
Website
Lung abnormalities are highly risky conditions in humans. The early diagnosis of lung abnormalities is essential to reduce the risk by enabling quick and efficient treatment. This research work aims to propose a Deep-Learning (DL) framework to examine lung pneumonia and cancer. This work proposes two different DL techniques to assess the considered problem: (i) The initial DL method, named a modified AlexNet (MAN), is proposed to classify chest X-Ray images into normal and pneumonia classes. In the MAN, classification is implemented using a Support Vector Machine (SVM), and its performance is compared against Softmax. Further, its performance is validated with other pre-trained DL techniques, such as AlexNet, VGG16, VGG19 and ResNet50. (ii) The second DL work implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA) based feature selection to enhance the feature vector. The performance of this DL framework is tested using benchmark lung cancer CT images of LIDC-IDRI, and a classification accuracy of 97.27% is attained.

Fuzzy volumetric delineation of brain tumor and survival prediction

  • Bhadani, Saumya
  • Mitra, Sushmita
  • Banerjee, Subhashis
Soft Computing 2020 Journal Article, cited 0 times
Website
A novel three-dimensional detailed delineation algorithm is introduced for Glioblastoma multiforme tumors in MRI. It efficiently delineates the whole tumor, enhancing core, edema and necrosis volumes using fuzzy connectivity and multi-thresholding, based on a single seed voxel. While the whole tumor volume delineation uses FLAIR and T2 MRI channels, the outlining of the enhancing core, necrosis and edema volumes employs the T1C channel. Discrete curve evolution is initially applied for multi-thresholding, to determine intervals around significant (visually critical) points, and a threshold is determined in each interval using bi-level Otsu's method or Li and Lee's entropy. This is followed by an interactive whole tumor volume delineation using FLAIR and T2 MRI sequences, requiring a single user-defined seed. An efficient and robust whole tumor extraction is executed using fuzzy connectedness and dynamic thresholding. Finally, the segmented whole tumor volume in the T1C MRI channel is again subjected to multi-level segmentation, to delineate its sub-parts, encompassing enhancing core, necrosis and edema. This is followed by survival prediction of patients using the concept of habitats. Qualitative and quantitative evaluation, on FLAIR, T2 and T1C MR sequences of 29 GBM patients, establishes its superiority over related methods, visually as well as in terms of Dice scores, Sensitivity and Hausdorff distance.
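The Dice similarity coefficient and Jaccard index used for evaluation in the entry above (and in several other entries in this list) can be computed directly from binary segmentation masks. A minimal sketch, with illustrative names and assuming non-empty masks:

```python
import numpy as np

def dice_jaccard(mask_a, mask_b):
    """Overlap metrics between two binary segmentation masks.

    Dice = 2*|A intersect B| / (|A| + |B|)
    Jaccard = |A intersect B| / |A union B|
    The two are related by Jaccard = Dice / (2 - Dice).
    Assumes at least one mask is non-empty (otherwise division by zero).
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / union
    return dice, jaccard
```

For two masks sharing one voxel, with sizes 2 and 1, this yields Dice 2/3 and Jaccard 1/2, matching the usual relation between the two scores.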

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C Chad
Journal of Magnetic Resonance Imaging 2020 Journal Article, cited 0 times
Website

Evaluating the Use of rCBV as a Tumor Grade and Treatment Response Classifier Across NCI Quantitative Imaging Network Sites: Part II of the DSC-MRI Digital Reference Object (DRO) Challenge

  • Bell, Laura C
  • Semmineh, Natenael
  • An, Hongyu
  • Eldeniz, Cihat
  • Wahl, Richard
  • Schmainda, Kathleen M
  • Prah, Melissa A
  • Erickson, Bradley J
  • Korfiatis, Panagiotis
  • Wu, Chengyue
  • Sorace, Anna G
  • Yankeelov, Thomas E
  • Rutledge, Neal
  • Chenevert, Thomas L
  • Malyarenko, Dariya
  • Liu, Yichu
  • Brenner, Andrew
  • Hu, Leland S
  • Zhou, Yuxiang
  • Boxerman, Jerrold L
  • Yen, Yi-Fen
  • Kalpathy-Cramer, Jayashree
  • Beers, Andrew L
  • Muzi, Mark
  • Madhuranthakam, Ananth J
  • Pinho, Marco
  • Johnson, Brian
  • Quarles, C Chad
Tomography 2020 Journal Article, cited 1 times
Website
We have previously characterized the reproducibility of brain tumor relative cerebral blood volume (rCBV) using a dynamic susceptibility contrast magnetic resonance imaging digital reference object across 12 sites using a range of imaging protocols and software platforms. As expected, reproducibility was highest when imaging protocols and software were consistent, but decreased when they were variable. Our goal in this study was to determine the impact of rCBV reproducibility for tumor grade and treatment response classification. We found that varying imaging protocols and software platforms produced a range of optimal thresholds for both tumor grading and treatment response, but the performance of these thresholds was similar. These findings further underscore the importance of standardizing acquisition and analysis protocols across sites and software benchmarking.

Radiogenomic-Based Survival Risk Stratification of Tumor Habitat on Gd-T1w MRI Is Associated with Biological Processes in Glioblastoma

  • Beig, Niha
  • Bera, Kaustav
  • Prasanna, Prateek
  • Antunes, Jacob
  • Correa, Ramon
  • Singh, Salendra
  • Saeed Bamashmos, Anas
  • Ismail, Marwa
  • Braman, Nathaniel
  • Verma, Ruchika
  • Hill, Virginia B
  • Statsevych, Volodymyr
  • Ahluwalia, Manmeet S
  • Varadan, Vinay
  • Madabhushi, Anant
  • Tiwari, Pallavi
Clin Cancer Res 2020 Journal Article, cited 0 times
Website
PURPOSE: To (i) create a survival risk score using radiomic features from the tumor habitat on routine MRI to predict progression-free survival (PFS) in glioblastoma and (ii) obtain a biological basis for these prognostic radiomic features, by studying their radiogenomic associations with molecular signaling pathways. EXPERIMENTAL DESIGN: Two hundred three patients with pretreatment Gd-T1w, T2w, T2w-FLAIR MRI were obtained from 3 cohorts: The Cancer Imaging Archive (TCIA; n = 130), Ivy GAP (n = 32), and Cleveland Clinic (n = 41). Gene-expression profiles of corresponding patients were obtained for TCIA cohort. For every study, following expert segmentation of tumor subcompartments (necrotic core, enhancing tumor, peritumoral edema), 936 3D radiomic features were extracted from each subcompartment across all MRI protocols. Using Cox regression model, radiomic risk score (RRS) was developed for every protocol to predict PFS on the training cohort (n = 130) and evaluated on the holdout cohort (n = 73). Further, Gene Ontology and single-sample gene set enrichment analysis were used to identify specific molecular signaling pathway networks associated with RRS features. RESULTS: Twenty-five radiomic features from the tumor habitat yielded the RRS. A combination of RRS with clinical (age and gender) and molecular features (MGMT and IDH status) resulted in a concordance index of 0.81 (P < 0.0001) on training and 0.84 (P = 0.03) on the test set. Radiogenomic analysis revealed associations of RRS features with signaling pathways for cell differentiation, cell adhesion, and angiogenesis, which contribute to chemoresistance in GBM. CONCLUSIONS: Our findings suggest that prognostic radiomic features from routine Gd-T1w MRI may also be significantly associated with key biological processes that affect response to chemotherapy in GBM.

Integration of proteomics with CT-based qualitative and radiomic features in high-grade serous ovarian cancer patients: an exploratory analysis

  • Beer, Lucian
  • Sahin, Hilal
  • Bateman, Nicholas W
  • Blazic, Ivana
  • Vargas, Hebert Alberto
  • Veeraraghavan, Harini
  • Kirby, Justin
  • Fevrier-Sullivan, Brenda
  • Freymann, John B
  • Jaffe, C Carl
European Radiology 2020 Journal Article, cited 1 times
Website

A Heterogeneous and Multi-Range Soft-Tissue Deformation Model for Applications in Adaptive Radiotherapy

  • Bartelheimer, Kathrin
2020 Thesis, cited 0 times
Website
During fractionated radiotherapy, anatomical changes result in uncertainties in the applied dose distribution. With increasing steepness of applied dose gradients, the relevance of patient deformations increases. Especially in proton therapy, small anatomical changes in the order of millimeters can result in large range uncertainties and therefore in substantial deviations from the planned dose. To quantify the anatomical changes, deformation models are required. With upcoming MR-guidance, the soft-tissue deformations gain visibility, but so far only few soft-tissue models meeting the requirements of high-precision radiotherapy exist. Most state-of-the-art models either lack anatomical detail or exhibit long computation times. In this work, a fast soft-tissue deformation model is developed which is capable of considering tissue properties of heterogeneous tissue. The model is based on the chainmail (CM)-concept, which is improved by three basic features. For the first time, rotational degrees of freedom are introduced into the CM-concept to improve the characteristic deformation behavior. A novel concept for handling multiple deformation initiators is developed to cope with global deformation input. And finally, a concept for handling various shapes of deformation input is proposed to provide a high flexibility concerning the design of deformation input. To demonstrate the model flexibility, it was coupled to a kinematic skeleton model for the head and neck region, which provides anatomically correct deformation input for the bones. For exemplary patient CTs, the combined model was shown to be capable of generating artificially deformed CT images with realistic appearance. This was achieved for small-range deformations in the order of interfractional deformations, as well as for large-range deformations like an arms-up to arms-down deformation, as can occur between images of different modalities. 
The deformation results showed a strong improvement in biofidelity, compared to the original chainmail-concept, as well as compared to clinically used image-based deformation methods. The computation times for the model are in the order of 30 min for single-threaded calculations; with simple code parallelization, times in the order of 1 min can be achieved. Applications that require realistic forward deformations of CT images will benefit from the improved biofidelity of the developed model. Envisioned applications are the generation of plan libraries and virtual phantoms, as well as data augmentation for deep learning approaches. Due to the low computation times, the model is also well suited for image registration applications. In this context, it will contribute to an improved calculation of accumulated dose, as is required in high-precision adaptive radiotherapy.

A novel fully automated MRI-based deep-learning method for classification of IDH mutation status in brain gliomas

  • Bangalore Yogananda, Chandan Ganesh
  • Shah, Bhavya R
  • Vejdani-Jahromi, Maryam
  • Nalawade, Sahil S
  • Murugesan, Gowtham K
  • Yu, Frank F
  • Pinho, Marco C
  • Wagner, Benjamin C
  • Mickey, Bruce
  • Patel, Toral R
Neuro-oncology 2020 Journal Article, cited 4 times
Website

Glioma Classification Using Deep Radiomics

  • Banerjee, Subhashis
  • Mitra, Sushmita
  • Masulli, Francesco
  • Rovetta, Stefano
SN Computer Science 2020 Journal Article, cited 1 times
Website

Semi-automatic classification of prostate cancer on multi-parametric MR imaging using a multi-channel 3D convolutional neural network

  • Aldoj, Nader
  • Lukas, Steffen
  • Dewey, Marc
  • Penzkofer, Tobias
European Radiology 2020 Journal Article, cited 1 times
Website

A Novel Approach to Improving Brain Image Classification Using Mutual Information-Accelerated Singular Value Decomposition

  • Al-Saffar, Zahraa A
  • Yildirim, Tülay
IEEE Access 2020 Journal Article, cited 0 times
Website

Pharmacokinetic modeling of dynamic contrast‐enhanced MRI using a reference region and input function tail

  • Ahmed, Zaki
  • Levesque, Ives R
Magnetic Resonance in Medicine 2020 Journal Article, cited 0 times
Website

3D-MCN: A 3D Multi-scale Capsule Network for Lung Nodule Malignancy Prediction

  • Afshar, P.
  • Oikonomou, A.
  • Naderkhani, F.
  • Tyrrell, P. N.
  • Plataniotis, K. N.
  • Farahani, K.
  • Mohammadi, A.
Scientific Reports 2020 Journal Article, cited 1 times
Website
Despite the advances in automatic lung cancer malignancy prediction, achieving high accuracy remains challenging. Existing solutions are mostly based on Convolutional Neural Networks (CNNs), which require a large amount of training data. Most of the developed CNN models are based only on the main nodule region, without considering the surrounding tissues. Obtaining high sensitivity is challenging with lung nodule malignancy prediction. Moreover, the interpretability of the proposed techniques should be a consideration when the end goal is to utilize the model in a clinical setting. Capsule networks (CapsNets) are new and revolutionary machine learning architectures proposed to overcome shortcomings of CNNs. Capitalizing on the success of CapsNet in biomedical domains, we propose a novel model for lung tumor malignancy prediction. The proposed framework, referred to as the 3D Multi-scale Capsule Network (3D-MCN), is uniquely designed to benefit from: (i) 3D inputs, providing information about the nodule in 3D; (ii) Multi-scale input, capturing the nodule's local features, as well as the characteristics of the surrounding tissues, and; (iii) CapsNet-based design, being capable of dealing with a small number of training samples. The proposed 3D-MCN architecture predicted lung nodule malignancy with a high accuracy of 93.12%, sensitivity of 94.94%, area under the curve (AUC) of 0.9641, and specificity of 90% when tested on the LIDC-IDRI dataset. When classifying patients as having a malignant condition (i.e., at least one malignant nodule is detected) or not, the proposed model achieved an accuracy of 83%, and a sensitivity and specificity of 84% and 81% respectively.

A novel CAD system to automatically detect cancerous lung nodules using wavelet transform and SVM

  • Abu Baker, Ayman A.
  • Ghadi, Yazeed
International Journal of Electrical and Computer Engineering (IJECE) 2020 Journal Article, cited 0 times
Website
A novel cancerous nodule detection algorithm for computed tomography (CT) images is presented in this paper. CT images are large, high-resolution images. In some cases, cancerous lung nodule lesions may be missed by the radiologist due to fatigue. The CAD system proposed in this paper can help the radiologist detect cancerous nodules in CT images. The proposed algorithm is divided into four stages. In the first stage, an enhancement algorithm is implemented to highlight the suspicious regions. In the second stage, the region of interest is detected. Adaptive SVM and wavelet transform techniques are then used to reduce the detected false-positive regions. The algorithm was evaluated on 60 cases (normal and cancerous), and it shows high sensitivity in detecting cancerous lung nodules, with a TP ratio of 94.5% and an FP ratio of 7 clusters/image.

Three-dimensional visualization of brain tumor progression based accurate segmentation via comparative holographic projection

  • Abdelazeem, R. M.
  • Youssef, D.
  • El-Azab, J.
  • Hassab-Elnaby, S.
  • Agour, M.
PLoS One 2020 Journal Article, cited 0 times
Website
We propose a new optical method based on comparative holographic projection for visual comparison between two abnormal follow-up magnetic resonance (MR) exams of glioblastoma patients to effectively visualize and assess tumor progression. First, the brain tissue and tumor areas are segmented from the MR exams using the fast marching method (FMM). The FMM approach is implemented on a computed pixel weight matrix based on an automated selection of a set of initialized target points. Thereafter, the associated phase holograms are calculated for the segmented structures based on an adaptive iterative Fourier transform algorithm (AIFTA). Within this approach, spatial multiplexing is applied to reduce the speckle noise. Furthermore, hologram modulation is performed to represent two different reconstruction schemes. In both schemes, all calculated holograms are superimposed into a single two-dimensional (2D) hologram, which is then displayed on a reflective phase-only spatial light modulator (SLM) for optical reconstruction. The optical reconstruction of the first scheme displays a 3D map of the tumor, allowing visualization of the tumor volume after treatment and at progression. The second scheme displays the follow-up exams side by side with tumor areas highlighted, so that each case can be assessed quickly. The proposed system can be used as a valuable tool for interpretation and assessment of tumor progression with respect to the treatment method, providing an improvement in diagnosis and treatment planning.

Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network

  • Zuo, Wangxia
  • Zhou, Fuqiang
  • He, Yuzhu
  • Li, Xiaosong
Med Phys 2019 Journal Article, cited 0 times
Website
OBJECTIVE: In the automatic lung nodule detection system, the authenticity of a large number of nodule candidates needs to be judged, which is a classification task. However, the variable shapes and sizes of the lung nodules have posed a great challenge to the classification of candidates. To solve this problem, we propose a method for classifying nodule candidates through three-dimensional (3D) convolution neural network (ConvNet) model which is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS: In this scheme, a novel 3D ConvNet model is preweighted with the weights of the trained 2D ConvNet model, and then the 3D ConvNet model is trained with 3D image volumes. In this way, the knowledge transfer method can make 3D network easier to converge and make full use of the spatial information of nodules with different sizes and shapes to improve the classification accuracy. RESULTS: The experimental results on 551 065 pulmonary nodule candidates in the LUNA16 dataset show that our method gains a competitive average score in the false-positive reduction track in lung nodule detection, with the sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS: The proposed method can maintain satisfactory classification accuracy even when the false-positive rate is extremely small in the face of nodules of different sizes and shapes. Moreover, as a transfer learning idea, the method to transfer knowledge from 2D ConvNet to 3D ConvNet is the first attempt to carry out full migration of parameters of various layers including convolution layers, full connection layers, and classifier between different dimensional models, which is more conducive to utilizing the existing 2D ConvNet resources and generalizing transfer learning schemes.
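The core of the transfer scheme above, initializing a 3D kernel from trained 2D weights, can be sketched in a few lines. The replicate-and-rescale rule below is a common inflation heuristic and an assumption here, not necessarily the authors' exact procedure.

```python
import numpy as np

def inflate_2d_to_3d(w2d, depth):
    """Inflate a 2D conv kernel (C_out, C_in, kH, kW) into a 3D kernel
    (C_out, C_in, depth, kH, kW) by replicating along the new depth axis
    and dividing by depth, so that the initial 3D response on a volume
    that is constant in depth matches the 2D response on one slice."""
    return np.repeat(w2d[:, :, None, :, :], depth, axis=2) / depth

w2d = np.random.randn(8, 1, 3, 3)
w3d = inflate_2d_to_3d(w2d, 3)
assert w3d.shape == (8, 1, 3, 3, 3)
# summing over depth recovers the original 2D kernel
assert np.allclose(w3d.sum(axis=2), w2d)
```

The same idea extends to fully connected layers by reshaping; the paper's contribution is migrating all layer types between the 2D and 3D models.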

Comparison of Active Learning Strategies Applied to Lung Nodule Segmentation in CT Scans

  • Zotova, Daria
  • Lisowska, Aneta
  • Anderson, Owen
  • Dilys, Vismantas
  • O’Neil, Alison
2019 Book Section, cited 0 times
Supervised machine learning techniques require large amounts of annotated training data to attain good performance. Active learning aims to ease the data collection process by automatically detecting which instances an expert should annotate in order to train a model as quickly and effectively as possible. Such strategies have been previously reported for medical imaging, but for other tasks than focal pathologies where there is high class imbalance and heterogeneous background appearance. In this study we evaluate different data selection approaches (random, uncertain, and representative sampling) and a semi-supervised model training procedure (pseudo-labelling), in the context of lung nodule segmentation in CT volumes from the publicly available LIDC-IDRI dataset. We find that active learning strategies allow us to train a model with equal performance but less than half of the annotation effort; data selection by uncertainty sampling offers the most gain, with the incorporation of representativeness or the addition of pseudo-labelling giving further small improvements. We conclude that active learning is a valuable tool and that further development of these strategies can play a key role in making diagnostic algorithms viable.
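The uncertainty-sampling strategy evaluated above can be sketched as a simple entropy ranking over the current model's predictions; this is a generic illustration, not the study's exact selection code.

```python
import numpy as np

def uncertainty_sample(probs, k):
    """Rank unlabelled instances by the predictive entropy of the model's
    class probabilities (n_samples, n_classes) and return the indices of
    the k most uncertain ones, i.e. those an expert should annotate next."""
    eps = 1e-12  # guard against log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]

probs = np.array([[0.50, 0.50],    # maximally uncertain
                  [0.90, 0.10],
                  [0.99, 0.01]])   # most confident
assert list(uncertainty_sample(probs, 2)) == [0, 1]
```

Representative sampling and pseudo-labelling, the other strategies compared in the chapter, would build on this same ranking by also considering feature-space coverage or by adding confident predictions to the training set.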

The Utilization of Consignable Multi-Model in Detection and Classification of Pulmonary Nodules

  • Zia, Muhammad Bilal
  • Juan, Zhao Juan
  • Rehman, Zia Ur
  • Javed, Kamran
  • Rauf, Saad Abdul
  • Khan, Arooj
International Journal of Computer Applications 2019 Journal Article, cited 2 times
Website
Early-stage detection and classification of pulmonary nodules from CT images is a complicated task. Risk assessment for malignancy is usually used to assist the physician in assessing the cancer stage and creating a follow-up prediction strategy. Due to differences in the size, structure, and location of the nodules, the classification of nodules in computer-assisted diagnostic systems has been a great challenge. While deep learning is currently the most effective solution in terms of image detection and classification, it requires a large amount of training data, which is typically not readily accessible in most routine frameworks of medical imaging. Moreover, the inexplicability of deep neural networks remains a difficulty for radiologists. In this paper, a Consignable Multi-Model (CMM) is proposed for the detection and classification of lung nodules, which first detects the lung nodule from CT images using different detection algorithms and then classifies the lung nodules using the Multi-Output DenseNet (MOD) technique. In order to enhance the interpretability of the proposed CMM, two inputs with multiple early outputs have been introduced in the dense blocks. MOD accepts the detected patches, identified in the detection phase, into its two inputs and then classifies them as benign or malignant using early outputs to gain more knowledge of a tumor. Experimental results on the LIDC-IDRI dataset demonstrate a 92.10% accuracy of CMM for lung nodule classification. CMM makes substantial progress in the diagnosis of nodules in contrast to the existing methods.

Deep Learning for Automated Medical Image Analysis

  • Wentao Zhu
2019 Thesis, cited 0 times
Website
Medical imaging is an essential tool in many areas of medical applications, used for both diagnosis and treatment. However, reading medical images and making diagnosis or treatment recommendations require specially trained medical specialists. The current practice of reading medical images is labor-intensive, time-consuming, costly, and error-prone. It would be more desirable to have a computer-aided system that can automatically make diagnosis and treatment recommendations. Recent advances in deep learning enable us to rethink the ways of clinician diagnosis based on medical images. Early detection has proven to be critical to give patients the best chance of recovery and survival. Advanced computer-aided diagnosis systems are expected to have high sensitivities and low false-positive rates. How to provide accurate diagnosis results and explore different types of clinical data is an important topic in the current computer-aided diagnosis research. In this thesis, we will introduce 1) mammograms for detecting breast cancers, the most frequently diagnosed solid cancer for U.S. women, 2) lung Computed Tomography (CT) images for detecting lung cancers, the most frequently diagnosed malignant cancer, and 3) head and neck CT images for automated delineation of organs at risk in radiotherapy. First, we will show how to employ the adversarial concept to generate the hard examples improving mammogram mass segmentation. Second, we will demonstrate how to use the weakly labelled data for the mammogram breast cancer diagnosis by efficiently designing deep learning for multi-instance learning. Third, the thesis will walk through the DeepLung system, which combines deep 3D ConvNets and Gradient Boosting Machine (GBM) for automated lung nodule detection and classification. Fourth, we will show how to use weakly labelled data to improve existing lung nodule detection systems by integrating deep learning with a probabilistic graphic model.
Lastly, we will demonstrate the AnatomyNet which is thousands of times faster and more accurate than previous methods on automated anatomy segmentation.

Preliminary Clinical Study of the Differences Between Interobserver Evaluation and Deep Convolutional Neural Network-Based Segmentation of Multiple Organs at Risk in CT Images of Lung Cancer

  • Zhu, Jinhan
  • Liu, Yimei
  • Zhang, Jun
  • Wang, Yixuan
  • Chen, Lixin
Frontiers in Oncology 2019 Journal Article, cited 0 times
Website
Background: In this study, publicly available datasets with organs at risk (OAR) structures were used as reference data to compare the differences among several observers. Convolutional neural network (CNN)-based auto-contouring was also used in the analysis. We evaluated the variations among observers and the effect of CNN-based auto-contouring in clinical applications. Materials and methods: A total of 60 publicly available lung cancer CT scans with structures were used; 48 cases were used for training, and the other 12 cases were used for testing. The structures of the datasets were used as reference data. Three observers and a CNN-based program performed contouring for the 12 testing cases, and the 3D dice similarity coefficient (DSC) and mean surface distance (MSD) were used to evaluate differences from the reference data. The three observers edited the CNN-based contours, and the results were compared to those of manual contouring. A value of P<0.05 was considered statistically significant. Results: Compared to the reference data, no statistically significant differences were observed for the DSCs and MSDs among the manual contouring performed by the three observers at the same institution for the heart, esophagus, spinal cord, and left and right lungs. The 95% confidence intervals (CI) and P-values of the CNN-based auto-contouring results compared to the manual results for the heart, esophagus, spinal cord, and left and right lungs were as follows: the DSCs were CNN vs. A: 0.914~0.939 (P = 0.004), 0.746~0.808 (P = 0.002), 0.866~0.887 (P = 0.136), 0.952~0.966 (P = 0.158) and 0.960~0.972 (P = 0.136); CNN vs. B: 0.913~0.936 (P = 0.002), 0.745~0.807 (P = 0.005), 0.864~0.894 (P = 0.239), 0.952~0.964 (P = 0.308), and 0.959~0.971 (P = 0.272); and CNN vs. C: 0.912~0.933 (P = 0.004), 0.748~0.804 (P = 0.002), 0.867~0.890 (P = 0.530), 0.952~0.964 (P = 0.308), and 0.958~0.970 (P = 0.480), respectively. The P-values of the MSDs were similar to those of the DSCs. The P-values for the heart and esophagus were smaller than 0.05. No significant differences were found between the edited CNN-based auto-contouring results and the manual results. Conclusion: For the spinal cord and both lungs, no statistically significant differences were found between CNN-based auto-contouring and manual contouring. Further modifications to contouring of the heart and esophagus are necessary. Overall, editing based on CNN-based auto-contouring can effectively shorten the contouring time without affecting the results. CNNs have considerable potential for automatic contouring applications.

Prior-aware Neural Network for Partially-Supervised Multi-Organ Segmentation

  • Zhou, Yuyin
  • Li, Zhe
  • Bai, Song
  • Wang, Chong
  • Chen, Xinlei
  • Han, Mei
  • Fishman, Elliot
  • Yuille, Alan L.
2019 Conference Paper, cited 0 times
Website
Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the “background” usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose Prior-aware Neural Network (PaNN) via explicitly incorporating anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to optimize directly using stochastic gradient descent, we propose to reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI2015 challenge “Multi-Atlas Labeling Beyond the Cranial Vault”, a competition on organ segmentation in the abdomen. We report an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%.

Improving Classification with CNNs using Wavelet Pooling with Nesterov-Accelerated Adam

  • Zhou, Wenjin
  • Rossetto, Allison
2019 Conference Proceedings, cited 0 times
Website
Wavelet pooling methods can improve the classification accuracy of Convolutional Neural Networks (CNNs). Combining wavelet pooling with the Nesterov-accelerated Adam (NAdam) gradient calculation method can further improve the accuracy of the CNN. We have implemented wavelet pooling with NAdam in this work using both a Haar wavelet (WavPool-NH) and a Shannon wavelet (WavPool-NS). The WavPool-NH and WavPool-NS methods are the most accurate of the methods we considered for the MNIST and LIDC-IDRI lung tumor data-sets. The WavPool-NH and WavPool-NS implementations have an accuracy of 95.92% and 95.52%, respectively, on the LIDC-IDRI data-set. This is an improvement over the 92.93% accuracy obtained on this data-set with the max pooling method. The WavPool methods also avoid overfitting, which is a concern with max pooling. We also found WavPool performed fairly well on the CIFAR-10 data-set; however, overfitting was an issue with all the methods we considered. Wavelet pooling, especially when combined with an adaptive gradient and wavelets chosen specifically for the data, has the potential to outperform current methods.
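A minimal sketch of Haar wavelet pooling: one level of a 2D Haar transform is applied and the low-pass (LL) subband is kept as the pooled map, halving each spatial dimension like 2x2 pooling but via low-pass filtering instead of max. This assumes even input dimensions and omits the detail subbands and the NAdam training loop used in the paper.

```python
import numpy as np

def haar_wavelet_pool(x):
    """One-level 2D Haar transform of a feature map (H, W) with even
    dimensions; returns the LL (approximation) subband, i.e. the
    orthonormal Haar low-pass coefficients of each 2x2 block."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    return (a + b + c + d) / 2.0  # orthonormal 2D Haar LL coefficient

x = np.arange(16, dtype=float).reshape(4, 4)
pooled = haar_wavelet_pool(x)
assert pooled.shape == (2, 2)
# under the orthonormal scaling, each coefficient is 2x the 2x2 block mean
assert pooled[0, 0] == (0 + 1 + 4 + 5) / 2.0
```

Because the LL subband is a smoothed summary of every pixel in the block rather than a single maximum, it discards less information, which is one intuition for the reduced overfitting reported above.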

Machine learning reveals multimodal MRI patterns predictive of isocitrate dehydrogenase and 1p/19q status in diffuse low-and high-grade gliomas.

  • Zhou, H.
  • Chang, K.
  • Bai, H. X.
  • Xiao, B.
  • Su, C.
  • Bi, W. L.
  • Zhang, P. J.
  • Senders, J. T.
  • Vallieres, M.
  • Kavouridis, V. K.
  • Boaro, A.
  • Arnaout, O.
  • Yang, L.
  • Huang, R. Y.
Journal of neuro-oncology 2019 Journal Article, cited 0 times
Website
PURPOSE: Isocitrate dehydrogenase (IDH) and 1p19q codeletion status are important in providing prognostic information as well as prediction of treatment response in gliomas. Accurate determination of the IDH mutation status and 1p19q co-deletion prior to surgery may complement invasive tissue sampling and guide treatment decisions. METHODS: Preoperative MRIs of 538 glioma patients from three institutions were used as a training cohort. Histogram, shape, and texture features were extracted from preoperative MRIs of T1 contrast enhanced and T2-FLAIR sequences. The extracted features were then integrated with age using a random forest algorithm to generate a model predictive of IDH mutation status and 1p19q codeletion. The model was then validated using MRIs from glioma patients in the Cancer Imaging Archive. RESULTS: Our model predictive of IDH achieved an area under the receiver operating characteristic curve (AUC) of 0.921 in the training cohort and 0.919 in the validation cohort. Age offered the highest predictive value, followed by shape features. Based on the top 15 features, the AUC was 0.917 and 0.916 for the training and validation cohort, respectively. The overall accuracy for 3 group prediction (IDH-wild type, IDH-mutant and 1p19q co-deletion, IDH-mutant and 1p19q non-codeletion) was 78.2% (155 correctly predicted out of 198). CONCLUSION: Using machine-learning algorithms, high accuracy was achieved in the prediction of IDH genotype in gliomas and moderate accuracy in a three-group prediction including IDH genotype and 1p19q codeletion.
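The modeling step, integrating age with radiomic features in a random forest and scoring by AUC, can be sketched on synthetic data. The feature names and the label-generating rule below are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# hypothetical stand-ins: patient age plus five radiomic texture features
age = rng.uniform(20, 80, n)
texture = rng.normal(0, 1, (n, 5))
X = np.column_stack([age, texture])
# synthetic label loosely driven by age, mimicking its high predictive value
y = (age + 10 * rng.normal(0, 1, n) > 50).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
assert auc > 0.9                                   # training-set AUC only
assert np.argmax(clf.feature_importances_) == 0    # age dominates, as in the paper
```

A real analysis would report AUC on a held-out validation cohort, as the study does with the Cancer Imaging Archive data, rather than on the training set.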

Bronchus Segmentation and Classification by Neural Networks and Linear Programming

  • Zhao, Tianyi
  • Yin, Zhaozheng
  • Wang, Jiao
  • Gao, Dashan
  • Chen, Yunqiang
  • Mao, Yunxiang
2019 Book Section, cited 0 times
Airway segmentation is a critical problem for lung disease analysis. However, building a complete airway tree is still a challenging problem because of the complex tree structure, and tracing the deep bronchi is not trivial in CT images because there are numerous small airways with various directions. In this paper, we develop two-stage 2D+3D neural networks and a linear programming based tracking algorithm for airway segmentation. Furthermore, we propose a bronchus classification algorithm based on the segmentation results. Our algorithm is evaluated on a dataset collected from 4 resources. We achieved the dice coefficient of 0.94 and F1 score of 0.86 by a centerline based evaluation metric, compared to the ground-truth manually labeled by our radiologists.

A radiomics nomogram based on multiparametric MRI might stratify glioblastoma patients according to survival

  • Zhang, Xi
  • Lu, Hongbing
  • Tian, Qiang
  • Feng, Na
  • Yin, Lulu
  • Xu, Xiaopan
  • Du, Peng
  • Liu, Yang
European Radiology 2019 Journal Article, cited 0 times

Comparison of CT and MRI images for the prediction of soft-tissue sarcoma grading and lung metastasis via a convolutional neural networks model

  • Zhang, L.
  • Ren, Z.
Clin Radiol 2019 Journal Article, cited 0 times
Website
AIM: To realise the automated prediction of soft-tissue sarcoma (STS) grading and lung metastasis based on computed tomography (CT), T1-weighted (T1W) magnetic resonance imaging (MRI), and fat-suppressed T2-weighted MRI (FST2W) via a convolutional neural network (CNN) model. MATERIALS AND METHODS: MRI and CT images of 51 patients diagnosed with STS were analysed retrospectively. The patients could be divided into three groups based on disease grading: high-grade group (n=28), intermediate-grade group (n=15), and low-grade group (n=8). Among these patients, 32 had lung metastasis, while the remaining 19 had no lung metastasis. The data were divided into the training, validation, and testing groups according to the ratio of 5:2:3. The receiver operating characteristic (ROC) curves and accuracy values were acquired using the testing dataset to evaluate the performance of the CNN model. RESULTS: For STS grading, the accuracy of the T1W, FST2W, CT, and the fusion of T1W and FST2W testing data were 0.86, 0.89, 0.86, and 0.85, respectively. The corresponding area under the curve (AUC) values were 0.96, 0.97, 0.97, and 0.94, respectively. For the prediction of lung metastasis, the accuracy of the T1W, FST2W, CT, and the fusion of T1W and FST2W test data were 0.92, 0.93, 0.88, and 0.91, respectively. The corresponding AUC values were 0.97, 0.96, 0.95, and 0.95, respectively. FST2W MRI performed best for predicting STS grading and lung metastasis. CONCLUSION: MRI and CT images combined with the CNN model can be useful for making predictions regarding STS grading and lung metastasis, thus providing help for patient diagnosis and treatment.

Brain tumor detection based on Naïve Bayes Classification

  • Zaw, Hein Tun
  • Maneerat, Noppadol
  • Win, Khin Yadanar
2019 Conference Paper, cited 2 times
Website
Brain cancer is caused by populations of abnormal cells, called glial cells, that arise in the brain. The number of patients with brain cancer has been increasing as the population ages, making it a worldwide health problem. The objective of this paper is to develop a method to detect brain tissues affected by cancer, especially by the grade-4 tumor glioblastoma multiforme (GBM). GBM is one of the most malignant cancerous brain tumors, as it is fast growing and more likely to spread to other parts of the brain. In this paper, Naïve Bayes classification is utilized for accurate recognition of the tumor region containing all spreading cancerous tissues. A brain MRI database, preprocessing, morphological operations, pixel subtraction, maximum entropy thresholding, statistical feature extraction, and a Naïve Bayes classifier-based prediction algorithm are used in this research. The goal of this method is to detect the tumor area in different brain MRI images and to predict whether the detected area is a tumor or not. Compared to other methods, this method can properly detect tumors located in different regions of the brain, including the middle region (aligned with eye level), which is a significant advantage of this method. When tested on 50 MRI images, this method achieves an 81.25% detection rate on tumor images and a 100% detection rate on non-tumor images, with an overall accuracy of 94%.
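The final classification step, statistical features from the candidate region fed to a Naïve Bayes classifier, can be sketched as follows; the three features and their distributions are hypothetical stand-ins for the statistical features extracted in the paper.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
# hypothetical statistical features (e.g. mean intensity, variance, entropy)
# extracted from the thresholded candidate region of each MRI
tumor = rng.normal(loc=[0.7, 0.2, 0.8], scale=0.05, size=(25, 3))
normal = rng.normal(loc=[0.4, 0.1, 0.5], scale=0.05, size=(25, 3))
X = np.vstack([tumor, normal])
y = np.array([1] * 25 + [0] * 25)  # 1 = tumor, 0 = non-tumor

nb = GaussianNB().fit(X, y)
acc = nb.score(X, y)
assert acc > 0.9  # the synthetic classes are well separated
```

Gaussian Naïve Bayes fits a per-class normal distribution to each feature independently, which suits low-dimensional statistical descriptors like these.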

Prediction of pathologic stage in non-small cell lung cancer using machine learning algorithm based on CT image feature analysis

  • Yu, L.
  • Tao, G.
  • Zhu, L.
  • Wang, G.
  • Li, Z.
  • Ye, J.
  • Chen, Q.
BMC cancer 2019 Journal Article, cited 11 times
Website
PURPOSE: To explore imaging biomarkers that can be used for diagnosis and prediction of pathologic stage in non-small cell lung cancer (NSCLC) using multiple machine learning algorithms based on CT image feature analysis. METHODS: Patients with stage IA to IV NSCLC were included, and the whole dataset was divided into training and testing sets and an external validation set. To tackle imbalanced datasets in NSCLC, we generated a new dataset and achieved equilibrium of class distribution by using the SMOTE algorithm. The datasets were randomly split into a training/testing set. We calculated the importance value of CT image features by means of the mean decrease in Gini impurity generated by the random forest algorithm and selected optimal features according to feature importance (mean decrease in Gini impurity > 0.005). The performance of the prediction model in the training and testing sets was evaluated from the perspectives of classification accuracy, average precision (AP) score and precision-recall curve. The predictive accuracy of the model was externally validated using lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) samples from the TCGA database. RESULTS: The prediction model that incorporated nine image features exhibited high classification accuracy, precision and recall scores in the training and testing sets. In the external validation, the predictive accuracy of the model in LUAD outperformed that in LUSC. CONCLUSIONS: The pathologic stage of patients with NSCLC can be accurately predicted based on CT image features, especially for LUAD. Our findings extend the application of machine learning algorithms in CT image feature prediction for pathologic staging and identify potential imaging biomarkers that can be used for diagnosis of pathologic stage in NSCLC patients.
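The feature-selection rule, keeping features whose mean decrease in Gini impurity from a random forest exceeds 0.005, can be sketched on synthetic data; the dataset below is an illustrative stand-in for the CT image features, and the SMOTE balancing step is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for CT image features: 5 informative out of 200
X, y = make_classification(n_samples=300, n_features=200, n_informative=5,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# keep features whose mean decrease in Gini impurity exceeds 0.005,
# mirroring the threshold used in the paper
selected = np.where(rf.feature_importances_ > 0.005)[0]
assert 0 < len(selected) < 200  # uninformative features fall below the cut
X_sel = X[:, selected]
assert X_sel.shape[0] == 300
```

In scikit-learn, `feature_importances_` is exactly the normalized mean decrease in impurity across trees, so the threshold translates directly.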

Correlative hierarchical clustering-based low-rank dimensionality reduction of radiomics-driven phenotype in non-small cell lung cancer

  • Bardia Yousefi
  • Nariman Jahani
  • Michael J. LaRiviere
  • Eric Cohen
  • Meng-Kang Hsieh
  • José Marcio Luna
  • Rhea D. Chitalia
  • Jeffrey C. Thompson
  • Erica L. Carpenter
  • Sharyn I. Katz
  • Despina Kontos
2019 Conference Paper, cited 0 times
Website
Background: Lung cancer is one of the most common cancers in the United States and the most fatal, with 142,670 deaths in 2019. Accurately determining tumor response is critical to clinical treatment decisions, ultimately impacting patient survival. To better differentiate between non-small cell lung cancer (NSCLC) responders and non-responders to therapy, radiomic analysis is emerging as a promising approach to identify associated imaging features undetectable by the human eye. However, the plethora of variables extracted from an image may actually undermine the performance of computer-aided prognostic assessment, known as the curse of dimensionality. In the present study, we show that correlative-driven hierarchical clustering improves high-dimensional radiomics-based feature selection and dimensionality reduction, ultimately predicting overall survival in NSCLC patients. Methods: To select features for high-dimensional radiomics data, a correlation-incorporated hierarchical clustering algorithm automatically categorizes features into several groups. The truncation distance in the resulting dendrogram graph is used to control the categorization of the features, initiating low-rank dimensionality reduction in each cluster, and providing descriptive features for Cox proportional hazards (CPH)-based survival analysis. Using a publicly available NSCLC radiogenomic dataset of 204 patients’ CT images, 429 established radiomics features were extracted. Low-rank dimensionality reduction via principal component analysis (PCA) was employed (k=1, n<1) to find the representative components of each cluster of features and calculate cluster robustness using the relative weighted consistency metric. Results: Hierarchical clustering categorized radiomic features into several groups without primary initialization of cluster numbers, using the correlation distance metric (as a function) to truncate the resulting dendrogram at different distances.
The dimensionality was reduced from 429 to 67 features (for a truncation distance of 0.1). The robustness within the features in clusters varied from -1.12 to -30.02 for truncation distances of 0.1 to 1.8, respectively, which indicated that the robustness decreases with increasing truncation distance, when a smaller number of feature classes (i.e., clusters) is selected. The best multivariate CPH survival model had a C-statistic of 0.71 for a truncation distance of 0.1, outperforming conventional PCA approaches by 0.04, even when the same number of principal components was considered for feature dimensionality. Conclusions: The correlative hierarchical clustering algorithm's truncation distance is directly associated with the robustness of the clusters of features selected and can effectively reduce feature dimensionality while improving outcome prediction.
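The correlation-driven hierarchical clustering with per-cluster PCA described above can be sketched as follows. The 1-|r| correlation distance and the truncation of the dendrogram at 0.1 follow the abstract, while the average linkage and the synthetic correlated feature blocks are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 100
# two blocks of mutually correlated synthetic "radiomic" features
base1, base2 = rng.normal(size=(2, n))
X = np.column_stack([base1 + 0.1 * rng.normal(size=n) for _ in range(4)] +
                    [base2 + 0.1 * rng.normal(size=n) for _ in range(4)])

# correlation distance 1 - |r|, condensed for scipy's linkage
corr = np.corrcoef(X, rowvar=False)
dist = 1 - np.abs(corr)
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method='average')
labels = fcluster(Z, t=0.1, criterion='distance')  # truncate dendrogram at 0.1

# one principal component per cluster as its representative feature
reduced = np.column_stack([
    PCA(n_components=1).fit_transform(X[:, labels == c])
    for c in np.unique(labels)])
assert reduced.shape == (n, len(np.unique(labels)))
assert len(np.unique(labels)) < X.shape[1]  # dimensionality was reduced
```

The representative components would then enter a Cox proportional hazards model; a larger truncation distance merges more features per cluster, which is the trade-off between dimensionality and cluster robustness discussed in the results.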

A Novel Deep Learning Framework for Standardizing the Label of OARs in CT

  • Yang, Qiming
  • Chao, Hongyang
  • Nguyen, Dan
  • Jiang, Steve
2019 Conference Paper, cited 0 times
When organs at risk (OARs) are contoured in computed tomography (CT) images for radiotherapy treatment planning, the labels are often inconsistent, which severely hampers the collection and curation of clinical data for research purposes. Currently, data cleaning is mainly done manually, which is time-consuming. Existing methods for automatically relabeling OARs remain impractical with real patient data, due to inconsistent delineation and similar small-volume OARs. This paper proposes an improved data augmentation technique tailored to the characteristics of clinical data. In addition, a novel 3D non-local convolutional neural network is proposed, which includes a decision-making network with a voting strategy. The resulting model can automatically identify OARs and address the problems of existing methods, achieving accurate OAR re-labeling. We used partial data from a public head-and-neck dataset (HN_PETCT) for training, and then tested the model on datasets from three different medical institutions. We obtained state-of-the-art results for identifying 28 OARs in the head-and-neck region, and our model is capable of handling multi-center datasets, indicating strong generalization ability. Compared to the baseline, our final model achieved a significant improvement in the average true positive rate (TPR) on the three test datasets (+8.27%, +2.39%, +5.53%, respectively). More importantly, the F1 score of a small-volume OAR with only 9 training samples increased from 28.63% to 91.17%.

Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression

  • Xu, Xiaoyang
2019 Thesis, cited 0 times
Website
Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient's pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, which aims to address the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluations of a patient's condition with CRLM are conducted by quantifying different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level and pixel level, to achieve the step-by-step segmentation of histopathology images. At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based approaches and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect the classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge the information from different magnification levels, incorporating contextual information to support the final decision. For the segmentation-based method, edge information from the image is integrated with the proposed fully convolutional neural network to further enhance the segmentation results.
At the cell level, nuclei related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two aspects: a weakly supervised nuclei detection and classification method is presented to model the nuclei in the CRLM by integrating a traditional image processing method and variational auto-encoder (VAE). A novel nuclei instance segmentation framework is proposed to boost the accuracy of the nuclei detection and segmentation using the idea of transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue level segmentation results by leveraging the statistical and spatial properties of the cells. At the pixel level, the segmentation problem is tackled by introducing the information from the immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the problem of insufficient pixel level segmentation. Afterwards, with the paired image and masks having been obtained, an end-to-end model is trained to achieve pixel level segmentation. Secondly, another novel weakly supervised approach based on the generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images to IHC stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel level segmentation.

Prostate cancer detection using residual networks

  • Xu, Helen
  • Baxter, John S H
  • Akin, Oguz
  • Cantor-Rivera, Diego
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
PURPOSE: To automatically identify regions where prostate cancer is suspected on multi-parametric magnetic resonance images (mp-MRI). METHODS: A residual network was implemented based on segmentations from an expert radiologist on T2-weighted, apparent diffusion coefficient map, and high b-value diffusion-weighted images. Mp-MRIs from 346 patients were used in this study. RESULTS: The residual network achieved a hit-or-miss accuracy of 93% for lesion detection, with an average Jaccard score of 71% comparing the agreement between network and radiologist segmentations. CONCLUSION: This paper demonstrated the ability of residual networks to learn features for prostate lesion segmentation.
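For context on the Jaccard agreement reported above, the two standard overlap scores for binary segmentation masks can be computed as below; this is a generic sketch, not the authors' evaluation code.

```python
import numpy as np

def overlap_scores(pred, truth):
    """Dice similarity coefficient and Jaccard index between two binary
    segmentation masks (assumes at least one mask is non-empty)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / union
    return dice, jaccard
```

The two scores are related by J = D / (2 − D), so a Jaccard of 71% corresponds to a Dice of roughly 83%.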

Semi-supervised Adversarial Model for Benign-Malignant Lung Nodule Classification on Chest CT

  • Xie, Yutong
  • Zhang, Jianpeng
  • Xia, Yong
Medical Image Analysis 2019 Journal Article, cited 0 times
Classification of benign-malignant lung nodules on chest CT is the most critical step in the early detection of lung cancer and prolongation of patient survival. Despite their success in image classification, deep convolutional neural networks (DCNNs) always require a large number of labeled training data, which are not available for most medical image analysis applications due to the work required in image acquisition and particularly image annotation. In this paper, we propose a semi-supervised adversarial classification (SSAC) model that can be trained using both labeled and unlabeled data for benign-malignant lung nodule classification. This model consists of an adversarial autoencoder-based unsupervised reconstruction network R, a supervised classification network C, and learnable transition layers that enable the adaptation of the image representation ability learned by R to C. The SSAC model has been extended to multi-view knowledge-based collaborative learning (MK-SSAC), aiming to employ three SSACs to characterize each nodule's overall appearance and its heterogeneity in shape and texture, respectively, and to perform such characterization on nine planar views. The MK-SSAC model has been evaluated on the benchmark LIDC-IDRI dataset and achieves an accuracy of 92.53% and an AUC of 95.81%, which are superior to the performance of other lung nodule classification and semi-supervised learning approaches.

Efficient copyright protection for three CT images based on quaternion polar harmonic Fourier moments

  • Xia, Zhiqiu
  • Wang, Xingyuan
  • Li, Xiaoxiao
  • Wang, Chunpeng
  • Unar, Salahuddin
  • Wang, Mingxu
  • Zhao, Tingting
Signal Processing 2019 Journal Article, cited 0 times

Automatic glioma segmentation based on adaptive superpixel

  • Wu, Yaping
  • Zhao, Zhe
  • Wu, Weiguo
  • Lin, Yusong
  • Wang, Meiyun
BMC Med Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Automatic glioma segmentation is of great significance for clinical practice. This study aims to propose an automatic method based on superpixels for glioma segmentation from T2-weighted Magnetic Resonance Imaging. METHODS: The proposed method mainly includes three steps. First, we propose an adaptive superpixel generation algorithm based on simple linear iterative clustering version with 0 parameter (ASLIC0). This algorithm can acquire a superpixel image with fewer superpixels and better fit the boundary of the region of interest (ROI) by automatically selecting the optimal number of superpixels. Second, we compose a training set by calculating the statistical, texture, curvature and fractal features for each superpixel. Third, a Support Vector Machine (SVM) is used to train the classification model based on the features of the second step. RESULTS: The experimental results on the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) show that the proposed method has good segmentation performance. The average Dice, Hausdorff distance, sensitivity, and specificity for the segmented tumor against the ground truth are 0.8492, 3.4697 pixels, 81.47%, and 99.64%, respectively. The proposed method shows good stability on high- and low-grade glioma samples. Comparative experimental results show that the proposed method has superior performance. CONCLUSIONS: The method provides a close match to expert delineation across all grades of glioma, leading to a fast and reproducible method of glioma segmentation.
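The second step, computing per-superpixel features for the SVM, can be illustrated with a minimal numpy sketch. It assumes an image and a precomputed superpixel label map, and covers only simple statistical features; superpixel generation itself and the texture, curvature, and fractal features are omitted.

```python
import numpy as np

def superpixel_statistics(image, labels):
    """Simple statistical features (mean, std, min, max) for every
    superpixel in `labels`; returns one feature vector per superpixel."""
    features = {}
    for sp in np.unique(labels):
        pixels = image[labels == sp]
        features[sp] = np.array([pixels.mean(), pixels.std(),
                                 pixels.min(), pixels.max()])
    return features
```

Each resulting vector would be one row of the SVM training set, labeled tumor or non-tumor from the ground-truth mask.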

Development of a method for automating effective patient diameter estimation for digital radiography

  • Worrall, Mark
2019 Thesis, cited 0 times
Website
National patient dose audit of paediatric radiographic examinations is complicated by a lack of data containing a direct measurement of the patient diameter in the examination orientation, or of height and weight. This has meant that National Diagnostic Reference Levels (NDRLs) for paediatric radiographic examinations have not been updated in the UK since 2000, despite significant changes in imaging technology over that period. This work is the first step in the development of a computational model intended to automate an estimate of paediatric patient diameter. Whilst the application is intended for a paediatric population, its development within this thesis uses an adult cohort. The computational model uses the radiographic image, the examination exposure factors and a priori information relating to the x-ray system and the digital detector. The computational model uses the Beer-Lambert law. A hypothesis was developed that this would work for clinical exposures despite its single-energy photon basis. Values of initial air kerma are estimated from the examination exposure factors and measurements made on the x-ray system. Values of kerma at the image receptor are estimated from a measurement of pixel value made at the centre of the radiograph and the measured calibration between pixel value and kerma for the image receptor. Values of effective linear attenuation coefficient are estimated from Monte Carlo simulations. Monte Carlo simulations were created for two x-ray systems. The simulations were optimised and thoroughly validated to ensure that any result obtained is accurate. The validation process compared simulation results with measurements made on the x-ray units themselves, producing values for effective linear attenuation coefficient that were demonstrated to be accurate. Estimates of attenuator thickness can be made using the estimated values for each variable.
The computational model was demonstrated to accurately estimate the thickness of single composition attenuators across a range of thicknesses and exposure factors on three different x-ray systems. The computational model was used in a clinical validation study of 20 adult patients undergoing AP abdominal x-ray examinations. For 19 of these examinations, it estimated the true patient thickness to within ±9%. This work presents a feasible computational model that could be used to automate the estimation of paediatric patient thickness during radiographic examinations allowing for automation of paediatric radiographic dose audit.
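The core Beer-Lambert inversion underlying the model can be written directly: with incident air kerma K0, receptor kerma Kd, and effective linear attenuation coefficient mu_eff, the attenuator thickness is t = ln(K0/Kd)/mu_eff. A minimal sketch follows; the function and variable names are illustrative, not taken from the thesis.

```python
import math

def estimate_thickness(k_incident, k_detector, mu_effective):
    """Invert the Beer-Lambert law K_d = K_0 * exp(-mu_eff * t) to
    estimate the attenuator (patient) thickness t."""
    if not 0.0 < k_detector <= k_incident:
        raise ValueError("detector kerma must lie in (0, incident kerma]")
    if mu_effective <= 0.0:
        raise ValueError("effective attenuation coefficient must be positive")
    return math.log(k_incident / k_detector) / mu_effective
```

In the thesis workflow, k_incident comes from the exposure factors, k_detector from the pixel-value-to-kerma calibration at the radiograph centre, and mu_effective from the Monte Carlo simulations.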

Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning

  • Wong, Jordan
  • Fong, Allan
  • McVicar, Nevin
  • Smith, Sally
  • Giambattista, Joshua
  • Wells, Derek
  • Kolbeck, Carter
  • Giambattista, Jonathan
  • Gondara, Lovedeep
  • Alexander, Abraham
Radiother Oncol 2019 Journal Article, cited 0 times
Website
BACKGROUND: Deep learning-based auto-segmented contours (DC) aim to alleviate labour-intensive contouring of organs at risk (OAR) and clinical target volumes (CTV). Most previous DC validation studies have a limited number of expert observers for comparison and/or use a validation dataset related to the training dataset. We determine whether DC models are comparable to Radiation Oncologist (RO) inter-observer variability on an independent dataset. METHODS: Expert contours (EC) were created by multiple ROs for central nervous system (CNS), head and neck (H&N), and prostate radiotherapy (RT) OARs and CTVs. DCs were generated using deep learning-based auto-segmentation software trained by a single RO on publicly available data. Contours were compared using the Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). RESULTS: Sixty planning CT scans had 2-4 ECs, for a total of 60 CNS, 53 H&N, and 50 prostate RT contour sets. The mean DC and EC contouring times were 0.4 vs 7.7 min for CNS, 0.6 vs 26.6 min for H&N, and 0.4 vs 21.3 min for prostate RT contours. There were minimal differences in DSC and 95% HD involving DCs for OAR comparisons, but more noticeable differences for CTV comparisons. CONCLUSIONS: The accuracy of DCs trained by a single RO is comparable to expert inter-observer variability for the RT planning contours in this study. Use of deep learning-based auto-segmentation in clinical practice will likely lead to significant benefits to RT planning workflow and resources.

General purpose radiomics for multi-modal clinical research

  • Wels, Michael G.
  • Suehling, Michael
  • Muehlberg, Alexander
  • Lades, Félix
2019 Conference Proceedings, cited 0 times
Website
In this paper we present an integrated software solution targeting clinical researchers for discovering relevant radiomic biomarkers, covering the entire value chain of clinical radiomics research. Its intention is to make this kind of research possible even for less experienced scientists. The solution provides means to create, collect, manage, and statistically analyze patient cohorts consisting of potentially multimodal 3D medical imaging data, associated volume-of-interest annotations, and radiomic features. Volumes of interest can be created with an extensive set of semi-automatic segmentation tools. Radiomic feature computation relies on the de facto standard library PyRadiomics and ensures comparability and reproducibility of the studies carried out. Tabular cohort studies containing the radiomics of the volumes of interest can be managed directly within the software solution. The integrated statistical analysis capabilities introduce an additional layer of abstraction, allowing non-experts to benefit from radiomics research as well. There are ready-to-use methods for clustering, uni- and multivariate statistics, and machine learning to be applied to the collected cohorts. They are validated in two case studies: first, on a subset of the publicly available NSCLC-Radiomics data collection containing pretreatment CT scans of 317 non-small cell lung cancer (NSCLC) patients, and second, on the Lung Image Database Consortium imaging study with diagnostic and lung cancer screening CT scans including 2,753 distinct lesions from 870 patients. Integrated software solutions with optimized workflows like the one presented, and further developments thereof, may play an important role in making precision medicine come to life in clinical environments.

IILS: Intelligent imaging layout system for automatic imaging report standardization and intra-interdisciplinary clinical workflow optimization

  • Wang, Yang
  • Yan, Fangrong
  • Lu, Xiaofan
  • Zheng, Guanming
  • Zhang, Xin
  • Wang, Chen
  • Zhou, Kefeng
  • Zhang, Yingwei
  • Li, Hui
  • Zhao, Qi
  • Zhu, Hu
  • Chen, Fei
  • Gao, Cailiang
  • Qing, Zhao
  • Ye, Jing
  • Li, Aijing
  • Xin, Xiaoyan
  • Li, Danyan
  • Wang, Han
  • Yu, Hongming
  • Cao, Lu
  • Zhao, Chaowei
  • Deng, Rui
  • Tan, Libo
  • Chen, Yong
  • Yuan, Lihua
  • Zhou, Zhuping
  • Yang, Wen
  • Shao, Mingran
  • Dou, Xin
  • Zhou, Nan
  • Zhou, Fei
  • Zhu, Yue
  • Lu, Guangming
  • Zhang, Bing
EBioMedicine 2019 Journal Article, cited 1 times
Website
BACKGROUND: To achieve imaging report standardization and improve the quality and efficiency of the intra- and interdisciplinary clinical workflow, we proposed an intelligent imaging layout system (IILS) for a clinical decision support system-based ubiquitous healthcare service, which is a lung nodule management system using medical images. METHODS: We created a lung IILS based on deep learning for imaging report standardization and workflow optimization for the identification of nodules. Our IILS utilized a deep learning plus adaptive auto layout tool, which trained and tested a neural network with imaging data from all the main CT manufacturers from 11,205 patients. Model performance was evaluated by the receiver operating characteristic curve (ROC) and by calculating the corresponding area under the curve (AUC). The clinical application value of our IILS was assessed by a comprehensive comparison of multiple aspects. FINDINGS: Our IILS is clinically applicable, with a highest nodule-detection consistency of 0.94 and an AUC of 90.6% for malignant versus benign pulmonary nodules, with a sensitivity of 76.5% and a specificity of 89.1%. Applying this IILS to a dataset of chest CT images, we demonstrate performance comparable to that of human experts in providing a better layout and aiding in diagnosis, with 100% valid images and nodule display. The IILS was superior to the traditional manual system in performance, reducing the number of clicks from 14.45+/-0.38 to 2, time consumed from 16.87+/-0.38s to 6.92+/-0.10s, the number of invalid images from 7.06+/-0.24 to 0, and missed lung nodules from 46.8% to 0%. INTERPRETATION: This IILS might achieve imaging report standardization and improve the clinical workflow, thereby opening a new window for the clinical application of artificial intelligence. FUNDING: The National Natural Science Foundation of China.

An Appraisal of Lung Nodules Automatic Classification Algorithms for CT Images

  • Wang, Xinqi
  • Mao, Keming
  • Wang, Lizhe
  • Yang, Peiyi
  • Lu, Duo
  • He, Ping
Sensors (Basel) 2019 Journal Article, cited 0 times
Website
Lung cancer is one of the most deadly diseases around the world, representing about 26% of all cancers in 2017. The five-year cure rate is only 18%, despite great progress in recent diagnosis and treatment. Before diagnosis, lung nodule classification is a key step, especially since automatic classification can help clinicians by providing a valuable opinion. Modern computer vision and machine learning technologies allow very fast and reliable CT image classification. This research area has become highly active owing to its efficiency and labor savings. This paper provides a systematic review of the state of the art in automatic classification of lung nodules. It covers published works selected from the Web of Science, IEEEXplore, and DBLP databases up to June 2018. Each paper is critically reviewed based on objective, methodology, research dataset, and performance evaluation. Mainstream algorithms are surveyed and generic structures are summarized. Our work reveals that lung nodule classification based on deep learning has become dominant for its excellent performance. It is concluded that consistency of research objectives and integration of data deserve more attention. Moreover, collaborative work among developers, clinicians, and other parties should be strengthened.

Deep Learning for Automatic Identification of Nodule Morphology Features and Prediction of Lung Cancer

  • Wang, Weilun
  • Chakraborty, Goutam
2019 Conference Paper, cited 0 times
Website
Lung cancer is the most common and deadly cancer in the world. Correct prognosis affects the survival rate of the patient. The most important sign for early diagnosis is the appearance of nodules in CT scans. Diagnosis performed in hospital is divided into two steps: (1) detect nodules in the CT scan; (2) evaluate the morphological features of the nodules and give the diagnostic result. In this work, we propose an automatic lung cancer prognosis system. The system has three steps: (1) In the first step, we trained two models, one based on a convolutional neural network (CNN) and the other on a recurrent neural network (RNN), to detect nodules in CT scans. (2) In the second step, convolutional neural networks (CNNs) are trained to evaluate the values of nine morphological features of the nodules. (3) In the final step, a logistic regression between feature values and cancer probability is trained using an XGBoost model. In addition, we analyze which features are important for cancer prediction. Overall, we achieved 82.39% accuracy for lung cancer prediction. Through logistic regression analysis, we find that the diameter, spiculation, and lobulation features are useful for reducing false positives.
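The final step maps the nine predicted morphology scores to a cancer probability. The paper fits this stage with an XGBoost model; the plain logistic form it approximates can be sketched as below, where the weights and bias are hypothetical placeholders rather than fitted values.

```python
import math

def cancer_probability(feature_scores, weights, bias=0.0):
    """Logistic mapping from nodule morphology scores (e.g. diameter,
    spiculation, lobulation, ...) to a malignancy probability in (0, 1)."""
    z = bias + sum(w * s for w, s in zip(weights, feature_scores))
    return 1.0 / (1.0 + math.exp(-z))
```

With all scores at zero and no bias the model is maximally uncertain (probability 0.5); larger weighted scores for features such as spiculation push the probability toward 1.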

Evaluation of Malignancy of Lung Nodules from CT Image Using Recurrent Neural Network

  • Wang, Weilun
  • Chakraborty, Goutam
2019 Journal Article, cited 0 times
The efficacy of cancer treatment depends largely on early detection and correct prognosis. This is all the more important in pulmonary cancer, where detection is based on identifying malignant nodules in Computed Tomography (CT) scans of the lung. There are two problems in making a correct decision about malignancy: (1) At an early stage, the nodule size is small (5 to 10 mm in length). As the CT scan covers a volume of 30 cm × 30 cm × 40 cm, manually searching for nodules takes a very long time (approximately 10 minutes for an expert). (2) There are benign nodules and nodules due to other ailments such as bronchitis, pneumonia, and tuberculosis; identifying whether a nodule is carcinogenic requires long experience and expertise. In recent years, several works have been reported that classify lung cancer using not only the CT scan image but also other features causing or related to cancer. In all recent works on CT image analysis, a 3-D Convolutional Neural Network (CNN) is used to identify cancerous nodules. In spite of various preprocessing steps used to improve training efficiency, 3-D CNNs are extremely slow. The aim of this work is to improve training efficiency by proposing a new deep NN model. It consists of a hierarchical (sliced) structure of recurrent neural networks (RNNs), where different layers of the hierarchy can be trained simultaneously, decreasing training time. In addition, selective attention (alignment) during training improves the convergence rate. The results show a 3-fold increase in training efficiency compared to recent state-of-the-art work using 3-D CNNs.

Correlation between CT based radiomics features and gene expression data in non-small cell lung cancer

  • Wang, Ting
  • Gong, Jing
  • Duan, Hui-Hong
  • Wang, Li-Jia
  • Ye, Xiao-Dan
  • Nie, Sheng-Dong
Journal of X-ray science and technology 2019 Journal Article, cited 0 times
Website

Inter-rater agreement in glioma segmentations on longitudinal MRI

  • Visser, M.
  • Muller, D. M. J.
  • van Duijn, R. J. M.
  • Smits, M.
  • Verburg, N.
  • Hendriks, E. J.
  • Nabuurs, R. J. A.
  • Bot, J. C. J.
  • Eijgelaar, R. S.
  • Witte, M.
  • van Herk, M. B.
  • Barkhof, F.
  • de Witt Hamer, P. C.
  • de Munck, J. C.
Neuroimage Clin 2019 Journal Article, cited 0 times
Website
BACKGROUND: Tumor segmentation of glioma on MRI is a technique to monitor, quantify and report disease progression. Manual MRI segmentation is the gold standard but very labor intensive. At present the quality of this gold standard is not known for different stages of the disease, and prior work has mainly focused on treatment-naive glioblastoma. In this paper we studied the inter-rater agreement of manual MRI segmentation of glioblastoma and WHO grade II-III glioma for novices and experts at three stages of disease. We also studied the impact of inter-observer variation on extent of resection and growth rate. METHODS: In 20 patients with WHO grade IV glioblastoma and 20 patients with WHO grade II-III glioma (defined as non-glioblastoma) both the enhancing and non-enhancing tumor elements were segmented on MRI, using specialized software, by four novices and four experts before surgery, after surgery and at time of tumor progression. We used the generalized conformity index (GCI) and the intra-class correlation coefficient (ICC) of tumor volume as main outcome measures for inter-rater agreement. RESULTS: For glioblastoma, segmentations by experts and novices were comparable. The inter-rater agreement of enhancing tumor elements was excellent before surgery (GCI 0.79, ICC 0.99), poor after surgery (GCI 0.32, ICC 0.92), and good at progression (GCI 0.65, ICC 0.91). For non-glioblastoma, the inter-rater agreement was generally higher between experts than between novices. The inter-rater agreement was excellent between experts before surgery (GCI 0.77, ICC 0.92), reasonable after surgery (GCI 0.48, ICC 0.84), and good at progression (GCI 0.60, ICC 0.80). The inter-rater agreement was good between novices before surgery (GCI 0.66, ICC 0.73), poor after surgery (GCI 0.33, ICC 0.55), and poor at progression (GCI 0.36, ICC 0.73).
Further analysis showed that the lower inter-rater agreement of segmentation on postoperative MRI could only partly be explained by the smaller volumes and fragmentation of residual tumor. The median interquartile range of extent of resection between raters was 8.3%, and that of growth rate was 0.22 mm/year. CONCLUSION: Manual tumor segmentations on MRI have reasonable agreement for use in spatial and volumetric analysis. Agreement in spatial overlap is of concern for segmentation after surgery for glioblastoma and for segmentation of non-glioblastoma by non-experts.

An intelligent lung tumor diagnosis system using whale optimization algorithm and support vector machine

  • Vijh, Surbhi
  • Gaur, Deepak
  • Kumar, Sushil
International Journal of System Assurance Engineering and Management 2019 Journal Article, cited 0 times
Medical image processing techniques are widely used for the detection of tumors to increase the survival rate of patients. The development of computer-aided diagnosis systems shows improvement in observing medical images and determining treatment stages. Earlier detection of a tumor reduces the mortality of lung cancer by increasing the probability of successful treatment. In this paper, an intelligent lung tumor diagnosis system is developed using various image processing techniques. The steps involve image enhancement, image segmentation, post-processing, feature extraction, feature selection and classification using a support vector machine (SVM) kernel. The gray-level co-occurrence matrix (GLCM) method is used for extracting 19 texture and statistical features of the lung computed tomography (CT) image. The whale optimization algorithm (WOA) is used to select the best prominent feature subset. The contribution of this paper is the development of WOA_SVM to automate the aided diagnosis system for determining whether a lung CT image is normal or abnormal. An improved technique is developed using the whale optimization algorithm for optimal feature selection to obtain accurate results and to construct a robust model. The performance of the proposed methodology is evaluated using accuracy, sensitivity and specificity, obtained as 95%, 100% and 92%, respectively, with the radial basis function support vector kernel.
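The GLCM texture extraction mentioned in this abstract can be illustrated with a small numpy sketch that computes a co-occurrence matrix for one pixel offset and two classic Haralick-style features. This is a didactic sketch only; the paper's full 19-feature set and the WOA selection step are beyond it, and the number of gray levels and the offset are assumed parameters.

```python
import numpy as np

def glcm_features(image, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for offset (dx, dy), plus two
    classic texture features derived from it: contrast and energy."""
    img = np.asarray(image, dtype=float)
    # Quantize the image into `levels` gray levels
    q = np.minimum((img / (img.max() + 1e-12) * levels).astype(int),
                   levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    # Count co-occurrences of gray-level pairs at the given offset
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()  # normalize to a joint probability
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)  # high for rapidly varying texture
    energy = np.sum(p ** 2)              # high for uniform texture
    return contrast, energy
```

A perfectly uniform region yields zero contrast and maximal energy, while a checkerboard-like region yields high contrast; features of this kind form part of the vector the WOA_SVM pipeline selects from.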

Identification and classification of DICOM files with burned-in text content

  • Vcelak, Petr
  • Kryl, Martin
  • Kratochvil, Michal
  • Kleckova, Jana
International Journal of Medical Informatics 2019 Journal Article, cited 0 times
Website
Background: Protected health information burned into pixel data is not indicated for various reasons in DICOM. It complicates the secondary use of such data. In recent years, there have been several attempts to anonymize or de-identify DICOM files. Existing approaches have different constraints. No completely reliable solution exists. Especially for large datasets, it is necessary to quickly analyse and identify files potentially violating privacy. Methods: Classification is based on an adaptive-iterative algorithm designed to identify one of three classes. There are several image transformations, optical character recognition, and filters; then a local decision is made. A confirmed local decision is the final one. The classifier was trained on a dataset composed of 15,334 images of various modalities. Results: The false positive rates are in all cases below 4.00%, and 1.81% in the mission-critical problem of detecting protected health information. The classifier's weighted average recall was 94.85%, the weighted average inverse recall was 97.42% and Cohen's Kappa coefficient was 0.920. Conclusion: The proposed novel approach for classification of burned-in text is highly configurable and able to analyse images from different modalities with a noisy background. The solution was validated and is intended to identify DICOM files that need to have restricted access or be thoroughly de-identified due to privacy issues. Unlike existing tools, the recognised text, including its coordinates, can be further used for de-identification.

Predicting the 1p/19q co-deletion status of presumed low grade glioma with an externally validated machine learning algorithm

  • van der Voort, Sebastian R
  • Incekara, Fatih
  • Wijnenga, Maarten MJ
  • Kapsas, Georgios
  • Gardeniers, Mayke
  • Schouten, Joost W
  • Starmans, Martijn PA
  • Tewarie, Rishie Nandoe
  • Lycklama, Geert J
  • French, Pim J
Clinical Cancer Research 2019 Journal Article, cited 0 times

Eliminating biasing signals in lung cancer images for prognosis predictions with deep learning.

  • van Amsterdam, W. A. C.
  • Verhoeff, J. J. C.
  • de Jong, P. A.
  • Leiner, T.
  • Eijkemans, M. J. C.
NPJ Digit Med 2019 Journal Article, cited 0 times
Website
Deep learning has shown remarkable results for image analysis and is expected to aid individual treatment decisions in health care. Treatment recommendations are predictions with an inherently causal interpretation. To use deep learning for these applications in the setting of observational data, deep learning methods must be made compatible with the required causal assumptions. We present a scenario with real-world medical images (CT-scans of lung cancer) and simulated outcome data. Through the data simulation scheme, the images contain two distinct factors of variation that are associated with survival, but represent a collider (tumor size) and a prognostic factor (tumor heterogeneity), respectively. When a deep network would use all the information available in the image to predict survival, it would condition on the collider and thereby introduce bias in the estimation of the treatment effect. We show that when this collider can be quantified, unbiased individual prognosis predictions are attainable with deep learning. This is achieved by (1) setting a dual task for the network to predict both the outcome and the collider and (2) enforcing a form of linear independence of the activation distributions of the last layer. Our method provides an example of combining deep learning and structural causal models to achieve unbiased individual prognosis predictions. Extensions of machine learning methods for applications to causal questions are required to attain the long-standing goal of personalized medicine supported by artificial intelligence.

Novel approaches for glioblastoma treatment: Focus on tumor heterogeneity, treatment resistance, and computational tools

  • Valdebenito, Silvana
  • D'Amico, Daniela
  • Eugenin, Eliseo
Cancer Reports 2019 Journal Article, cited 0 times
Background Glioblastoma (GBM) is a highly aggressive primary brain tumor. Currently, the suggested line of action is surgical resection followed by radiotherapy and treatment with the adjuvant temozolomide, a DNA alkylating agent. However, the ability of tumor cells to deeply infiltrate the surrounding tissue makes complete resection quite impossible, and, in consequence, the probability of tumor recurrence is high, and the prognosis is not positive. GBM is highly heterogeneous and adapts to treatment in most individuals. Nevertheless, these mechanisms of adaptation are unknown. Recent findings In this review, we will discuss the recent discoveries in molecular and cellular heterogeneity, mechanisms of therapeutic resistance, and new technological approaches to identify new treatments for GBM. The combination of biology and computer resources allows the use of algorithms to apply artificial intelligence and machine learning approaches to identify potential therapeutic pathways and to identify new drug candidates. Conclusion These new approaches will generate a better understanding of GBM pathogenesis and will result in novel treatments to reduce or block the devastating consequences of brain cancers.

Enabling machine learning in X-ray-based procedures via realistic simulation of image formation

  • Unberath, Mathias
  • Zaech, Jan-Nico
  • Gao, Cong
  • Bier, Bastian
  • Goldmann, Florian
  • Lee, Sing Chun
  • Fotouhi, Javad
  • Taylor, Russell
  • Armand, Mehran
  • Navab, Nassir
International journal of computer assisted radiology and surgery 2019 Journal Article, cited 0 times

Impact of image preprocessing on the scanner dependence of multi-parametric MRI radiomic features and covariate shift in multi-institutional glioblastoma datasets

  • Um, Hyemin
  • Tixier, Florent
  • Bermudez, Dalton
  • Deasy, Joseph O
  • Young, Robert J
  • Veeraraghavan, Harini
Physics in Medicine & Biology 2019 Journal Article, cited 0 times
Website
Recent advances in radiomics have enhanced the value of medical imaging in various aspects of clinical practice, but a crucial component that remains to be investigated further is the robustness of quantitative features to imaging variations and across multiple institutions. In the case of MRI, signal intensity values vary according to the acquisition parameters used, yet no consensus exists on which preprocessing techniques are favorable in reducing scanner-dependent variability of image-based features. Hence, the purpose of this study was to assess the impact of common image preprocessing methods on the scanner dependence of MRI radiomic features in multi-institutional glioblastoma multiforme (GBM) datasets. Two independent GBM cohorts were analyzed: 50 cases from the TCGA-GBM dataset and 111 cases acquired in our institution, and each case consisted of 3 MRI sequences viz. FLAIR, T1-weighted, and T1-weighted post-contrast. Five image preprocessing techniques were examined: 8-bit global rescaling, 8-bit local rescaling, bias field correction, histogram standardization, and isotropic resampling. A total of 420 features divided into 8 categories representing texture, shape, edge, and intensity histogram were extracted. Two distinct imaging parameters were considered: scanner manufacturer and scanner magnetic field strength. Wilcoxon tests identified features robust to the considered acquisition parameters under the selected image preprocessing techniques. A machine learning-based strategy was implemented to measure the covariate shift between the analyzed datasets using features computed using the aforementioned preprocessing methods. Finally, radiomic scores (rad-scores) were constructed by identifying features relevant to patients' overall survival after eliminating those impacted by scanner variability. These were then evaluated for their prognostic significance through Kaplan-Meier and Cox hazards regression analyses. 
Our results demonstrate that, overall, histogram standardization contributes the most to reducing radiomic feature variability, as it was the technique that reduced the covariate shift for 3 feature categories and successfully discriminated patients into groups of different survival risks.

Stability and reproducibility of computed tomography radiomic features extracted from peritumoral regions of lung cancer lesions

  • Tunali, Ilke
  • Hall, Lawrence O
  • Napel, Sandy
  • Cherezov, Dmitry
  • Guvenis, Albert
  • Gillies, Robert J
  • Schabath, Matthew B
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Recent efforts have demonstrated that radiomic features extracted from the peritumoral region, the area surrounding the tumor parenchyma, have clinical utility in various cancer types. However, like any radiomic features, peritumoral features could also be unstable and/or nonreproducible. Hence, the purpose of this study was to assess the stability and reproducibility of computed tomography (CT) radiomic features extracted from the peritumoral regions of lung lesions, where stability was defined as the consistency of a feature across different segmentations, and reproducibility was defined as the consistency of a feature across different image acquisitions. METHODS: Stability was measured utilizing the "moist run" dataset and reproducibility was measured utilizing the Reference Image Database to Evaluate Therapy Response test-retest dataset. Peritumoral radiomic features were extracted from incremental distances of 3-12 mm outside the tumor segmentation. A total of 264 statistical, histogram, and texture radiomic features were assessed from the selected peritumoral region-of-interests (ROIs). All features (except wavelet texture features) were extracted using standardized algorithms defined by the Image Biomarker Standardisation Initiative. Stability and reproducibility of features were assessed using the concordance correlation coefficient. The clinical utility of stable and reproducible peritumoral features was tested in three previously published lung cancer datasets using overall survival as the endpoint. RESULTS: Features found to be stable and reproducible, regardless of the peritumoral distances, included statistical, histogram, and a subset of texture features, suggesting that these features are less affected by changes (e.g., size or shape) of the peritumoral region due to different segmentations and image acquisitions. The stability and reproducibility of Laws and wavelet texture features were inconsistent across all peritumoral distances.
The analyses also revealed that a subset of features were consistently stable irrespective of the initial parameters (e.g., seed point) for a given segmentation algorithm. No significant differences were found in stability for features that were extracted from ROIs bounded by a lung parenchyma mask versus ROIs that were not bounded by a lung parenchyma mask (i.e., peritumoral regions that extended outside of lung parenchyma). After testing the clinical utility of peritumoral features, stable and reproducible features were shown to be more likely to create repeatable models than unstable and nonreproducible features. CONCLUSIONS: This study identified a subset of stable and reproducible CT radiomic features extracted from the peritumoral region of lung lesions. The stable and reproducible features identified in this study could be applied to a feature selection pipeline for CT radiomic analyses. According to our findings, top-performing features in survival models were more likely to be stable and reproducible; hence, it may be best practice to utilize them to achieve repeatable studies and reduce the chance of overfitting.

Detection of lung cancer on chest CT images using minimum redundancy maximum relevance feature selection method with convolutional neural networks

  • Toğaçar, Mesut
  • Ergen, Burhan
  • Cömert, Zafer
Biocybernetics and Biomedical Engineering 2019 Journal Article, cited 0 times
Lung cancer is a disease caused by the involuntary increase of cells in the lung tissue. Early detection of cancerous cells is of vital importance, as the lungs provide oxygen to the human body and excrete carbon dioxide as a result of vital activities. In this study, the detection of lung cancers is realized using LeNet, AlexNet and VGG-16 deep learning models. The experiments were carried out on an open dataset composed of Computed Tomography (CT) images. In the experiment, convolutional neural networks (CNNs) were used for feature extraction and classification purposes. In order to increase the success rate of the classification, image augmentation techniques, such as cutting, zooming, horizontal turning and filling, were applied to the dataset during the training of the models. Because of the outstanding success of the AlexNet model, the features obtained from the last fully-connected layer of the model were separately applied as the input to linear regression (LR), linear discriminant analysis (LDA), decision tree (DT), support vector machine (SVM), k-nearest neighbor (kNN) and softmax classifiers. A combination of the AlexNet model and kNN classifier achieved the most efficient classification accuracy at 98.74%. Then, the minimum redundancy maximum relevance (mRMR) feature selection method was applied to the deep feature set to choose the most efficient features. Consequently, a success rate of 99.51% was yielded by reclassifying the dataset with the selected features and the kNN model. The proposed model is a consistent diagnosis model for lung cancer detection using chest CT images.

Reliability of tumor segmentation in glioblastoma: impact on the robustness of MRI‐radiomic features

  • Tixier, Florent
  • Um, Hyemin
  • Young, Robert J
  • Veeraraghavan, Harini
Med Phys 2019 Journal Article, cited 0 times
Website
Purpose The use of radiomic features as biomarkers of treatment response and outcome or as correlates to genomic variations requires that the computed features are robust and reproducible. Segmentation, a crucial step in radiomic analysis, is a major source of variability in the computed radiomic features. Therefore, we studied the impact of tumor segmentation variability on the robustness of MRI radiomic features. Method Fluid‐attenuated inversion recovery (FLAIR) and contrast‐enhanced T1‐weighted (T1WICE) MRI of 90 patients diagnosed with glioblastoma were segmented using a semi‐automatic algorithm and an interactive segmentation with two different raters. We analyzed the robustness of 108 radiomic features from 5 categories (intensity histogram, gray‐level co‐occurrence matrix, gray‐level size‐zone matrix (GLSZM), edge maps and shape) using intra‐class correlation coefficient (ICC) and Bland and Altman analysis. Results Our results show that both segmentation methods are reliable with ICC ≥ 0.96 and standard deviation (SD) of mean differences between the two raters (SDdiffs) ≤ 30%. Features computed from the histogram and co‐occurrence matrices were found to be the most robust (ICC ≥ 0.8 and SDdiffs ≤ 30% for most features in these groups). Features from GLSZM were shown to have mixed robustness. Edge, shape and GLSZM features were the most impacted by the choice of segmentation method with the interactive method resulting in more robust features than the semi‐automatic method. Finally, features computed from T1WICE and FLAIR images were found to have similar robustness when computed with the interactive segmentation method. Conclusion Semi‐automatic and interactive segmentation methods using two raters are both reliable. The interactive method produced more robust features than the semi‐automatic method. We also found that the robustness of radiomic features varied by categories. 
Therefore, this study could help guide the choice of segmentation method and feature selection in MRI radiomic studies.

Proton vs photon: A model-based approach to patient selection for reduction of cardiac toxicity in locally advanced lung cancer

  • Teoh, S.
  • Fiorini, F.
  • George, B.
  • Vallis, K. A.
  • Van den Heuvel, F.
Radiother Oncol 2019 Journal Article, cited 0 times
Website
PURPOSE/OBJECTIVE: To use a model-based approach to identify a sub-group of patients with locally advanced lung cancer who would benefit from proton therapy compared to photon therapy for reduction of cardiac toxicity. MATERIAL/METHODS: Volumetric modulated arc photon therapy (VMAT) and robust-optimised intensity modulated proton therapy (IMPT) plans were generated for twenty patients with locally advanced lung cancer to give a dose of 70 Gy (relative biological effectiveness (RBE)) in 35 fractions. Cases were selected to represent a range of anatomical locations of disease. Contouring, treatment planning and organs-at-risk constraints followed the RTOG-1308 protocol. Whole-heart and substructure doses were compared. Risk estimates of grade 3 cardiac toxicity were calculated based on normal tissue complication probability (NTCP) models which incorporated dose metrics and patients' baseline risk factors (pre-existing heart disease (HD)). RESULTS: There was no statistically significant difference in target coverage between VMAT and IMPT. IMPT delivered lower doses to the heart and cardiac substructures (mean, heart V5 and V30, P<.05). In VMAT plans, there were statistically significant positive correlations between heart dose and the thoracic vertebral level that corresponded to the most inferior limit of the disease. The median level at which the superior aspect of the heart contour began was the T7 vertebra. There was a statistically significant difference in dose (mean, V5 and V30) to the heart and all substructures (except mean dose to the left coronary artery and V30 to the sino-atrial node) when disease overlapped with or was inferior to the T7 vertebra. In the presence of pre-existing HD and disease overlapping with or inferior to the T7 vertebra, the mean estimated relative risk reduction of grade 3 toxicities was 24-59%. CONCLUSION: IMPT is expected to reduce cardiac toxicity compared to VMAT by reducing dose to the heart and substructures.
Patients with both pre-existing heart disease and tumour and nodal spread overlapping with or inferior to the T7 vertebrae are likely to benefit most from proton over photon therapy.

Is an analytical dose engine sufficient for intensity modulated proton therapy in lung cancer?

  • Teoh, S.
  • Fiorini, F.
  • George, B.
  • Vallis, K. A.
  • Van den Heuvel, F.
Br J Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVE: To identify a subgroup of lung cancer plans where the analytical dose calculation (ADC) algorithm may be clinically acceptable compared to Monte Carlo (MC) dose calculation in intensity modulated proton therapy (IMPT). METHODS: Robust-optimised IMPT plans were generated for 20 patients to a dose of 70 Gy (relative biological effectiveness) in 35 fractions in Raystation. For each case, four plans were generated: three with ADC optimisation using the pencil beam (PB) algorithm followed by a final dose calculation with the following algorithms: PB (PB-PB), MC (PB-MC) and MC normalised to prescription dose (PB-MC scaled). A fourth plan was generated where MC optimisation and final dose calculation were performed (MC-MC). Dose comparison and gamma analysis (PB-PB vs PB-MC) at two dose thresholds were performed: 20% (D20) and 99% (D99), with PB-PB plans as reference. RESULTS: Overestimation of the dose to 99% and mean dose of the clinical target volume was observed in all PB-MC compared to PB-PB plans (median: 3.7 Gy(RBE) (5%) (range: 2.3 to 6.9 Gy(RBE)) and 1.8 Gy(RBE) (3%) (0.5 to 4.6 Gy(RBE))). PB-MC scaled plans resulted in significantly higher CTV D2 compared to PB-PB (median difference: -4 Gy(RBE) (-6%) (-5.3 to -2.4 Gy(RBE)), p </= .001). The overall median gamma pass rates (3%-3 mm) at D20 and D99 were 93.2% (range: 62.2-97.5%) and 71.3% (15.4-92.0%). On multivariate analysis, presence of mediastinal disease and absence of range shifters were significantly associated with high gamma pass rates. Median D20 and D99 pass rates with these predictors were 96.0% (95.3-97.5%) and 85.4% (75.1-92.0%). MC-MC achieved similar target coverage and doses to OARs compared to PB-PB plans. CONCLUSION: In the presence of mediastinal involvement and absence of range shifters, Raystation ADC may be clinically acceptable in lung IMPT. Otherwise, the MC algorithm would be recommended to ensure accuracy of treatment plans.
ADVANCES IN KNOWLEDGE: Although MC algorithm is more accurate compared to ADC in lung IMPT, ADC may be clinically acceptable where there is mediastinal involvement and absence of range shifters.

Automated Detection of Early Pulmonary Nodule in Computed Tomography Images

  • Tariq, Ahmed Usama
2019 Thesis, cited 0 times
Website
Classification of lung cancer in CT scans mainly involves two steps: detecting all suspicious lesions, also known as pulmonary nodules, and calculating the malignancy. Currently, many studies address nodule detection, but few address the evaluation of nodule malignancy. Since the presence of a nodule does not unquestionably indicate the presence of lung cancer, and the morphology of a nodule has a complex association with malignancy, the diagnosis of lung cancer requires careful examination of each suspicious nodule and the integration of information from every nodule. We propose a 3D CNN CAD system to solve this problem. The system consists of two modules: a 3D CNN for nodule detection, which outputs all suspicious nodules for a subject, and a second module trained on an XGBoost classifier with selective data to acquire the probability of lung malignancy for the subject.

Clinically applicable deep learning framework for organs at risk delineation in CT images

  • Tang, Hao
  • Chen, Xuming
  • Liu, Yang
  • Lu, Zhipeng
  • You, Junhua
  • Yang, Mingzhou
  • Yao, Shengyu
  • Zhao, Guoqi
  • Xu, Yi
  • Chen, Tingfeng
  • Liu, Yong
  • Xie, Xiaohui
Nature Machine Intelligence 2019 Journal Article, cited 0 times
Radiation therapy is one of the most widely used therapies for cancer treatment. A critical step in radiation therapy planning is to accurately delineate all organs at risk (OARs) to minimize potential adverse effects to healthy surrounding organs. However, manually delineating OARs based on computed tomography images is time-consuming and error-prone. Here, we present a deep learning model to automatically delineate OARs in head and neck, trained on a dataset of 215 computed tomography scans with 28 OARs manually delineated by experienced radiation oncologists. On a hold-out dataset of 100 computed tomography scans, our model achieves an average Dice similarity coefficient of 78.34% across the 28 OARs, significantly outperforming human experts and the previous state-of-the-art method by 10.05% and 5.18%, respectively. Our model takes only a few seconds to delineate an entire scan, compared to over half an hour by human experts. These findings demonstrate the potential for deep learning to improve the quality and reduce the treatment planning time of radiation therapy.

Five Classifications of Mammography Images Based on Deep Cooperation Convolutional Neural Network

  • Tang, Chun-ming
  • Cui, Xiao-Mei
  • Yu, Xiang
  • Yang, Fan
American Scientific Research Journal for Engineering, Technology, and Sciences (ASRJETS) 2019 Journal Article, cited 0 times
Website
Mammography is currently the preferred imaging method for breast cancer screening. Masses and calcification are the main positive signs on mammography. Due to the variable appearance of masses and calcification, a significant number of breast cancer cases are missed or misdiagnosed if detection depends only on the radiologists' subjective judgement. At present, most studies are based on classical Convolutional Neural Networks (CNN), which use transfer learning to classify benign and malignant masses in mammography images. However, the CNN is designed for natural images, which are substantially different from medical images. Therefore, we propose a Deep Cooperation CNN (DCCNN) to classify mammography images of a data set into five categories: benign calcification, benign mass, malignant calcification, malignant mass and normal breast. The data set consists of 695 normal cases from DDSM, and 753 calcification cases and 891 mass cases from CBIS-DDSM. Finally, DCCNN achieves 91% accuracy and 0.98 AUC on the test set, performance superior to the VGG16, GoogLeNet and InceptionV3 models. Therefore, DCCNN can aid radiologists in making more accurate judgments, greatly reducing the rate of missed diagnoses and misdiagnoses.

Investigation of thoracic four-dimensional CT-based dimension reduction technique for extracting the robust radiomic features

  • Tanaka, S.
  • Kadoya, N.
  • Kajikawa, T.
  • Matsuda, S.
  • Dobashi, S.
  • Takeda, K.
  • Jingu, K.
Phys Med 2019 Journal Article, cited 0 times
Website
Robust feature selection in radiomic analysis is often implemented using the RIDER test-retest datasets. However, the CT protocol of a given facility differs from that of the test-retest datasets. Therefore, we investigated the possibility of selecting robust features using thoracic four-dimensional CT (4D-CT) scans, which are available from patients receiving radiation therapy. In 4D-CT datasets of 14 lung cancer patients who underwent stereotactic body radiotherapy (SBRT) and 14 test-retest datasets of non-small cell lung cancer (NSCLC), 1170 radiomic features (shape: n = 16, statistics: n = 32, texture: n = 1122) were extracted. A concordance correlation coefficient (CCC) > 0.85 was used to select robust features. We compared the robust features in various 4D-CT groups with those in test-retest. The total number of robust features ranged from 846/1170 (72%) to 970/1170 (83%) across 4D-CT groups with three breathing phases (40%–60%), but from 44/1170 (4%) to 476/1170 (41%) across 4D-CT groups with 10 breathing phases. In test-retest, the total number of robust features was 967/1170 (83%); thus, the number of robust features in 4D-CT was almost equal to that in test-retest when using the 40–60% breathing phases. In 4D-CT, respiratory motion is a factor that greatly affects the robustness of features; thus, using only the 40–60% breathing phases can prevent excessive dimension reduction in any 4D-CT dataset and allows selection of robust features suitable for the CT protocol of one's own facility.

Automatic estimation of the aortic lumen geometry by ellipse tracking

  • Tahoces, Pablo G
  • Alvarez, Luis
  • González, Esther
  • Cuenca, Carmelo
  • Trujillo, Agustín
  • Santana-Cedrés, Daniel
  • Esclarín, Julio
  • Gomez, Luis
  • Mazorra, Luis
  • Alemán-Flores, Miguel
International journal of computer assisted radiology and surgery 2019 Journal Article, cited 0 times

Advancing Semantic Interoperability of Image Annotations: Automated Conversion of Non-standard Image Annotations in a Commercial PACS to the Annotation and Image Markup

  • Swinburne, Nathaniel C
  • Mendelson, David
  • Rubin, Daniel L
J Digit Imaging 2019 Journal Article, cited 0 times
Website
Sharing radiologic image annotations among multiple institutions is important in many clinical scenarios; however, interoperability is prevented because different vendors’ PACS store annotations in non-standardized formats that lack semantic interoperability. Our goal was to develop software to automate the conversion of image annotations in a commercial PACS to the Annotation and Image Markup (AIM) standardized format and demonstrate the utility of this conversion for automated matching of lesion measurements across time points for cancer lesion tracking. We created a software module in Java to parse the DICOM presentation state (DICOM-PS) objects (that contain the image annotations) for imaging studies exported from a commercial PACS (GE Centricity v3.x). Our software identifies line annotations encoded within the DICOM-PS objects and exports the annotations in the AIM format. A separate Python script processes the AIM annotation files to match line measurements (on lesions) across time points by tracking the 3D coordinates of annotated lesions. To validate the interoperability of our approach, we exported annotations from Centricity PACS into ePAD (http://epad.stanford.edu) (Rubin et al., Transl Oncol 7(1):23–35, 2014), a freely available AIM-compliant workstation, and the lesion measurement annotations were correctly linked by ePAD across sequential imaging studies. As quantitative imaging becomes more prevalent in radiology, interoperability of image annotations gains increasing importance. Our work demonstrates that image annotations in a vendor system lacking standard semantics can be automatically converted to a standardized metadata format such as AIM, enabling interoperability and potentially facilitating large-scale analysis of image annotations and the generation of high-quality labels for deep learning initiatives. This effort could be extended for use with other vendors’ PACS.

Image Correction in Emission Tomography Using Deep Convolution Neural Network

  • Suzuki, T
  • Kudo, H
2019 Conference Proceedings, cited 0 times

Machine learning to predict lung nodule biopsy method using CT image features: A pilot study

  • Sumathipala, Yohan
  • Shafiq, Majid
  • Bongen, Erika
  • Brinton, Connor
  • Paik, David
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 0 times
Website

Context Dependent Fuzzy Associated Statistical Model for Intensity Inhomogeneity Correction from Magnetic Resonance Images

  • Subudhi, BN
  • Veerakumar, T
  • Esakkirajan, S
  • Ghosh, A
IEEE Journal of Translational Engineering in Health and Medicine 2019 Journal Article, cited 0 times
Website
In this article, a novel context dependent fuzzy set associated statistical model based intensity inhomogeneity correction technique for Magnetic Resonance Image (MRI) is proposed. The observed MRI is considered to be affected by intensity inhomogeneity and it is assumed to be a multiplicative quantity. In the proposed scheme the intensity inhomogeneity correction and MRI segmentation is considered as a combined task. The maximum a posteriori probability (MAP) estimation principle is explored to solve this problem. A fuzzy set associated Gibbs' Markov random field (MRF) is considered to model the spatio-contextual information of an MRI. It is observed that the MAP estimate of the MRF model does not yield good results with any local searching strategy, as it gets trapped to local optimum. Hence, we have exploited the advantage of variable neighborhood searching (VNS) based iterative global convergence criterion for MRF-MAP estimation. The effectiveness of the proposed scheme is established by testing it on different MRIs. Three performance evaluation measures are considered to evaluate the performance of the proposed scheme against existing state-of-the-art techniques. Simulation results establish the effectiveness of the proposed technique.

ALTIS: A fast and automatic lung and trachea CT-image segmentation method

  • Sousa, A. M.
  • Martins, S. B.
  • Falcão, A. X.
  • Reis, F.
  • Bagatin, E.
  • Irion, K.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: The automated segmentation of each lung and trachea in CT scans is commonly taken as a solved problem. Indeed, existing approaches may easily fail in the presence of some abnormalities caused by a disease, trauma, or previous surgery. For robustness, we present ALTIS (implementation is available at http://lids.ic.unicamp.br/downloads) - a fast automatic lung and trachea CT-image segmentation method that relies on image features and relative shape- and intensity-based characteristics less affected by most appearance variations of abnormal lungs and trachea. METHODS: ALTIS consists of a sequence of image foresting transforms (IFTs) organized in three main steps: (a) lung-and-trachea extraction, (b) seed estimation inside background, trachea, left lung, and right lung, and (c) their delineation such that each object is defined by an optimum-path forest rooted at its internal seeds. We compare ALTIS with two methods based on shape models (SOSM-S and MALF), and one algorithm based on seeded region growing (PTK). RESULTS: The experiments involve the highest number of scans found in the literature - 1255 scans from multiple public data sets containing many anomalous cases, with only 50 normal scans used for training and 1205 scans used for testing the methods. Quantitative experiments are based on two metrics, DICE and ASSD. Furthermore, we also demonstrate the robustness of ALTIS in seed estimation. Considering the test set, the proposed method achieves an average DICE of 0.987 for both lungs and 0.898 for the trachea, and an average ASSD of 0.938 for the right lung, 0.856 for the left lung, and 1.316 for the trachea. These results indicate that ALTIS is statistically more accurate and considerably faster than the compared methods, being able to complete segmentation in a few seconds on modern PCs.
CONCLUSION: ALTIS is the most effective and efficient choice among the compared methods to segment left lung, right lung, and trachea in anomalous CT scans for subsequent detection, segmentation, and quantitative analysis of abnormal structures in the lung parenchyma and pleural space.

Dynamic Co-occurrence of Local Anisotropic Gradient Orientations (DyCoLIAGe) Descriptors from Pre-treatment Perfusion DSC-MRI to Predict Overall Survival in Glioblastoma

  • Song, Bolin
2019 Thesis, cited 0 times
Website
A significant clinical challenge in glioblastoma is to risk-stratify patients for clinical trials, preferably using MRI scans. Radiomics involves mining sub-visual features from routine imaging that could serve as surrogate markers of tumor heterogeneity. Previously, our group developed a new gradient-based radiomic descriptor, Co-occurrence of Local Anisotropic Gradient Orientations (CoLIAGe), to capture tumor heterogeneity on structural MRI. I present an extension of CoLIAGe to perfusion MRI, termed dynamic CoLIAGe (DyCoLIAGe), and demonstrate its application in predicting overall survival in glioblastoma. Following manual segmentation, 52 CoLIAGe features were extracted from edema and enhancing tumor at different time phases during contrast administration of perfusion MRI. Each feature was separately plotted across the different time points, and a 3rd-order polynomial was fit to each feature curve. The corresponding polynomial coefficients were evaluated in terms of their prognostic performance. My results suggest that DyCoLIAGe may be prognostic of overall survival in glioblastoma.

Recovering Physiological Changes in Nasal Anatomy with Confidence Estimates

  • Sinha, Ayushi
  • Liu, Xingtong
  • Ishii, Masaru
  • Hager, Gregory D
  • Taylor, Russell H
2019 Book Section, cited 0 times

Recovering Physiological Changes in Nasal Anatomy with Confidence Estimates

  • Sinha, A.
  • Liu, X.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, Russell H
2019 Conference Proceedings, cited 0 times
Purpose Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference preoperative image, like a computed tomography (CT) scan, to provide structural context to the clinician. The aim of this work is to provide structural context during clinical exploration without requiring additional CT acquisition. Methods We present a method for registration during clinical endoscopy in the absence of CT scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm that uses these shape statistics along with dense point clouds from video, we simultaneously achieve two goals: (1) register the statistically mean shape of the target anatomy with the video point cloud, and (2) estimate patient shape by deforming the mean shape to fit the video point cloud. Finally, we use statistical tests to assign confidence to the computed registration. Results We are able to achieve submillimeter errors in registrations and patient shape reconstructions using simulated data. We establish and evaluate the confidence criteria for our registrations using simulated data. Finally, we evaluate our registration method on in vivo clinical data and assign confidence to these registrations using the criteria established in simulation. All registrations that are not rejected by our criteria produce submillimeter residual errors. Conclusion Our deformable registration method can produce submillimeter registrations and reconstructions as well as statistical scores that can be used to assign confidence to the registrations.

Endoscopic navigation in the clinic: registration in the absence of preoperative imaging

  • Sinha, A.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, R. H.
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
PURPOSE: Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference preoperative image, like a computed tomography (CT) scan, to provide structural context to the clinician. The aim of this work is to provide structural context during clinical exploration without requiring additional CT acquisition. METHODS: We present a method for registration during clinical endoscopy in the absence of CT scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm that uses these shape statistics along with dense point clouds from video, we simultaneously achieve two goals: (1) register the statistically mean shape of the target anatomy with the video point cloud, and (2) estimate patient shape by deforming the mean shape to fit the video point cloud. Finally, we use statistical tests to assign confidence to the computed registration. RESULTS: We are able to achieve submillimeter errors in registrations and patient shape reconstructions using simulated data. We establish and evaluate the confidence criteria for our registrations using simulated data. Finally, we evaluate our registration method on in vivo clinical data and assign confidence to these registrations using the criteria established in simulation. All registrations that are not rejected by our criteria produce submillimeter residual errors. CONCLUSION: Our deformable registration method can produce submillimeter registrations and reconstructions as well as statistical scores that can be used to assign confidence to the registrations.

The deformable most-likely-point paradigm

  • Sinha, A.
  • Billings, S. D.
  • Reiter, A.
  • Liu, X.
  • Ishii, M.
  • Hager, G. D.
  • Taylor, R. H.
Med Image Anal 2019 Journal Article, cited 1 times
Website
In this paper, we present three deformable registration algorithms designed within a paradigm that uses 3D statistical shape models to accomplish two tasks simultaneously: 1) register point features from previously unseen data to a statistically derived shape (e.g., mean shape), and 2) deform the statistically derived shape to estimate the shape represented by the point features. This paradigm, called the deformable most-likely-point paradigm, is motivated by the idea that generative shape models built from available data can be used to estimate previously unseen data. We developed three deformable registration algorithms within this paradigm using statistical shape models built from reliably segmented objects with correspondences. Results from several experiments show that our algorithms produce accurate registrations and reconstructions in a variety of applications with errors up to CT resolution on medical datasets. Our code is available at https://github.com/AyushiSinha/cisstICP.

Brain Tumor Extraction from MRI Using Clustering Methods and Evaluation of Their Performance

  • Singh, Vipula
  • Tunga, P. Prakash
2019 Conference Paper, cited 0 times
Website
In this paper, we consider the extraction of brain tumors from MRI (Magnetic Resonance Imaging) images using K-means, Fuzzy c-means, and region-growing clustering methods. After extraction, various parameters related to the performance of the clustering methods, as well as parameters describing the tumor, are calculated. MRI is a non-invasive method which provides a view of the structural features of tissues in the body at very high resolution (typically on a 100 μm scale). It is therefore advantageous to base the detection and segmentation of brain tumors on MRI. This work aims to replace the manual identification and separation of tumor structures in brain MRI with computer-aided techniques, which would add great value with respect to accuracy, reproducibility, diagnosis, and treatment planning. The brain tumor separated from the original image is referred to as the Region of Interest (ROI), and the remaining portion of the original image as the Non-Region of Interest (NROI).
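The intensity clustering this entry compares can be illustrated with a minimal pure-Python K-means over 1D pixel intensities (an illustrative sketch only; the paper's actual initialization, Fuzzy c-means variant, and MRI preprocessing are not described in the abstract):

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar pixel intensities into k groups; returns the centroids."""
    lo, hi = min(values), max(values)
    # Spread initial centroids evenly across the intensity range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its cluster (keep old if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids
```

In a segmentation setting, pixels assigned to the brightest centroid would form a candidate tumor (ROI) mask, with the remainder treated as NROI.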

Tumor Heterogeneity and Genomics to Predict Radiation Therapy Outcome for Head-and-Neck Cancer: A Machine Learning Approach

  • Singh, A.
  • Goyal, S.
  • Rao, Y. J.
  • Loew, M.
International Journal of Radiation Oncology*Biology*Physics 2019 Journal Article, cited 0 times
Website
Head and Neck Squamous Cell Carcinoma (HNSCC) is usually treated with Radiation Therapy (RT). Recurrence of the tumor occurs in some patients. The purpose of this study was to determine whether information present in the heterogeneity of tumor regions in the pre-treatment PET scans of HNSCC patients can be used to predict recurrence. We then extended our study to include gene mutation information of a patient group to assess its value as an additional feature to determine treatment efficacy. Materials/Methods: Pre-treatment PET scans of 20 patients from the first database (HNSCC), included in The Cancer Imaging Archive (TCIA), were analyzed. The follow-up duration for those patients varied between two and ten years. Accompanying clinical data were used to divide the patients into two categories according to whether they had a recurrence of the tumor. Radiation structures included in the database were overlain on the PET scans to delineate the tumor, whose heterogeneity is measured by texture analysis. The classification is carried out in two ways: making a decision for each image slice, and treating the collection of slices as a 3D volume. This approach was tested on an independent set of 53 patients from a second TCIA database (Head-Neck-PET-CT [HNPC]). The Cancer Genome Atlas (TCGA) identified frequent mutations in the expression of PIK3CA, CDKN2A and TP53 genes in HNSCC patients. We combined gene expression features with texture features for 11 patients of the third database (TCGA-HNSC), and re-evaluated the classification accuracies.

A Novel Imaging-Genomic Approach to Predict Outcomes of Radiation Therapy

  • Singh, Apurva
  • Goyal, Sharad
  • Rao, Yuan James
  • Loew, Murray
2019 Thesis, cited 0 times
Introduction: Tumor regions are populated by various cellular species. Intra-tumor radiogenomic heterogeneity can be attributed to factors including variations in the blood flow to the different parts of the tumor and variations in the gene mutation frequencies. This heterogeneity is further propagated by cancer cells which adopt an “evolutionarily enlightened” growth approach. This growth, which focuses on developing an adaptive mechanism to progressively develop a strong resistance to therapy, follows a unique pattern in each patient. This makes the development of a uniform treatment technique very challenging and makes the concept of “precision medicine”, which is developed using information unique to each patient, very crucial to the development of effective cancer treatment methods. Our study aims to determine whether information present in the heterogeneity of tumor regions in the pre-treatment PET scans of patients and in their gene mutation status can measure the efficacy of radiation therapy in their treatment. We wish to develop a scheme which could predict the effectiveness of therapy at the pre-treatment stage, reduce the unnecessary exposure of the patient to radiation which would ultimately not be helpful in curing the patient and thus help in choosing alternative cancer therapy measures for the patients under consideration. Materials and methods: Our radiomics analysis was developed using PET scans for 20 patients from the HNSCC database from TCIA (The Cancer Imaging Archive). Clinical data were used to divide the patients into two categories based on the recurrence status of the tumor. Radiation structures are overlain on the PET scans for tumor delineation. Texture features extracted from tumor regions are reduced using correlation matrix-based technique and are classified by methods including Weighted KNN, Linear SVM and Bagged Trees. 
Slice-wise classification results are computed, treating each slice as a 2D image and treating the collection of slices as a 3D volume. Patient-wise results are computed by a voting scheme which assigns to each patient the class label possessed by more than half of its slices. After the voting is complete, the assigned labels are compared to the actual labels to compute the patient-wise classification accuracies. This workflow was tested on a group of 53 patients from the Head-Neck-PET-CT database. We further proceeded to develop a radiogenomic workflow by combining gene expression features with tumor texture features for a group of 11 patients of our third database, TCGA-HNSC. We developed a geometric transform-based database augmentation method and used it to generate PET scans using images from the existing dataset. To evaluate our analysis, we decided to test our workflow on patients with tumors at different sites, using scans of different modalities. We included PET scans for 24 lung cancer patients (15 from the TCGA-LUSC (Lung Squamous Cell Carcinoma) and 9 from the TCGA-LUAD (Lung Adenocarcinoma) databases). We used wavelet features along with the existing group of texture features to improve the classification scores. Further, we used non-rigid transform-based techniques for database augmentation. We also included MR scans for 54 cervical cancer patients (from the TCGA-CESC (Cervical Squamous Cell Carcinoma and Endocervical Carcinoma) database) in our study and employed a Fisher-based selection technique for reduction of the high-dimensional feature space. Results: The classification accuracy obtained by the 2D and 3D texture analysis is about 70% for slice-wise classification and 80% for patient-wise classification for the head and neck cancer patients (HNSCC and Head-Neck-PET-CT databases). The overall classification accuracies obtained from the transformed tumor slices are comparable to those from the original tumor slices.
Thus, geometric transformation is an effective method for database augmentation. The addition of binary genomic features to the texture features (TCGA-HNSC patients) increases the classification accuracies (from 80% to 100% for 2D and from 60% to 100% for 3D patient-wise classification). The classification accuracies increase from 58% to 84% (2D slice-wise) and from 58% to 70% (2D patient-wise) in the case of lung cancer patients with the inclusion of wavelet features in the existing texture feature group and by augmenting the database (non-rigid transformation) to include equal numbers of patients and slices in the recurrent and non-recurrent categories. The accuracies are about 64% for 2D slice-wise and patient-wise classification for cervical cancer patients (using correlation-matrix-based feature selection) and increase to about 72% using Fisher-based selection criteria. Conclusion: Our study has introduced the novel approach of fusing the information present in The Cancer Imaging Archive (TCIA) and TCGA to develop a combined imaging phenotype and genotype expression for therapy personalization. Texture measures provide a measure of tumor heterogeneity, which can be used to predict recurrence status. Information from gene expression patterns of the patients, when combined with texture measures, provides a unique radiogenomic feature which substantially improves therapy response prediction scores.

Predicting Lung Cancer Patients’ Survival Time via Logistic Regression-based Models in a Quantitative Radiomic Framework

  • Shayesteh, S. P.
  • Shiri, I.
  • Karami, A. H.
  • Hashemian, R.
  • Kooranifar, S.
  • Ghaznavi, H.
  • Shakeri-Zadeh, A.
Journal of Biomedical Physics and Engineering 2019 Journal Article, cited 0 times
Objectives: The aim of this study was to predict the survival time of lung cancer patients using the advantages of both radiomics and logistic regression-based classification models. Material and Methods: Fifty-nine patients with primary lung adenocarcinoma were included in this retrospective study, and pre-treatment contrast-enhanced CT images were acquired. Patients who lived more than 2 years were assigned to the ‘Alive’ class and otherwise to the ‘Dead’ class. In our proposed quantitative radiomic framework, we first extracted the associated regions of each lung lesion from pre-treatment CT images for each patient via the grow-cut segmentation algorithm. Then, 40 radiomic features were extracted from the segmented lung lesions. In order to enhance the generalizability of the classification models, a mutual information-based feature selection method was applied to each feature vector. We investigated the performance of six logistic regression-based classification models with respect to accepted evaluation measures such as F1 score and accuracy. Results: It was observed that the mutual information feature selection method can help the classifier achieve better predictive results. In our study, the Logistic Regression (LR) and Dual Coordinate Descent method for Logistic Regression (DCD-LR) models achieved the best results, indicating that these classification models have strong potential for classifying the more important class (i.e., the ‘Alive’ class). Conclusion: The proposed quantitative radiomic framework yielded promising results, which can guide physicians to make better and more precise decisions and increase the chance of treatment success.
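The mutual information-based feature selection this abstract mentions can be sketched for discrete features (a hedged illustration; the paper's exact discretization of the 40 radiomic features and its ranking procedure are not specified in the abstract):

```python
from collections import Counter
from math import log2

def mutual_information(feature, label):
    """I(X;Y) in bits between two equal-length discrete sequences."""
    n = len(feature)
    px, py = Counter(feature), Counter(label)
    pxy = Counter(zip(feature, label))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # Sum p(x,y) * log2( p(x,y) / (p(x) * p(y)) ) over observed pairs.
        mi += p_joint * log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi
```

Ranking features by this score and keeping the top few is one common way to realize the selection step before fitting the logistic regression models.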

A Block Adaptive Near-Lossless Compression Algorithm for Medical Image Sequences and Diagnostic Quality Assessment

  • Sharma, Urvashi
  • Sood, Meenakshi
  • Puthooran, Emjee
J Digit Imaging 2019 Journal Article, cited 0 times
Website
Near-lossless compression techniques achieve a better compression ratio than lossless techniques while maintaining a maximum error limit for each pixel. They take advantage of both lossy and lossless compression methods, providing a high compression ratio that can be used for medical images while preserving diagnostic information. The proposed algorithm uses a resolution- and modality-independent threshold-based predictor, an optimal quantization (q) level, and adaptive block size encoding. The proposed method employs a resolution-independent gradient edge detector (RIGED) for removing inter-pixel redundancy, and block adaptive arithmetic encoding (BAAE) is used after quantization to remove coding redundancy. A quantizer with an optimum q level is used to implement the proposed method for high compression efficiency and better quality of the recovered images. The proposed method is implemented on volumetric 8-bit and 16-bit standard medical images and also validated on real-time 16-bit-depth images collected from government hospitals. The results show the proposed algorithm yields a high coding performance with BPP of 1.37 and produces a high peak signal-to-noise ratio (PSNR) of 51.35 dB for the 8-bit-depth image dataset as compared with other near-lossless compression techniques. Average BPP values of 3.411 and 2.609 are obtained by the proposed technique for the 16-bit standard medical image dataset and the real-time medical dataset, respectively, with maintained image quality. The improved near-lossless predictive coding technique achieves a high compression ratio without losing diagnostic information from the image.
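The error-bounded quantization at the heart of near-lossless coding can be sketched as a uniform quantizer on prediction residuals (an illustrative sketch of the general technique, assuming a maximum per-pixel error q; the paper's RIGED predictor and BAAE coder are not reproduced here):

```python
def quantize(residual, q):
    """Map a prediction residual to a quantizer index (step size 2q + 1)."""
    sign = -1 if residual < 0 else 1
    return sign * ((abs(residual) + q) // (2 * q + 1))

def dequantize(index, q):
    """Reconstruct the residual; guarantees |residual - reconstruction| <= q."""
    return index * (2 * q + 1)
```

With q = 0 this degenerates to lossless coding; larger q trades a bounded per-pixel error for fewer symbols to entropy-code, which is how the method raises the compression ratio while keeping errors diagnostically tolerable.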

Technical Note: In silico imaging tools from the VICTRE clinical trial

  • Sharma, Diksha
  • Graff, Christian G.
  • Badal, Andreu
  • Zeng, Rongping
  • Sawant, Purva
  • Sengupta, Aunnasha
  • Dahal, Eshan
  • Badano, Aldo
Medical Physics 2019 Journal Article, cited 0 times
Website
PURPOSE: In silico imaging clinical trials are emerging alternative sources of evidence for regulatory evaluation and are typically cheaper and faster than human trials. In this Note, we describe the set of in silico imaging software tools used in the VICTRE (Virtual Clinical Trial for Regulatory Evaluation) which replicated a traditional trial using a computational pipeline. MATERIALS AND METHODS: We describe a complete imaging clinical trial software package for comparing two breast imaging modalities (digital mammography and digital breast tomosynthesis). First, digital breast models were developed based on procedural generation techniques for normal anatomy. Second, lesions were inserted in a subset of breast models. The breasts were imaged using GPU-accelerated Monte Carlo transport methods and read using image interpretation models for the presence of lesions. All in silico components were assembled into a computational pipeline. The VICTRE images were made available in DICOM format for ease of use and visualization. RESULTS: We describe an open-source collection of in silico tools for running imaging clinical trials. All tools and source codes have been made freely available. CONCLUSION: The open-source tools distributed as part of the VICTRE project facilitate the design and execution of other in silico imaging clinical trials. The entire pipeline can be run as a complete imaging chain, modified to match needs of other trial designs, or used as independent components to build additional pipelines.

Content based medical image retrieval using topic and location model

  • Shamna, P.
  • Govindan, V. K.
  • Abdul Nazeer, K. A.
Journal of biomedical informatics 2019 Journal Article, cited 0 times
Website
Background and objective: Retrieval of medical images from an anatomically diverse dataset is a challenging task. The objective of our present study is to analyse an automated medical image retrieval system incorporating topic and location probabilities to enhance the performance. Materials and methods: In this paper, we present an automated medical image retrieval system using a Topic and Location Model. The topic information is generated using the Guided Latent Dirichlet Allocation (GuidedLDA) method. A novel Location Model is proposed to incorporate the spatial information of visual words. We also introduce a new metric called position-weighted Precision (wPrecision) to measure the rank order of the retrieved images. Results: Experiments on two large medical image datasets - IRMA 2009 and Multimodal - revealed that the proposed method outperforms existing medical image retrieval systems in terms of Precision and Mean Average Precision. The proposed method achieved better Mean Average Precision (86.74%) compared to recent medical image retrieval systems using the Multimodal dataset with 7200 images. The proposed system achieved better Precision (97.5%) for the top ten images compared to recent medical image retrieval systems using the IRMA 2009 dataset with 14,410 images. Conclusion: Supplementing spatial details of visual words to the Topic Model enhances the retrieval efficiency of medical images from large repositories. Such automated medical image retrieval systems can be used to assist physicians in retrieving medical images with better precision compared to state-of-the-art retrieval systems.

Radiomics based likelihood functions for cancer diagnosis

  • Shakir, Hina
  • Deng, Yiming
  • Rasheed, Haroon
  • Khan, Tariq Mairaj Rasool
Scientific Reports 2019 Journal Article, cited 0 times
Website
Radiomic features based classifiers and neural networks have shown promising results in tumor classification. The classification performance can be further improved greatly by exploring and incorporating the discriminative features towards cancer into mathematical models. In this research work, we have developed two radiomics driven likelihood models in Computed Tomography(CT) images to classify lung, colon, head and neck cancer. Initially, two diagnostic radiomic signatures were derived by extracting 105 3-D features from 200 lung nodules and by selecting the features with higher average scores from several supervised as well as unsupervised feature ranking algorithms. The signatures obtained from both the ranking approaches were integrated into two mathematical likelihood functions for tumor classification. Validation of the likelihood functions was performed on 265 public data sets of lung, colon, head and neck cancer with high classification rate. The achieved results show robustness of the models and suggest that diagnostic mathematical functions using general tumor phenotype can be successfully developed for cancer diagnosis.

A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network

  • Sert, Eser
  • Özyurt, Fatih
  • Doğantekin, Akif
Med Hypotheses 2019 Journal Article, cited 0 times
Website
Magnetic resonance imaging (MRI) images can be used to diagnose brain tumors. Thanks to these images, some methods have so far been proposed in order to distinguish between benign and malignant brain tumors. Many systems attempting to define these tumors are based on tissue analysis methods. However, various factors such as the quality of an MRI device, noisy images and low image resolution may decrease the quality of MRI images. To eliminate these problems, super resolution approaches are preferred as a complementary source for brain tumor images. The proposed method benefits from single image super resolution (SISR) and maximum fuzzy entropy segmentation (MFES) for brain tumor segmentation on an MRI image. Later, pre-trained ResNet architecture, which is a convolutional neural network (CNN) architecture, and support vector machine (SVM) are used to perform feature extraction and classification, respectively. It was observed in experimental studies that SISR displayed a higher performance in terms of brain tumor segmentation. Similarly, it displayed a higher performance in terms of classifying brain tumor regions as well as benign and malignant brain tumors. As a result, the present study indicated that SISR yielded an accuracy rate of 95% in the diagnosis of segmented brain tumors, which exceeds brain tumor segmentation using MFES without SISR by 7.5%.

Deep Learning Architectures for Automated Image Segmentation

  • Sengupta, Debleena
2019 Thesis, cited 0 times
Website
Image segmentation is widely used in a variety of computer vision tasks, such as object localization and recognition, boundary detection, and medical imaging. This thesis proposes deep learning architectures to improve automatic object localization and boundary delineation for salient object segmentation in natural images and for 2D medical image segmentation. First, we propose and evaluate a novel dilated dense encoder-decoder architecture with a custom dilated spatial pyramid pooling block to accurately localize and delineate boundaries for salient object segmentation. The dilation offers better spatial understanding and the dense connectivity preserves features learned at shallower levels of the network for better localization. Tested on three publicly available datasets, our architecture outperforms the state-of-the-art for one and is very competitive on the other two. Second, we propose and evaluate a custom 2D dilated dense UNet architecture for accurate lesion localization and segmentation in medical images. This architecture can be utilized as a stand alone segmentation framework or used as a rich feature extracting backbone to aid other models in medical image segmentation. Our architecture outperforms all baseline models for accurate lesion localization and segmentation on a new dataset. We furthermore explore the main considerations that should be taken into account for 3D medical image segmentation, among them preprocessing techniques and specialized loss functions.

Predicting all-cause and lung cancer mortality using emphysema score progression rate between baseline and follow-up chest CT images: A comparison of risk model performances

  • Schreuder, Anton
  • Jacobs, Colin
  • Gallardo-Estrella, Leticia
  • Prokop, Mathias
  • Schaefer-Prokop, Cornelia M
  • van Ginneken, Bram
PLoS One 2019 Journal Article, cited 0 times
Website

Quantitative Delta T1 (dT1) as a Replacement for Adjudicated Central Reader Analysis of Contrast-Enhancing Tumor Burden: A Subanalysis of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 Multicenter Brain Tumor Trial.

  • Schmainda, K M
  • Prah, M A
  • Zhang, Z
  • Snyder, B S
  • Rand, S D
  • Jensen, T R
  • Barboriak, D P
  • Boxerman, J L
AJNR Am J Neuroradiol 2019 Journal Article, cited 0 times
Website
BACKGROUND AND PURPOSE: Brain tumor clinical trials requiring solid tumor assessment typically rely on the 2D manual delineation of enhancing tumors by ≥2 expert readers, a time-consuming step with poor interreader agreement. As a solution, we developed quantitative dT1 maps for the delineation of enhancing lesions. This retrospective analysis compares dT1 with 2D manual delineation of enhancing tumors acquired at 2 time points during the post-therapeutic surveillance period of the American College of Radiology Imaging Network 6677/Radiation Therapy Oncology Group 0625 (ACRIN 6677/RTOG 0625) clinical trial. MATERIALS AND METHODS: Patients enrolled in ACRIN 6677/RTOG 0625, a multicenter, randomized Phase II trial of bevacizumab in recurrent glioblastoma, underwent standard MR imaging before and after treatment initiation. For 123 patients from 23 institutions, both 2D manual delineation of enhancing tumors and dT1 datasets were evaluable at weeks 8 (n = 74) and 16 (n = 57). Using dT1, we assessed the radiologic response and progression at each time point. Percentage agreement with adjudicated 2D manual delineation of enhancing tumor reads and association between progression status and overall survival were determined. RESULTS: For identification of progression, dT1 and adjudicated 2D manual delineation of enhancing tumor reads were in perfect agreement at week 8, with 73.7% agreement at week 16. Both methods showed significant differences in overall survival at each time point. When nonprogressors were further divided into responders versus nonresponders/nonprogressors, the agreement decreased to 70.3% and 52.6%, yet dT1 showed a significant difference in overall survival at week 8 (P = .01), suggesting that dT1 may provide greater sensitivity for stratifying subpopulations. CONCLUSIONS: This study shows that dT1 can predict early progression comparable with the standard method but offers the potential for substantial time and cost savings for clinical trials.
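The percentage-agreement statistic this study reports (e.g., 73.7% at week 16) is a simple proportion of matching labels between two read methods; a minimal sketch (the trial's adjudication procedure itself is not reproduced):

```python
def percent_agreement(reads_a, reads_b):
    """Percentage of cases on which two read methods assign the same label."""
    assert len(reads_a) == len(reads_b) and len(reads_a) > 0
    matches = sum(a == b for a, b in zip(reads_a, reads_b))
    return 100.0 * matches / len(reads_a)
```

Here each element would be a per-patient call such as "progression" vs. "non-progression" from the dT1 maps and from the adjudicated manual reads.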

Regression based overall survival prediction of glioblastoma multiforme patients using a single discovery cohort of multi-institutional multi-channel MR images

  • Sanghani, Parita
  • Ang, Beng Ti
  • King, Nicolas Kon Kam
  • Ren, Hongliang
Med Biol Eng Comput 2019 Journal Article, cited 0 times
Website
Glioblastoma multiforme (GBM) are malignant brain tumors, associated with poor overall survival (OS). This study aims to predict OS of GBM patients (in days) using a regression framework and assess the impact of tumor shape features on OS prediction. Multi-channel MR image derived texture features, tumor shape, and volumetric features, and patient age were obtained for 163 GBM patients. In order to assess the impact of tumor shape features on OS prediction, two feature sets, with and without tumor shape features, were created. For the feature set with tumor shape features, the mean prediction error (MPE) was 14.6 days and its 95% confidence interval (CI) was 195.8 days. For the feature set excluding shape features, the MPE was 17.1 days and its 95% CI was observed to be 212.7 days. The coefficient of determination (R2) value obtained for the feature set with shape features was 0.92, while it was 0.90 for the feature set excluding shape features. Although marginal, inclusion of shape features improves OS prediction in GBM patients. The proposed OS prediction method using regression provides good accuracy and overcomes the limitations of GBM OS classification, like choosing data-derived or pre-decided thresholds to define the OS groups.
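The regression metrics this study reports, R² and mean prediction error (MPE) in days, can be computed as follows (a sketch; the sign convention for MPE is an assumption, since the abstract does not define it):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def mean_prediction_error(y_true, y_pred):
    """Mean signed error (predicted minus observed survival), e.g. in days."""
    return sum(p - t for t, p in zip(y_true, y_pred)) / len(y_true)
```

An R² near 0.9, as reported, means the features explain most of the variance in survival time; a small MPE with a wide confidence interval indicates low bias but substantial per-patient spread.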

Real-time interactive holographic 3D display with a 360 degrees horizontal viewing zone

  • Sando, Yusuke
  • Satoh, Kazuo
  • Barada, Daisuke
  • Yatagai, Toyohiko
Appl Opt 2019 Journal Article, cited 0 times
Website
To realize a real-time interactive holographic three-dimensional (3D) display system, we synthesize a set of 24 full high-definition (HD) binary computer-generated holograms (CGHs) based on a 3D fast-Fourier-transform-based approach. These 24 CGHs are streamed into a digital micromirror device (DMD) as a single 24-bit image at 60 Hz: 1440 CGHs are synthesized in less than a second. Continual updates of the CGHs displayed on the DMD and synchronization with a rotating mirror enlarges the horizontal viewing zone to 360 degrees using a time-division approach. We successfully demonstrate interactive manipulation, such as object rotation, rendering mode switching, and threshold value alteration, for a medical dataset of a human head obtained by X-ray computed tomography.

Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks

  • Sandfort, Veit
  • Yan, Ke
  • Pickhardt, Perry J
  • Summers, Ronald M
Scientific Reports 2019 Journal Article, cited 0 times
Website
Labeled medical imaging data is scarce and expensive to generate. To achieve generalizable deep learning models large amounts of data are needed. Standard data augmentation is a method to increase generalizability and is routinely performed. Generative adversarial networks offer a novel method for data augmentation. We evaluate the use of CycleGAN for data augmentation in CT segmentation tasks. Using a large image database we trained a CycleGAN to transform contrast CT images into non-contrast images. We then used the trained CycleGAN to augment our training using these synthetic non-contrast images. We compared the segmentation performance of a U-Net trained on the original dataset compared to a U-Net trained on the combined dataset of original data and synthetic non-contrast images. We further evaluated the U-Net segmentation performance on two separate datasets: The original contrast CT dataset on which segmentations were created and a second dataset from a different hospital containing only non-contrast CTs. We refer to these 2 separate datasets as the in-distribution and out-of-distribution datasets, respectively. We show that in several CT segmentation tasks performance is improved significantly, especially in out-of-distribution (noncontrast CT) data. For example, when training the model with standard augmentation techniques, performance of segmentation of the kidneys on out-of-distribution non-contrast images was dramatically lower than for in-distribution data (Dice score of 0.09 vs. 0.94 for out-of-distribution vs. in-distribution data, respectively, p < 0.001). When the kidney model was trained with CycleGAN augmentation techniques, the out-of-distribution (non-contrast) performance increased dramatically (from a Dice score of 0.09 to 0.66, p < 0.001). Improvements for the liver and spleen were smaller, from 0.86 to 0.89 and 0.65 to 0.69, respectively. 
We believe this method will be valuable to medical imaging researchers to reduce manual segmentation effort and cost in CT imaging.
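The Dice score used throughout this abstract to quantify segmentation overlap (e.g., 0.09 vs. 0.94) can be computed over flat binary masks (minimal sketch):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * inter / total if total else 1.0
```

In practice the masks are flattened 3D CT label volumes (e.g., predicted vs. reference kidney voxels), but the formula is identical.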

Resolving the molecular complexity of brain tumors through machine learning approaches for precision medicine

  • Sandanaraj, Edwin
2019 Thesis, cited 0 times
Website
Glioblastoma (GBM) tumors are highly aggressive malignant brain tumors and are resistant to conventional therapies. The Cancer Genome Atlas (TCGA) efforts distinguished histologically similar GBM tumors into unique molecular subtypes. The World Health Organization (WHO) has also since incorporated key molecular indicators such as IDH mutations and 1p/19q co-deletions in the clinical classification scheme. The National Neuroscience Institute (NNI) Brain Tumor Resource distinguishes itself as the exclusive collection of patient tumors with corresponding live cells capable of re-creating the full spectrum of the original patient tumor molecular heterogeneity. These cells are thus important to re-create “mouse-patient tumor replicas” that can be prospectively tested with novel compounds, yet have retrospective clinical history, transcriptomic data and tissue paraffin blocks for data mining. My thesis aims to establish a computational framework for the molecular subtyping of brain tumors using machine learning approaches. The applicability of the empirical Bayes model has been demonstrated in the integration of various transcriptomic databases. We utilize predictive algorithms such as template-based, centroid-based, connectivity map (CMAP) and recursive feature elimination combined with random forest approaches to stratify primary tumors and GBM cells. These subtyping approaches serve as key factors for the development of predictive models and eventually, improving precision medicine strategies. We validate the robustness and clinical relevance of our Brain Tumor Resource by evaluating two critical pathways for GBM maintenance. We identify a sialyltransferase enzyme (ST3Gal1) transcriptomic program contributing to tumorigenicity and tumor cell invasiveness. Further, we generate a STAT3 functionally-tuned signature and demonstrate its pivotal role in patient prognosis and chemoresistance. We show that IGF1-R mediates resistance in non-responders to STAT3 inhibitors. 
Taken together, our studies demonstrate the application of machine learning approaches in revealing molecular insights into brain tumors and, subsequently, the translation of these integrative analyses into more effective targeted therapies in the clinic.

Classification of Lung CT Images using BRISK Features

  • Sambasivarao, B.
  • Prathiba, G.
International Journal of Engineering and Advanced Technology (IJEAT) 2019 Journal Article, cited 0 times
Website
Lung cancer is a leading cause of cancer death. To increase patients' survival rates, early detection of cancer is required. Lung tumors are mainly of two types: cancerous (malignant) and non-cancerous (benign). In this paper, work is done on lung images obtained from the Society of Photographic Instrumentation Engineers (SPIE) database, which contains normal, benign and malignant images. In this work, 300 images from the database are used, of which 150 are benign and 150 are malignant. Feature points of lung tumor images are extracted using Binary Robust Invariant Scalable Keypoints (BRISK). BRISK achieves matching quality comparable to state-of-the-art algorithms at much lower computational cost. BRISK divides the pairs of pixels surrounding a keypoint into two subsets: short-distance and long-distance pairs. The orientation of the feature point is estimated from local intensity gradients over the long-distance pairs, and the short-distance pairs are rotated by this orientation before the descriptor is built. These BRISK features are used by a classifier to classify lung tumors as either benign or malignant. Performance is evaluated by calculating the accuracy.
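The short-/long-distance split of pixel pairs described above is the core of the BRISK descriptor. A minimal sketch of that partitioning step (illustrative constant thresholds; the real detector scales them by keypoint size, and this is not OpenCV's implementation):

```python
import itertools
import math

def partition_pairs(points, d_short=9.75, d_long=13.67):
    """Split all pairs of sampling points around a keypoint into
    BRISK-style short-distance and long-distance subsets.

    Long pairs drive orientation estimation; short pairs produce
    the binary descriptor bits after rotation.
    """
    short_pairs, long_pairs = [], []
    for a, b in itertools.combinations(points, 2):
        d = math.dist(a, b)
        if d < d_short:
            short_pairs.append((a, b))
        elif d > d_long:
            long_pairs.append((a, b))
    return short_pairs, long_pairs
```

Pairs whose distance falls between the two thresholds belong to neither subset, mirroring the original scheme.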

Automated delineation of non‐small cell lung cancer: A step toward quantitative reasoning in medical decision science

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae‐Sun
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Website
Quantitative reasoning in medical decision science relies on the delineation of pathological objects. For example, evidence-based clinical decisions regarding lung diseases require the segmentation of nodules, tumors, or cancers. Non-small cell lung cancer (NSCLC) tends to be large, irregularly shaped, and to grow against surrounding structures, imposing challenges in segmentation even for expert clinicians. An automated delineation tool based on spatial analysis was developed and studied on 25 sets of computed tomography scans of NSCLC. Manual and automated delineations were compared, and the proposed method exhibited robustness in terms of tumor size (5.32-18.24 mm), shape (spherical or irregular), contouring (lobulated, spiculated, or cavitated), localization (solitary, pleural, mediastinal, endobronchial, or tagging), and laterality (left or right lobe), with accuracy between 80% and 99%. Small discrepancies observed between the manual and automated delineations may arise from variability in practitioners' definitions of the region of interest or imaging artifacts that reduce the tissue resolution.

Are shape morphologies associated with survival? A potential shape-based biomarker predicting survival in lung cancer

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae-Sun
J Cancer Res Clin Oncol 2019 Journal Article, cited 0 times
Website
PURPOSE: Imaging biomarkers (IBMs) are increasingly investigated as prognostic indicators. IBMs might be capable of assisting treatment selection by providing useful insights into tumor-specific factors in a non-invasive manner. METHODS: We investigated six three-dimensional shape-based IBMs: eccentricities between (I) intermediate-major axis (Eimaj), (II) intermediate-minor axis (Eimin), (III) major-minor axis (Emj-mn) and volumetric index of (I) sphericity (VioS), (II) flattening (VioF), (III) elongating (VioE). Additionally, we investigated previously established two-dimensional shape IBMs: eccentricity (E), index of sphericity (IoS), and minor-to-major axis length (Mn_Mj). IBMs were compared in terms of their predictive performance for 5-year overall survival in two independent cohorts of patients with lung cancer. Cohort 1 received surgical excision, while cohort 2 received radiation therapy alone or chemo-radiation therapy. Univariate and multivariate survival analyses were performed. Correlations with clinical parameters were evaluated using analysis of variance. IBM reproducibility was assessed using concordance correlation coefficients (CCCs). RESULTS: E was associated with reduced survival in cohort 1 (hazard ratio [HR]: 0.664). Eimin and VioF were associated with reduced survival in cohort 2 (HR 1.477 and 1.701). VioS was associated with reduced survival in cohorts 1 and 2 (HR 1.758 and 1.472). Spherical tumors correlated with shorter survival durations than did irregular tumors (median survival difference: 1.21 and 0.35 years in cohorts 1 and 2, respectively). VioS was a significant predictor of survival in multivariate analyses of both cohorts. All IBMs showed good reproducibility (CCC ranged between 0.86-0.98). CONCLUSIONS: In both investigated cohorts, VioS successfully linked shape morphology to patient survival.

Multi-Disease Segmentation of Gliomas and White Matter Hyperintensities in the BraTS Data Using a 3D Convolutional Neural Network

  • Rudie, Jeffrey D.
  • Weiss, David A.
  • Saluja, Rachit
  • Rauschecker, Andreas M.
  • Wang, Jiancong
  • Sugrue, Leo
  • Bakas, Spyridon
  • Colby, John B.
Frontiers in computational neuroscience 2019 Journal Article, cited 0 times
An important challenge in segmenting real-world biomedical imaging data is the presence of multiple disease processes within individual subjects. Most adults above age 60 exhibit a variable degree of small vessel ischemic disease, as well as chronic infarcts, which will manifest as white matter hyperintensities (WMH) on brain MRIs. Subjects diagnosed with gliomas will also typically exhibit some degree of abnormal T2 signal due to WMH, rather than just due to tumor. We sought to develop a fully automated algorithm to distinguish and quantify these distinct disease processes within individual subjects' brain MRIs. To address this multi-disease problem, we trained a 3D U-Net to distinguish between abnormal signal arising from tumors vs. WMH in the 3D multi-parametric MRI (mpMRI, i.e., native T1-weighted, T1-post-contrast, T2, T2-FLAIR) scans of the International Brain Tumor Segmentation (BraTS) 2018 dataset (n = 285 training, n = 66 validation). Our trained neuroradiologist manually annotated WMH on the BraTS training subjects, finding that 69% of subjects had WMH. Our 3D U-Net model had a 4-channel 3D input patch (80 × 80 × 80) from mpMRI, four encoding and decoding layers, and an output of either four [background, active tumor (AT), necrotic core (NCR), peritumoral edematous/infiltrated tissue (ED)] or five classes (adding WMH as the fifth class). For both the four- and five-class output models, the median Dice for whole tumor (WT) extent (i.e., union of AT, ED, NCR) was 0.92 in both training and validation sets. Notably, the five-class model achieved significantly (p = 0.002) lower/better Hausdorff distances for WT extent in the training subjects. There was strong positive correlation between manually segmented and predicted volumes for WT (r = 0.96) and WMH (r = 0.89). Larger lesion volumes were positively correlated with higher/better Dice scores for WT (r = 0.33), WMH (r = 0.34), and across all lesions (r = 0.89) on a log10-transformed scale. 
While the median Dice for WMH was 0.42 across training subjects with WMH, the median Dice was 0.62 for those with at least 5 cm³ of WMH. We anticipate that the development of computational algorithms able to model multiple diseases within a single subject will be a critical step toward translating and integrating artificial intelligence systems into the heterogeneous real-world clinical workflow.
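The Dice similarity coefficient reported throughout these results compares the overlap of two binary masks. A minimal NumPy sketch (an illustrative implementation, not the authors' code):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0
```

A Dice of 0.92 for whole-tumor extent, as above, means the predicted and manual masks overlap almost completely relative to their combined size.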

Conditional Generative Adversarial Refinement Networks for Unbalanced Medical Image Semantic Segmentation

  • Rezaei, Mina
  • Yang, Haojin
  • Harmuth, Konstantin
  • Meinel, Christoph
2019 Conference Proceedings, cited 0 times
Website

Optimizing deep belief network parameters using grasshopper algorithm for liver disease classification

  • Renukadevi, Thangavel
  • Karunakaran, Saminathan
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Image processing plays a vital role in many areas, such as healthcare, the military, science, and business, due to its wide variety of advantages and applications. Detection of liver disease in computed tomography (CT) is one of the difficult tasks in the medical field. Previous approaches to classifying liver disease rely on hand-crafted features and conventional classifiers, but their classification results are not optimal. In this article, we propose a novel method utilizing a deep belief network (DBN) with the grasshopper optimization algorithm (GOA) for liver disease classification. Initially, the image quality is enhanced by preprocessing techniques, and then features such as texture, color and shape are extracted. The extracted features are reduced using a dimensionality reduction method, principal component analysis (PCA). The DBN parameters are then optimized using GOA for recognizing liver disease. The experiments are performed on real-time and open-source CT image datasets comprising normal, cyst, hepatoma, cavernous hemangioma, fatty liver, metastasis, cirrhosis, and tumor samples. The proposed method yields 98% accuracy, 95.82% sensitivity, 97.52% specificity, 98.53% precision, and a 96.8% F1-score in the simulation process when compared with other existing techniques.
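The PCA dimensionality-reduction step mentioned above projects the feature matrix onto its top principal components. A generic NumPy sketch of that step (an illustration, not the authors' pipeline):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature matrix X (n_samples x n_features) onto its
    top-k principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)                      # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # k-dimensional scores
```

In a pipeline like the one described, the reduced scores would then be fed to the DBN classifier in place of the raw texture/color/shape features.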

Accelerating Machine Learning with Training Data Management

  • Ratner, Alexander Jason
2019 Thesis, cited 1 times
Website
One of the biggest bottlenecks in developing machine learning applications today is the need for large hand-labeled training datasets. Even at the world's most sophisticated technology companies, and especially at other organizations across science, medicine, industry, and government, the time and monetary cost of labeling and managing large training datasets is often the blocking factor in using machine learning. In this thesis, we describe work on training data management systems that enable users to programmatically build and manage training datasets, rather than labeling and managing them by hand, and present algorithms and supporting theory for automatically modeling this noisier process of training set specification in order to improve the resulting training set quality. We then describe extensive empirical results and real-world deployments demonstrating that programmatically building, managing, and modeling training sets in this way can lead to radically faster, more flexible, and more accessible ways of developing machine learning applications. We start by describing data programming, a paradigm for labeling training datasets programmatically rather than by hand, and Snorkel, an open source training data management system built around data programming that has been used by major technology companies, academic labs, and government agencies to build machine learning applications in days or weeks rather than months or years. In Snorkel, rather than hand-labeling training data, users write programmatic operators called labeling functions, which label data using various heuristic or weak supervision strategies such as pattern matching, distant supervision, and other models. These labeling functions can have noisy, conflicting, and correlated outputs, which Snorkel models and combines into clean training labels without requiring any ground truth using theoretically consistent modeling approaches we develop. 
We then report on extensive empirical validations, user studies, and real-world applications of Snorkel in industrial, scientific, medical, and other use cases ranging from knowledge base construction from text data to medical monitoring over image and video data. Next, we will describe two other approaches for enabling users to programmatically build and manage training datasets, both currently integrated into the Snorkel open source framework: Snorkel MeTaL, an extension of data programming and Snorkel to the setting where users have multiple related classification tasks, in particular focusing on multi-task learning; and TANDA, a system for optimizing and managing strategies for data augmentation, a critical training dataset management technique wherein a labeled dataset is artificially expanded by transforming data points. Finally, we will conclude by outlining future research directions for further accelerating and democratizing machine learning workflows, such as higher-level programmatic interfaces and massively multi-task frameworks.

Multivariate Analysis of Preoperative Magnetic Resonance Imaging Reveals Transcriptomic Classification of de novo Glioblastoma Patients

  • Rathore, Saima
  • Akbari, Hamed
  • Bakas, Spyridon
  • Pisapia, Jared M
  • Shukla, Gaurav
  • Rudie, Jeffrey D
  • Da, Xiao
  • Davuluri, Ramana V
  • Dahmane, Nadia
  • O'Rourke, Donald M
Frontiers in computational neuroscience 2019 Journal Article, cited 0 times

Reg R-CNN: Lesion Detection and Grading Under Noisy Labels

  • Ramien, Gregor N.
  • Jaeger, Paul F.
  • Kohl, Simon A. A.
  • Maier-Hein, Klaus H.
2019 Conference Proceedings, cited 0 times
Website
For the task of concurrently detecting and categorizing objects, the medical imaging community commonly adopts methods developed on natural images. Current state-of-the-art object detectors are comprised of two stages: the first stage generates region proposals, the second stage subsequently categorizes them. Unlike in natural images, however, for anatomical structures of interest such as tumors, the appearance in the image (e.g., scale or intensity) links to a malignancy grade that lies on a continuous ordinal scale. While classification models discard this ordinal relation between grades by discretizing the continuous scale to an unordered bag of categories, regression models are trained with distance metrics, which preserve the relation. This advantage becomes all the more important in the setting of label confusions on ambiguous data sets, which is the usual case with medical images. To this end, we propose Reg R-CNN, which replaces the second-stage classification model of a current object detector with a regression model. We show the superiority of our approach on a public data set with 1026 patients and a series of toy experiments. Code will be available at github.com/MIC-DKFZ/RegRCNN.

Brain Tumor Classification Using MRI Images with K-Nearest Neighbor Method

  • Ramdlon, Rafi Haidar
  • Martiana Kusumaningtyas, Entin
  • Karlita, Tita
2019 Conference Proceedings, cited 0 times
Accuracy in diagnosing tumor type from MRI results is required to establish appropriate medical treatment. MRI results can be examined computationally using the K-Nearest Neighbor method, a basic classification technique in image processing. The tumor classification system is designed to detect tumor and edema in T1 and T2 image sequences, as well as to label and classify the tumor type. The system interprets only the axial sections of the MRI results, which are classified into three classes: Astrocytoma, Glioblastoma, and Oligodendroglioma. To detect the tumor area, basic image processing techniques are employed, comprising image enhancement, image binarization, morphological operations, and watershed segmentation. Tumor classification is applied after segmentation, using shape feature extraction. The tumor classification accuracy obtained was 89.5%, providing clearer and more specific information regarding tumor detection.
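Majority-vote K-Nearest Neighbor classification, as used here, can be sketched in a few lines of pure Python (a toy illustration over generic feature vectors, not the authors' system):

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Assign `query` the majority label among its k nearest
    training vectors under Euclidean distance."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]
```

In a system like the one described, `train` would hold shape features extracted from segmented tumors and `labels` the three tumor classes.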

Texture Classification Study of MR Images for Hepatocellular Carcinoma

  • QIU, Jia-jun
  • WU, Yue
  • HUI, Bei
  • LIU, Yan-bo
Journal of University of Electronic Science and Technology of China 2019 Journal Article, cited 0 times
Website
Combining the wavelet multi-resolution analysis method and statistical analysis, a composite texture classification model is proposed to evaluate its value in the computer-aided diagnosis of hepatocellular carcinoma (HCC) and normal liver tissue based on magnetic resonance (MR) images. First, training samples are divided into two groups by category, and statistics of wavelet coefficients are calculated in each group. Second, two discretizations are performed on the wavelet coefficients of a new sample based on the two sets of statistical results, and two groups of features are extracted by histogram, co-occurrence matrix, run-length matrix, etc. Finally, classification is performed twice based on the two groups of features to calculate the category attribute probabilities, and then a decision is made. The experimental results demonstrate that the proposed model obtains better classification performance than routine methods, which is valuable for the computer-aided diagnosis of HCC and normal liver tissue based on MR images.

A Reversible and Imperceptible Watermarking Approach for Ensuring the Integrity and Authenticity of Brain MR Images

  • Qasim, Asaad Flayyih
2019 Thesis, cited 0 times
Website
The digital medical workflow has many circumstances in which the image data can be manipulated both within the secured Hospital Information Systems (HIS) and outside, as images are viewed, extracted and exchanged. This potentially raises ethical and legal concerns regarding modifying image details that are crucial in medical examinations. Digital watermarking is recognised as a robust technique for enhancing trust within medical imaging by detecting alterations applied to medical images. Despite its efficiency, digital watermarking has not been widely used in medical imaging. Existing watermarking approaches often lack validation of their appropriateness to medical domains. Particularly, several research gaps have been identified: (i) essential requirements for the watermarking of medical images are not well defined; (ii) no standard approach can be found in the literature to evaluate the imperceptibility of watermarked images; and (iii) no study has been conducted before to test digital watermarking in a medical imaging workflow. This research aims to investigate digital watermarking by designing, analysing and applying it to medical images, to confirm that manipulations can be detected and tracked. In addressing these gaps, a number of original contributions have been presented. A new reversible and imperceptible watermarking approach is presented to detect manipulations of brain Magnetic Resonance (MR) images based on the Difference Expansion (DE) technique. Experimental results show that the proposed method, whilst fully reversible, can also realise a watermarked image with low degradation for reasonable and controllable embedding capacity. 
This is fulfilled by encoding the data into smooth regions (blocks that have the least differences between their pixel values) inside the Region of Interest (ROI) part of medical images and also through the elimination of the large location map (locations of pixels used for encoding the data) required at extraction to retrieve the encoded data. This compares favourably to outcomes reported under current state-of-the-art techniques in terms of visual image quality of watermarked images. This was also evaluated through conducting a novel visual assessment based on relative Visual Grading Analysis (relative VGA) to define a perceptual threshold at which modifications become noticeable to radiographers. The proposed approach is then integrated into medical systems to verify its validity and applicability in a real application scenario of medical imaging where medical images are generated, exchanged and archived. This enhanced security measure, therefore, enables the detection of image manipulations, by an imperceptible and reversible watermarking approach, that may establish increased trust in the digital medical imaging workflow.
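Difference Expansion, the embedding technique named above, hides one bit in a pixel pair and is exactly reversible. A minimal sketch of the classic scheme (Tian-style DE; not necessarily the thesis's exact variant, and omitting the expandability/overflow checks and location map a real system needs):

```python
def de_embed(x, y, bit):
    """Hide one bit in the pixel pair (x, y) by expanding their
    difference: h' = 2h + bit, reconstructed around the mean l."""
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit                       # expanded difference
    return l + (h2 + 1) // 2, l - h2 // 2  # watermarked pair

def de_extract(x2, y2):
    """Recover the original pixel pair and the hidden bit."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1               # arithmetic shift = floor(h2/2)
    return l + (h + 1) // 2, l - h // 2, bit
```

Round-tripping a pair through `de_embed` and `de_extract` returns the exact original pixels, which is what makes the watermark reversible.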

Unpaired Synthetic Image Generation in Radiology Using GANs

  • Prokopenko, Denis
  • Stadelmann, Joël Valentin
  • Schulz, Heinrich
  • Renisch, Steffen
  • Dylov, Dmitry V.
2019 Journal Article, cited 1 times
Website
In this work, we investigate approaches to generating synthetic Computed Tomography (CT) images from real Magnetic Resonance Imaging (MRI) data. Generating radiological scans has grown in popularity in recent years due to its promise to enable single-modality radiotherapy planning in clinical oncology, where the co-registration of radiological modalities is cumbersome. We rely on Generative Adversarial Network (GAN) models with cycle consistency, which permit unpaired image-to-image translation between the modalities. We also introduce a perceptual loss function term and a coordinate convolutional layer to further enhance the quality of translated images. Unsharp masking and the Super-Resolution GAN (SRGAN) were considered to improve the quality of synthetic images. The proposed architectures were trained on the unpaired MRI-CT data and then evaluated on a paired brain dataset. The resulting CT scans were generated with mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) scores of 60.83 HU, 17.21 dB, and 0.8, respectively. DualGAN with the perceptual loss function term and coordinate convolutional layer proved to perform best. The MRI-CT translation approach holds potential to eliminate the need for patients to undergo both examinations and to be clinically accepted as a new tool for radiotherapy planning.
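The MAE and PSNR figures quoted above follow standard definitions. A minimal NumPy sketch of both metrics (SSIM is omitted for brevity; this is a generic illustration, not the authors' evaluation code):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images (e.g. in HU)."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")
    return float(10 * np.log10(data_range ** 2 / mse))
```

For CT synthesis, `a` and `b` would be the real and synthetic scans and `data_range` the intensity span of the reference image.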

Disorder in Pixel-Level Edge Directions on T1WI Is Associated with the Degree of Radiation Necrosis in Primary and Metastatic Brain Tumors: Preliminary Findings

  • Prasanna, P
  • Rogers, L
  • Lam, TC
  • Cohen, M
  • Siddalingappa, A
  • Wolansky, L
  • Pinho, M
  • Gupta, A
  • Hatanpaa, KJ
  • Madabhushi, A
American Journal of Neuroradiology 2019 Journal Article, cited 0 times
Website

Deep multi-modality collaborative learning for distant metastases predication in PET-CT soft-tissue sarcoma studies

  • Peng, Yige
  • Bi, Lei
  • Guo, Yuyu
  • Feng, Dagan
  • Fulham, Michael
  • Kim, Jinman
2019 Conference Proceedings, cited 0 times

CT-based radiomic features predict tumor grading and have prognostic value in patients with soft tissue sarcomas treated with neoadjuvant radiation therapy

  • Peeken, J. C.
  • Bernhofer, M.
  • Spraker, M. B.
  • Pfeiffer, D.
  • Devecka, M.
  • Thamer, A.
  • Shouman, M. A.
  • Ott, A.
  • Nusslin, F.
  • Mayr, N. A.
  • Rost, B.
  • Nyflot, M. J.
  • Combs, S. E.
Radiother Oncol 2019 Journal Article, cited 0 times
Website
PURPOSE: In soft tissue sarcoma (STS) patients systemic progression and survival remain comparably low despite low local recurrence rates. In this work, we investigated whether quantitative imaging features ("radiomics") of radiotherapy planning CT-scans carry a prognostic value for pre-therapeutic risk assessment. METHODS: CT-scans, tumor grade, and clinical information were collected from three independent retrospective cohorts of 83 (TUM), 87 (UW) and 51 (McGill) STS patients, respectively. After manual segmentation and preprocessing, 1358 radiomic features were extracted. Feature reduction and machine learning modeling for the prediction of grading, overall survival (OS), distant (DPFS) and local (LPFS) progression free survival were performed followed by external validation. RESULTS: Radiomic models were able to differentiate grade 3 from non-grade 3 STS (area under the receiver operator characteristic curve (AUC): 0.64). The Radiomic models were able to predict OS (C-index: 0.73), DPFS (C-index: 0.68) and LPFS (C-index: 0.77) in the validation cohort. A combined clinical-radiomics model showed the best prediction for OS (C-index: 0.76). The radiomic scores were significantly associated in univariate and multivariate cox regression and allowed for significant risk stratification for all three endpoints. CONCLUSION: This is the first report demonstrating a prognostic potential and tumor grading differentiation by CT-based radiomics.
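The C-index used for the survival endpoints above is Harrell's concordance index: the fraction of comparable patient pairs in which the model assigns higher risk to the patient who experiences the event sooner. A simplified sketch (tied event times are skipped here, which full implementations handle more carefully):

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's C-index. `events[i]` is 1 if the event was observed
    (uncensored). Ties in predicted risk count as 0.5."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] > times[j]:
            i, j = j, i                 # ensure i has the earlier time
        if times[i] == times[j] or not events[i]:
            continue                    # pair is not comparable
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1.0
        elif risks[i] == risks[j]:
            concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random risk ordering, while the 0.73-0.77 values reported above indicate substantially better-than-chance ranking of patient outcomes.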

Decorin Expression Is Associated With Diffusion MR Phenotypes in Glioblastoma

  • Patel, Kunal S.
  • Raymond, Catalina
  • Yao, Jingwen
  • Tsung, Joseph
  • Liau, Linda M.
  • Everson, Richard
  • Cloughesy, Timothy F.
  • Ellingson, Benjamin
Neurosurgery 2019 Journal Article, cited 0 times
Abstract INTRODUCTION Significant evidence from multiple phase II trials have suggested diffusion-weighted imaging estimates of apparent diffusion coefficient (ADC) are a predictive imaging biomarker for survival benefit for recurrent glioblastoma when treated with anti-VEGF therapies, including bevacizumab, cediranib, and cabozantinib. Despite this observation, the underlying mechanism linking anti-VEGF therapeutic efficacy with diffusion MR characteristics remains unknown. We hypothesized that a high expression of decorin, a small proteoglycan that has been associated with sequestration of pro-angiogenic signaling as well as reduction in the viscosity of the extracellular environment, may be associated with elevated ADC. METHODS A differential gene expression analysis was carried out in human glioblastoma samples in whom preoperative diffusion imaging was obtained. ADC histogram analysis was carried out to calculate preoperative ADCL values, the average ADC in the lower distribution using a double Gaussian mixed model. The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA) databases were queried to identify diffusion imaging and levels of decorin protein expression. Patients with recurrent glioblastoma who undergo resection prospectively had targeted biopsies based on the ADC analysis collected. These samples were stained for decorin and quantified using whole-slide image analysis software. RESULTS Differential gene expression analysis between tumors associated with high and low preoperative ADCL showed that patients with high ADCL had increased decorin gene expression. Patients from the TCGA database with elevated ADCL had a significantly higher level of decorin gene expression (P = .01). These patients had a survival advantage with a log-rank analysis (P = .002). Patients with preoperative diffusion imaging had multiple targeted intraoperative biopsies stained for decorin. 
Patients with high ADCL had increased decorin expression on immunohistochemistry (P = .002). CONCLUSION Increased ADCL on diffusion MR imaging is associated with high decorin expression as well as increased survival in glioblastoma. Decorin may play an important role in the imaging features on diffusion MR and in anti-VEGF treatment efficacy. Decorin expression may serve as a future therapeutic target in patients with favorable diffusion MR characteristics.

Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy

  • Özyurt, Fatih
  • Sert, Eser
  • Avci, Engin
  • Dogantekin, Esin
Measurement 2019 Journal Article, cited 0 times
Brain tumor classification is a challenging task in the field of medical image processing. The present study proposes a hybrid method using Neutrosophy and a Convolutional Neural Network (NS-CNN). It aims to classify tumor regions segmented from brain images as benign or malignant. In the first stage, MRI images were segmented using the neutrosophic set-expert maximum fuzzy-sure entropy (NS-EMFSE) approach. The features of the segmented brain images in the classification stage were obtained by CNN and classified using SVM and KNN classifiers. Experimental evaluation was carried out based on 5-fold cross-validation on 80 benign tumors and 80 malignant tumors. The findings demonstrated that the CNN features displayed high classification performance with different classifiers. Experimental results indicate that CNN features displayed better classification performance with SVM, validating the output data with an average success rate of 95.62%.

Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence

  • Owais, Muhammad
  • Arsalan, Muhammad
  • Choi, Jiho
  • Park, Kang Ryoung
J Clin Med 2019 Journal Article, cited 0 times
Website
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in a previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. In practice, a medical doctor usually refers to various types of imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound, of various organs for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on massive collections of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities, using an artificial intelligence technique named the enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42% respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
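The accuracy and F1-score metrics used to compare the retrieval systems above follow standard definitions; a minimal sketch of the per-class F1 computation (a generic illustration, not the authors' evaluation code):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for one class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For a 50-class problem like the one described, per-class F1 values would typically be averaged (macro or weighted) to obtain a single score such as the 82.42% reported.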

Autocorrection of lung boundary on 3D CT lung cancer images

  • Nurfauzi, R.
  • Nugroho, H. A.
  • Ardiyanto, I.
  • Frannita, E. L.
Journal of King Saud University - Computer and Information Sciences 2019 Journal Article, cited 0 times
Website
Lung cancer has the highest mortality rate among all types of cancer in men. Juxta-pleural and juxta-vascular nodules are the most common nodules located on the lung surface. A computer-aided detection (CADe) system is effective for assisting radiologists in diagnosing lung nodules. However, the lung segmentation step requires sophisticated methods when juxta-pleural and juxta-vascular nodules are present. Fast computational time and low error in covering nodule areas are the aims of this study. The proposed method consists of five stages, namely ground truth (GT) extraction, data preparation, tracheal extraction, separation of lung fusion, and lung border correction. The data consist of 57 3D CT lung cancer images taken from the LIDC-IDRI dataset; nodule areas are defined as the outer regions labeled by four radiologists. The proposed method achieves the fastest computational time of 0.32 s per slice, or 60 times faster than conventional adaptive border marching (ABM). Moreover, it produces a nodule under-segmentation value as low as 14.6%. This indicates that the proposed method has the potential to be embedded in a lung CADe system to cover juxta-pleural and juxta-vascular nodule areas in lung segmentation.

Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network

  • Nomura, Yusuke
  • Xu, Qiong
  • Shirato, Hiroki
  • Shimizu, Shinichi
  • Xing, Lei
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN). METHODS: A U-net based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consists of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of x-ray projection and the corresponding scatter-only distribution in nonanthropomorphic phantoms taken in full-fan scan were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. An end-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets during training. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared to a conventional projection-domain scatter correction method named fast adaptive scatter kernel superposition (fASKS) method using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied for the same CNN to evaluate the impact of loss functions on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scan by using transfer learning with additional 360 half-fan projection pairs of nonanthropomorphic phantoms. The tuned-CNN model for half-fan scan was compared with the fASKS method as well as the CNN-based method without the fine-tuning using additional lung phantom projections. 
RESULTS: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield units (HUs) than the fASKS-based method. The root mean squared error of the CNN-corrected projections was improved to 0.0862, compared with 0.278 for uncorrected projections and 0.117 for the fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near air or bone interfaces. All four image quality measures, which include mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to remove scatter in half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. The SSIM value of the tuned-CNN-corrected images was 0.9993, compared with 0.9984 for the non-tuned-CNN-corrected images and 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient: correcting the 360 projections took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core-i7 CPU) with a single NVIDIA GTX 1070 GPU. CONCLUSIONS: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.

Classification of brain tumor isocitrate dehydrogenase status using MRI and deep learning

  • Nalawade, S.
  • Murugesan, G. K.
  • Vejdani-Jahromi, M.
  • Fisicaro, R. A.
  • Bangalore Yogananda, C. G.
  • Wagner, B.
  • Mickey, B.
  • Maher, E.
  • Pinho, M. C.
  • Fei, B.
  • Madhuranthakam, A. J.
  • Maldjian, J. A.
J Med Imaging (Bellingham) 2019 Journal Article, cited 0 times
Website
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects using fivefold cross validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. Mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. Test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
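The subject-separation safeguard this abstract emphasizes (keeping every slice of a subject within a single fold so slice-wise randomization cannot leak subjects across folds) can be sketched in a few lines. The record structure, fold count, and function name below are hypothetical, not the authors' code.

```python
import random

def subject_wise_folds(slices, n_folds=5, seed=0):
    """Assign all slices of a subject to the same fold to avoid
    subject-level data leakage in slice-wise cross validation."""
    subjects = sorted({s["subject"] for s in slices})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    fold_of = {subj: i % n_folds for i, subj in enumerate(subjects)}
    folds = [[] for _ in range(n_folds)]
    for s in slices:
        folds[fold_of[s["subject"]]].append(s)
    return folds

# Hypothetical slice records: 4 subjects x 3 axial slices each.
records = [{"subject": f"S{i}", "slice": k} for i in range(4) for k in range(3)]
folds = subject_wise_folds(records, n_folds=2)
train_subjects = {r["subject"] for r in folds[0]}
test_subjects = {r["subject"] for r in folds[1]}
assert train_subjects.isdisjoint(test_subjects)  # no subject in both folds
```

Shuffling subjects (not slices) before assignment is what prevents the upward accuracy bias the abstract warns about.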

Prediction of malignant glioma grades using contrast-enhanced T1-weighted and T2-weighted magnetic resonance images based on a radiomic analysis

  • Nakamoto, Takahiro
  • Takahashi, Wataru
  • Haga, Akihiro
  • Takahashi, Satoshi
  • Kiryu, Shigeru
  • Nawa, Kanabu
  • Ohta, Takeshi
  • Ozaki, Sho
  • Nozawa, Yuki
  • Tanaka, Shota
  • Mukasa, Akitake
  • Nakagawa, Keiichi
Sci Rep 2019 Journal Article, cited 0 times
Website
We conducted a feasibility study to predict malignant glioma grades via radiomic analysis using contrast-enhanced T1-weighted magnetic resonance images (CE-T1WIs) and T2-weighted magnetic resonance images (T2WIs). We proposed a framework and applied it to CE-T1WIs and T2WIs (with tumor region data) acquired preoperatively from 157 patients with malignant glioma (grade III: 55, grade IV: 102) as the primary dataset and 67 patients with malignant glioma (grade III: 22, grade IV: 45) as the validation dataset. Radiomic features such as size/shape, intensity, histogram, and texture features were extracted from the tumor regions on the CE-T1WIs and T2WIs. The Wilcoxon-Mann-Whitney (WMW) test and least absolute shrinkage and selection operator logistic regression (LASSO-LR) were employed to select the radiomic features. Various machine learning (ML) algorithms were used to construct prediction models for the malignant glioma grades using the selected radiomic features. Leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of the prediction models in the primary dataset. The selected radiomic features for all folds in the LOOCV of the primary dataset were used to perform an independent validation. As evaluation indices, accuracies, sensitivities, specificities, and values for the area under receiver operating characteristic curve (or simply the area under the curve (AUC)) for all prediction models were calculated. The mean AUC value for all prediction models constructed by the ML algorithms in the LOOCV of the primary dataset was 0.902 +/- 0.024 (95% CI (confidence interval), 0.873-0.932). In the independent validation, the mean AUC value for all prediction models was 0.747 +/- 0.034 (95% CI, 0.705-0.790). The results of this study suggest that the malignant glioma grades could be sufficiently and easily predicted by preparing the CE-T1WIs, T2WIs, and tumor delineations for each patient. 
Our proposed framework may be an effective tool for preoperatively grading malignant gliomas.

Quantitative and Qualitative Evaluation of Convolutional Neural Networks with a Deeper U-Net for Sparse-View Computed Tomography Reconstruction

  • Nakai, H.
  • Nishio, M.
  • Yamashita, R.
  • Ono, A.
  • Nakao, K. K.
  • Fujimoto, K.
  • Togashi, K.
Acad Radiol 2019 Journal Article, cited 0 times
Website
Rationale and Objectives: To evaluate the utility of a convolutional neural network (CNN) with an increased number of contracting and expanding paths of U-net for sparse-view CT reconstruction. Materials and Methods: This study used 60 anonymized chest CT cases from a public database called “The Cancer Imaging Archive”. Eight thousand images from 40 cases were used for training. Eight hundred images and 80 images from another 20 cases were used for quantitative and qualitative evaluation, respectively. Sparse-view CT images subsampled by a factor of 20 were simulated, and two CNNs were trained to create denoised images from the sparse-view CT. A CNN based on U-net with residual learning with four contracting and expanding paths (the preceding CNN) was compared with another CNN with eight contracting and expanding paths (the proposed CNN), both quantitatively (peak signal-to-noise ratio, structural similarity index) and qualitatively (scores given by two radiologists for anatomical visibility, artifact and noise, and overall image quality), using the Wilcoxon signed-rank test. Nodule and emphysema appearance were also evaluated qualitatively. Results: The proposed CNN was significantly better than the preceding CNN both quantitatively and qualitatively (overall image quality interquartile range, 3.0–3.5 versus 1.0–1.0 for the preceding CNN; p < 0.001). However, only 2 of the 22 cases used for emphysema evaluation (2 CNNs for each of 11 cases with emphysema) had an average score of ≥2 (on a 3-point scale). Conclusion: Increasing the number of contracting and expanding paths may be useful for sparse-view CT reconstruction with a CNN. However, the poor reproducibility of emphysema appearance should also be noted.
Key words: convolutional neural network (CNN); sparse-view CT; deep learning. Abbreviations: BN, batch normalization; CNN, convolutional neural network; CT, computed tomography; dB, decibel; GGO, ground glass opacity; GPU, graphics processing unit; MSE, mean squared error; PSNR, peak signal-to-noise ratio; ReLU, rectified linear unit; SSIM, structural similarity index; TCIA, The Cancer Imaging Archive.
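The two quantitative metrics used in this study, PSNR and SSIM, have standard closed forms. A minimal NumPy sketch of those definitions follows; note it computes a single global SSIM rather than the usual local-window average, and the test images are synthetic, not the study's data.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Global (single-window) SSIM; full implementations average local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
assert psnr(img, img) == float("inf")   # identical images: zero error
assert psnr(img, noisy) > 0             # noisy image: finite positive dB
```

An identical image pair gives SSIM exactly 1 and infinite PSNR, which is a quick sanity check for any metric implementation.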

Advanced 3D printed model of middle cerebral artery aneurysms for neurosurgery simulation

  • Nagassa, Ruth G
  • McMenamin, Paul G
  • Adams, Justin W
  • Quayle, Michelle R
  • Rosenfeld, Jeffrey V
3D Print Med 2019 Journal Article, cited 0 times
Website
BACKGROUND: Neurosurgical residents are finding it more difficult to obtain experience as the primary operator in aneurysm surgery. The present study aimed to replicate patient-derived cranial anatomy, pathology and human tissue properties relevant to cerebral aneurysm intervention through 3D printing and 3D print-driven casting techniques. The final simulator was designed to provide accurate simulation of a human head with a middle cerebral artery (MCA) aneurysm. METHODS: This study utilized living human and cadaver-derived medical imaging data including CT angiography and MRI scans. Computer-aided design (CAD) models and pre-existing computational 3D models were also incorporated in the development of the simulator. The design was based on including anatomical components vital to the surgery of MCA aneurysms while focusing on reproducibility, adaptability and functionality of the simulator. Various methods of 3D printing were utilized for the direct development of anatomical replicas and moulds for casting components that optimized the bio-mimicry and mechanical properties of human tissues. Synthetic materials including various types of silicone and ballistics gelatin were cast in these moulds. A novel technique utilizing water-soluble wax and silicone was used to establish hollow patient-derived cerebrovascular models. RESULTS: A patient-derived 3D aneurysm model was constructed for a MCA aneurysm. Multiple cerebral aneurysm models, patient-derived and CAD, were replicated as hollow high-fidelity models. The final assembled simulator integrated six anatomical components relevant to the treatment of cerebral aneurysms of the Circle of Willis in the left cerebral hemisphere. These included models of the cerebral vasculature, cranial nerves, brain, meninges, skull and skin. The cerebral circulation was modeled through the patient-derived vasculature within the brain model. 
Linear and volumetric measurements of specific physical modular components were repeated, averaged, and compared to the original 3D meshes generated from the medical imaging data. Calculation of the concordance correlation coefficient (ρc: 90.2%–99.0%) and percentage difference (≤0.4%) confirmed the accuracy of the models. CONCLUSIONS: A multi-disciplinary approach involving 3D printing and casting techniques was used to successfully construct a multi-component cerebral aneurysm surgery simulator. Further study is planned to demonstrate the educational value of the proposed simulator for neurosurgery residents.

Recommendations for Processing Head CT Data

  • Muschelli, J.
Frontiers in Neuroinformatics 2019 Journal Article, cited 0 times
Website
Many research applications of neuroimaging use magnetic resonance imaging (MRI), and recommendations for image analysis and standardized imaging pipelines exist accordingly. Clinical imaging, however, relies heavily on X-ray computed tomography (CT) scans for diagnosis and prognosis. Currently, there is only one image processing pipeline for head CT, which focuses mainly on head CT data with lesions. We present tools and a complete pipeline for processing CT data, focusing on open-source solutions; the pipeline targets head CT but is applicable to most CT analyses. We describe going from raw DICOM data to a spatially normalized brain, presenting a full example with code. Overall, we recommend anonymizing data with Clinical Trials Processor, converting DICOM data to NIfTI using dcm2niix, using BET for brain extraction, and registering to a publicly available CT template for analysis.

Impact of image preprocessing methods on reproducibility of radiomic features in multimodal magnetic resonance imaging in glioblastoma

  • Moradmand, Hajar
  • Aghamiri, Seyed Mahmoud Reza
  • Ghaderi, Reza
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
Website
To investigate the effect of image preprocessing, with respect to intensity inhomogeneity correction and noise filtering, on the robustness and reproducibility of radiomics features extracted from glioblastoma (GBM) tumors in multimodal MR images (mMRI). In this study, 1461 radiomics features per patient were extracted from GBM subregions (i.e., edema, necrosis, enhancement, and tumor) of mMRI (i.e., FLAIR, T1, T1C, and T2) volumes for five preprocessing combinations (116,880 radiomics features in total). The robustness and reproducibility of the radiomics features were assessed under four comparisons: (a) baseline versus modified bias field; (b) baseline versus modified bias field followed by noise filtering; (c) baseline versus modified noise; and (d) baseline versus modified noise followed by bias field correction. The concordance correlation coefficient (CCC), dynamic range (DR), and intraclass correlation coefficient (ICC) were used as metrics. Shape features, and subsequently local binary pattern (LBP) filtered images, were highly stable and reproducible against bias field correction and noise filtering in all measurements. Across all MRI modalities, necrosis regions (NC: n ~449/1461, 30%) had the highest number of highly robust features (CCC and DR >= 0.9), compared with edema (ED: n ~296/1461, 20%), enhanced (EN: n ~281/1461, 19%), and active-tumor (TM: n ~254/1461, 17%) regions. Furthermore, the percentage of highly reproducible features with ICC >= 0.9 was higher after bias field correction (23.2%) and after bias field correction followed by noise filtering (22.4%) than after noise smoothing or noise smoothing followed by bias field correction.
These preliminary findings imply that preprocessing sequences can also have a significant impact on the robustness and reproducibility of mMRI-based radiomics features and identification of generalizable and consistent preprocessing algorithms is a pivotal step before imposing radiomics biomarkers into the clinic for GBM patients.
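The CCC used as a robustness metric above is Lin's concordance correlation coefficient, which has a simple closed form. A minimal NumPy sketch, assuming population variances and paired feature vectors, is:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    It penalizes both poor correlation and systematic shifts.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()          # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

a = np.array([1.0, 2.0, 3.0, 4.0])
assert abs(ccc(a, a) - 1.0) < 1e-12    # perfect agreement
assert ccc(a, a + 1.0) < 1.0           # a constant offset lowers CCC
```

Unlike Pearson correlation, CCC drops below 1 when two preprocessing variants shift the same feature by a constant, which is why it is suited to reproducibility analyses like this one.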

Evaluation of TP53/PIK3CA mutations using texture and morphology analysis on breast MRI

  • Moon, W. K.
  • Chen, H. H.
  • Shin, S. U.
  • Han, W.
  • Chang, R. F.
Magn Reson Imaging 2019 Journal Article, cited 0 times
Website
PURPOSE: Somatic mutations in the TP53 and PIK3CA genes, the two most frequent genetic alterations in breast cancer, are associated with prognosis and therapeutic response. This study predicted the presence of TP53 and PIK3CA mutations in breast cancer by using texture and morphology analyses on breast MRI. MATERIALS AND METHODS: A total of 107 breast cancers (dataset A) from The Cancer Imaging Archive (TCIA), comprising 40 cancers with and 67 without TP53 mutation, and 35 with and 72 without PIK3CA mutation, together with 122 breast cancers (dataset B) from Seoul National University Hospital, comprising 54 cancers with and 68 without TP53 mutation, were used in this study. First, the tumor area was segmented by a region growing method. Subsequently, gray level co-occurrence matrix (GLCM) texture features were extracted after a ranklet transform, and a series of features including compactness, margin, and an ellipsoid fitting model were used to describe the morphological characteristics of the tumors. Lastly, logistic regression was used to identify the presence of TP53 and PIK3CA mutations. Classification performance was evaluated by accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Taking into account the trade-off between sensitivity and specificity, overall performance was evaluated using receiver operating characteristic (ROC) curve analysis. RESULTS: The GLCM texture features based on the ranklet transform were more capable of recognizing TP53 and PIK3CA mutations than the morphological features, and the difference was statistically significant for the TP53 mutation. The area under the ROC curve (AUC) for TP53 mutation reached 0.78 on dataset A and 0.81 on dataset B. For PIK3CA mutation, the AUC of the ranklet texture features was 0.70. CONCLUSION: Texture analysis of the segmented tumor on breast MRI based on the ranklet transform shows potential for recognizing the presence of TP53 and PIK3CA mutations.
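The GLCM texture features used here (and in several other studies in this list) can be illustrated with a generic sketch: one pixel offset and a few Haralick-style statistics. This is not the paper's ranklet-transform pipeline; the quantization level and offset are illustrative.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    img is assumed to hold intensities in [0, 1); it is quantized to
    `levels` gray levels before co-occurrences are counted.
    """
    q = np.minimum((img * levels).astype(int), levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, energy, and homogeneity from a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }

rng = np.random.default_rng(0)
img = rng.random((32, 32))
feats = glcm_features(glcm(img))
assert 0 < feats["energy"] <= 1
```

Full radiomics pipelines typically average such features over several offsets and directions; libraries like scikit-image provide an optimized `graycomatrix` for production use.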

Automated AJCC (7th edition) staging of non-small cell lung cancer (NSCLC) using deep convolutional neural network (CNN) and recurrent neural network (RNN)

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Health Inf Sci Syst 2019 Journal Article, cited 0 times
Website
Purpose: A large proportion of lung cancers are of the type non-small cell lung cancer (NSCLC). Both treatment planning and patient prognosis depend greatly on factors such as AJCC staging, which is an abstraction over TNM staging. Many significant efforts have so far been made towards automated staging of NSCLC, but a groundbreaking application of deep neural networks (DNNs) is yet to be observed in this domain. DNNs can achieve higher accuracy than traditional artificial neural networks (ANNs) because they use deeper convolutional neural network (CNN) layers. The objective of the present study is to propose a simple yet fast CNN model combined with a recurrent neural network (RNN) for automated AJCC staging of NSCLC, and to compare the outcome with a few standard machine learning algorithms and a few similar studies. Methods: The NSCLC Radiogenomics collection from The Cancer Imaging Archive (TCIA) was considered for the study. The tumor images were refined and filtered by resizing, enhancement, de-noising, etc. The initial image processing phase was followed by texture-based image segmentation. The segmented images were fed into a hybrid feature detection and extraction model comprising two sequential phases: maximally stable extremal regions (MSER) and speeded-up robust features (SURF). After prolonged experimentation, the desired CNN-RNN model was derived and the extracted features were fed into the model. Results: The proposed CNN-RNN model outperformed almost all the other machine learning algorithms under consideration, and its accuracy remained steadily higher than in other contemporary studies. Conclusion: The proposed CNN-RNN model performed commendably during the study. Further studies may be carried out to refine the model and develop an improved auxiliary decision support system for oncologists and radiologists.

Automated grading of non-small cell lung cancer by fuzzy rough nearest neighbour method

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Network Modeling Analysis in Health Informatics and Bioinformatics 2019 Journal Article, cited 0 times
Lung cancer is one of the most lethal diseases across the world. Most lung cancers belong to the category of non-small cell lung cancer (NSCLC). Many studies have so far been carried out to avoid the hazards and bias of manual classification of NSCLC tumors. A few of these studies were directed towards automated nodal staging using standard machine learning algorithms; many others tried to classify tumors as either benign or malignant. None of these studies considered the pathological grading of NSCLC. Automated grading may accurately depict the dissimilarity between normal tissue and cancer-affected tissue. Such automation may save patients from undergoing a painful biopsy and may also help radiologists or oncologists in grading the tumor or lesion correctly. The present study aims at the automated grading of NSCLC tumors using the fuzzy rough nearest neighbour (FRNN) method. The dataset was extracted from The Cancer Imaging Archive and comprised PET/CT images of NSCLC tumors from 211 patients. The features from accelerated segment test (FAST) and histogram of oriented gradients methods were used to detect and extract features from the segmented images. Gray level co-occurrence matrix (GLCM) features were also considered in the study. The features, along with the clinical grading information, were fed into four machine learning algorithms: FRNN, logistic regression, multi-layer perceptron, and support vector machine. The results were thoroughly compared in the light of various evaluation metrics. The confusion matrix was found to be balanced, and the outcome was more cost-effective for FRNN. Results were also compared with various other leading studies done earlier in this field. The proposed FRNN model performed satisfactorily during the experiment. Further exploration of FRNN may be very helpful for radiologists and oncologists in planning treatment for NSCLC. More varieties of cancer may be considered in similar future studies.

Image fusion based lung nodule detection using structural similarity and MAX rule

  • Mohana, P
  • Venkatesan, P
INTERNATIONAL JOURNAL OF ADVANCES IN SIGNAL AND IMAGE SCIENCES 2019 Journal Article, cited 0 times
Website
Uncontrolled cell growth in the lungs is the main cause of lung cancer, which reduces the ability to breathe. In this study, fusion of computed tomography (CT) and positron emission tomography (PET) lung images using their structural similarity is presented. The fused image contains more information than the individual CT and PET images, which helps radiologists make decisions quickly. Initially, the CT and PET images are divided into blocks of predefined size in an overlapping manner. The structural similarity between each pair of CT and PET blocks is computed for fusion. Image fusion is performed using a combination of structural similarity and the MAX rule: if the structural similarity between a CT and PET block is greater than a particular threshold, the MAX rule is applied; otherwise the pixel intensities of the CT image are used. A simple thresholding approach is employed to detect lung nodules from the fused image. Qualitative analyses show that the fusion approach provides more information and accurate detection of lung nodules.
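The SSIM-gated MAX rule described in this abstract can be sketched blockwise with NumPy. For simplicity this sketch uses non-overlapping blocks (the paper uses overlapping ones), and the block size, SSIM constants, and threshold are illustrative.

```python
import numpy as np

def block_ssim(a, b, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two equally sized image blocks."""
    ma, mb = a.mean(), b.mean()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / (
        (ma**2 + mb**2 + c1) * (a.var() + b.var() + c2))

def fuse(ct, pet, block=8, thresh=0.5):
    """Apply the MAX rule where CT/PET blocks are structurally similar;
    keep the CT intensities elsewhere."""
    fused = ct.copy()
    h, w = ct.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            cb = ct[i:i + block, j:j + block]
            pb = pet[i:i + block, j:j + block]
            if block_ssim(cb, pb) > thresh:
                fused[i:i + block, j:j + block] = np.maximum(cb, pb)
    return fused

rng = np.random.default_rng(0)
ct, pet = rng.random((32, 32)), rng.random((32, 32))
out = fuse(ct, pet)
assert out.shape == ct.shape
```

Because every output block is either the CT block or the elementwise maximum, the fused image is always pointwise at least as bright as the CT input, which is easy to verify.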

Database Acquisition for the Lung Cancer Computer Aided Diagnostic Systems

  • Meldo, Anna
  • Utkin, Lev
  • Lukashin, Aleksey
  • Muliukha, Vladimir
  • Zaborovsky, Vladimir
2019 Conference Paper, cited 0 times
Website
Most computer aided diagnostic (CAD) systems based on deep learning algorithms are similar from the point of view of data processing stages. The main typical stages are training data acquisition, pre-processing, segmentation, and classification. Homogeneity of the training dataset structure and its completeness are very important for minimizing inaccuracies in the development of CAD systems. The main difficulties in medical training data acquisition concern heterogeneity and incompleteness. Another problem is the lack of a sufficiently large amount of data for training the deep neural networks on which CAD systems are based. To overcome these problems in lung cancer CAD systems, a new methodology of dataset acquisition is proposed, using as an example the database called LIRA, which has been applied to training the intelligent lung cancer CAD system Dr. AIzimov. One important peculiarity of the LIRA dataset is the morphological confirmation of diseases; another is the inclusion of cases that are “atypical” from the point of view of radiographic features. The database development is carried out in interdisciplinary collaboration between the radiologists and the data scientists developing the CAD system.

Bone Marrow and Tumor Radiomics at (18)F-FDG PET/CT: Impact on Outcome Prediction in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A
  • Davidzon, Guido A
  • Benson, Jalen
  • Leung, Ann N C
  • Vasanawala, Minal
  • Horng, George
  • Shrager, Joseph B
  • Napel, Sandy
  • Nair, Viswam S.
Radiology 2019 Journal Article, cited 0 times
Website
Background Primary tumor maximum standardized uptake value is a prognostic marker for non-small cell lung cancer. In the setting of malignancy, bone marrow activity from fluorine 18-fluorodeoxyglucose (FDG) PET may be informative for clinical risk stratification. Purpose To determine whether integrating FDG PET radiomic features of the primary tumor, tumor penumbra, and bone marrow identifies lung cancer disease-free survival more accurately than clinical features alone. Materials and Methods Patients were retrospectively analyzed from two distinct cohorts collected between 2008 and 2016. Each tumor, its surrounding penumbra, and bone marrow from the L3-L5 vertebral bodies was contoured on pretreatment FDG PET/CT images. There were 156 bone marrow and 512 tumor and penumbra radiomic features computed from the PET series. Randomized sparse Cox regression by least absolute shrinkage and selection operator identified features that predicted disease-free survival in the training cohort. Cox proportional hazards models were built and locked in the training cohort, then evaluated in an independent cohort for temporal validation. Results There were 227 patients analyzed; 136 for training (mean age, 69 years +/- 9 [standard deviation]; 101 men) and 91 for temporal validation (mean age, 72 years +/- 10; 91 men). The top clinical model included stage; adding tumor region features alone improved outcome prediction (log likelihood, -158 vs -152; P = .007). Adding bone marrow features continued to improve performance (log likelihood, -158 vs -145; P = .001). The top model integrated stage, two bone marrow texture features, one tumor with penumbra texture feature, and two penumbra texture features (concordance, 0.78; 95% confidence interval: 0.70, 0.85; P < .001). 
This fully integrated model was a predictor of poor outcome in the independent cohort (concordance, 0.72; 95% confidence interval: 0.64, 0.80; P < .001) and a binary score stratified patients into high and low risk of poor outcome (P < .001). Conclusion A model that includes pretreatment fluorine 18-fluorodeoxyglucose PET texture features from the primary tumor, tumor penumbra, and bone marrow predicts disease-free survival of patients with non-small cell lung cancer more accurately than clinical features alone. (c) RSNA, 2019 Online supplemental material is available for this article.

[18F] FDG Positron Emission Tomography (PET) Tumor and Penumbra Imaging Features Predict Recurrence in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A.
  • Davidzon, Guido A.
  • Bakr, Shaimaa
  • Echegaray, Sebastian
  • Leung, Ann N. C.
  • Vasanawala, Minal
  • Horng, George
  • Napel, Sandy
  • Nair, Viswam S.
Tomography (Ann Arbor, Mich.) 2019 Journal Article, cited 0 times
Website
We identified computational imaging features on 18F-fluorodeoxyglucose positron emission tomography (PET) that predict recurrence/progression in non-small cell lung cancer (NSCLC). We retrospectively identified 291 patients with NSCLC from 2 prospectively acquired cohorts (training, n = 145; validation, n = 146). We contoured the metabolic tumor volume (MTV) on all pretreatment PET images and added a 3-dimensional penumbra region that extended outward 1 cm from the tumor surface. We generated 512 radiomics features, selected 435 features based on robustness to contour variations, and then applied randomized sparse regression (LASSO) to identify features that predicted time to recurrence in the training cohort. We built Cox proportional hazards models in the training cohort and independently evaluated the models in the validation cohort. Two features including stage and a MTV plus penumbra texture feature were selected by LASSO. Both features were significant univariate predictors, with stage being the best predictor (hazard ratio [HR] = 2.15 [95% confidence interval (CI): 1.56-2.95], P < .001). However, adding the MTV plus penumbra texture feature to stage significantly improved prediction (P = .006). This multivariate model was a significant predictor of time to recurrence in the training cohort (concordance = 0.74 [95% CI: 0.66-0.81], P < .001) that was validated in a separate validation cohort (concordance = 0.74 [95% CI: 0.67-0.81], P < .001). A combined radiomics and clinical model improved NSCLC recurrence prediction. FDG PET radiomic features may be useful biomarkers for lung cancer prognosis and add clinical utility for risk stratification.

Bone suppression for chest X-ray image using a convolutional neural filter

  • Matsubara, N.
  • Teramoto, A.
  • Saito, K.
  • Fujita, H.
Australas Phys Eng Sci Med 2019 Journal Article, cited 0 times
Website
Chest X-rays are used in mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem, but their accuracy still needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network, an architecture frequently used in the medical field with excellent performance in image processing. The CNF outputs a value for the bone component of a target pixel from the pixel values in its neighborhood. By processing all positions in the input image, a bone-extracted image is generated; the bone-suppressed image is then obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using a CNF with six convolutional layers, yielding a bone suppression rate of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only the bone components while maintaining soft tissue. These results suggest that the chances of missing abnormalities may be reduced by using the proposed method, which is useful for bone suppression in chest X-ray images.
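The overall scheme in this abstract, a per-pixel filter that predicts a bone component from a neighborhood, followed by subtraction from the original, can be sketched as below. Here a simple neighborhood mean stands in for the trained CNF, and the 0.5 subtraction weight is purely illustrative; the real method learns the bone component with a CNN.

```python
import numpy as np

def predict_bone(img, k=3):
    """Hypothetical stand-in for the trained CNF: a k-by-k neighborhood mean.
    The real filter outputs a learned bone-component value per pixel."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def bone_suppress(img):
    """Subtract the predicted bone-extracted image from the original.
    The 0.5 weight is illustrative, not from the paper."""
    return img - 0.5 * predict_bone(img)

rng = np.random.default_rng(0)
x = rng.random((16, 16))
y = bone_suppress(x)
assert y.shape == x.shape
```

Sliding the filter over every pixel position mirrors how the CNF builds the full bone-extracted image before the final subtraction step.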

Domain-Based Analysis of Colon Polyp in CT Colonography Using Image-Processing Techniques

  • Manjunath, K N
  • Siddalingaswamy, PC
  • Prabhu, GK
Asian Pacific Journal of Cancer Prevention 2019 Journal Article, cited 0 times
Website
Background: The purpose of the research was to improve the polyp detection accuracy in CT Colonography (CTC) through effective colon segmentation, removal of tagged fecal matter through Electronic Cleansing (EC), and measuring the smaller polyps. Methods: An improved method of boundary-based semi-automatic colon segmentation with the knowledge of colon distension, an adaptive multistep method for the virtual cleansing of the segmented colon based on the knowledge of Hounsfield Units, and an automated method of smaller polyp measurement using a skeletonization technique have been implemented. Results: The techniques were evaluated on 40 CTC datasets. The segmentation method was able to delineate the colon wall accurately. The submerged colonic structures were preserved without soft tissue erosion, pseudo-enhanced voxels were corrected, and the air-contrast layer was removed without losing the adjacent tissues. The smaller polyp of size less than validated qualitatively and quantitatively. Segmented colons were validated through volumetric overlap computation, and accuracy of 95.826±0.6854% was achieved. In polyp measurement, the paired t-test method was applied to compare the difference with ground truth, and at α=5%, t=0.9937 and p=0.098 was achieved. The statistical values of TPR=90%, TNR=82.3% and accuracy=88.31% were achieved. Conclusion: An automated system of polyp measurement has been developed, starting from colon segmentation, to improve the existing CTC solutions. The analysis of the domain-based approach of polyp has given good results. A prototype software, which can be used as a low-cost polyp diagnosis tool, has been developed.

Scale-Space DCE-MRI Radiomics Analysis Based on Gabor Filters for Predicting Breast Cancer Therapy Response

  • Manikis, Georgios C.
  • Venianaki, Maria
  • Skepasianos, Iraklis
  • Papadakis, Georgios Z.
  • Maris, Thomas G.
  • Agelaki, Sofia
  • Karantanas, Apostolos
  • Marias, Kostas
2019 Conference Paper, cited 0 times
Website
Radiomics-based studies have created an unprecedented momentum in computational medical imaging over the last years by significantly advancing and empowering correlational and predictive quantitative studies in numerous clinical applications. An important element of this exciting field of research especially in oncology is multi-scale texture analysis since it can effectively describe tissue heterogeneity, which is highly informative for clinical diagnosis and prognosis. There are however, several concerns regarding the plethora of radiomics features used in the literature especially regarding their performance consistency across studies. Since many studies use software packages that yield multi-scale texture features it makes sense to investigate the scale-space performance of texture candidate biomarkers under the hypothesis that significant texture markers may have a more persistent scale-space performance. To this end, this study proposes a methodology for the extraction of Gabor multi-scale and orientation texture DCE-MRI radiomics for predicting breast cancer complete response to neoadjuvant therapy. More specifically, a Gabor filter bank was created using four different orientations and ten different scales and then first-order and second-order texture features were extracted for each scale-orientation data representation. The performance of all these features was evaluated under a generalized repeated cross-validation framework in a scale-space fashion using extreme gradient boosting classifiers.
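The filter-bank construction described above (four orientations crossed with ten scales) can be sketched as follows; the kernel size and parameter ranges are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: a sinusoid at angle `theta` with the
    given wavelength, windowed by an isotropic Gaussian of width sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * x_rot / wavelength)

# 4 orientations x 10 scales, as in the abstract; each kernel is then
# convolved with the DCE-MRI slice before texture features are extracted
bank = [gabor_kernel(21, wl, th, sigma=0.5 * wl)
        for th in np.linspace(0, np.pi, 4, endpoint=False)
        for wl in np.linspace(2, 20, 10)]
print(len(bank))  # -> 40
```

Each of the 40 filtered images then yields its own first-order and second-order texture features, which is what makes the scale-space consistency analysis possible.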

Study on Prognosis Factors of Non-Small Cell Lung Cancer Based on CT Image Features

  • Lu, Xiaoteng
  • Gong, Jing
  • Nie, Shengdong
Journal of Medical Imaging and Health Informatics 2019 Journal Article, cited 0 times
This study aims to investigate the prognosis factors of non-small cell lung cancer (NSCLC) based on CT image features and to develop a new quantitative image feature prognosis approach using CT images. Firstly, lung tumors were segmented and image features were extracted. Secondly, the Kaplan-Meier method was used to perform a univariate survival analysis, and a multivariate survival analysis was carried out with the Cox regression model. Thirdly, the SMOTE algorithm was used to balance the feature data. Finally, classifiers based on WEKA were established to test the prognostic ability of the independent prognosis factors. The univariate analysis results showed that six features had a significant influence on patients' prognosis. After multivariate analysis, angular second moment, srhge and volume were significantly related to the survival of NSCLC patients (P < 0.05). According to the classifier results, these three features allowed a good prognosis of the NSCLC, with a best classification accuracy of 78.4%. The results of our study suggest that angular second moment, srhge and volume are high-potential independent prognosis factors of NSCLC.

A mathematical-descriptor of tumor-mesoscopic-structure from computed-tomography images annotates prognostic- and molecular-phenotypes of epithelial ovarian cancer

  • Lu, Haonan
  • Arshad, Mubarik
  • Thornton, Andrew
  • Avesani, Giacomo
  • Cunnea, Paula
  • Curry, Ed
  • Kanavati, Fahdi
  • Liang, Jack
  • Nixon, Katherine
  • Williams, Sophie T.
  • Hassan, Mona Ali
  • Bowtell, David D. L.
  • Gabra, Hani
  • Fotopoulou, Christina
  • Rockall, Andrea
  • Aboagye, Eric O.
Nature Communications 2019 Journal Article, cited 0 times
Website
The five-year survival rate of epithelial ovarian cancer (EOC) is approximately 35-40% despite maximal treatment efforts, highlighting a need for stratification biomarkers for personalized treatment. Here we extract 657 quantitative mathematical descriptors from the preoperative CT images of 364 EOC patients at their initial presentation. Using machine learning, we derive a non-invasive summary-statistic of the primary ovarian tumor based on 4 descriptors, which we name "Radiomic Prognostic Vector" (RPV). RPV reliably identifies the 5% of patients with median overall survival less than 2 years, significantly improves established prognostic methods, and is validated in two independent, multi-center cohorts. Furthermore, genetic, transcriptomic and proteomic analysis from two independent datasets elucidate that stromal phenotype and DNA damage response pathways are activated in RPV-stratified tumors. RPV and its associated analysis platform could be exploited to guide personalized therapy of EOC and is potentially transferrable to other cancer types.

A Weighted Voting Ensemble Self-Labeled Algorithm for the Detection of Lung Abnormalities from X-Rays

  • Livieris, Ioannis
  • Kanavos, Andreas
  • Tampakas, Vassilis
  • Pintelas, Panagiotis
Algorithms 2019 Journal Article, cited 0 times
Website
During the last decades, intensive efforts have been devoted to the extraction of useful knowledge from large volumes of medical data employing advanced machine learning and data mining techniques. Advances in digital chest radiography have enabled research and medical centers to accumulate large repositories of classified (labeled) images and mostly of unclassified (unlabeled) images from human experts. Machine learning methods such as semi-supervised learning algorithms have been proposed as a new direction to address the problem of shortage of available labeled data, by exploiting the explicit classification information of labeled data with the information hidden in the unlabeled data. In the present work, we propose a new ensemble semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays based on a new weighted voting scheme. The proposed algorithm assigns a vector of weights on each component classifier of the ensemble based on its accuracy on each class. Our numerical experiments illustrate the efficiency of the proposed ensemble methodology against other state-of-the-art classification methods.
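The per-class weighted voting scheme described above can be sketched as follows; the class names and weight values are hypothetical, standing in for the per-class accuracies each component classifier earns during training:

```python
def weighted_vote(predictions, class_weights):
    """Combine component-classifier predictions with per-class weights:
    each classifier's vote for class c counts with the weight (e.g. its
    per-class accuracy) it earned on class c."""
    scores = {}
    for clf, pred in enumerate(predictions):
        w = class_weights[clf].get(pred, 0.0)
        scores[pred] = scores.get(pred, 0.0) + w
    return max(scores, key=scores.get)

# three classifiers; weights are hypothetical per-class accuracies
weights = [
    {"normal": 0.9, "abnormal": 0.6},
    {"normal": 0.5, "abnormal": 0.8},
    {"normal": 0.7, "abnormal": 0.7},
]
votes = ["normal", "abnormal", "abnormal"]
print(weighted_vote(votes, weights))  # -> abnormal (0.8 + 0.7 > 0.9)
```

The point of the vector of weights is that a classifier strong on "abnormal" but weak on "normal" contributes more when it votes "abnormal", unlike plain majority voting where every vote counts equally.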

Detecting Lung Abnormalities From X-rays Using an Improved SSL Algorithm

  • Livieris, Ioannis
  • Kanavos, Andreas
  • Pintelas, Panagiotis
Electronic Notes in Theoretical Computer Science 2019 Journal Article, cited 0 times

Oligodendroglial tumours: subventricular zone involvement and seizure history are associated with CIC mutation status

  • Liu, Zhenyin
  • Liu, Hongsheng
  • Liu, Zhenqing
  • Zhang, Jing
BMC Neurol 2019 Journal Article, cited 1 times
Website
BACKGROUND: CIC-mutant oligodendroglial tumours are linked to a better prognosis. We aim to investigate associations between CIC gene mutation status, MR characteristics and clinical features. METHODS: Imaging and genomic data from the Cancer Genome Atlas and the Cancer Imaging Archive (TCGA/TCIA) for 59 patients with oligodendroglial tumours were used. Differences between CIC mutation and CIC wild-type were tested using the Chi-square test and binary logistic regression analysis. RESULTS: In univariate analysis, the clinical variables and MR features, which consisted of 3 selected features (subventricular zone [SVZ] involvement, volume and seizure history), were associated with CIC mutation status (all p < 0.05). A multivariate logistic regression analysis identified that seizure history (no vs. yes odds ratio [OR]: 28.960, 95% confidence interval [CI]: 2.625-319.49, p = 0.006) and SVZ involvement (SVZ- vs. SVZ+ OR: 77.092, p = 0.003; 95% CI: 4.578-1298.334) were associated with a higher incidence of CIC mutation. The nomogram showed good discrimination, with a C-index of 0.906 (95% CI: 0.812-1.000), and was well calibrated. The SVZ- group had increased overall survival (SVZ- vs. SVZ+, hazard ratio [HR]: 4.500, p = 0.04; 95% CI: 1.069-18.945). CONCLUSIONS: Absence of seizure history and SVZ involvement (-) were associated with a higher incidence of CIC mutation.

Cross-Modality Knowledge Transfer for Prostate Segmentation from CT Scans

  • Liu, Yucheng
  • Khosravan, Naji
  • Liu, Yulin
  • Stember, Joseph
  • Shoag, Jonathan
  • Bagci, Ulas
  • Jambawalikar, Sachin
2019 Book Section, cited 0 times

Deep learning for magnetic resonance imaging-genomic mapping of invasive breast carcinoma

  • Liu, Qian
2019 Thesis, cited 0 times
Website
To identify MRI-based radiomic features that could be obtained automatically by a deep learning (DL) model and could predict the clinical characteristics of breast cancer (BC). Also, to explain the potential underlying genomic mechanisms of the predictive radiomic features. A denoising autoencoder (DA) was developed to retrospectively extract 4,096 phenotypes from the MRI of 110 BC patients collected by The Cancer Imaging Archive (TCIA). The associations of these phenotypes with genomic features (commercialized gene signatures, expression of risk genes, and biological pathway activities extracted from the same patients’ mRNA expression collected by The Cancer Genome Atlas (TCGA)) were tested based on linear mixed effect (LME) models. A least absolute shrinkage and selection operator (LASSO) model was used to identify the most predictive MRI phenotypes for each clinical phenotype (tumor size (T), lymph node metastasis (N), and status of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2)). More than 1,000 of the 4,096 MRI phenotypes were associated with the activities of risk genes, gene signatures, and biological pathways (adjusted P-value < 0.05). High performance was obtained in predicting the status of T, N, ER, PR, and HER2 (AUC > 0.9). These identified MRI phenotypes also show significant power to stratify the BC tumors. DL-based automatic MRI features performed very well in predicting clinical characteristics of BC, and these phenotypes were identified to have genomic significance.

Multi-subtype classification model for non-small cell lung cancer based on radiomics: SLS model

  • Liu, J.
  • Cui, J.
  • Liu, F.
  • Yuan, Y.
  • Guo, F.
  • Zhang, G.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: Histological subtypes of non-small cell lung cancer (NSCLC) are crucial for systematic treatment decisions. However, the current studies which used non-invasive radiomic methods to classify NSCLC histology subtypes mainly focused on two main subtypes, squamous cell carcinoma (SCC) and adenocarcinoma (ADC), while multi-subtype classifications that included the other two subtypes of NSCLC, large cell carcinoma (LCC) and not otherwise specified (NOS), were very few in the previous studies. The aim of this work is to establish a multi-subtype classification model for the four main subtypes of NSCLC and improve the classification performance and generalization ability compared with previous studies. METHODS: In this work, we extracted 1029 features from regions of interest in computed tomography (CT) images of 349 patients from two different datasets using radiomic methods. Based on the 'three-in-one' concept, we proposed a model called SLS, wrapping three algorithms, synthetic minority oversampling technique, l2,1-norm minimization, and support vector machines, into one hybrid technique to classify the four main subtypes of NSCLC: SCC, ADC, LCC and NOS, which could cover the whole range of NSCLC. RESULTS: We analyzed the 247 features obtained by dimension reduction, and found that the features extracted by three methods, first order statistics, gray level co-occurrence matrix, and gray level size zone matrix, were more conducive to the classification of NSCLC subtypes. The proposed SLS model achieved an average classification accuracy of 0.89 on the training set (95% confidence interval [CI]: 0.846 to 0.912) and a classification accuracy of 0.86 on the test set (95% CI: 0.779 to 0.941). CONCLUSIONS: The experimental results showed that the subtypes of NSCLC could be well classified by the radiomic method.
Our SLS model can accurately classify and diagnose the four subtypes of NSCLC based on CT images, and thus it has the potential to be used in clinical practice to provide valuable information for lung cancer treatment and further promote personalized medicine.
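The class-balancing step of the SLS pipeline can be sketched in a few lines; this is a generic interpolation-based SMOTE, not the authors' implementation, and the toy 2-D points are purely illustrative:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """SMOTE-style oversampling: each synthetic sample is a random
    interpolation between a minority-class sample and one of its
    k nearest minority-class neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
    return synthetic

# four minority-class feature vectors in a toy 2-D feature space
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_pts = smote(minority, n_new=4)
print(len(new_pts))  # -> 4
```

Because each synthetic point lies on a segment between two existing minority samples, the oversampled set stays inside the minority class's feature region rather than duplicating points exactly.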

Machine Learning Models on Prognostic Outcome Prediction for Cancer Images with Multiple Modalities

  • Liu, Gengbo
2019 Thesis, cited 0 times
Website
Machine learning algorithms have been applied to predict different prognostic outcomes for many different diseases by directly using medical images. However, the higher resolution of various medical imaging modalities and new imaging feature extraction frameworks bring new challenges for predicting prognostic outcomes. Compared to traditional radiology practice, which is based only on visual interpretation and simple quantitative measurements, medical imaging features can dig deeper within medical images and potentially provide further objective support for clinical decisions. In this dissertation, we cover three projects that apply or design machine learning models for predicting prognostic outcomes using various types of medical images.

A radiogenomics signature for predicting the clinical outcome of bladder urothelial carcinoma

  • Lin, Peng
  • Wen, Dong-Yue
  • Chen, Ling
  • Li, Xin
  • Li, Sheng-Hua
  • Yan, Hai-Biao
  • He, Rong-Quan
  • Chen, Gang
  • He, Yun
  • Yang, Hong
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVES: To determine the integrative value of contrast-enhanced computed tomography (CECT), transcriptomics data and clinicopathological data for predicting the survival of bladder urothelial carcinoma (BLCA) patients. METHODS: RNA sequencing data, radiomics features and clinical parameters of 62 BLCA patients were included in the study. Then, prognostic signatures based on radiomics features and gene expression profiles were constructed by using least absolute shrinkage and selection operator (LASSO) Cox analysis. A multi-omics nomogram was developed by integrating radiomics, transcriptomics and clinicopathological data. More importantly, radiomics risk score-related genes were identified via weighted correlation network analysis and submitted to functional enrichment analysis. RESULTS: The radiomics and transcriptomics signatures significantly stratified BLCA patients into high- and low-risk groups in terms of the progression-free interval (PFI). The two risk models remained independent prognostic factors in multivariate analyses after adjusting for clinical parameters. A nomogram was developed and showed an excellent predictive ability for the PFI in BLCA patients. Functional enrichment analysis suggested that the radiomics signature we developed could reflect the angiogenesis status of BLCA patients. CONCLUSIONS: The integrative nomogram incorporating CECT radiomics, transcriptomics and clinical features improved the PFI prediction in BLCA patients and is a feasible and practical reference for oncological precision medicine. KEY POINTS: * Our radiomics and transcriptomics models are proven robust for survival prediction in bladder urothelial carcinoma patients. * A multi-omics nomogram model which integrates radiomics, transcriptomics and clinical features for prediction of progression-free interval in bladder urothelial carcinoma is established.
* Molecular functional enrichment analysis is used to reveal the potential molecular function of radiomics signature.

Volumetric and Voxel-Wise Analysis of Dominant Intraprostatic Lesions on Multiparametric MRI

  • Lee, Joon
  • Carver, Eric
  • Feldman, Aharon
  • Pantelic, Milan V
  • Elshaikh, Mohamed
  • Wen, Ning
Front Oncol 2019 Journal Article, cited 0 times
Website
Introduction: Multiparametric MR imaging (mpMRI) has shown promising results in the diagnosis and localization of prostate cancer. Furthermore, mpMRI may play an important role in identifying the dominant intraprostatic lesion (DIL) for radiotherapy boost. We sought to investigate the level of correlation between dominant tumor foci contoured on various mpMRI sequences. Methods: mpMRI data from 90 patients with MR-guided biopsy-proven prostate cancer were obtained from the SPIE-AAPM-NCI Prostate MR Classification Challenge. Each case consisted of T2-weighted (T2W), apparent diffusion coefficient (ADC), and K(trans) images computed from dynamic contrast-enhanced sequences. All image sets were rigidly co-registered, and the dominant tumor foci were identified and contoured for each MRI sequence. Hausdorff distance (HD), mean distance to agreement (MDA), and Dice and Jaccard coefficients were calculated between the contours for each pair of MRI sequences (i.e., T2 vs. ADC, T2 vs. K(trans), and ADC vs. K(trans)). The voxel-wise Spearman correlation was also obtained between these image pairs. Results: The DILs were located in the anterior fibromuscular stroma, central zone, peripheral zone, and transition zone in 35.2, 5.6, 32.4, and 25.4% of patients, respectively. Gleason grade groups 1-5 represented 29.6, 40.8, 15.5, and 14.1% of the study population, respectively (with group grades 4 and 5 analyzed together). The mean contour volumes for the T2W images, and the ADC and K(trans) maps were 2.14 +/- 2.1, 2.22 +/- 2.2, and 1.84 +/- 1.5 mL, respectively. K(trans) values were indistinguishable between cancerous regions and the rest of the prostatic regions for 19 patients. The Dice coefficient and Jaccard index were 0.74 +/- 0.13, 0.60 +/- 0.15 for T2W-ADC and 0.61 +/- 0.16, 0.46 +/- 0.16 for T2W-K(trans). The voxel-based Spearman correlations were 0.20 +/- 0.20 for T2W-ADC and 0.13 +/- 0.25 for T2W-K(trans).
Conclusions: The DIL contoured on T2W images had a high level of agreement with those contoured on ADC maps, but there was little to no quantitative correlation of these results with tumor location and Gleason grade group. Technical hurdles are yet to be solved for precision radiotherapy to target the DILs based on physiological imaging. A Boolean sum volume (BSV) incorporating all available MR sequences may be reasonable in delineating the DIL boost volume.
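The Dice and Jaccard overlap measures used above reduce to set operations on voxel masks; a minimal sketch with toy 2-D masks (the 4x4 squares are illustrative, not study contours):

```python
def dice_jaccard(mask_a, mask_b):
    """Dice coefficient and Jaccard index between two binary masks,
    given as sets of voxel coordinates."""
    inter = len(mask_a & mask_b)
    dice = 2.0 * inter / (len(mask_a) + len(mask_b))
    jaccard = inter / len(mask_a | mask_b)
    return dice, jaccard

# two 4x4 "contours" offset by one voxel, mimicking T2W vs. ADC DILs
t2w = {(x, y) for x in range(4) for y in range(4)}      # 16 voxels
adc = {(x, y) for x in range(1, 5) for y in range(4)}   # shifted by 1
d, j = dice_jaccard(t2w, adc)
print(round(d, 2), round(j, 2))  # -> 0.75 0.6
```

Note that Dice and Jaccard are monotonically related (D = 2J / (1 + J)), which is why the two columns of overlap statistics in the abstract rank the sequence pairs identically.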

Automatic GPU memory management for large neural models in TensorFlow

  • Le, Tung D.
  • Imai, Haruki
  • Negishi, Yasushi
  • Kawachiya, Kiyokuni
2019 Conference Proceedings, cited 0 times
Website
Deep learning models are becoming larger and will not fit in the limited memory of accelerators such as GPUs for training. Though many methods have been proposed to solve this problem, they are rather ad-hoc in nature and difficult to extend and integrate with other techniques. In this paper, we tackle the problem in a formal way to provide a strong foundation for supporting large models. We propose a method of formally rewriting the computational graph of a model where swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. By introducing a categorized topological ordering for simulating graph execution, the memory consumption of a model can be easily analyzed by using operation distances in the ordering. As a result, the problem of fitting a large model into a memory-limited accelerator is reduced to the problem of reducing operation distances in a categorized topological ordering. We then show how to formally derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. Finally, we propose a simulation-based auto-tuning to automatically find suitable graph-rewriting parameters for the best performance. We developed a module in TensorFlow, called LMS, by which we successfully trained ResNet-50 with a 4.9x larger mini-batch size and 3D U-Net with a 5.6x larger image resolution.
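The notion of operation distance in a topological ordering can be illustrated with a toy graph; the op names, edges, and threshold below are hypothetical and stand in for LMS's actual graph-rewriting rules:

```python
def swap_candidates(order, edges, threshold):
    """Operation distance in a topological ordering: a tensor whose
    producer and consumer are far apart stays live (and occupies GPU
    memory) across every op in between, so it is a candidate for a
    swap-out right after the producer and a swap-in before the consumer."""
    pos = {op: i for i, op in enumerate(order)}
    return [(src, dst) for src, dst in edges
            if pos[dst] - pos[src] > threshold]

# toy forward/backward schedule: each forward activation is re-used
# by its matching backward op, giving long producer-consumer distances
order = ["conv1", "conv2", "conv3", "conv4",
         "grad4", "grad3", "grad2", "grad1"]
edges = [("conv1", "grad1"), ("conv2", "grad2"),
         ("conv3", "grad3"), ("conv4", "grad4")]
print(swap_candidates(order, edges, threshold=3))
# -> [('conv1', 'grad1'), ('conv2', 'grad2')]
```

Early-layer activations have the largest distances (they are produced first and consumed last), which is why swapping them to CPU memory yields the biggest reduction in peak GPU usage.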

Conditional random fields improve the CNN-based prostate cancer classification performance

  • Lapa, Paulo Alberto Fernandes
2019 Thesis, cited 0 times
Website
Prostate cancer is a condition with life-threatening implications but without clear causes yet identified. Several diagnostic procedures can be used, ranging from human dependent and very invasive to using state of the art non-invasive medical imaging. With recent academic and industry focus on the deep learning field, novel research has been performed on how to improve prostate cancer diagnosis using Convolutional Neural Networks to interpret Magnetic Resonance images. Conditional Random Fields have achieved outstanding results in the image segmentation task, by promoting homogeneous classification at the pixel level. A new implementation, CRF-RNN, defines Conditional Random Fields by means of convolutional layers, allowing the end-to-end training of the feature extractor and classifier models. This work tries to repurpose CRFs for the image classification task, a more traditional sub-field of imaging analysis, in a way that, to the best of the author’s knowledge, has not been implemented before. To achieve this, a purpose-built architecture was refitted, adding a CRF layer as a feature extractor step. To serve as the implementation’s benchmark, a multi-parametric Magnetic Resonance Imaging dataset was used, initially provided for the PROSTATEx Challenge 2017 and collected by the Radboud University. The results are very promising, showing an increase in the network’s classification quality.

Semantic learning machine improves the CNN-Based detection of prostate cancer in non-contrast-enhanced MRI

  • Lapa, Paulo
  • Gonçalves, Ivo
  • Rundo, Leonardo
  • Castelli, Mauro
2019 Conference Proceedings, cited 0 times
Website
Considering that Prostate Cancer (PCa) is the most frequently diagnosed tumor in Western men, considerable attention has been devoted to computer-assisted PCa detection approaches. However, this task still represents an open research question. In clinical practice, multiparametric Magnetic Resonance Imaging (MRI) is becoming the most used modality, aiming at defining biomarkers for PCa. In recent years, deep learning techniques have boosted the performance of prostate MR image analysis and classification. This work explores the use of the Semantic Learning Machine (SLM) neuroevolution algorithm to replace the backpropagation algorithm commonly used in the last fully-connected layers of Convolutional Neural Networks (CNNs). We analyzed the non-contrast-enhanced multispectral MRI sequences included in the PROSTATEx dataset, namely: T2-weighted, Proton Density weighted, and Diffusion Weighted Imaging. The experimental results show that the SLM significantly outperforms XmasNet, a state-of-the-art CNN. In particular, with respect to XmasNet, the SLM achieves higher classification accuracy (without either pre-training the underlying CNN or relying on backpropagation) as well as a speed-up of one order of magnitude.

A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop

  • Langlotz, Curtis P
  • Allen, Bibb
  • Erickson, Bradley J
  • Kalpathy-Cramer, Jayashree
  • Bigelow, Keith
  • Cook, Tessa S
  • Flanders, Adam E
  • Lungren, Matthew P
  • Mendelson, David S
  • Rudie, Jeffrey D
  • Wang, Ge
  • Kandarpa, Krishna
Radiology 2019 Journal Article, cited 1 times
Website
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: 1, new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; 2, automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; 3, new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures, and federated machine learning methods; 4, machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and 5, validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.

Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation

  • Lai, Ying-Chieh
  • Yeh, Ta-Sen
  • Wu, Ren-Chin
  • Tsai, Cheng-Kun
  • Yang, Lan-Yan
  • Lin, Gigin
  • Kuo, Michael D
Cancers 2019 Journal Article, cited 0 times
Website
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors for the CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic curve (ROC) analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predict CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and areas under ROC curve of 0.89. In conclusion, this pilot study showed acute tumor transition angle on CT images may predict the CIN status of gastric cancer.

Computer-Aided Diagnosis of Life-Threatening Diseases

  • Kumar, Pramod
  • Ambekar, Sameer
  • Roy, Subarna
  • Kunchur, Pavan
2019 Book Section, cited 0 times
According to the WHO, the incidence of life-threatening diseases like cancer, diabetes, and Alzheimer’s disease is escalating globally. In the past few decades, traditional methods have been used to diagnose such diseases. These traditional methods often have limitations such as lack of accuracy, expense, and time-consuming procedures. Computer-aided diagnosis (CAD) aims to overcome these limitations by personalizing healthcare. Machine learning is a promising CAD method, offering effective solutions for these diseases. It is being used for early detection of cancer, diabetic retinopathy, and Alzheimer’s disease, and also to identify diseases in plants. Machine learning can increase efficiency, making the process more cost-effective, with quicker delivery of results. There are several CAD algorithms (ANN, SVM, etc.) that can be trained on disease datasets and eventually make significant predictions. CAD algorithms have also shown the potential to diagnose and enable early detection of life-threatening diseases.

Analysis of CT DICOM Image Segmentation for Abnormality Detection

  • Kulkarni, Rashmi
  • Bhavani, K.
International Journal of Engineering and Manufacturing 2019 Journal Article, cited 0 times
Website
Cancer is a menacing disease, and more care is required while diagnosing it. The CT modality is mostly used for cancer therapy. Image processing techniques [1] can help doctors to diagnose more easily and more accurately. Image pre-processing [2] and segmentation methods [3] are used in the extraction of cancerous nodules from CT images. Much research has been done on the segmentation of CT images with different algorithms, but none has reached 100% accuracy. This research work proposes a model for the analysis of CT image segmentation with filtered and unfiltered images, and brings out the importance of pre-processing of CT images.

Medical (CT) image generation with style

  • Krishna, Arjun
  • Mueller, Klaus
2019 Conference Proceedings, cited 0 times

Investigating the role of model-based and model-free imaging biomarkers as early predictors of neoadjuvant breast cancer therapy outcome

  • Kontopodis, Eleftherios
  • Venianaki, Maria
  • Manikis, George C
  • Nikiforaki, Katerina
  • Salvetti, Ovidio
  • Papadaki, Efrosini
  • Papadakis, Georgios Z
  • Karantanas, Apostolos H
  • Marias, Kostas
IEEE J Biomed Health Inform 2019 Journal Article, cited 0 times
Website
Imaging biomarkers (IBs) play a critical role in the clinical management of breast cancer (BRCA) patients throughout the cancer continuum for screening, diagnosis, and therapy assessment, especially in the neoadjuvant setting. However, certain model-based IBs suffer from significant variability due to the complex workflows involved in their computation, whereas model-free IBs have not been properly studied regarding clinical outcome. In the present study, IBs from 35 BRCA patients who received neoadjuvant chemotherapy (NAC) were extracted from dynamic contrast enhanced MR imaging (DCE-MRI) data with two different approaches: a model-free approach based on pattern recognition (PR), and a model-based one using pharmacokinetic compartmental modeling. Our analysis found that both model-free and model-based biomarkers can predict pathological complete response (pCR) after the first cycle of NAC. Overall, 8 biomarkers predicted the treatment response after the first cycle of NAC with statistical significance (p-value<0.05), and 3 at baseline. The best pCR predictors at first follow-up, achieving a high AUC and sensitivity and specificity of more than 50%, were the hypoxic component with threshold2 (AUC 90.4%) from the PR method, and the median value of kep (AUC 73.4%) from the model-based approach. Moreover, the 80th percentile of ve achieved the highest pCR prediction at baseline with AUC 78.5%. The results suggest that model-free DCE-MRI IBs could be a more robust alternative to complex, model-based ones such as kep and favor the hypothesis that the PR image-derived hypoxic image component captures actual tumor hypoxia information able to predict BRCA NAC outcome.

Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy

  • Koike, Yuhei
  • Akino, Yuichi
  • Sumida, Iori
  • Shiomi, Hiroya
  • Mizuno, Hirokazu
  • Yagi, Masashi
  • Isohashi, Fumiaki
  • Seo, Yuji
  • Suzuki, Osamu
  • Ogawa, Kazuhiko
J Radiat Res 2019 Journal Article, cited 0 times
Website
The aim of this work is to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose-volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue and bone region were 108.1 +/- 24.0, 38.9 +/- 10.7 and 366.2 +/- 62.0 Hounsfield units, respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 +/- 1.9 mm. A treatment planning study using generated sCT detected only small, clinically negligible differences. These findings demonstrated the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using cGAN.

Machine learning-based unenhanced CT texture analysis for predicting BAP1 mutation status of clear cell renal cell carcinomas

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
Acta Radiol 2019 Journal Article, cited 0 times
Website
BACKGROUND: BRCA1-associated protein 1 (BAP1) mutation is an unfavorable factor for overall survival in patients with clear cell renal cell carcinoma (ccRCC). Radiomics literature about BAP1 mutation lacks papers that consider the reliability of texture features in their workflow. PURPOSE: Using texture features with a high inter-observer agreement, we aimed to develop and internally validate a machine learning-based radiomic model for predicting the BAP1 mutation status of ccRCCs. MATERIALS AND METHODS: For this retrospective study, 65 ccRCCs were included from a public database. Texture features were extracted from unenhanced computed tomography (CT) images, using two-dimensional manual segmentation. Dimension reduction was done in three steps: (i) inter-observer agreement analysis; (ii) collinearity analysis; and (iii) feature selection. The machine learning classifier was random forest. The model was validated using 10-fold nested cross-validation. The reference standard was the BAP1 mutation status. RESULTS: Out of 744 features, 468 had an excellent inter-observer agreement. After the collinearity analysis, the number of features decreased to 17. Finally, the wrapper-based algorithm selected six features. Using selected features, the random forest correctly classified 84.6% of the labelled slices regarding BAP1 mutation status with an area under the receiver operating characteristic curve of 0.897. For predicting ccRCCs with BAP1 mutation, the sensitivity, specificity, and precision were 90.4%, 78.8%, and 81%, respectively. For predicting ccRCCs without BAP1 mutation, the sensitivity, specificity, and precision were 78.8%, 90.4%, and 89.1%, respectively. CONCLUSION: Machine learning-based unenhanced CT texture analysis might be a potential method for predicting the BAP1 mutation status of ccRCCs.

Reliability of Single-Slice–Based 2D CT Texture Analysis of Renal Masses: Influence of Intra- and Interobserver Manual Segmentation Variability on Radiomic Feature Reproducibility

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Kaya, Ozlem Korkmaz
  • Ates, Ece
  • Kilickesmez, Ozgur
AJR Am J Roentgenol 2019 Journal Article, cited 0 times
Website
OBJECTIVE. The objective of our study was to investigate the potential influence of intra- and interobserver manual segmentation variability on the reliability of single-slice-based 2D CT texture analysis of renal masses. MATERIALS AND METHODS. For this retrospective study, 30 patients with clear cell renal cell carcinoma were included from a public database. For intra- and interobserver analyses, three radiologists with varying degrees of experience segmented the tumors from unenhanced CT and corticomedullary phase contrast-enhanced CT (CECT) in different sessions. Each radiologist was blind to the image slices selected by other radiologists and him- or herself in the previous session. A total of 744 texture features were extracted from original, filtered, and transformed images. The intraclass correlation coefficient was used for reliability analysis. RESULTS. In the intraobserver analysis, the rates of features with good to excellent reliability were 84.4-92.2% for unenhanced CT and 85.5-93.1% for CECT. Considering the mean rates of unenhanced CT and CECT, having high experience resulted in better reliability rates in terms of the intraobserver analysis. In the interobserver analysis, the rates were 76.7% for unenhanced CT and 84.9% for CECT. The gray-level cooccurrence matrix and first-order feature groups yielded higher good to excellent reliability rates on both unenhanced CT and CECT. Filtered and transformed images resulted in more features with good to excellent reliability than the original images did on both unenhanced CT and CECT. CONCLUSION. Single-slice-based 2D CT texture analysis of renal masses is sensitive to intra- and interobserver manual segmentation variability. Therefore, it may lead to nonreproducible results in radiomic analysis unless a reliability analysis is considered in the workflow.

Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status

  • Kocak, B.
  • Durmaz, E. S.
  • Ates, E.
  • Sel, I.
  • Turgut Gunes, S.
  • Kaya, O. K.
  • Zeynalova, A.
  • Kilickesmez, O.
Eur Radiol 2019 Journal Article, cited 0 times
Website
OBJECTIVE: To evaluate the potential value of machine learning (ML)-based MRI texture analysis for predicting the 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms. MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MRI images, using LIFEx software. Training and unseen validation splits were created using a stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine. The Friedman test and pairwise post hoc analyses were used for comparison of classification performances based on the area under the curve (AUC). RESULTS: Overall, the predictive performance of the ML algorithms was statistically significantly different, chi2(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1% to 84%, respectively. The neural network had the highest mean rank, with mean AUC and accuracy values of 0.869 and 83.8%, respectively. CONCLUSIONS: ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified. KEY POINTS: * More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. * Satisfying classification outcomes are not limited to a single algorithm. * A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. * Feature selection is sensitive to different patient data set samples, so each sampling leads to the selection of different feature subsets, which needs to be considered in future works.

Unenhanced CT Texture Analysis of Clear Cell Renal Cell Carcinomas: A Machine Learning-Based Study for Predicting Histopathologic Nuclear Grade

  • Kocak, Burak
  • Durmaz, Emine Sebnem
  • Ates, Ece
  • Kaya, Ozlem Korkmaz
  • Kilickesmez, Ozgur
American Journal of Roentgenology 2019 Journal Article, cited 0 times
Website
OBJECTIVE: The purpose of this study is to investigate the predictive performance of machine learning (ML)-based unenhanced CT texture analysis in distinguishing low (grades I and II) and high (grades III and IV) nuclear grade clear cell renal cell carcinomas (RCCs). MATERIALS AND METHODS: For this retrospective study, 81 patients with clear cell RCC (56 high and 25 low nuclear grade) were included from a public database. Using 2D manual segmentation, 744 texture features were extracted from unenhanced CT images. Dimension reduction was done in three consecutive steps: reproducibility analysis by two radiologists, collinearity analysis, and feature selection. Models were created using artificial neural network (ANN) and binary logistic regression, with and without synthetic minority oversampling technique (SMOTE), and were validated using 10-fold cross-validation. The reference standard was histopathologic nuclear grade (low vs high). RESULTS: Dimension reduction steps yielded five texture features for the ANN and six for the logistic regression algorithm. None of clinical variables was selected. ANN alone and ANN with SMOTE correctly classified 81.5% and 70.5%, respectively, of clear cell RCCs, with AUC values of 0.714 and 0.702, respectively. The logistic regression algorithm alone and with SMOTE correctly classified 75.3% and 62.5%, respectively, of the tumors, with AUC values of 0.656 and 0.666, respectively. The ANN performed better than the logistic regression (p < 0.05). No statistically significant difference was present between the model performances created with and without SMOTE (p > 0.05). CONCLUSION: ML-based unenhanced CT texture analysis using ANN can be a promising noninvasive method in predicting the nuclear grade of clear cell RCCs.

Influence of segmentation margin on machine learning–based high-dimensional quantitative CT texture analysis: a reproducibility study on renal clear cell carcinomas

  • Kocak, Burak
  • Ates, Ece
  • Durmaz, Emine Sebnem
  • Ulusan, Melis Baykara
  • Kilickesmez, Ozgur
European Radiology 2019 Journal Article, cited 0 times
Website

Training of deep convolutional neural nets to extract radiomic signatures of tumors

  • Kim, J.
  • Seo, S.
  • Ashrafinia, S.
  • Rahmim, A.
  • Sossi, V.
  • Klyuzhin, I.
Journal of Nuclear Medicine 2019 Journal Article, cited 0 times
Website
Objectives: Radiomics-based analysis of FDG PET images has been shown to improve the assessment and prediction of tumor growth rate, response to treatment and other patient outcomes [1]. An alternative new approach to image analysis involves the use of convolutional neural networks (CNNs), wherein relevant image features are learned implicitly and automatically in the process of network training [2]; this is in contrast to radiomics analyses, where the features are “hand-crafted” and are explicitly computed (EC). Although CNNs represent a more general approach, it is not clear whether the implicitly learned features include, or have the ability to include, radiomics features (RFs) as a subset. If this is the case, CNN-based approaches may eventually obviate the use of EC RFs. Further, the use of CNNs instead of RFs may completely eliminate the need for feature selection and tumor delineation, enabling high-throughput data analyses. Thus, our objective was to test whether CNNs can learn to act similarly to several commonly used RFs. Using a set of simulated and real FDG PET images of tumors, we train the CNNs to estimate the values of RFs from the images without explicit computation. We then compare the values of the CNN-estimated and EC features. Methods: Using a stochastic volumetric model for tumor growth, 2000 FDG images of tumors confined to a bounding box (BB) were simulated (40x40x40 voxels, voxel size 2.0 mm), and 10 RFs (3 x morphology, 4 x intensity histogram, 3 x texture features) were computed for each image using the SERA library [3] (compliant with the Image Biomarker Standardization Initiative, IBSI [4]). A 3D CNN with 4 convolutional layers, and a total of 164 filters, was implemented in Python using the Keras library with TensorFlow backend (https://www.keras.io). The mean absolute error was the optimized loss function.
The CNN was trained to automatically estimate the values of each of the 10 RFs for each image; 1900 images were used for training, and 100 were used for testing, to compare the CNN-estimated values to the EC feature values. We also used a secondary test set comprising 133 real tumor images, obtained from the head and neck PET/CT imaging study [5] publicly available at the Cancer Imaging Archive. The tumors were cropped to a BB, and the images were resampled to yield similar image size to the simulated image set. Results: After the training procedure, on the simulated test set the CNN was able to estimate the values of most EC RFs with 10-20% error (relative to the range). In the morphology group, the errors were 3.8% for volume, 12.0% for compactness, 15.7% for flatness. In the intensity group, the errors were 13.7% for the mean, 15.4% for variance, 12.3% for skewness, and 13.1% for kurtosis. In the texture group, the error was 10.6% for GLCM contrast, 13.4% for cluster tendency, and 21.7% for angular momentum. With all features, the difference between the CNN-estimated and EC feature values was statistically insignificant (two-sample t-test), and the correlation between the feature values was highly significant (p<0.01). On the real image test set, we observed higher error rates, on the order of 20-30%; however, with all but one feature (angular momentum), there was a significant correlation between the CNN-estimated and EC features (p<0.01). Conclusions: Our results suggest that CNNs can be trained to act similarly to several widely used RFs. While the accuracy of CNN-based estimates varied between the features, in general, the CNN showed a good propensity for learning. Thus, it is likely that with more complex network architectures and training data, features can be estimated more accurately.
While a greater number of RFs need to be similarly tested in the future, these initial experiments provide first evidence that, given the sufficient quality and quantity of the training data, the CNNs indeed represent a more general approach to feature extraction, and may potentially replace radiomics-based analyses without compromising the descriptive thoroughness.

Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities

  • Kim, Incheol
  • Rajaraman, Sivaramakrishnan
  • Antani, Sameer
Diagnostics (Basel) 2019 Journal Article, cited 0 times
Website
Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of the DL models hinders their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer improved explanation of the convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer leading to correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better in detecting and localizing the discriminative ROIs than other state of the art class-activation methods. Further, to visualize its effectiveness we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanation through their different size, shape, and location for our multi-modality CNN model that achieved over 98% performance on a dataset constructed from publicly available images.

Prediction of 1p/19q Codeletion in Diffuse Glioma Patients Using Preoperative Multiparametric Magnetic Resonance Imaging

  • Kim, Donnie
  • Wang, Nicholas C
  • Ravikumar, Visweswaran
  • Raghuram, DR
  • Li, Jinju
  • Patel, Ankit
  • Wendt, Richard E
  • Rao, Ganesh
  • Rao, Arvind
Frontiers in Computational Neuroscience 2019 Journal Article, cited 0 times

3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme

  • Khened, Mahendra
  • Anand, Vikas Kumar
  • Acharya, Gagan
  • Shah, Nameeta
  • Krishnamurthi, Ganapathy
2019 Conference Proceedings, cited 0 times
Website

Zonal Segmentation of Prostate T2W-MRI using Atrous Convolutional Neural Network

  • Khan, Zia
  • Yahya, Norashikin
  • Alsaih, Khaled
  • Meriaudeau, Fabrice
2019 Conference Paper, cited 0 times
The number of prostate cancer cases is steadily increasing, especially with the rising number of ageing people. It is reported that the 5-year relative survival rate for men with stage 1 prostate cancer is almost 99%; hence, early detection will significantly improve treatment planning and increase the survival rate. Magnetic resonance imaging (MRI) is a common imaging modality for the diagnosis of prostate cancer. MRI provides good visualization of soft tissue and enables better lesion detection and staging of prostate cancer. The main challenge of prostate whole-gland segmentation is the blurry boundary between the central gland (CG) and peripheral zone (PZ), which complicates differential diagnosis, since there are substantial differences in the occurrence and characteristics of cancer in the two zones. To enhance the diagnosis of the prostate gland, we implemented the DeepLabV3+ semantic segmentation approach to segment the prostate into zones. DeepLabV3+ achieved significant results in segmentation of prostate MRI by applying several parallel atrous convolutions with different rates. The CNN-based semantic segmentation approach is trained and tested on the NCI-ISBI 1.5T and 3T MRI dataset consisting of 40 patients. Performance evaluation based on the Dice similarity coefficient (DSC) of the DeepLab-based segmentation is compared with two other CNN-based semantic segmentation techniques: FCN and PSNet. Results show that prostate segmentation using DeepLabV3+ can perform better than FCN and PSNet, with an average DSC of 70.3% in the PZ and 88% in the CG zone. This indicates the significant contribution made by the atrous convolution layers in producing better prostate segmentation results.

ECM-CSD: An Efficient Classification Model for Cancer Stage Diagnosis in CT Lung Images Using FCM and SVM Techniques

  • Kavitha, MS
  • Shanthini, J
  • Sabitha, R
Journal of Medical Systems 2019 Journal Article, cited 0 times
Website

Neurosense: deep sensing of full or near-full coverage head/brain scans in human magnetic resonance imaging

  • Kanber, B.
  • Ruffle, J.
  • Cardoso, J.
  • Ourselin, S.
  • Ciccarelli, O.
Neuroinformatics 2019 Journal Article, cited 0 times
Website
The application of automated algorithms to imaging requires knowledge of its content, a curatorial task, for which we ordinarily rely on the Digital Imaging and Communications in Medicine (DICOM) header as the only source of image meta-data. However, identifying brain MRI scans that have full or near-full coverage among a large number (e.g. >5000) of scans comprising both head/brain and other body parts is a time-consuming task that cannot be automated with the use of the information stored in the DICOM header attributes alone. Depending on the clinical scenario, an entire set of scans acquired in a single visit may often be labelled “BRAIN” in the DICOM field 0018,0015 (Body Part Examined), while the individual scans will often not only include brain scans with full coverage, but also others with partial brain coverage, scans of the spinal cord, and in some cases other body parts.

Multicenter CT phantoms public dataset for radiomics reproducibility tests

  • Kalendralis, Petros
  • Traverso, Alberto
  • Shi, Zhenwei
  • Zhovannik, Ivan
  • Monshouwer, Rene
  • Starmans, Martijn P A
  • Klein, Stefan
  • Pfaehler, Elisabeth
  • Boellaard, Ronald
  • Dekker, Andre
  • Wee, Leonard
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: The aim of this paper is to describe a public, open-access, computed tomography (CT) phantom image set acquired at three centers and collected especially for radiomics reproducibility research. The dataset is useful to test radiomic feature reproducibility with respect to various parameters, such as acquisition settings, scanners, and reconstruction algorithms. ACQUISITION AND VALIDATION METHODS: Three phantoms were scanned in three independent institutions. Images of the following phantoms were acquired: Catphan 700 and COPDGene Phantom II (Phantom Laboratory, Greenwich, NY, USA), and the Triple modality 3D Abdominal Phantom (CIRS, Norfolk, VA, USA). Data were collected at three Dutch medical centers: MAASTRO Clinic (Maastricht, NL), Radboud University Medical Center (Nijmegen, NL), and University Medical Center Groningen (Groningen, NL) with scanners from two different manufacturers, Siemens Healthcare and Philips Healthcare. The following acquisition parameters were varied in the phantom scans: slice thickness, reconstruction kernels, and tube current. DATA FORMAT AND USAGE NOTES: We made the dataset publicly available on the Dutch instance of "Extensible Neuroimaging Archive Toolkit-XNAT" (https://xnat.bmia.nl). The dataset is freely available and reusable with attribution (Creative Commons 3.0 license). POTENTIAL APPLICATIONS: Our goal was to provide a findable, open-access, annotated, and reusable CT phantom dataset for radiomics reproducibility studies. Reproducibility testing and harmonization are fundamental requirements for wide generalizability of radiomics-based clinical prediction models. It is highly desirable to include only reproducible features into models, to be more assured of external validity across hitherto unseen contexts.
In this view, phantom data from different centers represent a valuable source of information to exclude CT radiomic features that may already be unstable with respect to simplified structures and tightly controlled scan settings. The intended extension of our shared dataset is to include other modalities and phantoms with more realistic lesion simulations.

Enhancement of Deep Learning in Image Classification Performance Using Xception with the Swish Activation Function for Colorectal Polyp Preliminary Screening

  • Jinsakul, Natinai
  • Tsai, Cheng-Fa
  • Tsai, Chia-En
  • Wu, Pensee
Mathematics 2019 Journal Article, cited 0 times
One of the leading forms of cancer is colorectal cancer (CRC), which is responsible for increasing mortality in young people. The aim of this paper is to provide an experimental modification of deep learning of Xception with Swish and assess the possibility of developing a preliminary colorectal polyp screening system by training the proposed model with a colorectal topogram dataset in two and three classes. The results indicate that the proposed model can enhance the original convolutional neural network model with evaluation classification performance by achieving accuracy of up to 98.99% for classifying into two classes and 91.48% for three classes. For testing of the model with another external image, the proposed method can also improve the prediction compared to the traditional method, with 99.63% accuracy for true prediction of two classes and 80.95% accuracy for true prediction of three classes.

Fusion Radiomics Features from Conventional MRI Predict MGMT Promoter Methylation Status in Lower Grade Gliomas

  • Jiang, Chendan
  • Kong, Ziren
  • Liu, Sirui
  • Feng, Shi
  • Zhang, Yiwei
  • Zhu, Ruizhe
  • Chen, Wenlin
  • Wang, Yuekun
  • Lyu, Yuelei
  • You, Hui
  • Zhao, Dachun
  • Wang, Renzhi
  • Wang, Yu
  • Ma, Wenbin
  • Feng, Feng
Eur J Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter has been proven to be a prognostic and predictive biomarker for lower grade glioma (LGG). This study aims to build a radiomics model to preoperatively predict the MGMT promoter methylation status in LGG. METHOD: 122 pathology-confirmed LGG patients were retrospectively reviewed, with 87 local patients as the training dataset, and 35 from The Cancer Imaging Archive as independent validation. A total of 1702 radiomics features were extracted from three-dimensional contrast-enhanced T1 (3D-CE-T1)-weighted and T2-weighted MRI images, including 14 shape, 18 first order, 75 texture, and 744 wavelet features respectively. The radiomics features were selected with the least absolute shrinkage and selection operator algorithm, and prediction models were constructed with multiple classifiers. Models were evaluated using receiver operating characteristic (ROC) analysis. RESULTS: Five radiomics prediction models, namely, a 3D-CE-T1-weighted single radiomics model, a T2-weighted single radiomics model, a fusion radiomics model, a linear combination radiomics model, and a clinical integrated model, were built. The fusion radiomics model, which was constructed from the concatenation of both series, displayed the best performance, with an accuracy of 0.849 and an area under the curve (AUC) of 0.970 (0.939-1.000) in the training dataset, and an accuracy of 0.886 and an AUC of 0.898 (0.786-1.000) in the validation dataset. Linear combination of the single radiomics models and integration of clinical factors did not improve performance. CONCLUSIONS: Conventional MRI radiomics models are reliable for predicting the MGMT promoter methylation status in LGG patients. The fusion of radiomics features from different series may increase the prediction performance.

Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier

  • Jensen, C.
  • Carl, J.
  • Boesen, L.
  • Langkilde, N. C.
  • Ostergaard, L. R.
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Region of interest was extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the center of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUC of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for transitional zone and anterior fibromuscular stroma were AUC of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GG indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various aggressiveness.

Deep Neural Network Based Classifier Model for Lung Cancer Diagnosis and Prediction System in Healthcare Informatics

  • Jayaraj, D.
  • Sathiamoorthy, S.
2019 Conference Paper, cited 0 times
Lung cancer is a major deadly disease that results in mortality because of unmanageable cell growth. This problem has led to increased interest among physicians as well as academicians in developing efficient diagnosis models. Therefore, a novel method for automated identification of lung nodules becomes essential, and this forms the motivation of this study. This paper presents a new deep learning classification model for lung cancer diagnosis. The presented model involves four main steps, namely preprocessing, feature extraction, segmentation, and classification. A particle swarm optimization (PSO) algorithm is used for segmentation, and a deep neural network (DNN) is applied for classification. The presented PSO-DNN model is tested against a set of sample lung images, and the results verified the effectiveness of the proposed model on all the applied images.

Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration

  • Jahani, Nariman
  • Cohen, Eric
  • Hsieh, Meng-Kang
  • Weinstein, Susan P
  • Pantalone, Lauren
  • Hylton, Nola
  • Newitt, David
  • Davatzikos, Christos
  • Kontos, Despina
Scientific Reports 2019 Journal Article, cited 0 times
Website
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 (±0.03) vs 0.71 (±0.04), p < 0.05 and RFS (C-statistic = 0.76 ( ± 0.05), vs 0.63 ( ± 0.01)), p < 0.05, while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.

NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization: Explanation of the software platform developed for 3D models visualization related with medical images using Augmented and Virtual Reality technology

  • Izard, Santiago González
  • Plaza, Óscar Alonso
  • Torres, Ramiro Sánchez
  • Méndez, Juan Antonio Juanes
  • García-Peñalvo, Francisco José
2019 Conference Proceedings, cited 0 times
Website
The visualization of radiological results with techniques more advanced than the current ones, such as Augmented Reality and Virtual Reality, represents a great advance for medical professionals, removing the need to mentally reconstruct anatomy as a prerequisite for understanding medical images. The problem is that applying these techniques requires segmenting the anatomical areas of interest, which currently involves human intervention. The Nextmed project is presented as a complete solution that includes DICOM image import, automatic segmentation of certain anatomical structures, 3D mesh generation of the segmented area, and a visualization engine with Augmented Reality and Virtual Reality, all thanks to the different software platforms that have been implemented and are detailed here, including results obtained from real patients. We focus on the visualization platform, which uses both Augmented and Virtual Reality technologies to allow medical professionals to work with 3D model representations of medical images in a different way, taking advantage of new technologies.

A rotation and translation invariant method for 3D organ image classification using deep convolutional neural networks

  • Islam, Kh Tohidul
  • Wijewickrema, Sudanthi
  • O’Leary, Stephen
PeerJ Computer Science 2019 Journal Article, cited 0 times
Website
Three-dimensional (3D) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. It is a challenging task due to several reasons. First, image intensity values are vastly different depending on the image modality. Second, intensity values within the same image modality may vary depending on the imaging machine and artifacts may also be introduced in the imaging process. Third, processing 3D data requires high computational power. In recent years, significant research has been conducted in the field of 3D medical image classification. However, most of these make assumptions about patient orientation and imaging direction to simplify the problem and/or work with the full 3D images. As such, they perform poorly when these assumptions are not met. In this paper, we propose a method of classification for 3D organ images that is rotation and translation invariant. To this end, we extract a representative two-dimensional (2D) slice along the plane of best symmetry from the 3D image. We then use this slice to represent the 3D image and use a 20-layer deep convolutional neural network (DCNN) to perform the classification task. We show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. Notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. We also explore how this method can be used with other DCNN models as well as conventional classification approaches.

Automatic MRI Breast tumor Detection using Discrete Wavelet Transform and Support Vector Machines

  • Ibraheem, Amira Mofreh
  • Rahouma, Kamel Hussein
  • Hamed, Hesham F. A.
2019 Conference Paper, cited 0 times
Website
Every human has the right to live a healthy life free of serious diseases. Cancer is the most serious disease facing humans and can lead to death, so definitive solutions are needed to eliminate these diseases and protect humans from them. Breast cancer is considered one of the most dangerous cancers facing women in particular. Early examination should be done periodically, and diagnosis must be sensitive and effective to preserve women's lives. There are various types of breast cancer imaging, but magnetic resonance imaging (MRI) has become one of the important modalities in breast cancer detection. In this work, a new method is presented to detect breast cancer using MRI images preprocessed with a 2D median filter. Features are extracted from the images using the discrete wavelet transform (DWT) and reduced to 13 features. Then, a support vector machine (SVM) is used to detect whether a tumor is present. Simulation results were obtained using MRI datasets extracted from the standard breast MRI database known as the "Reference Image Database to Evaluate Response (RIDER)". The proposed method achieved an accuracy of 98.03% on the available MRI database, with a processing time of 0.894 seconds for all steps. The obtained results demonstrate the superiority of the proposed system over those available in the literature.
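
DWT-based feature extraction of the kind described can be sketched without any wavelet library: a single-level 2D Haar transform splits the image into approximation and detail sub-bands, and simple sub-band statistics serve as features. The statistics chosen here are illustrative, not the paper's exact 13 features.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet transform; image dims must be even.
    Returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def wavelet_features(img):
    """Mean and standard deviation of each sub-band, a common
    DWT-based texture descriptor."""
    feats = []
    for band in haar_dwt2(img):
        feats += [band.mean(), band.std()]
    return feats

img = np.full((8, 8), 100.0)   # perfectly flat image
feats = wavelet_features(img)  # detail sub-bands are all zero
```

A flat image has all its energy in the LL band, so the detail statistics vanish; textured tumor regions would light up the detail bands instead.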

Fast and Fully-Automated Detection and Segmentation of Pulmonary Nodules in Thoracic CT Scans Using Deep Convolutional Neural Networks

  • Huang, X.
  • Sun, W.
  • Tseng, T. B.
  • Li, C.
  • Qian, W.
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 0 times
Website
Deep learning techniques have been extensively used in computerized pulmonary nodule analysis in recent years. Many reported studies still utilize hybrid methods for diagnosis, in which convolutional neural networks (CNNs) are used as only one part of the pipeline, and the whole system still needs either traditional image processing modules or human intervention to obtain final results. In this paper, we introduce a fast and fully-automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has four major modules: candidate nodule detection with Faster regional-CNN (R-CNN), candidate merging, false positive (FP) reduction with a CNN, and nodule segmentation with a customized fully convolutional neural network (FCN). The entire system requires no human interaction or database-specific design. The average runtime is about 16 s per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% with an average of 1 and 4 false positives (FPs) per scan, respectively. The average Dice coefficient of nodule segmentation compared to the ground truth is 0.793.
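
The abstract does not spell out its candidate-merging rule; one plausible sketch, under that assumption, is a greedy merge of detections whose intersection-over-union exceeds a threshold, replacing each overlapping group with its union box.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def merge_candidates(boxes, thresh=0.3):
    """Greedily merge overlapping detections into single candidates by
    taking the union box of each overlapping pair."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if iou(box, m) > thresh:
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(box)
    return merged

dets = [(10, 10, 30, 30), (12, 12, 32, 32), (100, 100, 120, 120)]
print(merge_candidates(dets))  # the two overlapping boxes collapse into one
```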

Assessment of a radiomic signature developed in a general NSCLC cohort for predicting overall survival of ALK-positive patients with different treatment types

  • Huang, Lyu
  • Chen, Jiayan
  • Hu, Weigang
  • Xu, Xinyan
  • Liu, Di
  • Wen, Junmiao
  • Lu, Jiayu
  • Cao, Jianzhao
  • Zhang, Junhua
  • Gu, Yu
  • Wang, Jiazhou
  • Fan, Min
Clinical lung cancer 2019 Journal Article, cited 0 times
Website
Objectives: To investigate the potential of a radiomic signature developed in a general NSCLC cohort for predicting the overall survival of ALK-positive patients with different treatment types. Methods: After test-retest in the RIDER dataset, 132 features (ICC>0.9) were selected in the LASSO Cox regression model with a leave-one-out cross-validation. The NSCLC Radiomics collection from TCIA was randomly divided into a training set (N=254) and a validation set (N=63) to develop a general radiomic signature for NSCLC. In our ALK+ set, 35 patients received targeted therapy and 19 patients received non-targeted therapy. The developed signature was tested later in this ALK+ set. Performance of the signature was evaluated with C-index and stratification analysis. Results: The general signature has good performance (C-index>0.6, log-rank p-value<0.05) in the NSCLC Radiomics collection. It includes five features: Geom_va_ratio, W_GLCM_LH_Std, W_GLCM_LH_DV, W_GLCM_HH_IM2 and W_his_HL_mean (Supplementary Table S2). Its accuracy of predicting overall survival in the ALK+ set achieved 0.649 (95%CI=0.640-0.658). Nonetheless, impaired performance was observed in the targeted therapy group (C-index=0.573, 95%CI=0.556-0.589) while significantly improved performance was observed in the non-targeted therapy group (C-index=0.832, 95%CI=0.832-0.852). Stratification analysis also showed that the general signature could only identify high- and low-risk patients in the non-targeted therapy group (log-rank p-value=0.00028). Conclusions: This preliminary study suggests that the applicability of a general signature to ALK-positive patients is limited. The general radiomic signature seems to be only applicable to ALK-positive patients who had received non-targeted therapy, which indicates that developing special radiomics signatures for patients treated with TKI might be necessary.
Abbreviations: TCIA, The Cancer Imaging Archive; ALK, anaplastic lymphoma kinase; NSCLC, non-small cell lung cancer; EML4-ALK fusion, echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase fusion; C-index, concordance index; CI, confidence interval; ICC, intra-class correlation coefficient; OS, overall survival; LASSO, least absolute shrinkage and selection operator; EGFR, epidermal growth factor receptor; TKI, tyrosine kinase inhibitor.
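
The test-retest screening step (keeping only features with ICC > 0.9) rests on the intra-class correlation coefficient, which can be computed directly from the one-way random-effects ANOVA decomposition; this is a plain ICC(1,1) sketch, not the paper's exact implementation.

```python
def icc_1_1(test, retest):
    """One-way random-effects ICC(1,1) for test-retest agreement of a
    feature measured twice on the same subjects."""
    n = len(test)
    k = 2
    pairs = list(zip(test, retest))
    grand = sum(test + retest) / (n * k)
    subj_means = [(a + b) / 2.0 for a, b in pairs]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for (a, b), m in zip(pairs, subj_means)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

test   = [1.0, 2.0, 3.0, 4.0]
retest = [1.0, 2.0, 3.0, 4.0]   # perfect reproducibility
print(icc_1_1(test, retest))    # 1.0
```

Features whose ICC across the two RIDER scans falls at or below the 0.9 cutoff would be discarded before the LASSO Cox step.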

Development and validation of radiomic signatures of head and neck squamous cell carcinoma molecular features and subtypes

  • Huang, Chao
  • Cintra, Murilo
  • Brennan, Kevin
  • Zhou, Mu
  • Colevas, A Dimitrios
  • Fischbein, Nancy
  • Zhu, Shankuan
  • Gevaert, Olivier
EBioMedicine 2019 Journal Article, cited 1 times
Website
BACKGROUND: Radiomics-based non-invasive biomarkers are promising to facilitate the translation of therapeutically related molecular subtypes for treatment allocation of patients with head and neck squamous cell carcinoma (HNSCC). METHODS: We included 113 HNSCC patients from The Cancer Genome Atlas (TCGA-HNSCC) project. Molecular phenotypes analyzed were RNA-defined HPV status, five DNA methylation subtypes, four gene expression subtypes and five somatic gene mutations. A total of 540 quantitative image features were extracted from pre-treatment CT scans. Features were selected and used in a regularized logistic regression model to build binary classifiers for each molecular subtype. Models were evaluated using the average area under the Receiver Operator Characteristic curve (AUC) of a stratified 10-fold cross-validation procedure repeated 10 times. Next, an HPV model was trained with the TCGA-HNSCC, and tested on a Stanford cohort (N=53). FINDINGS: Our results show that quantitative image features are capable of distinguishing several molecular phenotypes. We obtained significant predictive performance for RNA-defined HPV+ (AUC=0.73), DNA methylation subtypes MethylMix HPV+ (AUC=0.79), non-CIMP-atypical (AUC=0.77) and Stem-like-Smoking (AUC=0.71), and mutation of NSD1 (AUC=0.73). We externally validated the HPV prediction model (AUC=0.76) on the Stanford cohort. When compared to clinical models, radiomic models were superior for subtypes such as NOTCH1 mutation and DNA methylation subtype non-CIMP-atypical while inferior for DNA methylation subtype CIMP-atypical and NSD1 mutation. INTERPRETATION: Our study demonstrates that radiomics can potentially serve as a non-invasive tool to identify treatment-relevant subtypes of HNSCC, opening up the possibility for patient stratification, treatment allocation and inclusion in clinical trials. FUND: Dr. Gevaert reports grants from National Institute of Dental & Craniofacial Research (NIDCR) U01 DE025188, grants from National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (NIBIB), R01 EB020527, grants from National Cancer Institute (NCI), U01 CA217851, during the conduct of the study; Dr. Huang and Dr. Zhu report grants from China Scholarship Council (Grant NO:201606320087), grants from China Medical Board Collaborating Program (Grant NO:15-216), the Cyrus Tang Foundation, and the Zhejiang University Education Foundation during the conduct of the study; Dr. Cintra reports grants from Sao Paulo State Foundation for Teaching and Research (FAPESP), during the conduct of the study.

Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field

  • Hu, Kai
  • Gan, Qinghai
  • Zhang, Yuan
  • Deng, Shuhua
  • Xiao, Fen
  • Huang, Wei
  • Cao, Chunhong
  • Gao, Xieping
IEEE Access 2019 Journal Article, cited 2 times
Website
Accurate segmentation of brain tumor is an indispensable component for cancer diagnosis and treatment. In this paper, we propose a novel brain tumor segmentation method based on multicascaded convolutional neural network (MCCNN) and fully connected conditional random fields (CRFs). The segmentation process mainly includes the following two steps. First, we design a multi-cascaded network architecture by combining the intermediate results of several connected components to take the local dependencies of labels into account and make use of multi-scale features for the coarse segmentation. Second, we apply CRFs to consider the spatial contextual information and eliminate some spurious outputs for the fine segmentation. In addition, we use image patches obtained from axial, coronal, and sagittal views to respectively train three segmentation models, and then combine them to obtain the final segmentation result. The validity of the proposed method is evaluated on three publicly available databases. The experimental results show that our method achieves competitive performance compared with the state-of-the-art approaches.

Performance of sparse-view CT reconstruction with multi-directional gradient operators

  • Hsieh, C. J.
  • Jin, S. C.
  • Chen, J. C.
  • Kuo, C. W.
  • Wang, R. T.
  • Chu, W. C.
PLoS One 2019 Journal Article, cited 0 times
Website
To further reduce the noise and artifacts in the reconstructed image of sparse-view CT, we have modified the traditional total variation (TV) methods, which only calculate the gradient variations in the x and y directions, and have proposed 8- and 26-directional (multi-directional) gradient operators for TV calculation to improve the quality of reconstructed images. Different from traditional TV methods, the proposed 8- and 26-directional gradient operators additionally consider the diagonal directions in the TV calculation. The proposed method preserves more information from the original tomographic data in the gradient-transform step to obtain better reconstructed image quality. Our algorithms were tested using the two-dimensional Shepp-Logan phantom and three-dimensional clinical CT images. Results were evaluated using the root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and universal quality index (UQI). All the experimental results show that the sparse-view CT images reconstructed using the proposed 8- and 26-directional gradient operators are superior to those reconstructed by traditional TV methods. Qualitative and quantitative analyses indicate that the more directions the gradient operator includes, the better the images that can be reconstructed. The 8- and 26-directional gradient operators we propose have a better capability to reduce noise and artifacts than traditional TV methods, and they can be combined with existing CT reconstruction algorithms derived from CS theory to produce better image quality in sparse-view reconstruction.
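
An 8-directional anisotropic TV of the kind described can be sketched by summing absolute finite differences along the horizontal, vertical, and both diagonal neighbor directions. The circular wrap-around of `np.roll` at the image border is a simplification of this sketch, not part of the paper's operator.

```python
import numpy as np

def tv_8dir(img):
    """Anisotropic total variation summed over 8 neighbour directions:
    horizontal, vertical, and both diagonals (forward and backward)."""
    img = img.astype(float)
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1),
              (0, -1), (-1, 0), (-1, -1), (-1, 1)]
    tv = 0.0
    for dy, dx in shifts:
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        diff = np.abs(img - shifted)
        # normalise diagonal steps by their Euclidean length sqrt(2)
        tv += diff.sum() / (2 ** 0.5 if dy and dx else 1.0)
    return tv

flat = np.full((16, 16), 5.0)
print(tv_8dir(flat))  # 0.0 -- a constant image has zero total variation
```

In an iterative CS-style reconstruction, this quantity would serve as the regularization term minimized alongside data fidelity.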

A Pipeline for Lung Tumor Detection and Segmentation from CT Scans Using Dilated Convolutional Neural Networks

  • Hossain, S
  • Najeeb, S
  • Shahriyar, A
  • Abdullah, ZR
  • Haque, MA
2019 Conference Proceedings, cited 0 times
Website
Lung cancer is the most prevalent cancer worldwide, with about 230,000 new cases every year. Most cases go undiagnosed until it is too late, especially in developing countries and remote areas. Early detection is key to beating cancer. Towards this end, the work presented here proposes an automated pipeline for lung tumor detection and segmentation from 3D lung CT scans from the NSCLC-Radiomics dataset. It also presents a new dilated hybrid-3D convolutional neural network architecture for tumor segmentation. First, a binary classifier chooses CT scan slices that may contain parts of a tumor. To segment the tumors, the selected slices are passed to the segmentation model, which extracts feature maps from each 2D slice using dilated convolutions and then fuses the stacked maps through 3D convolutions, incorporating the 3D structural information present in the CT scan volume into the output. Lastly, the segmentation masks are passed through a post-processing block which cleans them up through morphological operations. The proposed segmentation model outperformed other contemporary models like LungNet and U-Net. The average and median Dice coefficients on the test set for the proposed model were 65.7% and 70.39%, respectively. The next best model, LungNet, had Dice scores of 62.67% and 66.78%.
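
The Dice coefficient used to score these segmentations is straightforward to compute from two binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (flat sequences of 0/1 values)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice(pred, truth))  # 2*2 / (3+2) = 0.8
```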

Renal Cancer Cell Nuclei Detection from Cytological Images Using Convolutional Neural Network for Estimating Proliferation Rate

  • Hossain, Shamim
  • Jalab, Hamid A.
  • Zulfiqar, Fariha
  • Pervin, Mahfuza
Journal of Telecommunication, Electronic and Computer Engineering 2019 Journal Article, cited 0 times
Website
Cytological images play an essential role in monitoring the progress of cancer cell mutation, and the proliferation rate of cancer cells is a prerequisite for cancer treatment. However, accurately and quickly identifying the nuclei of abnormal cells and computing the correct proliferation rate is hard, since it requires in-depth manual examination, observation, and cell counting, which are tedious and time-consuming. The proposed method starts with segmentation to separate the background and object regions using K-means clustering. Small candidate regions containing cells are then detected automatically based on a support vector machine. Sets of cell regions, whether overlapping or non-overlapping, are marked with selective search according to the local distance between the nucleus and the cell boundary. After that, the selectively segmented cell features are used to learn normal and abnormal cell nuclei separately with a regional convolutional neural network. Finally, the proliferation rate in the invasive cancer area is calculated based on the number of abnormal cells. A set of renal cancer cell cytological images was taken from the National Cancer Institute, USA, and this data set is available for research. Quantitative evaluation of this method is performed by comparing its accuracy with that of other state-of-the-art cancer cell nuclei detection methods; qualitative assessment is based on human observation. The proposed method is able to detect renal cancer cell nuclei accurately and provide an automatic proliferation rate.

Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling

  • Hiasa, Yuta
  • Otake, Yoshito
  • Takao, Masaki
  • Ogawa, Takeshi
  • Sugano, Nobuhiko
  • Sato, Yoshinobu
IEEE Trans Med Imaging 2019 Journal Article, cited 2 times
Website
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891+/-0.016 (mean+/-std) and an average symmetric surface distance (ASD) of 0.994+/-0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method which resulted in 0.845+/-0.031 DC and 1.556+/-0.444 mm ASD. We evaluated validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in activelearning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
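
The Monte Carlo dropout idea used above for the uncertainty metric can be illustrated on a single linear layer: dropout stays active at inference, and the spread of repeated stochastic forward passes gives the uncertainty estimate. This is a toy numpy stand-in, not the paper's Bayesian U-Net.

```python
import numpy as np

def mc_dropout_predict(x, w, b, rate=0.5, n_samples=50, seed=0):
    """Monte Carlo dropout on one linear layer: sample random unit masks,
    rescale by 1/(1-rate) (inverted dropout), and return the predictive
    mean and standard deviation across the stochastic passes."""
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_samples):
        mask = rng.random(w.shape[0]) > rate      # drop units at random
        outs.append((x * mask) @ w / (1.0 - rate) + b)
    outs = np.array(outs)
    return outs.mean(), outs.std()

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -0.2, 0.1])
mean, std = mc_dropout_predict(x, w, 0.0)
# std > 0: the stochastic masks produce genuinely different outputs
```

In the paper's setting the per-pixel standard deviation over sampled segmentations plays the same role, flagging pixels likely to be segmentation failures.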

Transfer learning with multiple convolutional neural networks for soft tissue sarcoma MRI classification

  • Hermessi, Haithem
  • Mourali, Olfa
  • Zagrouba, Ezzeddine
2019 Conference Proceedings, cited 1 times
Website

A Comparison of the Efficiency of Using a Deep CNN Approach with Other Common Regression Methods for the Prediction of EGFR Expression in Glioblastoma Patients

  • Hedyehzadeh, Mohammadreza
  • Maghooli, Keivan
  • MomenGharibvand, Mohammad
  • Pistorius, Stephen
J Digit Imaging 2019 Journal Article, cited 0 times
Website
To estimate epidermal growth factor receptor (EGFR) expression level in glioblastoma (GBM) patients using radiogenomic analysis of magnetic resonance images (MRI). A comparative study using a deep convolutional neural network (CNN)-based regression, a deep neural network, least absolute shrinkage and selection operator (LASSO) regression, elastic net regression, and linear regression with no regularization was carried out to estimate EGFR expression of 166 GBM patients. Except for the deep CNN case, overfitting was prevented by using feature selection, and loss values for each method were compared. The loss values in the training phase for the deep CNN, deep neural network, elastic net, LASSO, and linear regression with no regularization were 2.90, 8.69, 7.13, 14.63, and 21.76, respectively, while in the test phase, the loss values were 5.94, 10.28, 13.61, 17.32, and 24.19, respectively. These results illustrate that the efficiency of the deep CNN approach is better than that of the other methods, including LASSO regression, which is a regression method known for its advantage in high-dimensional cases. A comparison between the deep CNN, deep neural network, and three other common regression methods was carried out, and the efficiency of the deep CNN approach, in comparison with other regression models, was demonstrated.

Fast Super-Resolution in MRI Images Using Phase Stretch Transform, Anchored Point Regression and Zero-Data Learning

  • He, Sifeng
  • Jalali, Bahram
2019 Conference Proceedings, cited 0 times
Website
Medical imaging is fundamentally challenging due to absorption and scattering in tissues and by the need to minimize illumination of the patient with harmful radiation. Common problems are low spatial resolution, limited dynamic range and low contrast. These predicaments have fueled interest in enhancing medical images using digital post processing. In this paper, we propose and demonstrate an algorithm for real-time inference that is suitable for edge computing. Our locally adaptive learned filtering technique named Phase Stretch Anchored Regression (PhSAR) combines the Phase Stretch Transform for local features extraction in visually impaired images with clustered anchored points to represent image feature space and fast regression based learning. In contrast with the recent widely-used deep neural network for image super-resolution, our algorithm achieves significantly faster inference and less hallucination on image details and is interpretable. Tests on brain MRI images using zero-data learning reveal its robustness with explicit PSNR improvement and lower latency compared to relevant benchmarks.

Automatic Colorectal Segmentation with Convolutional Neural Network

  • Guachi, Lorena
  • Guachi, Robinson
  • Bini, Fabiano
  • Marinozzi, Franco
Computer-Aided Design and Applications 2019 Journal Article, cited 3 times
Website
This paper presents a new method for colon tissue segmentation on computed tomography images which takes advantage of deep, hierarchical learning of colon features through convolutional neural networks (CNN). The proposed method works robustly, reducing misclassified colon tissue pixels that are introduced by the presence of noise, artifacts, unclear edges, and other organs or areas characterized by the same intensity value as the colon. Patch analysis is exploited to allow classification of each center pixel as a colon tissue or background pixel. Experimental results demonstrate that the proposed method achieves higher effectiveness in terms of sensitivity and specificity with respect to three state-of-the-art methods.

Automatic lung nodule detection using multi-scale dot nodule-enhancement filter and weighted support vector machines in chest computed tomography

  • Gu, Y.
  • Lu, X.
  • Zhang, B.
  • Zhao, Y.
  • Yu, D.
  • Gao, L.
  • Cui, G.
  • Wu, L.
  • Zhou, T.
PLoS One 2019 Journal Article, cited 0 times
Website
A novel CAD scheme for automated lung nodule detection is proposed to assist radiologists with the detection of lung cancer on CT scans. The proposed scheme is composed of four major steps: (1) lung volume segmentation, (2) nodule candidate extraction and grouping, (3) false positives reduction for the non-vessel tree group, and (4) classification for the vessel tree group. Lung segmentation is performed first. Then, 3D labeling technology is used to divide nodule candidates into two groups. For the non-vessel tree group, nodule candidates are classified as true nodules at the false positive reduction stage if the candidates survive the rule-based classifier and are not screened out by the dot filter. For the vessel tree group, nodule candidates are extracted using dot filter. Next, RSFS feature selection is used to select the most discriminating features for classification. Finally, WSVM with an undersampling approach is adopted to discriminate true nodules from vessel bifurcations in vessel tree group. The proposed method was evaluated on 154 thin-slice scans with 204 nodules in the LIDC database. The performance of the proposed CAD scheme yielded a high sensitivity (87.81%) while maintaining a low false rate (1.057 FPs/scan). The experimental results indicate the performance of our method may be better than the existing methods.

Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data

  • Gsaxner, Christina
  • Roth, Peter M
  • Wallner, Jurgen
  • Egger, Jan
PLoS One 2019 Journal Article, cited 0 times
Website
We present an approach for fully automatic urinary bladder segmentation in CT images with artificial neural networks in this study. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Especially medical image segmentation plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data for obtaining a ground truth and by utilizing data augmentation to enlarge the dataset. In this study, we discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow concluding that deep neural networks can be considered a promising approach to segment the urinary bladder in CT images.
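
The ground-truth generation step described (thresholding the PET channel, then augmenting) can be sketched directly; the threshold value and flip augmentation below are illustrative choices.

```python
import numpy as np

def pet_ground_truth(pet, threshold):
    """Binary label map from a PET uptake volume by simple thresholding."""
    return (pet >= threshold).astype(np.uint8)

def augment(image, label):
    """Paired horizontal-flip augmentation keeps image and label aligned."""
    return np.fliplr(image), np.fliplr(label)

pet = np.array([[0.2, 0.9],
                [0.8, 0.1]])
label = pet_ground_truth(pet, 0.5)   # 1 where uptake exceeds the threshold
img_f, lab_f = augment(pet, label)   # flipped pair for training
```

Applying the same spatial transform to image and label is what makes the augmented pairs usable as extra CT training data for the segmentation networks.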

Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer

  • Gholizadeh-Ansari, M.
  • Alirezaie, J.
  • Babyn, P.
J Digit Imaging 2019 Journal Article, cited 1 times
Website
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolution helping to capture more contextual information in fewer layers. Also, we have employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network by a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from over smoothing and blurring effects causing by per-pixel loss and grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while changing the complexity of the network, minimally.
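
A non-trainable edge-detection layer of the kind described amounts to convolving with fixed directional kernels. The Sobel-style kernels below are an assumption for illustration; the paper does not publish its exact filters in this abstract.

```python
import numpy as np

# Sobel-style kernels for horizontal, vertical, and two diagonal edges
KERNELS = {
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "diag_45":    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),
    "diag_135":   np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float),
}

def conv2d_valid(img, kernel):
    """Plain 2D 'valid' convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# a vertical step edge responds strongly to the "vertical" kernel only
img = np.zeros((5, 5))
img[:, 3:] = 1.0
resp = conv2d_valid(img, KERNELS["vertical"])
```

Fixing these kernels (rather than learning them) injects explicit edge information into the loss, which is how the layer helps preserve structural detail during denoising.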

Radiomics features of the primary tumor fail to improve prediction of overall survival in large cohorts of CT- and PET-imaged head and neck cancer patients

  • Ger, Rachel B
  • Zhou, Shouhao
  • Elgohari, Baher
  • Elhalawani, Hesham
  • Mackin, Dennis M
  • Meier, Joseph G
  • Nguyen, Callistus M
  • Anderson, Brian M
  • Gay, Casey
  • Ning, Jing
  • Fuller, Clifton D
  • Li, Heng
  • Howell, Rebecca M
  • Layman, Rick R
  • Mawlawi, Osama
  • Stafford, R Jason
  • Aerts, Hugo JWL
  • Court, Laurence E.
PLoS One 2019 Journal Article, cited 0 times
Website
Radiomics studies require many patients in order to power them, thus patients are often combined from different institutions and using different imaging protocols. Various studies have shown that imaging protocols affect radiomics feature values. We examined whether using data from cohorts with controlled imaging protocols improved patient outcome models. We retrospectively reviewed 726 CT and 686 PET images from head and neck cancer patients, who were divided into training or independent testing cohorts. For each patient, radiomics features with different preprocessing were calculated and two clinical variables-HPV status and tumor volume-were also included. A Cox proportional hazards model was built on the training data by using bootstrapped Lasso regression to predict overall survival. The effect of controlled imaging protocols on model performance was evaluated by subsetting the original training and independent testing cohorts to include only patients whose images were obtained using the same imaging protocol and vendor. Tumor volume, HPV status, and two radiomics covariates were selected for the CT model, resulting in an AUC of 0.72. However, volume alone produced a higher AUC, whereas adding radiomics features reduced the AUC. HPV status and one radiomics feature were selected as covariates for the PET model, resulting in an AUC of 0.59, but neither covariate was significantly associated with survival. Limiting the training and independent testing to patients with the same imaging protocol reduced the AUC for CT patients to 0.55, and no covariates were selected for PET patients. Radiomics features were not consistently associated with survival in CT or PET images of head and neck patients, even within patients with the same imaging protocol.

Ultra-Fast 3D GPGPU Region Extractions for Anatomy Segmentation

  • George, Jose
  • Mysoon, N. S.
  • Antony, Nixima
2019 Conference Paper, cited 0 times
Website
Region extractions are ubiquitous in anatomy segmentation. Region growing is one such method. Starting from an initial seed point, it grows a region of interest until all valid voxels have been checked, thereby resulting in an object segmentation. Although widely used, it is computationally expensive because of its sequential approach. In this paper, we present a parallel, high-performance alternative to region growing using GPGPU capability. The idea is to approximate the region-growing requirements within an algorithm using a parallel connected-component labeling (CCL) solution. To showcase this, we selected a typical lung segmentation problem using region growing. On the CPU, the sequential approach consists of 3D region growing inside a mask that is created after applying a threshold. On the GPU, the parallel alternative is to apply parallel CCL and select the biggest region of interest. We evaluated our approach on 45 clinical chest CT scans in LIDC data from the TCIA repository. Relative to the CPU, our CUDA-based GPU implementation achieved an average performance improvement of approximately 240×. The speedup is so large that the method can even be applied to 4D lung segmentation at 6 fps.
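A minimal CPU sketch of the selection step the abstract describes: label the connected components of a thresholded binary mask and keep only the largest one as the region of interest. The 2D mask, 4-connectivity, and BFS flood fill are simplifying assumptions; the paper's contribution is doing the labeling in parallel on the GPU.

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a 2D mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # BFS flood fill from an unvisited foreground pixel
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * cols for _ in range(rows)]
    for y, x in best:
        out[y][x] = 1
    return out

mask = [[1, 0, 0, 1],
        [1, 0, 0, 1],
        [0, 0, 0, 1]]
# The 3-pixel right column survives; the 2-pixel left column is discarded.
```

The sequential scan is exactly what makes the CPU version slow on large volumes; parallel CCL replaces the voxel-by-voxel BFS with iterative label propagation that maps well onto CUDA threads.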

Performance analysis for nonlinear tomographic data processing

  • Gang, Grace J
  • Guo, Xueqi
  • Stayman, J Webster
2019 Conference Proceedings, cited 0 times
Website

In Silico Approach for the Definition of radiomiRNomic Signatures for Breast Cancer Differential Diagnosis

  • Gallivanone, F.
  • Cava, C.
  • Corsi, F.
  • Bertoli, G.
  • Castiglioni, I.
Int J Mol Sci 2019 Journal Article, cited 2 times
Website
Personalized medicine relies on the integration and consideration of specific characteristics of the patient, such as tumor phenotypic and genotypic profiling. BACKGROUND: Radiogenomics aims to integrate phenotypes from tumor imaging data with genomic data to discover genetic mechanisms underlying tumor development and phenotype. METHODS: We describe a computational approach that correlates phenotype from magnetic resonance imaging (MRI) of breast cancer (BC) lesions with microRNAs (miRNAs), mRNAs, and regulatory networks, developing a radiomiRNomic map. We validated our approach on the relationships between MRI and miRNA expression data derived from BC patients. We obtained 16 radiomic features quantifying the tumor phenotype. We integrated the features with miRNAs regulating a network of pathways specific for a distinct BC subtype. RESULTS: We found six miRNAs correlated with imaging features in Luminal A (miR-1537, -205, -335, -337, -452, and -99a), seven miRNAs (miR-142, -155, -190, -190b, -1910, -3617, and -429) in HER2+, and two miRNAs (miR-135b and -365-2) in the Basal subtype. We demonstrate that the combination of correlated miRNAs and imaging features has better classification power for Luminal A versus the different BC subtypes than miRNAs or imaging alone. CONCLUSION: Our computational approach could be used to identify new radiomiRNomic profiles of multi-omics biomarkers for BC differential diagnosis and prognosis.

Automatic Detection of Lung Nodules Using 3D Deep Convolutional Neural Networks

  • Fu, Ling
  • Ma, Jingchen
  • Chen, Yizhi
  • Larsson, Rasmus
  • Zhao, Jun
Journal of Shanghai Jiaotong University (Science) 2019 Journal Article, cited 0 times
Website
Lung cancer is the leading cause of cancer deaths worldwide. Accurate early diagnosis is critical in increasing the 5-year survival rate of lung cancer, so the efficient and accurate detection of lung nodules, the potential precursors to lung cancer, is paramount. In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed. The first multi-scale 11-layer 3D fully convolutional neural network (FCN) is used for screening all lung nodule candidates. Considering relative small sizes of lung nodules and limited memory, the input of the FCN consists of 3D image patches rather than of whole images. The candidates are further classified in the second CNN to get the final result. The proposed method achieves high performance in the LUNA16 challenge and demonstrates the effectiveness of using 3D deep CNNs for lung nodule detection.
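The abstract notes that, because nodules are small and GPU memory is limited, the FCN consumes 3D image patches rather than whole scans. A minimal sketch of dense 3D patch extraction on a regular stride; the nested-list volume layout, patch size, and stride are illustrative assumptions.

```python
def extract_patches_3d(volume, size, stride):
    """Yield (z, y, x, patch) for every size^3 patch on a regular stride.
    `volume` is a depth x rows x cols nested list."""
    depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
    for z in range(0, depth - size + 1, stride):
        for y in range(0, rows - size + 1, stride):
            for x in range(0, cols - size + 1, stride):
                patch = [[plane[y + i][x:x + size] for i in range(size)]
                         for plane in volume[z:z + size]]
                yield z, y, x, patch

# A 4x4x4 volume tiled with non-overlapping 2x2x2 patches -> 8 patches.
volume = [[[0] * 4 for _ in range(4)] for _ in range(4)]
patches = list(extract_patches_3d(volume, 2, 2))
```

Keeping the (z, y, x) origin alongside each patch is what lets per-patch candidate scores be stitched back into a whole-scan detection map afterwards.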

A Radiogenomic Approach for Decoding Molecular Mechanisms Underlying Tumor Progression in Prostate Cancer

  • Fischer, Sarah
  • Tahoun, Mohamed
  • Klaan, Bastian
  • Thierfelder, Kolja M
  • Weber, Marc-Andre
  • Krause, Bernd J
  • Hakenberg, Oliver
  • Fuellen, Georg
  • Hamed, Mohamed
Cancers (Basel) 2019 Journal Article, cited 0 times
Website
Prostate cancer (PCa) is a genetically heterogeneous cancer entity that causes challenges in pre-treatment clinical evaluation, such as the correct identification of the tumor stage. Conventional clinical tests based on digital rectal examination, Prostate-Specific Antigen (PSA) levels, and Gleason score still lack accuracy for stage prediction. We hypothesize that unraveling the molecular mechanisms underlying PCa staging via integrative analysis of multi-OMICs data could significantly improve the prediction accuracy for PCa pathological stages. We present a radiogenomic approach comprising clinical, imaging, and two genomic (gene and miRNA expression) datasets for 298 PCa patients. Comprehensive analysis of gene and miRNA expression profiles for two frequent PCa stages (T2c and T3b) unraveled the molecular characteristics for each stage and the corresponding gene regulatory interaction network that may drive tumor upstaging from T2c to T3b. Furthermore, four biomarkers (ANPEP, mir-217, mir-592, mir-6715b) were found to distinguish between the two PCa stages and were highly correlated (average r = +/- 0.75) with corresponding aggressiveness-related imaging features in both tumor stages. When combined with related clinical features, these biomarkers markedly improved the prediction accuracy for the pathological stage. Our prediction model exhibits high potential to yield clinically relevant results for characterizing PCa aggressiveness.

On the Evaluation of the Suitability of the Materials Used to 3D Print Holographic Acoustic Lenses to Correct Transcranial Focused Ultrasound Aberrations

  • Ferri, Marcelino
  • Bravo, Jose Maria
  • Redondo, Javier
  • Jimenez-Gambin, Sergio
  • Jimenez, Noe
  • Camarena, Francisco
  • Sanchez-Perez, Juan Vicente
Polymers (Basel) 2019 Journal Article, cited 2 times
Website
The correction of transcranial focused ultrasound aberrations is a relevant topic for enhancing various non-invasive medical treatments. Presently, the most widely accepted method to improve focusing is the emission through multi-element phased arrays; however, a new disruptive technology, based on 3D printed holographic acoustic lenses, has recently been proposed, overcoming the spatial limitations of phased arrays due to the submillimetric precision of the latest generation of 3D printers. This work aims to optimize this recent solution. Particularly, the preferred acoustic properties of the polymers used for printing the lenses are systematically analyzed, paying special attention to the effect of p-wave speed and its relationship to the achievable voxel size of 3D printers. Results from simulations and experiments clearly show that, given a particular voxel size, there are optimal ranges for lens thickness and p-wave speed, fairly independent of the emitted frequency, the transducer aperture, or the transducer-target distance.

A study of machine learning and deep learning models for solving medical imaging problems

  • Farhat, Fadi G.
2019 Thesis, cited 0 times
Website
Application of machine learning and deep learning methods on medical imaging aims to create systems that can help in the diagnosis of disease and the automation of analyzing medical images in order to facilitate treatment planning. Deep learning methods do well in image recognition, but medical images present unique challenges. The lack of large amounts of data, the image size, and the high class-imbalance in most datasets, makes training a machine learning model to recognize a particular pattern that is typically present only in case images a formidable task. Experiments are conducted to classify breast cancer images as healthy or nonhealthy, and to detect lesions in damaged brain MRI (Magnetic Resonance Imaging) scans. Random Forest, Logistic Regression and Support Vector Machine perform competitively in the classification experiments, but in general, deep neural networks beat all conventional methods. Gaussian Naïve Bayes (GNB) and the Lesion Identification with Neighborhood Data Analysis (LINDA) methods produce better lesion detection results than single path neural networks, but a multi-modal, multi-path deep neural network beats all other methods. The importance of pre-processing training data is also highlighted and demonstrated, especially for medical images, which require extensive preparation to improve classifier and detector performance. Only a more complex and deeper neural network combined with properly pre-processed data can produce the desired accuracy levels that can rival and maybe exceed those of human experts.

Tumour heterogeneity revealed by unsupervised decomposition of dynamic contrast-enhanced magnetic resonance imaging is associated with underlying gene expression patterns and poor survival in breast cancer patients

  • Fan, M.
  • Xia, P.
  • Liu, B.
  • Zhang, L.
  • Wang, Y.
  • Gao, X.
  • Li, L.
Breast Cancer Res 2019 Journal Article, cited 3 times
Website
BACKGROUND: Heterogeneity is a common finding within tumours. We evaluated the imaging features of tumours based on the decomposition of tumoural dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data to identify their prognostic value for breast cancer survival and to explore their biological importance. METHODS: Imaging features (n = 14), such as texture, histogram distribution and morphological features, were extracted to determine their associations with recurrence-free survival (RFS) in patients in the training cohort (n = 61) from The Cancer Imaging Archive (TCIA). The prognostic value of the features was evaluated in an independent dataset of 173 patients (i.e. the reproducibility cohort) from the TCIA I-SPY 1 TRIAL dataset. Radiogenomic analysis was performed in an additional cohort, the radiogenomic cohort (n = 87), using DCE-MRI from TCGA-BRCA and corresponding gene expression data from The Cancer Genome Atlas (TCGA). The MRI tumour area was decomposed by convex analysis of mixtures (CAM), resulting in 3 components that represent plasma input, fast-flow kinetics and slow-flow kinetics. The prognostic MRI features were associated with the gene expression module in which the pathway was analysed. Furthermore, a multigene signature for each prognostic imaging feature was built, and the prognostic value for RFS and overall survival (OS) was confirmed in an additional cohort from TCGA. RESULTS: Three image features (i.e. the maximum probability from the precontrast MR series, the median value from the second postcontrast series and the overall tumour volume) were independently correlated with RFS (p values of 0.0018, 0.0036 and 0.0032, respectively). The maximum probability feature from the fast-flow kinetics subregion was also significantly associated with RFS and OS in the reproducibility cohort. 
Additionally, this feature had a high correlation with the gene expression module (r = 0.59), and the pathway analysis showed that Ras signalling, a breast cancer-related pathway, was significantly enriched (corrected p value = 0.0044). Gene signatures (n = 43) associated with the maximum probability feature were assessed for associations with RFS (p = 0.035) and OS (p = 0.027) in an independent dataset containing 1010 gene expression samples. Among the 43 gene signatures, Ras signalling was also significantly enriched. CONCLUSIONS: Dynamic pattern deconvolution revealed that tumour heterogeneity was associated with poor survival and cancer-related pathways in breast cancer.

Towards Fully Automatic X-Ray to CT Registration

  • Esteban, Javier
  • Grimm, Matthias
  • Unberath, Mathias
  • Zahnd, Guillaume
  • Navab, Nassir
2019 Journal Article, cited 3 times
Website
The main challenge preventing a fully-automatic X-ray to CT registration is an initialization scheme that brings the X-ray pose within the capture range of existing intensity-based registration methods. By providing such an automatic initialization, the present study introduces the first end-to-end fully-automatic registration framework. A network is first trained once on artificial X-rays to extract 2D landmarks resulting from the projection of CT-labels. A patient-specific refinement scheme is then carried out: candidate points detected from a new set of artificial X-rays are back-projected onto the patient CT and merged into a refined meaningful set of landmarks used for network re-training. This network-landmarks combination is finally exploited for intraoperative pose-initialization with a runtime of 102 ms. Evaluated on 6 pelvis anatomies (486 images in total), the mean Target Registration Error was 15.0±7.3 mm. When used to initialize the BOBYQA optimizer with normalized cross-correlation, the average (± STD) projection distance was 3.4±2.3 mm, and the registration success rate (projection distance <2.5% of the detector width) greater than 97%.

Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI

  • Enlund Åström, Isabelle
2019 Thesis, cited 0 times
Website
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCN) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN to focus on relevant features to improve segmentation results. Channel and spatial attention combine both the spatial context and the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules and was named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.

Feature Extraction and Analysis for Lung Nodule Classification using Random Forest

  • Nada El-Askary
  • Mohammed Salem
  • Mohammed Roushdy
2019 Conference Paper, cited 0 times
Website

An Ad Hoc Random Initialization Deep Neural Network Architecture for Discriminating Malignant Breast Cancer Lesions in Mammographic Images

  • Duggento, Andrea
  • Aiello, Marco
  • Cavaliere, Carlo
  • Cascella, Giuseppe L
  • Cascella, Davide
  • Conte, Giovanni
  • Guerrisi, Maria
  • Toschi, Nicola
Contrast Media Mol Imaging 2019 Journal Article, cited 1 times
Website
Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. In this context, recent studies showed that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening. In this context, the false-positive and false-negative rates commonly achieved by radiologists are extremely arduous to estimate and control although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, prognosis of breast cancer could revolutionize the current status of the management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks in particular. Such techniques require no a priori feature space definition from the operator and are able to achieve classification performances which can even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion which can pose the emphasis on reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how an ad hoc random initialization architecture can and should be fine-tuned to a specific problem, especially in biomedical applications.

Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology

  • Duffy, Ian R
  • Boyle, Amanda J
  • Vasdev, Neil
Molecular imaging 2019 Journal Article, cited 0 times

Learning Multi-Class Segmentations From Single-Class Datasets

  • Dmitriev, Konstantin
  • Kaufman, Arie
2019 Conference Paper, cited 1 times
Website
Multi-class segmentation has recently achieved significant performance in natural images and videos. This achievement is due primarily to the public availability of large multi-class datasets. However, there are certain domains, such as biomedical images, where obtaining sufficient multi-class annotations is a laborious and often impossible task and only single-class datasets are available. While existing segmentation research in such domains use private multi-class datasets or focus on single-class segmentations, we propose a unified highly efficient framework for robust simultaneous learning of multi-class segmentations by combining single-class datasets and utilizing a novel way of conditioning a convolutional network for the purpose of segmentation. We demonstrate various ways of incorporating the conditional information, perform an extensive evaluation, and show compelling multi-class segmentation performance on biomedical images, which outperforms current state-of-the-art solutions (up to 2.7%). Unlike current solutions, which are meticulously tailored for particular single-class datasets, we utilize datasets from a variety of sources. Furthermore, we show the applicability of our method also to natural images and evaluate it on the Cityscapes dataset. We further discuss other possible applications of our proposed framework.

Theoretical tumor edge detection technique using multiple Bragg peak decomposition in carbon ion therapy

  • Dias, Marta Filipa Ferraz
  • Collins-Fekete, Charles-Antoine
  • Baroni, Guido
  • Riboldi, Marco
  • Seco, Joao
Biomedical Physics & Engineering Express 2019 Journal Article, cited 0 times
Website

Deep learning in head & neck cancer outcome prediction

  • Diamant, André
  • Chatterjee, Avishek
  • Vallières, Martin
  • Shenouda, George
  • Seuntjens, Jan
Scientific Reports 2019 Journal Article, cited 0 times
Website
Traditional radiomics involves the extraction of quantitative texture features from medical images in an attempt to determine correlations with clinical endpoints. We hypothesize that convolutional neural networks (CNNs) could enhance the performance of traditional radiomics, by detecting image patterns that may not be covered by a traditional radiomic framework. We test this hypothesis by training a CNN to predict treatment outcomes of patients with head and neck squamous cell carcinoma, based solely on their pre-treatment computed tomography image. The training (194 patients) and validation sets (106 patients), which are mutually independent and include 4 institutions, come from The Cancer Imaging Archive. When compared to a traditional radiomic framework applied to the same patient cohort, our method results in an AUC of 0.88 in predicting distant metastasis. When combining our model with the previous model, the AUC improves to 0.92. Our framework yields models that are shown to explicitly recognize traditional radiomic features, be directly visualized and perform accurate outcome prediction.

Tumor Transcriptome Reveals High Expression of IL-8 in Non-Small Cell Lung Cancer Patients with Low Pectoralis Muscle Area and Reduced Survival

  • Cury, Sarah Santiloni
  • de Moraes, Diogo
  • Freire, Paula Paccielli
  • de Oliveira, Grasieli
  • Marques, Douglas Venancio Pereira
  • Fernandez, Geysson Javier
  • Dal-Pai-Silva, Maeli
  • Hasimoto, Erica Nishida
  • Dos Reis, Patricia Pintor
  • Rogatto, Silvia Regina
  • Carvalho, Robson Francisco
Cancers (Basel) 2019 Journal Article, cited 1 times
Website
Cachexia is a syndrome characterized by an ongoing loss of skeletal muscle mass associated with poor patient prognosis in non-small cell lung cancer (NSCLC). However, prognostic cachexia biomarkers in NSCLC are unknown. Here, we analyzed computed tomography (CT) images and tumor transcriptome data to identify potentially secreted cachexia biomarkers (PSCB) in NSCLC patients with low-muscularity. We integrated radiomics features (pectoralis muscle, sternum, and tenth thoracic (T10) vertebra) from CT of 89 NSCLC patients, which allowed us to identify an index for screening muscularity. Next, a tumor transcriptomic-based secretome analysis from these patients (discovery set) was evaluated to identify potential cachexia biomarkers in patients with low-muscularity. The prognostic value of these biomarkers for predicting recurrence and survival outcome was confirmed using expression data from eight lung cancer datasets (validation set). Finally, C2C12 myoblasts differentiated into myotubes were used to evaluate the ability of the selected biomarker, interleukin (IL)-8, in inducing muscle cell atrophy. We identified 75 over-expressed transcripts in patients with low-muscularity, which included IL-6, CSF3, and IL-8. Also, we identified NCAM1, CNTN1, SCG2, CADM1, IL-8, NPTX1, and APOD as PSCB in the tumor secretome. These PSCB were capable of distinguishing worse and better prognosis (recurrence and survival) in NSCLC patients. IL-8 was confirmed as a predictor of worse prognosis in all validation sets. In vitro assays revealed that IL-8 promoted C2C12 myotube atrophy. Tumors from low-muscularity patients presented a set of upregulated genes encoding for secreted proteins, including pro-inflammatory cytokines that predict worse overall survival in NSCLC. Among these upregulated genes, IL-8 expression in NSCLC tissues was associated with worse prognosis, and the recombinant IL-8 was capable of triggering atrophy in C2C12 myotubes.

Combined Megavoltage and Contrast-Enhanced Radiotherapy as an Intrafraction Motion Management Strategy in Lung SBRT

  • Coronado-Delgado, Daniel A
  • Garnica-Garza, Hector M
Technol Cancer Res Treat 2019 Journal Article, cited 0 times
Website
Using Monte Carlo simulation and a realistic patient model, it is shown that the volume of healthy tissue irradiated at therapeutic doses can be drastically reduced using a combination of standard megavoltage and kilovoltage X-ray beams with a contrast agent previously loaded into the tumor, without the need to reduce standard treatment margins. Four-dimensional computed tomography images of 2 patients with a centrally located and a peripherally located tumor were obtained from a public database and subsequently used to plan robotic stereotactic body radiotherapy treatments. Two modalities are assumed: conventional high-energy stereotactic body radiotherapy and a treatment with contrast agent loaded in the tumor and a kilovoltage X-ray beam replacing the megavoltage beam (contrast-enhanced radiotherapy). For each patient model, 2 planning target volumes were designed: one following the recommendations from either Radiation Therapy Oncology Group (RTOG) 0813 or RTOG 0915 task group depending on the patient model and another with a 2-mm uniform margin determined solely on beam penumbra considerations. The optimized treatments with RTOG margins were imparted to the moving phantom to model the dose distribution that would be obtained as a result of intrafraction motion. Treatment plans are then compared to the plan with the 2-mm uniform margin considered to be the ideal plan. It is shown that even for treatments in which only one-fifth of the total dose is imparted via the contrast-enhanced radiotherapy modality and with the use of standard treatment margins, the resultant absorbed dose distributions are such that the volume of healthy tissue irradiated to high doses is close to what is obtained under ideal conditions.

Using Machine Learning Applied to Radiomic Image Features for Segmenting Tumour Structures

  • Clifton, Henry
  • Vial, Alanna
  • Miller, Andrew
  • Ritz, Christian
  • Field, Matthew
  • Holloway, Lois
  • Ros, Montserrat
  • Carolan, Martin
  • Stirling, David
2019 Conference Paper, cited 0 times
Website
Lung cancer (LC) was the predicted leading cause of Australian cancer fatalities in 2018 (around 9,200 deaths). Non-Small Cell Lung Cancer (NSCLC) tumours with larger amounts of heterogeneity have been linked to a worse outcome. Medical imaging is widely used in oncology and non-invasively collects data about the whole tumour. The field of radiomics uses these medical images to extract quantitative image features and promises further understanding of the disease at the time of diagnosis, during treatment and in follow up. It is well known that manual and semi-automatic tumour segmentation methods are subject to inter-observer variability, which reduces confidence in the treatment region and extent of disease. This leads to tumour under- and over-estimation, which can impact treatment outcome and treatment-induced morbidity. This research aims to use radiomic features centred at each pixel to segment the location of the lung tumour on Computed Tomography (CT) scans. To achieve this objective, a Decision Tree (DT) model was trained using sampled CT data from eight patients. The data consisted of 25 pixel-based texture features calculated from four Gray Level Matrices (GLMs) describing the region around each pixel. The model was assessed on an unseen patient through both a confusion matrix and interpretation of the segment. The findings showed that the model accurately (AUROC = 83.9%) predicts tumour location within the test data, concluding that pixel-based textural features likely contribute to segmenting the lung tumour. The prediction displayed a strong representation of the manually segmented Region of Interest (ROI), which is considered the ground truth for the purpose of this research.
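An illustrative sketch of one of the pixel-neighbourhood texture features this abstract feeds its decision tree: a grey-level co-occurrence matrix (GLCM) for a horizontal offset, and the contrast statistic derived from it. The number of grey levels, the offset, and the normalisation are assumptions for the example, not the paper's exact settings.

```python
def glcm(img, levels):
    """Co-occurrence counts for horizontally adjacent pixel pairs."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    """GLCM contrast: sum of (i - j)^2 weighted by pair probability."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m))) / total

flat   = [[0, 0, 0], [0, 0, 0]]          # homogeneous region
stripe = [[0, 1, 0], [1, 0, 1]]          # alternating grey levels
# contrast(glcm(flat, 2)) == 0.0, contrast(glcm(stripe, 2)) == 1.0
```

Computing such statistics over a small window around each pixel, for several offsets and directions, yields the per-pixel feature vector that a classifier like a decision tree can then map to tumour/non-tumour labels.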

Application of Artificial Neural Networks for Prognostic Modeling in Lung Cancer after Combining Radiomic and Clinical Features

  • Chufal, Kundan S.
  • Ahmad, Irfan
  • Pahuja, Anjali K.
  • Miller, Alexis A.
  • Singh, Rajpal
  • Chowdhary, Rahul L.
Asian Journal of Oncology 2019 Journal Article, cited 0 times
Website
Objective This study aimed to investigate machine learning (ML) and artificial neural networks (ANNs) in the prognostic modeling of lung cancer, utilizing high-dimensional data. Materials and Methods A computed tomography (CT) dataset of inoperable non-small cell lung carcinoma (NSCLC) patients with embedded tumor segmentation and survival status, comprising 422 patients, was selected. Radiomic data extraction was performed on the Computation Environment for Radiation Research (CERR). The survival probability was first determined based on clinical features only and then using unsupervised ML methods. Supervised ANN modeling was performed by direct and hybrid modeling, which were subsequently compared. Statistical significance was set at <0.05. Results Survival analyses based on clinical features alone were not significant, except for gender. ML clustering performed on unselected radiomic and clinical data demonstrated a significant difference in survival (two-step cluster, median overall survival [mOS]: 30.3 vs. 17.2 m; p = 0.03; K-means cluster, mOS: 21.1 vs. 7.3 m; p < 0.001). Direct ANN modeling yielded a better overall model accuracy utilizing multilayer perceptron (MLP) than radial basis function (RBF; 79.2 vs. 61.4%, respectively). Hybrid modeling with MLP (after feature selection with ML) resulted in an overall model accuracy of 80%. There was no difference in model accuracy after direct and hybrid modeling (p = 0.164). Conclusion Our preliminary study supports the application of ANN in predicting outcomes based on radiomic and clinical data.

Imaging phenotypes of breast cancer heterogeneity in pre-operative breast Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) scans predict 10-year recurrence

  • Chitalia, Rhea
  • Rowland, Jennifer
  • McDonald, Elizabeth S
  • Pantalone, Lauren
  • Cohen, Eric A
  • Gastounioti, Aimilia
  • Feldman, Michael
  • Schnall, Mitchell
  • Conant, Emily
  • Kontos, Despina
Clinical Cancer Research 2019 Journal Article, cited 0 times
Website

SVM-PUK Kernel Based MRI-brain Tumor Identification Using Texture and Gabor Wavelets

  • Chinnam, Siva
  • Sistla, Venkatramaphanikumar
  • Kolli, Venkata
Traitement du Signal 2019 Journal Article, cited 0 times
Website

Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks

  • Chi, Jianning
  • Zhang, Yifei
  • Yu, Xiaosheng
  • Wang, Ying
  • Wu, Chengdong
Sensors (Basel) 2019 Journal Article, cited 2 times
Website
Computed tomography (CT) imaging technology has been widely used to assist medical diagnosis in recent years. However, noise during the process of imaging, and data compression during the process of storage and transmission, always degrade the image quality, resulting in unreliable performance of the post-processing steps in the computer assisted diagnosis system (CADs), such as medical image segmentation, feature extraction, and medical image classification. Since the degradation of medical images typically appears as noise and low-resolution blurring, in this paper, we propose a uniform deep convolutional neural network (DCNN) framework to handle the de-noising and super-resolution of the CT image at the same time. The framework consists of two steps: Firstly, a dense-inception network integrating an inception structure and dense skip connection is proposed to estimate the noise level. The inception structure is used to extract the noise and blurring features with respect to multiple receptive fields, while the dense skip connection can reuse those extracted features and transfer them across the network. Secondly, a modified residual-dense network combined with a joint loss is proposed to reconstruct the high-resolution image with low noise. The inception block is applied on each skip connection of the dense-residual network so that the structure features of the image are transferred through the network more than the noise and blurring features. Moreover, both the perceptual loss and the mean square error (MSE) loss are used to restrain the network, leading to better performance in the reconstruction of image edges and details. Our proposed network integrates the degradation estimation, noise removal, and image super-resolution in one uniform framework to enhance medical image quality. We apply our method to the Cancer Imaging Archive (TCIA) public dataset to evaluate its ability in medical image quality enhancement.
The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on de-noising and super-resolution by providing higher peak signal to noise ratio (PSNR) and structure similarity index (SSIM) values.
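The PSNR figure of merit reported above has a closed-form definition that is easy to reproduce; the following is a minimal plain-Python sketch for 8-bit images, illustrative only and not the authors' implementation (SSIM, the other metric, is structurally more involved and omitted here):

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: a constant image vs. the same image shifted by 10 gray levels.
clean = [100.0] * 64
noisy = [110.0] * 64
print(round(psnr(clean, noisy), 2))  # 28.13
```

Higher PSNR means the reconstruction is closer to the reference, which is why the de-noising/super-resolution comparison in the paper is framed in terms of PSNR gains.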

Revealing Tumor Habitats from Texture Heterogeneity Analysis for Classification of Lung Cancer Malignancy and Aggressiveness

  • Cherezov, Dmitry
  • Goldgof, Dmitry
  • Hall, Lawrence
  • Gillies, Robert
  • Schabath, Matthew
  • Müller, Henning
  • Depeursinge, Adrien
Scientific Reports 2019 Journal Article, cited 0 times
Website
We propose an approach for characterizing the structural heterogeneity of lung cancer nodules using Computed Tomography Texture Analysis (CTTA). Measures of heterogeneity were used to test the hypothesis that heterogeneity can serve as a predictor of nodule malignancy and patient survival. To do this, we used the National Lung Screening Trial (NLST) dataset to determine whether heterogeneity can represent differences between nodules in lung cancer and non-lung cancer patients. There were 253 participants in the training set and 207 participants in the test set. To discriminate cancerous from non-cancerous nodules at the time of diagnosis, a combination of heterogeneity and radiomic features was evaluated, producing a best area under the receiver operating characteristic curve (AUROC) of 0.85 and an accuracy of 81.64%. Second, we tested the hypothesis that heterogeneity can predict patient survival. We analyzed 40 patients diagnosed with lung adenocarcinoma (20 short-term and 20 long-term survival patients) using a leave-one-out cross-validation approach for performance evaluation. A combination of heterogeneity features and radiomic features produced an AUROC of 0.90 and an accuracy of 85% in discriminating long- and short-term survivors.
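The AUROC used as the headline metric above has a simple rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch, not the authors' code:

```python
def auroc(scores, labels):
    """Area under the ROC curve via pairwise comparison of
    positive (label 1) and negative (label 0) scores; ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Perfectly separated malignant (1) vs. benign (0) nodule scores.
print(auroc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

An AUROC of 0.5 corresponds to chance-level discrimination, so the reported 0.85 indicates a substantially informative classifier.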

Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement

  • Chang, Ken
  • Beers, Andrew L
  • Bai, Harrison X
  • Brown, James M
  • Ly, K Ina
  • Li, Xuejun
  • Senders, Joeky T
  • Kavouridis, Vasileios K
  • Boaro, Alessandro
  • Su, Chang
  • Bi, Wenya Linda
  • Rapalino, Otto
  • Liao, Weihua
  • Shen, Qin
  • Zhou, Hao
  • Xiao, Bo
  • Wang, Yinyan
  • Zhang, Paul J
  • Pinho, Marco C
  • Wen, Patrick Y
  • Batchelor, Tracy T
  • Boxerman, Jerrold L
  • Arnaout, Omar
  • Rosen, Bruce R
  • Gerstner, Elizabeth R
  • Yang, Li
  • Huang, Raymond Y
  • Kalpathy-Cramer, Jayashree
Neuro Oncol 2019 Journal Article, cited 0 times
Website
BACKGROUND: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal FLAIR hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bi-dimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS: Two cohorts of patients were used for this study. One consisted of 843 pre-operative MRIs from 843 patients with low- or high-grade gliomas from four institutions and the second consisted of 713 longitudinal, post-operative MRI visits from 54 patients with newly diagnosed glioblastomas (each with two pre-treatment "baseline" MRIs) from one institution. RESULTS: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with intraclass correlation coefficients (ICCs) of 0.986, 0.991, and 0.977, respectively, on the cohort of post-operative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for pre-operative FLAIR hyperintensity, post-operative FLAIR hyperintensity, and post-operative contrast-enhancing tumor volumes, respectively. Lastly, the ICCs for comparing manually and automatically derived longitudinal changes in tumor burden were 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex post-treatment settings, although further validation in multi-center clinical trials will be needed prior to widespread implementation.
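The repeatability results above are stated as intraclass correlation coefficients. The abstract does not say which ICC variant was computed, so purely as an illustration, here is a one-way random-effects ICC(1,1) for the two-measurement (double-baseline) case:

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for k=2 repeated measurements
    per subject (e.g. double-baseline tumor volumes).
    `pairs` is a list of (measurement1, measurement2) tuples."""
    k = 2
    n = len(pairs)
    grand = sum(a + b for a, b in pairs) / (n * k)
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    msw = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
              for a, b in pairs) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly repeatable measurements yield ICC = 1.
print(icc_oneway([(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]))  # 1.0
```

Values near 1, as reported in the study (0.977 to 0.991), indicate that almost all variance is between subjects rather than between repeated scans.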

Renal cell carcinoma: predicting RUNX3 methylation level and its consequences on survival with CT features

  • Cen, Dongzhi
  • Xu, Li
  • Zhang, Siwei
  • Chen, Zhiguang
  • Huang, Yan
  • Li, Ziqi
  • Liang, Bo
European Radiology 2019 Journal Article, cited 0 times
Website
PURPOSE: To investigate associations between CT imaging features, RUNX3 methylation level, and survival in clear cell renal cell carcinoma (ccRCC). MATERIALS AND METHODS: Patients were divided into high and low RUNX3 methylation groups according to RUNX3 methylation levels (the threshold was identified by using X-tile). The CT scanning data from 106 ccRCC patients were retrospectively analyzed. The relationship between RUNX3 methylation level and overall survival was evaluated using Kaplan-Meier analysis and Cox regression analysis (univariate and multivariate). The relationship between RUNX3 methylation level and CT features was evaluated using the chi-square test and logistic regression analysis (univariate and multivariate). RESULTS: A beta value cutoff of 0.53 was used to distinguish high methylation (N = 44) from low methylation tumors (N = 62). Patients with lower levels of methylation had longer median overall survival (49.3 vs. 28.4 months; low vs. high, adjusted hazard ratio [HR] 4.933, 95% CI 2.054-11.852, p < 0.001). On univariate logistic regression analysis, four risk factors (margin, side, long diameter, and intratumoral vascularity) were associated with RUNX3 methylation level (all p < 0.05). Multivariate logistic regression analysis found that three risk factors (side: left vs. right, odds ratio [OR] 2.696; p = 0.024; 95% CI 1.138-6.386; margin: ill-defined vs. well-defined, OR 2.685; p = 0.038; 95% CI 1.057-6.820; and intratumoral vascularity: yes vs. no, OR 3.286; p = 0.008; 95% CI 1.367-7.898) were significant independent predictors of high methylation tumors. This model had an area under the receiver operating characteristic curve (AUC) of 0.725 (95% CI 0.623-0.827). CONCLUSIONS: Higher levels of RUNX3 methylation are associated with shorter survival in ccRCC patients. The presence of intratumoral vascularity, an ill-defined margin, and left-side tumor location were significant independent predictors of a high methylation level of the RUNX3 gene. 
KEY POINTS: * RUNX3 methylation level is negatively associated with overall survival in ccRCC patients. * Presence of intratumoral vascularity, ill-defined margin, and left side tumor were significant independent predictors of high methylation level of RUNX3 gene.

MRI volume changes of axillary lymph nodes as predictor of pathological complete responses to neoadjuvant chemotherapy in breast cancer

  • Cattell, Renee F.
  • Kang, James J.
  • Ren, Thomas
  • Huang, Pauline B.
  • Muttreja, Ashima
  • Dacosta, Sarah
  • Li, Haifang
  • Baer, Lea
  • Clouston, Sean
  • Palermo, Roxanne
  • Fisher, Paul
  • Bernstein, Cliff
  • Cohen, Jules A.
  • Duong, Tim Q.
Clinical Breast Cancer 2019 Journal Article, cited 0 times
Website
Introduction: Longitudinal monitoring of breast tumor volume over the course of chemotherapy is informative of pathological response. This study aims to determine whether axillary lymph node (aLN) volume by MRI could augment the prediction accuracy of treatment response to neoadjuvant chemotherapy (NAC). Materials and Methods: Level-2a curated data from the I-SPY-1 TRIAL (2002-2006) were used. Patients had stage 2 or 3 breast cancer. MRI was acquired pre-, during, and post-NAC. A subset with visible aLNs on MRI was identified (N = 132). Prediction of pathological complete response (PCR) was made using breast tumor volume changes, nodal volume changes, and combined breast tumor and nodal volume changes, with sub-stratification with and without large lymph nodes (3 mL, or ∼1.79 cm diameter, cutoff). Receiver operating characteristic curve analysis was used to quantify prediction performance. Results: Rates of change of aLN and breast tumor volume were informative of pathological response, with prediction being most informative early in treatment (AUC: 0.63-0.82) compared to later in treatment (AUC: 0.50-0.73). Larger aLN volume was associated with hormone receptor negativity, with the largest nodal volume for triple-negative subtypes. Sub-stratification by node size improved predictive performance, with the best predictive model for large nodes having an AUC of 0.82. Conclusion: Axillary lymph node MRI offers clinically relevant information and has the potential to predict treatment response to neoadjuvant chemotherapy in breast cancer patients.

PARaDIM - A PHITS-based Monte Carlo tool for internal dosimetry with tetrahedral mesh computational phantoms

  • Carter, L. M.
  • Crawford, T. M.
  • Sato, T.
  • Furuta, T.
  • Choi, C.
  • Kim, C. H.
  • Brown, J. L.
  • Bolch, W. E.
  • Zanzonico, P. B.
  • Lewis, J. S.
J Nucl Med 2019 Journal Article, cited 0 times
Website
Mesh-type and voxel-based computational phantoms comprise the current state of the art for internal dose assessment via Monte Carlo simulations, but excel in different aspects, with mesh-type phantoms offering advantages over their voxel counterparts in terms of their flexibility and realistic representation of detailed patient- or subject-specific anatomy. We have developed PARaDIM, a freeware application for implementing tetrahedral mesh-type phantoms in absorbed dose calculations via the Particle and Heavy Ion Transport code System (PHITS). It considers all medically relevant radionuclides, including alpha, beta, gamma, positron, and Auger/conversion electron emitters, and handles calculation of mean dose to individual regions, as well as 3D dose distributions for visualization and analysis in a variety of medical imaging software packages. This work describes the development of PARaDIM, documents the measures taken to test and validate its performance, and presents examples to illustrate its uses. Methods: Human, small animal, and cell-level dose calculations were performed with PARaDIM and the results compared with those of widely accepted dosimetry programs and literature data. Several tetrahedral phantoms were developed or adapted using computer-aided modeling techniques for these comparisons. Results: For human dose calculations, agreement of PARaDIM with OLINDA 2.0 was good (within 10-20% for most organs) despite geometric differences among the phantoms tested. Agreement with MIRDcell for cell-level S-value calculations was within 5% in most cases. Conclusion: PARaDIM extends the use of Monte Carlo dose calculations to the broader community in nuclear medicine by providing a user-friendly graphical user interface for calculation setup and execution. 
PARaDIM leverages the enhanced anatomical realism provided by advanced computational reference phantoms or bespoke image-derived phantoms to enable improved assessments of radiation doses in a variety of radiopharmaceutical use cases, research, and preclinical development.

Medical Image Retrieval Based on Convolutional Neural Network and Supervised Hashing

  • Cai, Yiheng
  • Li, Yuanyuan
  • Qiu, Changyan
  • Ma, Jie
  • Gao, Xurong
IEEE Access 2019 Journal Article, cited 0 times
Website
In recent years, with extensive application in image retrieval and other tasks, a convolutional neural network (CNN) has achieved outstanding performance. In this paper, a new content-based medical image retrieval (CBMIR) framework using CNN and hash coding is proposed. The new framework adopts a Siamese network in which pairs of images are used as inputs, and a model is learned to make images belonging to the same class have similar features by using weight sharing and a contrastive loss function. In each branch of the network, CNN is adapted to extract features, followed by hash mapping, which is used to reduce the dimensionality of feature vectors. In the training process, a new loss function is designed to make the feature vectors more distinguishable, and a regularization term is added to encourage the real value outputs to approximate the desired binary values. In the retrieval phase, the compact binary hash code of the query image is achieved from the trained network and is subsequently compared with the hash codes of the database images. We experimented on two medical image datasets: the cancer imaging archive-computed tomography (TCIA-CT) and the vision and image analysis group/international early lung cancer action program (VIA/I-ELCAP). The results indicate that our method is superior to existing hash methods and CNN methods. Compared with the traditional hashing method, feature extraction based on CNN has advantages. The proposed algorithm combining a Siamese network with the hash method is superior to the classical CNN-based methods. The application of a new loss function can effectively improve retrieval accuracy.
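The contrastive loss that trains the Siamese network above can be sketched for a single pair of embeddings as follows. This is the standard formulation (Hadsell et al. style); the margin value is an assumption for illustration, not a parameter reported in the paper:

```python
import math

def contrastive_loss(feat_a, feat_b, same_class, margin=1.0):
    """Contrastive loss for one pair of embedding vectors: pulls
    same-class pairs together, pushes different-class pairs at
    least `margin` apart. `margin` here is an illustrative default."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))
    if same_class:
        return d ** 2
    return max(margin - d, 0.0) ** 2

# Same-class pair with identical features incurs zero loss ...
print(contrastive_loss([0.2, 0.4], [0.2, 0.4], True))   # 0.0
# ... as does a different-class pair separated by more than the margin.
print(contrastive_loss([0.0, 0.0], [2.0, 0.0], False))  # 0.0
```

Minimizing this loss over many pairs is what makes images of the same class map to nearby feature vectors, which the hash-mapping stage then compresses into compact binary codes for retrieval.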

Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm

  • Buda, Mateusz
  • Saha, Ashirbani
  • Mazurowski, Maciej A
Computers in biology and medicine 2019 Journal Article, cited 1 times
Website
Recent analysis identified distinct genomic subtypes of lower-grade glioma tumors which are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation and test whether these characteristics are predictive of tumor genomic subtypes. We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted the Fisher exact test for 10 hypotheses for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction. P-values lower than 0.005 were considered statistically significant. We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio (p < 0.0002) and between RNASeq clusters and margin fluctuation (p < 0.005). In addition, we identified associations between bounding ellipsoid volume ratio and all tested molecular subtypes (p < 0.02) as well as between angular standard deviation and RNASeq cluster (p < 0.02). In terms of automatic tumor segmentation that was used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82% which is comparable to human performance.
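The hypothesis testing described above (Fisher exact tests with a Bonferroni-corrected significance threshold of 0.005) can be reproduced from first principles. A minimal pure-Python sketch of the two-sided test for a 2x2 contingency table, illustrative rather than the authors' pipeline:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    (a, b), (c, d) = table
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2

    def p_of(x):  # probability of the table with top-left cell = x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p_of(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p_of(x) for x in range(lo, hi + 1) if p_of(x) <= p_obs + 1e-12)

# Hypothetical example: an imaging feature present in 9/10 tumors of one
# genomic subtype vs. 1/10 of another.
p = fisher_exact_two_sided([[9, 1], [1, 9]])
print(p < 0.005)  # True: survives the Bonferroni-corrected threshold
```

With 10 hypotheses per feature/subtype pairing, comparing each p-value against 0.005 (rather than 0.05) is exactly the Bonferroni correction the study applies.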

An ensemble learning approach for brain cancer detection exploiting radiomic features

  • Brunese, Luca
  • Mercaldo, Francesco
  • Reginelli, Alfonso
  • Santone, Antonella
Comput Methods Programs Biomed 2019 Journal Article, cited 1 times
Website
BACKGROUND AND OBJECTIVE: The brain cancer is one of the most aggressive tumour: the 70% of the patients diagnosed with this malignant cancer will not survive. Early detection of brain tumours can be fundamental to increase survival rates. The brain cancers are classified into four different grades (i.e., I, II, III and IV) according to how normal or abnormal the brain cells look. The following work aims to recognize the different brain cancer grades by analysing brain magnetic resonance images. METHODS: A method to identify the components of an ensemble learner is proposed. The ensemble learner is focused on the discrimination between different brain cancer grades using non invasive radiomic features. The considered radiomic features are belonging to five different groups: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix and Gray Level Size Zone Matrix. We evaluate the features effectiveness through hypothesis testing and through decision boundaries, performance analysis and calibration plots thus we select the best candidate classifiers for the ensemble learner. RESULTS: We evaluate the proposed method with 111,205 brain magnetic resonances belonging to two freely available data-sets for research purposes. The results are encouraging: we obtain an accuracy of 99% for the benign grade I and the II, III and IV malignant brain cancer detection. CONCLUSION: The experimental results confirm that the ensemble learner designed with the proposed method outperforms the current state-of-the-art approaches in brain cancer grade detection starting from magnetic resonance images.
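Of the radiomic feature groups listed, the Gray Level Co-occurrence Matrix is straightforward to illustrate. The sketch below computes a normalized GLCM for a single pixel offset plus two classic Haralick-style features; it is a minimal didactic version, not the feature extractor used in the study:

```python
def glcm_features(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset
    (dx, dy), plus two derived texture features (contrast, energy).
    `image` is a 2D list of integer gray levels in [0, levels)."""
    h, w = len(image), len(image[0])
    glcm = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                glcm[image[y][x]][image[y2][x2]] += 1
                pairs += 1
    for i in range(levels):          # normalize counts to probabilities
        for j in range(levels):
            glcm[i][j] /= pairs
    contrast = sum(glcm[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(glcm[i][j] ** 2
                 for i in range(levels) for j in range(levels))
    return contrast, energy

# A perfectly homogeneous 4x4 image: zero contrast, maximal energy.
flat = [[0] * 4 for _ in range(4)]
print(glcm_features(flat, levels=2))  # (0.0, 1.0)
```

Heterogeneous tumor regions populate off-diagonal GLCM cells, raising contrast and lowering energy, which is why such features can discriminate tumor grades.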

Association of Peritumoral Radiomics With Tumor Biology and Pathologic Response to Preoperative Targeted Therapy for HER2 (ERBB2)-Positive Breast Cancer

  • Braman, Nathaniel
  • Prasanna, Prateek
  • Whitney, Jon
  • Singh, Salendra
  • Beig, Niha
  • Etesami, Maryam
  • Bates, David D. B.
  • Gallagher, Katherine
  • Bloch, B. Nicolas
  • Vulchi, Manasa
  • Turk, Paulette
  • Bera, Kaustav
  • Abraham, Jame
  • Sikov, William M.
  • Somlo, George
  • Harris, Lyndsay N.
  • Gilmore, Hannah
  • Plecha, Donna
  • Varadan, Vinay
  • Madabhushi, Anant
JAMA Netw Open 2019 Journal Article, cited 0 times
Website
Importance: There has been significant recent interest in understanding the utility of quantitative imaging to delineate breast cancer intrinsic biological factors and therapeutic response. No clinically accepted biomarkers are as yet available for estimation of response to human epidermal growth factor receptor 2 (currently known as ERBB2, but referred to as HER2 in this study)-targeted therapy in breast cancer. Objective: To determine whether imaging signatures on clinical breast magnetic resonance imaging (MRI) could noninvasively characterize HER2-positive tumor biological factors and estimate response to HER2-targeted neoadjuvant therapy. Design, Setting, and Participants: In a retrospective diagnostic study encompassing 209 patients with breast cancer, textural imaging features extracted within the tumor and annular peritumoral tissue regions on MRI were examined as a means to identify increasingly granular breast cancer subgroups relevant to therapeutic approach and response. First, among a cohort of 117 patients who received an MRI prior to neoadjuvant chemotherapy (NAC) at a single institution from April 27, 2012, through September 4, 2015, imaging features that distinguished HER2+ tumors from other receptor subtypes were identified. Next, among a cohort of 42 patients with HER2+ breast cancers with available MRI and RNA-seq data accumulated from a multicenter, preoperative clinical trial (BrUOG 211B), a signature of the response-associated HER2-enriched (HER2-E) molecular subtype within HER2+ tumors (n = 42) was identified. The association of this signature with pathologic complete response was explored in 2 patient cohorts from different institutions, where all patients received HER2-targeted NAC (n = 28, n = 50). Finally, the association between significant peritumoral features and lymphocyte distribution was explored in patients within the BrUOG 211B trial who had corresponding biopsy hematoxylin-eosin-stained slide images. 
Data analysis was conducted from January 15, 2017, to February 14, 2019. Main Outcomes and Measures: Evaluation of imaging signatures by the area under the receiver operating characteristic curve (AUC) in identifying HER2+ molecular subtypes and distinguishing pathologic complete response (ypT0/is) to NAC with HER2-targeting. Results: In the 209 patients included (mean [SD] age, 51.1 [11.7] years), features from the peritumoral regions better discriminated HER2-E tumors (maximum AUC, 0.85; 95% CI, 0.79-0.90; 9-12 mm from the tumor) compared with intratumoral features (AUC, 0.76; 95% CI, 0.69-0.84). A classifier combining peritumoral and intratumoral features identified the HER2-E subtype (AUC, 0.89; 95% CI, 0.84-0.93) and was significantly associated with response to HER2-targeted therapy in both validation cohorts (AUC, 0.80; 95% CI, 0.61-0.98 and AUC, 0.69; 95% CI, 0.53-0.84). Features from the 0- to 3-mm peritumoral region were significantly associated with the density of tumor-infiltrating lymphocytes (R2 = 0.57; 95% CI, 0.39-0.75; P = .002). Conclusions and Relevance: A combination of peritumoral and intratumoral characteristics appears to identify intrinsic molecular subtypes of HER2+ breast cancers from imaging, offering insights into immune response within the peritumoral environment and suggesting potential benefit for treatment guidance.

Solid Indeterminate Nodules with a Radiological Stability Suggesting Benignity: A Texture Analysis of Computed Tomography Images Based on the Kurtosis and Skewness of the Nodule Volume Density Histogram

  • Borguezan, Bruno Max
  • Lopes, Agnaldo José
  • Saito, Eduardo Haruo
  • Higa, Claudio
  • Silva, Aristófanes Corrêa
  • Nunes, Rodolfo Acatauassú
Pulmonary Medicine 2019 Journal Article, cited 0 times
Website
BACKGROUND: The number of incidental findings of pulmonary nodules using imaging methods to diagnose other thoracic or extrathoracic conditions has increased, suggesting the need for in-depth radiological image analyses to identify nodule type and avoid unnecessary invasive procedures. OBJECTIVES: The present study evaluated solid indeterminate nodules with a radiological stability suggesting benignity (SINRSBs) through a texture analysis of computed tomography (CT) images. METHODS: A total of 100 chest CT scans were evaluated, including 50 cases of SINRSBs and 50 cases of malignant nodules. SINRSB CT scans were performed using the same noncontrast-enhanced CT protocol and equipment; the malignant nodule data were acquired from several databases. The kurtosis (KUR) and skewness (SKW) values of these scans were determined for the whole volume of each nodule, and the histograms were classified into two basic patterns: peaks or plateaus. RESULTS: The mean (MEN) KUR values of the SINRSBs and malignant nodules were 3.37 ± 3.88 and 5.88 ± 5.11, respectively. The receiver operating characteristic (ROC) curve showed that the sensitivity and specificity for distinguishing SINRSBs from malignant nodules were 65% and 66% for KUR values > 6, respectively, with an area under the curve (AUC) of 0.709 (p < 0.0001). The MEN SKW values of the SINRSBs and malignant nodules were 1.73 ± 0.94 and 2.07 ± 1.01, respectively. The ROC curve showed that the sensitivity and specificity for distinguishing malignant nodules from SINRSBs were 65% and 66% for SKW values > 3.1, respectively, with an AUC of 0.709 (p < 0.0001). An analysis of the peak and plateau histograms revealed sensitivity, specificity, and accuracy values of 84%, 74%, and 79%, respectively. CONCLUSION: KUR, SKW, and histogram shape can help to noninvasively diagnose SINRSBs but should not be used alone or without considering clinical data.
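The two histogram statistics underlying this analysis are the third and fourth standardized moments of the voxel intensity distribution. A minimal sketch of their computation over a nodule's voxel values, illustrative rather than the study's pipeline (note that some tools report excess kurtosis, i.e. this value minus 3):

```python
def skewness_kurtosis(values):
    """Moment-based skewness and (non-excess) kurtosis of a sample,
    e.g. the voxel intensities inside a segmented nodule."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n  # variance
    m3 = sum((v - mean) ** 3 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    skw = m3 / m2 ** 1.5
    kur = m4 / m2 ** 2
    return skw, kur

# A symmetric sample has zero skewness.
skw, kur = skewness_kurtosis([1, 2, 3, 4, 5])
print(round(skw, 6), round(kur, 6))  # 0.0 1.7
```

Intuitively, heavy-tailed or asymmetric intensity histograms (higher KUR and SKW) were more common in the malignant nodules, which is what drives the reported ROC thresholds.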

Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views

  • Bier, B.
  • Goldmann, F.
  • Zaech, J. N.
  • Fotouhi, J.
  • Hegeman, R.
  • Grupp, R.
  • Armand, M.
  • Osgood, G.
  • Navab, N.
  • Maier, A.
  • Unberath, M.
Int J Comput Assist Radiol Surg 2019 Journal Article, cited 0 times
Website
Purpose: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views, leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. Methods: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120° × 90°. Results: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping. Conclusion: We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need of calibration. 
As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.

Artificial intelligence in cancer imaging: Clinical challenges and applications

  • Bi, Wenya Linda
  • Hosny, Ahmed
  • Schabath, Matthew B
  • Giger, Maryellen L
  • Birkbak, Nicolai J
  • Mehrtash, Alireza
  • Allison, Tavis
  • Arnaout, Omar
  • Abbosh, Christopher
  • Dunn, Ian F
CA: a cancer journal for clinicians 2019 Journal Article, cited 0 times
Website

Adverse prognosis of glioblastoma contacting the subventricular zone: Biological correlates

  • Berendsen, S.
  • van Bodegraven, E.
  • Seute, T.
  • Spliet, W. G. M.
  • Geurts, M.
  • Hendrikse, J.
  • Schoysman, L.
  • Huiszoon, W. B.
  • Varkila, M.
  • Rouss, S.
  • Bell, E. H.
  • Kroonen, J.
  • Chakravarti, A.
  • Bours, V.
  • Snijders, T. J.
  • Robe, P. A.
PLoS One 2019 Journal Article, cited 2 times
Website
INTRODUCTION: The subventricular zone (SVZ) in the brain is associated with gliomagenesis and resistance to treatment in glioblastoma. In this study, we investigate the prognostic role and biological characteristics of subventricular zone (SVZ) involvement in glioblastoma. METHODS: We analyzed T1-weighted, gadolinium-enhanced MR images of a retrospective cohort of 647 primary glioblastoma patients diagnosed between 2005 and 2013, and performed a multivariable Cox regression analysis to adjust the prognostic effect of SVZ involvement for clinical patient- and tumor-related factors. Protein expression patterns of markers including neural stem cellness (CD133 and GFAP-delta) and (epithelial-) mesenchymal transition (NF-kappaB, C/EBP-beta and STAT3) were determined with immunohistochemistry on tissue microarrays containing 220 of the tumors. Molecular classification and mRNA expression-based gene set enrichment analyses, miRNA expression and SNP copy number analyses were performed on fresh frozen tissue obtained from 76 tumors. Confirmatory analyses were performed on glioblastoma TCGA/TCIA data. RESULTS: Involvement of the SVZ was a significant adverse prognostic factor in glioblastoma, independent of age, KPS, surgery type and postoperative treatment. Tumor volume and postoperative complications did not explain this prognostic effect. SVZ contact was associated with increased nuclear expression of the (epithelial-) mesenchymal transition markers C/EBP-beta and phospho-STAT3. SVZ contact was not associated with molecular subtype, distinct gene expression patterns, or markers of stem cellness. Our main findings were confirmed in a cohort of 229 TCGA/TCIA glioblastomas. CONCLUSION: Involvement of the SVZ is an independent prognostic factor in glioblastoma, and associates with increased expression of key markers of (epithelial-) mesenchymal transformation, but does not correlate with stem cellness, molecular subtype, or specific (mi)RNA expression patterns.

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C. Chad
Journal of Magnetic Resonance Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Dynamic susceptibility contrast (DSC)-MRI analysis pipelines differ across studies and sites, potentially confounding the clinical value and use of the derived biomarkers. PURPOSE/HYPOTHESIS: To investigate how postprocessing steps for computation of cerebral blood volume (CBV) and residue function dependent parameters (cerebral blood flow [CBF], mean transit time [MTT], capillary transit heterogeneity [CTH]) impact glioma grading. STUDY TYPE: Retrospective study from The Cancer Imaging Archive (TCIA). POPULATION: Forty-nine subjects with low- and high-grade gliomas. FIELD STRENGTH/SEQUENCE: 1.5 and 3.0T clinical systems using a single-echo echo planar imaging (EPI) acquisition. ASSESSMENT: Manual regions of interest (ROIs) were provided by TCIA and automatically segmented ROIs were generated by k-means clustering. CBV was calculated based on conventional equations. Residue function dependent biomarkers (CBF, MTT, CTH) were found by two deconvolution methods: circular discretization followed by a signal-to-noise ratio (SNR)-adapted eigenvalue thresholding (Method 1) and Volterra discretization with L-curve-based Tikhonov regularization (Method 2). STATISTICAL TESTS: Analysis of variance, receiver operating characteristics (ROC), and logistic regression tests. RESULTS: MTT alone was unable to statistically differentiate glioma grade (P > 0.139). When normalized, tumor CBF, CTH, and CBV did not differ across field strengths (P > 0.141). Biomarkers normalized to automatically segmented regions performed equally (rCTH AUROC is 0.73 compared with 0.74) or better (rCBF AUROC increases from 0.74-0.84; rCBV AUROC increases 0.78-0.86) than manually drawn ROIs. By updating the current deconvolution steps (Method 2), rCTH can act as a classifier for glioma grade (P < 0.007), but not if processed by current conventional DSC methods (Method 1) (P > 0.577). 
Lastly, higher-order biomarkers (eg, rCBF and rCTH) along with rCBV increases AUROC to 0.92 for differentiating tumor grade as compared with 0.78 and 0.86 (manual and automatic reference regions, respectively) for rCBV alone. DATA CONCLUSION: With optimized analysis pipelines, higher-order perfusion biomarkers (rCBF and rCTH) improve glioma grading as compared with CBV alone. Additionally, postprocessing steps impact thresholds needed for glioma grading. LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2019.

Longitudinal fan-beam computed tomography dataset for head-and-neck squamous cell carcinoma patients

  • Bejarano, T.
  • De Ornelas-Couto, M.
  • Mihaylov, I. B.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: To describe in detail a dataset consisting of longitudinal fan-beam computed tomography (CT) imaging to visualize anatomical changes in head-and-neck squamous cell carcinoma (HNSCC) patients throughout the radiotherapy (RT) treatment course. ACQUISITION AND VALIDATION METHODS: This dataset consists of CT images from 31 HNSCC patients who underwent volumetric modulated arc therapy (VMAT). Patients had three CT scans acquired throughout the duration of the radiation treatment course: pretreatment planning CT scans a median of 13 days before treatment (range: 2-27), mid-treatment CT 22 days after the start of treatment (range: 13-38), and post-treatment CT 65 days after the start of treatment (range: 35-192). Patients received RT treatment to a total dose of 58-70 Gy, using daily 2.0-2.2 Gy fractions for 30-35 fractions. The fan-beam CT images were acquired using a Siemens 16-slice CT scanner head protocol at 120 kV and a current of 400 mAs. A helical scan with 1 rotation per second was used, with a slice thickness of 2 mm and a table increment of 1.2 mm. In addition to the imaging data, contours of anatomical structures for RT, demographic data, and outcome measurements are provided. DATA FORMAT AND USAGE NOTES: The dataset, with DICOM files including images, RTSTRUCT files, and RTDOSE files, can be found and publicly accessed in The Cancer Imaging Archive (TCIA, http://www.cancerimagingarchive.net/) as the collection "Head-and-neck squamous cell carcinoma patients with CT taken during pretreatment, mid-treatment, and post-treatment (HNSCC-3DCT-RT)". DISCUSSION: This is the first dataset to date in TCIA which provides a collection of multiple CT imaging studies (pretreatment, mid-treatment, and post-treatment) throughout the treatment course. 
The dataset can serve a wide array of research projects including (but not limited to): quantitative imaging assessment, investigation on anatomical changes with treatment progress, dosimetry of target volumes and/or normal structures due to anatomical changes occurring during treatment, investigation of RT toxicity, and concurrent chemotherapy and RT effects on head-and-neck patients.

Variability of manual segmentation of the prostate in axial T2-weighted MRI: A multi-reader study

  • Becker, A. S.
  • Chaitanya, K.
  • Schawkat, K.
  • Muehlematter, U. J.
  • Hotker, A. M.
  • Konukoglu, E.
  • Donati, O. F.
Eur J Radiol 2019 Journal Article, cited 3 times
Website
PURPOSE: To evaluate the interreader variability in prostate and seminal vesicle (SV) segmentation on T2w MRI. METHODS: Six readers segmented the peripheral zone (PZ), transitional zone (TZ) and SV slice-wise on axial T2w prostate MRI examinations of n=80 patients. Twenty different similarity scores, including Dice score (DS), Hausdorff distance (HD) and volumetric similarity coefficient (VS), were computed with the VISCERAL EvaluateSegmentation software for all structures combined and separately for the whole gland (WG=PZ+TZ), TZ and SV. Differences between base, midgland and apex were evaluated slice-wise with DS. Descriptive statistics for the similarity scores were computed, and Wilcoxon testing was performed to evaluate differences in DS, HD and VS. RESULTS: Overall segmentation variability was good, with a mean DS of 0.859 (+/-SD=0.0542), HD of 36.6 (+/-34.9 voxels) and VS of 0.926 (+/-0.065). The WG showed a DS, HD and VS of 0.738 (+/-0.144), 36.2 (+/-35.6 vx) and 0.853 (+/-0.143), respectively. The TZ showed generally lower variability, with a DS of 0.738 (+/-0.144), HD of 24.8 (+/-16 vx) and VS of 0.908 (+/-0.126). The lowest variability was found for the SV, with a DS of 0.884 (+/-0.0407), HD of 17 (+/-10.9 vx) and VS of 0.936 (+/-0.0509). We found a markedly lower DS in the apex (0.85+/-0.12) compared to the base (0.87+/-0.10, p<0.01) and the midgland (0.89+/-0.10, p<0.001). CONCLUSIONS: We report baseline values for interreader variability of prostate and SV segmentation on T2w MRI. Variability was highest in the apex, lower in the base, and lowest in the midgland.
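The three headline metrics above (DS, HD, VS) can be sketched on toy binary masks. This is an illustrative implementation of the standard definitions, not the VISCERAL EvaluateSegmentation code, and the two "reader" masks below are made up:

```python
# Toy similarity metrics between two binary segmentations given as voxel sets.
# Definitions are the commonly used ones: Dice = 2|A∩B|/(|A|+|B|),
# VS = 1 - ||A|-|B||/(|A|+|B|), and the symmetric Hausdorff distance.

def dice(a, b):
    """Dice similarity coefficient between voxel sets a and b."""
    return 2 * len(a & b) / (len(a) + len(b))

def volumetric_similarity(a, b):
    """Volumetric similarity: 1 minus the normalized volume difference."""
    return 1 - abs(len(a) - len(b)) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance in voxel units (Euclidean)."""
    def d(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    def directed(src, dst):
        return max(min(d(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

# Two toy 2D "reader" masks: identical squares shifted by one voxel.
reader1 = {(x, y) for x in range(10) for y in range(10)}
reader2 = {(x, y) for x in range(1, 11) for y in range(10)}

print(dice(reader1, reader2))                  # 0.9
print(volumetric_similarity(reader1, reader2)) # 1.0 (equal volumes)
print(hausdorff(reader1, reader2))             # 1.0 (one-voxel shift)
```

Note that VS ignores overlap entirely (two disjoint masks of equal volume score 1.0), which is why studies like this one report it alongside overlap- and distance-based scores.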

Call for Data Standardization: Lessons Learned and Recommendations in an Imaging Study

  • Basu, Amrita
  • Warzel, Denise
  • Eftekhari, Aras
  • Kirby, Justin S
  • Freymann, John
  • Knable, Janice
  • Sharma, Ashish
  • Jacobs, Paula
JCO Clin Cancer Inform 2019 Journal Article, cited 0 times
Website
PURPOSE: Data sharing creates potential cost savings, supports data aggregation, and facilitates reproducibility to ensure quality research; however, data from heterogeneous systems require retrospective harmonization. This is a major hurdle for researchers who seek to leverage existing data. Efforts focused on strategies for data interoperability largely center around the use of standards but ignore the problems of competing standards and the value of existing data. Interoperability remains reliant on retrospective harmonization. Approaches to reduce this burden are needed. METHODS: The Cancer Imaging Archive (TCIA) is an example of an imaging repository that accepts data from a diversity of sources. It contains medical images from investigators worldwide and substantial nonimage data. Digital Imaging and Communications in Medicine (DICOM) standards enable querying across images, but TCIA does not enforce other standards for describing nonimage supporting data, such as treatment details and patient outcomes. In this study, we used 9 TCIA lung and brain nonimage files containing 659 fields to explore retrospective harmonization for cross-study query and aggregation. Identifying the 41 fields that overlapped across 3 or more files and transforming 31 of them took 329.5 hours of effort (about 2.3 working months), spread over 6 months. We used the Genomic Data Commons (GDC) data elements as the target standards for harmonization. RESULTS: We characterized the issues and developed recommendations for reducing the burden of retrospective harmonization. Once we harmonized the data, we also developed a Web tool to easily explore harmonized collections. CONCLUSION: While prospective use of standards can support interoperability, there are issues that complicate this goal. Our work recognizes and reveals retrospective harmonization issues when trying to reuse existing data and recommends national infrastructure to address these issues.

Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach

  • Bashiri, Fereshteh Sadat
2019 Thesis, cited 0 times
Website
In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail. Their applications in medical image and shape analysis are investigated. In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied monomodal registration techniques. The method can be used for registering multi-modal images with full and partial data. Next, a manifold learning-based scale-invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of Laplacian Eigenmaps in dealing with high dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely dependency on triangular mesh representation and high intra-class quality of 3D models. Finally, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model benefits from structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule using spectral features studied in the previous work combined with a point cloud-based deep learning network. Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods. 
Advanced computational techniques with a combination of manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely, registration, classification, and detection of features of interest.

Pathologically-Validated Tumor Prediction Maps in MRI

  • Barrington, Alex
2019 Thesis, cited 0 times
Website
Glioblastoma (GBM) is an aggressive cancer with an average 5-year survival rate of about 5%. Following treatment with surgery, radiation, and chemotherapy, diagnosing tumor recurrence requires serial magnetic resonance imaging (MRI) scans. Infiltrative tumor cells beyond gadolinium enhancement on T1-weighted MRI are difficult to detect. This study therefore aims to improve tumor detection beyond traditional tumor margins. To accomplish this, a neural network model was trained to classify tissue samples as ‘tumor’ or ‘not tumor’. This model was then used to classify thousands of tiles from histology samples acquired at autopsy with known MRI locations on the patient’s final clinical MRI scan. This combined radiological-pathological (rad-path) dataset was then treated as a ground truth to train a second model for predicting tumor presence from MRI alone. Predictive maps were created for seven patients left out of the training steps, and tissue samples were tested to determine the model’s accuracy. The final model produced a receiver operator characteristic (ROC) area under the curve (AUC) of 0.70. This study demonstrates a new method for detecting infiltrative tumor beyond conventional radiologist defined margins based on neural networks applied to rad-path datasets in glioblastoma.

Interreader Variability of Dynamic Contrast-enhanced MRI of Recurrent Glioblastoma: The Multicenter ACRIN 6677/RTOG 0625 Study

  • Barboriak, Daniel P
  • Zhang, Zheng
  • Desai, Pratikkumar
  • Snyder, Bradley S
  • Safriel, Yair
  • McKinstry, Robert C
  • Bokstein, Felix
  • Sorensen, Gregory
  • Gilbert, Mark R
  • Boxerman, Jerrold L
Radiology 2019 Journal Article, cited 2 times
Website
Purpose To evaluate factors contributing to interreader variation (IRV) in parameters measured at dynamic contrast material-enhanced (DCE) MRI in patients with glioblastoma who were participating in a multicenter trial. Materials and Methods A total of 18 patients (mean age, 57 years +/- 13 [standard deviation]; 10 men) who volunteered for the advanced imaging arm of ACRIN 6677, a substudy of the RTOG 0625 clinical trial for recurrent glioblastoma treatment, underwent analyzable DCE MRI at one of four centers. The 78 imaging studies were analyzed centrally to derive the volume transfer constant (K(trans)) for gadolinium between blood plasma and tissue extravascular extracellular space, fractional volume of the extracellular extravascular space (ve), and initial area under the gadolinium concentration curve (IAUGC). Two independently trained teams consisting of a neuroradiologist and a technologist segmented the enhancing tumor on three-dimensional spoiled gradient-recalled acquisition in the steady-state images. Mean and median parameter values in the enhancing tumor were extracted after registering segmentations to parameter maps. The effect of imaging time relative to treatment, map quality, imager magnet and sequence, average tumor volume, and reader variability in tumor volume on IRV was studied by using intraclass correlation coefficients (ICCs) and linear mixed models. Results Mean interreader variations (+/- standard deviation) (difference as a percentage of the mean) for mean and median IAUGC, mean and median K(trans), and median ve were 18% +/- 24, 17% +/- 23, 27% +/- 34, 16% +/- 27, and 27% +/- 34, respectively. ICCs for these metrics ranged from 0.90 to 1.0 for baseline and from 0.48 to 0.76 for posttreatment examinations. Variability in reader-derived tumor volume was significantly related to IRV for all parameters. 
Conclusion Differences in reader tumor segmentations are a significant source of interreader variation for all dynamic contrast-enhanced MRI parameters. (c) RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Wolf in this issue.

Quantitative Imaging Features Improve Discrimination of Malignancy in Pulmonary Nodules

  • Balagurunathan, Yoganand
  • Schabath, Matthew B.
  • Wang, Hua
  • Liu, Ying
  • Gillies, Robert J.
Sci Rep 2019 Journal Article, cited 0 times
Website
Pulmonary nodules are frequently detected radiological abnormalities in lung cancer screening. Although nodules at the highest and lowest risk of cancer are often easily diagnosed by a trained radiologist, there is still a high rate of indeterminate pulmonary nodules (IPNs) of unknown risk. Here, we test the hypothesis that computer-extracted quantitative features ("radiomics") can provide improved risk assessment in the diagnostic setting. Nodules were segmented in 3D and 219 quantitative features were extracted from these volumes. Using these features, novel malignancy risk predictors were formed with various stratifications based on size, shape and texture feature categories. We used images and data from the National Lung Screening Trial (NLST) and curated a subset of 479 participants (244 for training and 235 for testing) that included incident lung cancers and nodule-positive controls. After removing redundant and non-reproducible features, optimal linear classifiers evaluated by the area under the receiver operating characteristic (AUROC) curve were combined with an exhaustive search approach to find a discriminant set of image features, which were validated in an independent test dataset. We identified several strong predictive models: using size and shape features the highest AUROC was 0.80; using non-size-based features the highest AUROC was 0.85; combining features from all categories, the highest AUROC was 0.83.
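The classifiers above are compared by AUROC, which can be computed directly from classifier scores via its Mann-Whitney interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as one half. A minimal sketch, on made-up nodule scores (not the study's data):

```python
# AUROC via the Mann-Whitney U equivalence:
# AUROC = P(score_pos > score_neg) + 0.5 * P(score_pos == score_neg).

def auroc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for malignant (pos) and benign (neg) nodules.
malignant = [0.9, 0.8, 0.7, 0.55]
benign = [0.6, 0.4, 0.3, 0.2]
print(auroc(malignant, benign))   # 15 of 16 pairs correctly ordered -> 0.9375
```

The O(n*m) pairwise form is fine for illustration; for large cohorts the rank-sum formulation gives the same value in O(n log n).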

Secure telemedicine using RONI halftoned visual cryptography without pixel expansion

  • Bakshi, Arvind
  • Patel, Anoop Kumar
Journal of Information Security and Applications 2019 Journal Article, cited 0 times
Website
Telemedicine is a well-established technique for delivering quality healthcare services remotely around the world. For the diagnosis of disease and prescription by the doctor, a large amount of information must be shared over public and private channels. Medical images such as MRI, X-ray and CT scans contain very personal information and need to be secured. Ensuring the confidentiality, privacy, and integrity of medical data remains a challenge, and existing security techniques such as digital watermarking and encryption are not efficient enough for real-time use. This paper investigates the problem and provides a solution addressing the major security aspects using Visual Cryptography (VC). The proposed algorithm creates shares for the parts of the image that do not contain relevant information. All information related to the disease is considered relevant and is marked as the region of interest (ROI). The integrity of the image is maintained by inserting some information in the region of non-interest (RONI). The generated shares are transmitted over different channels, and the embedded information is decrypted by overlapping the shares (in XOR fashion) in Θ(1) time. Visual perception of all the results discussed in this article is very clear. The proposed algorithm achieves a PSNR (peak signal-to-noise ratio) of 22.9452, an SSIM (structural similarity index) of 0.9701, and an accuracy of 99.8740.
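The XOR-overlay reconstruction the paper relies on can be illustrated with a generic (2,2) XOR secret-sharing sketch: a random share plus (secret XOR random), with exact, expansion-free recovery by XOR-ing the shares. This is only the underlying primitive, under assumed toy data; the paper's full scheme additionally handles ROI/RONI marking and integrity embedding, which are not shown here.

```python
# Generic (2,2) XOR share scheme on a binary bitmap: each pixel of share1 is
# random; share2 = secret XOR share1. Overlaying (XOR) the shares recovers
# the secret exactly, with constant work per pixel and no pixel expansion.

import random

def make_shares(secret, rng):
    """Split a 2D bitmap of 0/1 pixels into two XOR shares."""
    share1 = [[rng.randint(0, 1) for _ in row] for row in secret]
    share2 = [[s ^ r for s, r in zip(srow, rrow)]
              for srow, rrow in zip(secret, share1)]
    return share1, share2

def overlay(share1, share2):
    """Reconstruct the secret by pixelwise XOR of the two shares."""
    return [[a ^ b for a, b in zip(row1, row2)]
            for row1, row2 in zip(share1, share2)]

# Illustrative 4x4 secret bitmap and a seeded RNG for reproducibility.
secret = [[1, 0, 1, 0],
          [0, 1, 1, 0],
          [1, 1, 0, 0],
          [0, 0, 0, 1]]
s1, s2 = make_shares(secret, random.Random(7))
print(overlay(s1, s2) == secret)   # lossless reconstruction -> True
```

Because share1 is uniformly random, each share alone reveals nothing about the secret, which is what allows the shares to travel over separate channels.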

Technical and Clinical Factors Affecting Success Rate of a Deep Learning Method for Pancreas Segmentation on CT

  • Bagheri, Mohammad Hadi
  • Roth, Holger
  • Kovacs, William
  • Yao, Jianhua
  • Farhadi, Faraz
  • Li, Xiaobai
  • Summers, Ronald M
Acad Radiol 2019 Journal Article, cited 0 times
Website
PURPOSE: Accurate pancreas segmentation has application in surgical planning, assessment of diabetes, and detection and analysis of pancreatic tumors. Factors that affect pancreas segmentation accuracy have not been previously reported. The purpose of this study is to identify technical and clinical factors that adversely affect the accuracy of pancreas segmentation on CT. METHOD AND MATERIALS: In this IRB and HIPAA compliant study, a deep convolutional neural network was used for pancreas segmentation in a publicly available archive of 82 portal-venous phase abdominal CT scans of 53 men and 29 women. The accuracies of the segmentations were evaluated by the Dice similarity coefficient (DSC). The DSC was then correlated with demographic and clinical data (age, gender, height, weight, body mass index), CT technical factors (image pixel size, slice thickness, presence or absence of oral contrast), and CT imaging findings (volume and attenuation of pancreas, visceral abdominal fat, and CT attenuation of the structures within a 5 mm neighborhood of the pancreas). RESULTS: The average DSC was 78% +/- 8%. Factors that were statistically significantly correlated with DSC included body mass index (r=0.34, p < 0.01), visceral abdominal fat (r=0.51, p < 0.0001), volume of the pancreas (r=0.41, p=0.001), standard deviation of CT attenuation within the pancreas (r=0.30, p=0.01), and median and average CT attenuation in the immediate neighborhood of the pancreas (r = -0.53, p < 0.0001 and r=-0.52, p < 0.0001). There were no significant correlations between the DSC and the height, gender, or mean CT attenuation of the pancreas. CONCLUSION: Increased visceral abdominal fat and accumulation of fat within or around the pancreas are major factors associated with more accurate segmentation of the pancreas. 
Potential applications of our findings include assessment of pancreas segmentation difficulty of a particular scan or dataset and identification of methods that work better for more challenging pancreas segmentations.
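The factor analysis above reports Pearson correlation coefficients between the DSC and clinical/technical variables. A minimal Pearson r, shown on made-up (BMI, DSC) pairs chosen only to illustrate a positive association like the one reported:

```python
# Pearson correlation coefficient: covariance normalized by the product of
# the standard deviations. The (bmi, dsc) pairs below are synthetic.

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

bmi = [19.0, 22.0, 25.0, 28.0, 31.0, 34.0]
dsc = [0.70, 0.74, 0.75, 0.80, 0.79, 0.84]
print(round(pearson_r(bmi, dsc), 2))   # strong positive correlation
```

In practice one would also report a p-value (e.g. via a t-test on r), as the study does; that step is omitted here.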

Virtual clinical trial for task-based evaluation of a deep learning synthetic mammography algorithm

  • Badal, Andreu
  • Cha, Kenny H.
  • Divel, Sarah E.
  • Graff, Christian G.
  • Zeng, Rongping
  • Badano, Aldo
2019 Conference Proceedings, cited 0 times
Website
Image processing algorithms based on deep learning techniques are being developed for a wide range of medical applications. Processed medical images are typically evaluated with the same kind of image similarity metrics used for natural scenes, disregarding the medical task for which the images are intended. We propose a computational framework to estimate the clinical performance of image processing algorithms using virtual clinical trials. The proposed framework may provide an alternative method for regulatory evaluation of non-linear image processing algorithms. To illustrate this application of virtual clinical trials, we evaluated three algorithms to compute synthetic mammograms from digital breast tomosynthesis (DBT) scans based on convolutional neural networks previously used for denoising low dose computed tomography scans. The inputs to the networks were one or more noisy DBT projections, and the networks were trained to minimize the difference between the output and the corresponding high dose mammogram. DBT and mammography images simulated with the Monte Carlo code MC-GPU using realistic breast phantoms were used for network training and validation. The denoising algorithms were tested in a virtual clinical trial by generating 3000 synthetic mammograms from the public VICTRE dataset of simulated DBT scans. The detectability of a calcification cluster and a spiculated mass present in the images was calculated using an ensemble of 30 computational channelized Hotelling observers. The signal detectability results, which took into account anatomic and image reader variability, showed that the visibility of the mass was not affected by the post-processing algorithm, but that the resulting slight blurring of the images severely impacted the visibility of the calcification cluster. 
The evaluation of the algorithms using the pixel-based metrics peak signal to noise ratio and structural similarity in image patches was not able to predict the reduction in performance in the detectability of calcifications. These two metrics are computed over the whole image and do not consider any particular task, and might not be adequate to estimate the diagnostic performance of the post-processed images.

Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method.

  • Astaraki, Mehdi
  • Wang, Chunliang
  • Buizza, Giulia
  • Toma-Dasu, Iuliana
  • Lazzeroni, Marta
  • Smedby, Orjan
Physica Medica 2019 Journal Article, cited 0 times
Website
PURPOSE: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. METHODS: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction, evaluated in terms of area under the receiver operating characteristic curve (AUROC). RESULTS: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC = 0.90 for the proposed feature set vs. 0.71 for radiomics) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. CONCLUSION: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have a promising potential for early survival prediction.
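The intra-tumor partitioning idea above can be sketched as follows: tumor voxels are binned into k concentric regions by normalized distance from the centroid, and the mean intensity change between two scans is computed per region. The tiny 2D "tumor" and intensity values below are synthetic; the paper works on 3D PET/CT volumes with one to ten regions chosen by tumor size, and its exact partitioning scheme may differ.

```python
# Partition tumor voxels into k concentric regions by distance to the
# centroid and compute the mean intensity change per region between scans.

def concentric_mean_change(voxels, before, after, k):
    """voxels: list of (x, y); before/after: parallel intensity lists."""
    cx = sum(x for x, _ in voxels) / len(voxels)
    cy = sum(y for _, y in voxels) / len(voxels)
    dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in voxels]
    dmax = max(dists) or 1.0
    sums = [0.0] * k
    counts = [0] * k
    for d, b, a in zip(dists, before, after):
        ring = min(int(k * d / dmax), k - 1)   # 0 = core, k-1 = rim
        sums[ring] += a - b
        counts[ring] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# A 5x5 square "tumor" whose core brightens more than its rim between scans.
voxels = [(x, y) for x in range(5) for y in range(5)]
before = [10.0] * len(voxels)
after = [10.0 + (2.0 if max(abs(x - 2), abs(y - 2)) < 2 else 0.5)
         for x, y in voxels]
print(concentric_mean_change(voxels, before, after, 2))
```

The per-region changes (one pair of values per region for PET and CT) then serve as the feature vector fed to the survival classifier.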

Fusion of CT and MR Liver Images by SURF-Based Registration

  • Aslan, Muhammet Fatih
  • Durdu, Akif
International Journal of Intelligent Systems and Applications in Engineering 2019 Journal Article, cited 3 times
Website

Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation

  • Asaturyan, Hykoush
  • Gligorievski, Antonio
  • Villarini, Barbara
Computerized Medical Imaging and Graphics 2019 Journal Article, cited 3 times
Website
Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computed tomography (CT) scans. This method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as “pancreas” or “non-pancreas”. There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via a continuous max-flow and min-cuts approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on area, structure and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving a mean Dice Similarity Coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSC of 79.6 ± 5.7% and 81.6 ± 5.1% respectively. This approach is statistically stable, reflected by a lower standard deviation of the metrics in comparison to state-of-the-art approaches.

End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography

  • Ardila, D.
  • Kiraly, A. P.
  • Bharadwaj, S.
  • Choi, B.
  • Reicher, J. J.
  • Peng, L.
  • Tse, D.
  • Etemadi, M.
  • Ye, W.
  • Corrado, G.
  • Naidich, D. P.
  • Shetty, S.
Nat Med 2019 Journal Article, cited 1 times
Website
With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States(1). Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines(1-6). Existing challenges include inter-grader variability and high false-positive and false-negative rates(7-10). We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on-par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.

Medical Image Classification Algorithm Based on Weight Initialization-Sliding Window Fusion Convolutional Neural Network

  • An, Feng-Ping
Complexity 2019 Journal Article, cited 0 times
Website
Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification tasks. However, deep learning has the following problems in medical image classification. First, it is impossible to construct a deep learning model hierarchy for medical image properties; second, the network initialization weights of deep learning models are not well optimized. Therefore, this paper starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed, which alleviates the problem that existing deep learning model initialization is limited by the type of the nonlinear unit adopted and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper finds that the number of features and the convolution kernel size at different levels of the convolutional neural network are different. In contrast, the proposed method can construct different convolutional neural network models that adapt better to the characteristics of the medical images of interest and thus can better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding